[
  {
    "path": ".gitignore",
    "content": "__pycache__/\n.idea/\n.vscode/\n.tmp\n.cache\ntests/\n/*.json\n*.config.json"
  },
  {
    "path": "LICENSE",
    "content": "The use of this software or any derivative work for the purpose of \nproviding a commercial service, such as (but not limited to) an\nAI image generation service, is strictly prohibited without obtaining \npermission and/or a separate commercial license from the copyright holder. \nThis includes any service that charges users directly or indirectly for \naccess to this software's functionality, whether standalone or integrated \ninto a larger product.\n\n                        GNU AFFERO GENERAL PUBLIC \n                       Version 3, 19 November 2007\n\n Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>\n Everyone is permitted to copy and distribute verbatim copies\n of this license document, but changing it is not allowed.\n\n                            Preamble\n\n  The GNU Affero General Public License is a free, copyleft license for\nsoftware and other kinds of works, specifically designed to ensure\ncooperation with the community in the case of network server software.\n\n  The licenses for most software and other practical works are designed\nto take away your freedom to share and change the works.  By contrast,\nour General Public Licenses are intended to guarantee your freedom to\nshare and change all versions of a program--to make sure it remains free\nsoftware for all its users.\n\n  When we speak of free software, we are referring to freedom, not\nprice.  Our General Public Licenses are designed to make sure that you\nhave the freedom to distribute copies of free software (and charge for\nthem if you wish), that you receive source code or can get it if you\nwant it, that you can change the software or use pieces of it in new\nfree programs, and that you know you can do these things.\n\n  Developers that use our General Public Licenses protect your rights\nwith two steps: (1) assert copyright on the software, and (2) offer\nyou this License which gives you legal permission to copy, distribute\nand/or modify the software.\n\n  A secondary benefit of defending all users' freedom is that\nimprovements made in alternate versions of the program, if they\nreceive widespread use, become available for other developers to\nincorporate.  Many developers of free software are heartened and\nencouraged by the resulting cooperation.  However, in the case of\nsoftware used on network servers, this result may fail to come about.\nThe GNU General Public License permits making a modified version and\nletting the public access it on a server without ever releasing its\nsource code to the public.\n\n  The GNU Affero General Public License is designed specifically to\nensure that, in such cases, the modified source code becomes available\nto the community.  It requires the operator of a network server to\nprovide the source code of the modified version running there to the\nusers of that server.  Therefore, public use of a modified version, on\na publicly accessible server, gives the public access to the source\ncode of the modified version.\n\n  An older license, called the Affero General Public License and\npublished by Affero, was designed to accomplish similar goals.  This is\na different license, not a version of the Affero GPL, but Affero has\nreleased a new version of the Affero GPL which permits relicensing under\nthis license.\n\n  The precise terms and conditions for copying, distribution and\nmodification follow.\n\n                       TERMS AND CONDITIONS\n\n  0. 
Definitions.\n\n  \"This License\" refers to version 3 of the GNU Affero General Public License.\n\n  \"Copyright\" also means copyright-like laws that apply to other kinds of\nworks, such as semiconductor masks.\n\n  \"The Program\" refers to any copyrightable work licensed under this\nLicense.  Each licensee is addressed as \"you\".  \"Licensees\" and\n\"recipients\" may be individuals or organizations.\n\n  To \"modify\" a work means to copy from or adapt all or part of the work\nin a fashion requiring copyright permission, other than the making of an\nexact copy.  The resulting work is called a \"modified version\" of the\nearlier work or a work \"based on\" the earlier work.\n\n  A \"covered work\" means either the unmodified Program or a work based\non the Program.\n\n  To \"propagate\" a work means to do anything with it that, without\npermission, would make you directly or secondarily liable for\ninfringement under applicable copyright law, except executing it on a\ncomputer or modifying a private copy.  Propagation includes copying,\ndistribution (with or without modification), making available to the\npublic, and in some countries other activities as well.\n\n  To \"convey\" a work means any kind of propagation that enables other\nparties to make or receive copies.  Mere interaction with a user through\na computer network, with no transfer of a copy, is not conveying.\n\n  An interactive user interface displays \"Appropriate Legal Notices\"\nto the extent that it includes a convenient and prominently visible\nfeature that (1) displays an appropriate copyright notice, and (2)\ntells the user that there is no warranty for the work (except to the\nextent that warranties are provided), that licensees may convey the\nwork under this License, and how to view a copy of this License.  If\nthe interface presents a list of user commands or options, such as a\nmenu, a prominent item in the list meets this criterion.\n\n  1. Source Code.\n\n  The \"source code\" for a work means the preferred form of the work\nfor making modifications to it.  \"Object code\" means any non-source\nform of a work.\n\n  A \"Standard Interface\" means an interface that either is an official\nstandard defined by a recognized standards body, or, in the case of\ninterfaces specified for a particular programming language, one that\nis widely used among developers working in that language.\n\n  The \"System Libraries\" of an executable work include anything, other\nthan the work as a whole, that (a) is included in the normal form of\npackaging a Major Component, but which is not part of that Major\nComponent, and (b) serves only to enable use of the work with that\nMajor Component, or to implement a Standard Interface for which an\nimplementation is available to the public in source code form.  A\n\"Major Component\", in this context, means a major essential component\n(kernel, window system, and so on) of the specific operating system\n(if any) on which the executable work runs, or a compiler used to\nproduce the work, or an object code interpreter used to run it.\n\n  The \"Corresponding Source\" for a work in object code form means all\nthe source code needed to generate, install, and (for an executable\nwork) run the object code and to modify the work, including scripts to\ncontrol those activities.  However, it does not include the work's\nSystem Libraries, or general-purpose tools or generally available free\nprograms which are used unmodified in performing those activities but\nwhich are not part of the work.  
For example, Corresponding Source\nincludes interface definition files associated with source files for\nthe work, and the source code for shared libraries and dynamically\nlinked subprograms that the work is specifically designed to require,\nsuch as by intimate data communication or control flow between those\nsubprograms and other parts of the work.\n\n  The Corresponding Source need not include anything that users\ncan regenerate automatically from other parts of the Corresponding\nSource.\n\n  The Corresponding Source for a work in source code form is that\nsame work.\n\n  2. Basic Permissions.\n\n  All rights granted under this License are granted for the term of\ncopyright on the Program, and are irrevocable provided the stated\nconditions are met.  This License explicitly affirms your unlimited\npermission to run the unmodified Program.  The output from running a\ncovered work is covered by this License only if the output, given its\ncontent, constitutes a covered work.  This License acknowledges your\nrights of fair use or other equivalent, as provided by copyright law.\n\n  You may make, run and propagate covered works that you do not\nconvey, without conditions so long as your license otherwise remains\nin force.  You may convey covered works to others for the sole purpose\nof having them make modifications exclusively for you, or provide you\nwith facilities for running those works, provided that you comply with\nthe terms of this License in conveying all material for which you do\nnot control copyright.  Those thus making or running the covered works\nfor you must do so exclusively on your behalf, under your direction\nand control, on terms that prohibit them from making any copies of\nyour copyrighted material outside their relationship with you.\n\n  Conveying under any other circumstances is permitted solely under\nthe conditions stated below.  Sublicensing is not allowed; section 10\nmakes it unnecessary.\n\n  3. Protecting Users' Legal Rights From Anti-Circumvention Law.\n\n  No covered work shall be deemed part of an effective technological\nmeasure under any applicable law fulfilling obligations under article\n11 of the WIPO copyright treaty adopted on 20 December 1996, or\nsimilar laws prohibiting or restricting circumvention of such\nmeasures.\n\n  When you convey a covered work, you waive any legal power to forbid\ncircumvention of technological measures to the extent such circumvention\nis effected by exercising rights under this License with respect to\nthe covered work, and you disclaim any intention to limit operation or\nmodification of the work as a means of enforcing, against the work's\nusers, your or third parties' legal rights to forbid circumvention of\ntechnological measures.\n\n  4. Conveying Verbatim Copies.\n\n  You may convey verbatim copies of the Program's source code as you\nreceive it, in any medium, provided that you conspicuously and\nappropriately publish on each copy an appropriate copyright notice;\nkeep intact all notices stating that this License and any\nnon-permissive terms added in accord with section 7 apply to the code;\nkeep intact all notices of the absence of any warranty; and give all\nrecipients a copy of this License along with the Program.\n\n  You may charge any price or no price for each copy that you convey,\nand you may offer support or warranty protection for a fee.\n\n  5. 
Conveying Modified Source Versions.\n\n  You may convey a work based on the Program, or the modifications to\nproduce it from the Program, in the form of source code under the\nterms of section 4, provided that you also meet all of these conditions:\n\n    a) The work must carry prominent notices stating that you modified\n    it, and giving a relevant date.\n\n    b) The work must carry prominent notices stating that it is\n    released under this License and any conditions added under section\n    7.  This requirement modifies the requirement in section 4 to\n    \"keep intact all notices\".\n\n    c) You must license the entire work, as a whole, under this\n    License to anyone who comes into possession of a copy.  This\n    License will therefore apply, along with any applicable section 7\n    additional terms, to the whole of the work, and all its parts,\n    regardless of how they are packaged.  This License gives no\n    permission to license the work in any other way, but it does not\n    invalidate such permission if you have separately received it.\n\n    d) If the work has interactive user interfaces, each must display\n    Appropriate Legal Notices; however, if the Program has interactive\n    interfaces that do not display Appropriate Legal Notices, your\n    work need not make them do so.\n\n  A compilation of a covered work with other separate and independent\nworks, which are not by their nature extensions of the covered work,\nand which are not combined with it such as to form a larger program,\nin or on a volume of a storage or distribution medium, is called an\n\"aggregate\" if the compilation and its resulting copyright are not\nused to limit the access or legal rights of the compilation's users\nbeyond what the individual works permit.  Inclusion of a covered work\nin an aggregate does not cause this License to apply to the other\nparts of the aggregate.\n\n  6. Conveying Non-Source Forms.\n\n  You may convey a covered work in object code form under the terms\nof sections 4 and 5, provided that you also convey the\nmachine-readable Corresponding Source under the terms of this License,\nin one of these ways:\n\n    a) Convey the object code in, or embodied in, a physical product\n    (including a physical distribution medium), accompanied by the\n    Corresponding Source fixed on a durable physical medium\n    customarily used for software interchange.\n\n    b) Convey the object code in, or embodied in, a physical product\n    (including a physical distribution medium), accompanied by a\n    written offer, valid for at least three years and valid for as\n    long as you offer spare parts or customer support for that product\n    model, to give anyone who possesses the object code either (1) a\n    copy of the Corresponding Source for all the software in the\n    product that is covered by this License, on a durable physical\n    medium customarily used for software interchange, for a price no\n    more than your reasonable cost of physically performing this\n    conveying of source, or (2) access to copy the\n    Corresponding Source from a network server at no charge.\n\n    c) Convey individual copies of the object code with a copy of the\n    written offer to provide the Corresponding Source.  
This\n    alternative is allowed only occasionally and noncommercially, and\n    only if you received the object code with such an offer, in accord\n    with subsection 6b.\n\n    d) Convey the object code by offering access from a designated\n    place (gratis or for a charge), and offer equivalent access to the\n    Corresponding Source in the same way through the same place at no\n    further charge.  You need not require recipients to copy the\n    Corresponding Source along with the object code.  If the place to\n    copy the object code is a network server, the Corresponding Source\n    may be on a different server (operated by you or a third party)\n    that supports equivalent copying facilities, provided you maintain\n    clear directions next to the object code saying where to find the\n    Corresponding Source.  Regardless of what server hosts the\n    Corresponding Source, you remain obligated to ensure that it is\n    available for as long as needed to satisfy these requirements.\n\n    e) Convey the object code using peer-to-peer transmission, provided\n    you inform other peers where the object code and Corresponding\n    Source of the work are being offered to the general public at no\n    charge under subsection 6d.\n\n  A separable portion of the object code, whose source code is excluded\nfrom the Corresponding Source as a System Library, need not be\nincluded in conveying the object code work.\n\n  A \"User Product\" is either (1) a \"consumer product\", which means any\ntangible personal property which is normally used for personal, family,\nor household purposes, or (2) anything designed or sold for incorporation\ninto a dwelling.  In determining whether a product is a consumer product,\ndoubtful cases shall be resolved in favor of coverage.  For a particular\nproduct received by a particular user, \"normally used\" refers to a\ntypical or common use of that class of product, regardless of the status\nof the particular user or of the way in which the particular user\nactually uses, or expects or is expected to use, the product.  A product\nis a consumer product regardless of whether the product has substantial\ncommercial, industrial or non-consumer uses, unless such uses represent\nthe only significant mode of use of the product.\n\n  \"Installation Information\" for a User Product means any methods,\nprocedures, authorization keys, or other information required to install\nand execute modified versions of a covered work in that User Product from\na modified version of its Corresponding Source.  The information must\nsuffice to ensure that the continued functioning of the modified object\ncode is in no case prevented or interfered with solely because\nmodification has been made.\n\n  If you convey an object code work under this section in, or with, or\nspecifically for use in, a User Product, and the conveying occurs as\npart of a transaction in which the right of possession and use of the\nUser Product is transferred to the recipient in perpetuity or for a\nfixed term (regardless of how the transaction is characterized), the\nCorresponding Source conveyed under this section must be accompanied\nby the Installation Information.  
But this requirement does not apply\nif neither you nor any third party retains the ability to install\nmodified object code on the User Product (for example, the work has\nbeen installed in ROM).\n\n  The requirement to provide Installation Information does not include a\nrequirement to continue to provide support service, warranty, or updates\nfor a work that has been modified or installed by the recipient, or for\nthe User Product in which it has been modified or installed.  Access to a\nnetwork may be denied when the modification itself materially and\nadversely affects the operation of the network or violates the rules and\nprotocols for communication across the network.\n\n  Corresponding Source conveyed, and Installation Information provided,\nin accord with this section must be in a format that is publicly\ndocumented (and with an implementation available to the public in\nsource code form), and must require no special password or key for\nunpacking, reading or copying.\n\n  7. Additional Terms.\n\n  \"Additional permissions\" are terms that supplement the terms of this\nLicense by making exceptions from one or more of its conditions.\nAdditional permissions that are applicable to the entire Program shall\nbe treated as though they were included in this License, to the extent\nthat they are valid under applicable law.  If additional permissions\napply only to part of the Program, that part may be used separately\nunder those permissions, but the entire Program remains governed by\nthis License without regard to the additional permissions.\n\n  When you convey a copy of a covered work, you may at your option\nremove any additional permissions from that copy, or from any part of\nit.  (Additional permissions may be written to require their own\nremoval in certain cases when you modify the work.)  You may place\nadditional permissions on material, added by you to a covered work,\nfor which you have or can give appropriate copyright permission.\n\n  Notwithstanding any other provision of this License, for material you\nadd to a covered work, you may (if authorized by the copyright holders of\nthat material) supplement the terms of this License with terms:\n\n    a) Disclaiming warranty or limiting liability differently from the\n    terms of sections 15 and 16 of this License; or\n\n    b) Requiring preservation of specified reasonable legal notices or\n    author attributions in that material or in the Appropriate Legal\n    Notices displayed by works containing it; or\n\n    c) Prohibiting misrepresentation of the origin of that material, or\n    requiring that modified versions of such material be marked in\n    reasonable ways as different from the original version; or\n\n    d) Limiting the use for publicity purposes of names of licensors or\n    authors of the material; or\n\n    e) Declining to grant rights under trademark law for use of some\n    trade names, trademarks, or service marks; or\n\n    f) Requiring indemnification of licensors and authors of that\n    material by anyone who conveys the material (or modified versions of\n    it) with contractual assumptions of liability to the recipient, for\n    any liability that these contractual assumptions directly impose on\n    those licensors and authors.\n\n  All other non-permissive additional terms are considered \"further\nrestrictions\" within the meaning of section 10.  
If the Program as you\nreceived it, or any part of it, contains a notice stating that it is\ngoverned by this License along with a term that is a further\nrestriction, you may remove that term.  If a license document contains\na further restriction but permits relicensing or conveying under this\nLicense, you may add to a covered work material governed by the terms\nof that license document, provided that the further restriction does\nnot survive such relicensing or conveying.\n\n  If you add terms to a covered work in accord with this section, you\nmust place, in the relevant source files, a statement of the\nadditional terms that apply to those files, or a notice indicating\nwhere to find the applicable terms.\n\n  Additional terms, permissive or non-permissive, may be stated in the\nform of a separately written license, or stated as exceptions;\nthe above requirements apply either way.\n\n  8. Termination.\n\n  You may not propagate or modify a covered work except as expressly\nprovided under this License.  Any attempt otherwise to propagate or\nmodify it is void, and will automatically terminate your rights under\nthis License (including any patent licenses granted under the third\nparagraph of section 11).\n\n  However, if you cease all violation of this License, then your\nlicense from a particular copyright holder is reinstated (a)\nprovisionally, unless and until the copyright holder explicitly and\nfinally terminates your license, and (b) permanently, if the copyright\nholder fails to notify you of the violation by some reasonable means\nprior to 60 days after the cessation.\n\n  Moreover, your license from a particular copyright holder is\nreinstated permanently if the copyright holder notifies you of the\nviolation by some reasonable means, this is the first time you have\nreceived notice of violation of this License (for any work) from that\ncopyright holder, and you cure the violation prior to 30 days after\nyour receipt of the notice.\n\n  Termination of your rights under this section does not terminate the\nlicenses of parties who have received copies or rights from you under\nthis License.  If your rights have been terminated and not permanently\nreinstated, you do not qualify to receive new licenses for the same\nmaterial under section 10.\n\n  9. Acceptance Not Required for Having Copies.\n\n  You are not required to accept this License in order to receive or\nrun a copy of the Program.  Ancillary propagation of a covered work\noccurring solely as a consequence of using peer-to-peer transmission\nto receive a copy likewise does not require acceptance.  However,\nnothing other than this License grants you permission to propagate or\nmodify any covered work.  These actions infringe copyright if you do\nnot accept this License.  Therefore, by modifying or propagating a\ncovered work, you indicate your acceptance of this License to do so.\n\n  10. Automatic Licensing of Downstream Recipients.\n\n  Each time you convey a covered work, the recipient automatically\nreceives a license from the original licensors, to run, modify and\npropagate that work, subject to this License.  You are not responsible\nfor enforcing compliance by third parties with this License.\n\n  An \"entity transaction\" is a transaction transferring control of an\norganization, or substantially all assets of one, or subdividing an\norganization, or merging organizations.  
If propagation of a covered\nwork results from an entity transaction, each party to that\ntransaction who receives a copy of the work also receives whatever\nlicenses to the work the party's predecessor in interest had or could\ngive under the previous paragraph, plus a right to possession of the\nCorresponding Source of the work from the predecessor in interest, if\nthe predecessor has it or can get it with reasonable efforts.\n\n  You may not impose any further restrictions on the exercise of the\nrights granted or affirmed under this License.  For example, you may\nnot impose a license fee, royalty, or other charge for exercise of\nrights granted under this License, and you may not initiate litigation\n(including a cross-claim or counterclaim in a lawsuit) alleging that\nany patent claim is infringed by making, using, selling, offering for\nsale, or importing the Program or any portion of it.\n\n  11. Patents.\n\n  A \"contributor\" is a copyright holder who authorizes use under this\nLicense of the Program or a work on which the Program is based.  The\nwork thus licensed is called the contributor's \"contributor version\".\n\n  A contributor's \"essential patent claims\" are all patent claims\nowned or controlled by the contributor, whether already acquired or\nhereafter acquired, that would be infringed by some manner, permitted\nby this License, of making, using, or selling its contributor version,\nbut do not include claims that would be infringed only as a\nconsequence of further modification of the contributor version.  For\npurposes of this definition, \"control\" includes the right to grant\npatent sublicenses in a manner consistent with the requirements of\nthis License.\n\n  Each contributor grants you a non-exclusive, worldwide, royalty-free\npatent license under the contributor's essential patent claims, to\nmake, use, sell, offer for sale, import and otherwise run, modify and\npropagate the contents of its contributor version.\n\n  In the following three paragraphs, a \"patent license\" is any express\nagreement or commitment, however denominated, not to enforce a patent\n(such as an express permission to practice a patent or covenant not to\nsue for patent infringement).  To \"grant\" such a patent license to a\nparty means to make such an agreement or commitment not to enforce a\npatent against the party.\n\n  If you convey a covered work, knowingly relying on a patent license,\nand the Corresponding Source of the work is not available for anyone\nto copy, free of charge and under the terms of this License, through a\npublicly available network server or other readily accessible means,\nthen you must either (1) cause the Corresponding Source to be so\navailable, or (2) arrange to deprive yourself of the benefit of the\npatent license for this particular work, or (3) arrange, in a manner\nconsistent with the requirements of this License, to extend the patent\nlicense to downstream recipients.  
\"Knowingly relying\" means you have\nactual knowledge that, but for the patent license, your conveying the\ncovered work in a country, or your recipient's use of the covered work\nin a country, would infringe one or more identifiable patents in that\ncountry that you have reason to believe are valid.\n\n  If, pursuant to or in connection with a single transaction or\narrangement, you convey, or propagate by procuring conveyance of, a\ncovered work, and grant a patent license to some of the parties\nreceiving the covered work authorizing them to use, propagate, modify\nor convey a specific copy of the covered work, then the patent license\nyou grant is automatically extended to all recipients of the covered\nwork and works based on it.\n\n  A patent license is \"discriminatory\" if it does not include within\nthe scope of its coverage, prohibits the exercise of, or is\nconditioned on the non-exercise of one or more of the rights that are\nspecifically granted under this License.  You may not convey a covered\nwork if you are a party to an arrangement with a third party that is\nin the business of distributing software, under which you make payment\nto the third party based on the extent of your activity of conveying\nthe work, and under which the third party grants, to any of the\nparties who would receive the covered work from you, a discriminatory\npatent license (a) in connection with copies of the covered work\nconveyed by you (or copies made from those copies), or (b) primarily\nfor and in connection with specific products or compilations that\ncontain the covered work, unless you entered into that arrangement,\nor that patent license was granted, prior to 28 March 2007.\n\n  Nothing in this License shall be construed as excluding or limiting\nany implied license or other defenses to infringement that may\notherwise be available to you under applicable patent law.\n\n  12. No Surrender of Others' Freedom.\n\n  If conditions are imposed on you (whether by court order, agreement or\notherwise) that contradict the conditions of this License, they do not\nexcuse you from the conditions of this License.  If you cannot convey a\ncovered work so as to satisfy simultaneously your obligations under this\nLicense and any other pertinent obligations, then as a consequence you may\nnot convey it at all.  For example, if you agree to terms that obligate you\nto collect a royalty for further conveying from those to whom you convey\nthe Program, the only way you could satisfy both those terms and this\nLicense would be to refrain entirely from conveying the Program.\n\n  13. Remote Network Interaction; Use with the GNU General Public License.\n\n  Notwithstanding any other provision of this License, if you modify the\nProgram, your modified version must prominently offer all users\ninteracting with it remotely through a computer network (if your version\nsupports such interaction) an opportunity to receive the Corresponding\nSource of your version by providing access to the Corresponding Source\nfrom a network server at no charge, through some standard or customary\nmeans of facilitating copying of software.  
This Corresponding Source\nshall include the Corresponding Source for any work covered by version 3\nof the GNU General Public License that is incorporated pursuant to the\nfollowing paragraph.\n\n  Notwithstanding any other provision of this License, you have\npermission to link or combine any covered work with a work licensed\nunder version 3 of the GNU General Public License into a single\ncombined work, and to convey the resulting work.  The terms of this\nLicense will continue to apply to the part which is the covered work,\nbut the work with which it is combined will remain governed by version\n3 of the GNU General Public License.\n\n  14. Revised Versions of this License.\n\n  The Free Software Foundation may publish revised and/or new versions of\nthe GNU Affero General Public License from time to time.  Such new versions\nwill be similar in spirit to the present version, but may differ in detail to\naddress new problems or concerns.\n\n  Each version is given a distinguishing version number.  If the\nProgram specifies that a certain numbered version of the GNU Affero General\nPublic License \"or any later version\" applies to it, you have the\noption of following the terms and conditions either of that numbered\nversion or of any later version published by the Free Software\nFoundation.  If the Program does not specify a version number of the\nGNU Affero General Public License, you may choose any version ever published\nby the Free Software Foundation.\n\n  If the Program specifies that a proxy can decide which future\nversions of the GNU Affero General Public License can be used, that proxy's\npublic statement of acceptance of a version permanently authorizes you\nto choose that version for the Program.\n\n  Later license versions may give you additional or different\npermissions.  However, no additional obligations are imposed on any\nauthor or copyright holder as a result of your choosing to follow a\nlater version.\n\n  15. Disclaimer of Warranty.\n\n  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY\nAPPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT\nHOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM \"AS IS\" WITHOUT WARRANTY\nOF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,\nTHE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR\nPURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM\nIS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF\nALL NECESSARY SERVICING, REPAIR OR CORRECTION.\n\n  16. Limitation of Liability.\n\n  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING\nWILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS\nTHE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY\nGENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE\nUSE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF\nDATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD\nPARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),\nEVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF\nSUCH DAMAGES.\n\n  17. 
Interpretation of Sections 15 and 16.\n\n  If the disclaimer of warranty and limitation of liability provided\nabove cannot be given local legal effect according to their terms,\nreviewing courts shall apply local law that most closely approximates\nan absolute waiver of all civil liability in connection with the\nProgram, unless a warranty or assumption of liability accompanies a\ncopy of the Program in return for a fee.\n\n                     END OF TERMS AND CONDITIONS\n\n            How to Apply These Terms to Your New Programs\n\n  If you develop a new program, and you want it to be of the greatest\npossible use to the public, the best way to achieve this is to make it\nfree software which everyone can redistribute and change under these terms.\n\n  To do so, attach the following notices to the program.  It is safest\nto attach them to the start of each source file to most effectively\nstate the exclusion of warranty; and each file should have at least\nthe \"copyright\" line and a pointer to where the full notice is found.\n\n    <one line to give the program's name and a brief idea of what it does.>\n    Copyright (C) <year>  <name of author>\n\n    This program is free software: you can redistribute it and/or modify\n    it under the terms of the GNU Affero General Public License as published\n    by the Free Software Foundation, either version 3 of the License, or\n    (at your option) any later version.\n\n    This program is distributed in the hope that it will be useful,\n    but WITHOUT ANY WARRANTY; without even the implied warranty of\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n    GNU Affero General Public License for more details.\n\n    You should have received a copy of the GNU Affero General Public License\n    along with this program.  If not, see <https://www.gnu.org/licenses/>.\n\nAlso add information on how to contact you by electronic and paper mail.\n\n  If your software can interact with users remotely through a computer\nnetwork, you should also make sure that it provides a way for users to\nget its source.  For example, if your program is a web application, its\ninterface could display a \"Source\" link that leads users to an archive\nof the code.  There are many ways you could offer source, and different\nsolutions will be better for different programs; see section 13 for the\nspecific requirements.\n\n  You should also get your employer (if you work as a programmer) or school,\nif any, to sign a \"copyright disclaimer\" for the program, if necessary.\nFor more information on this, and how to apply and follow the GNU AGPL, see\n<https://www.gnu.org/licenses/>.\n"
  },
  {
    "path": "README.md",
    "content": "# SUPERIOR SAMPLING WITH RES4LYF: THE POWER OF BONGMATH\n\nRES_3M vs. Uni-PC (WAN). Typically only 20 steps are needed with RES samplers. Far more are needed with Uni-PC and other common samplers, and they never reach the same level of quality.\n\n![res_3m_vs_unipc_1](https://github.com/user-attachments/assets/9321baf9-2d68-4fe8-9427-fcf0609bd02b)\n![res_3m_vs_unipc_2](https://github.com/user-attachments/assets/d7ab48e4-51dd-4fa7-8622-160c8f9e33d6)\n\n\n# INSTALLATION\n\nIf you are using a venv, you will need to first run from within your ComfyUI folder (that contains your \"venv\" folder):\n\n_Linux:_\n\nsource venv/bin/activate\n\n_Windows:_\n\nvenv\\Scripts\\activate\n\n_Then, \"cd\" into your \"custom_nodes\" folder and run the following commands:_\n\ngit clone https://github.com/ClownsharkBatwing/RES4LYF/\n\ncd RES4LYF\n\n_If you are using a venv, run these commands:_\n\npip install -r requirements.txt\n\n_Alternatively, if you are using the portable version of ComfyUI you will need to replace \"pip\" with the path to your embedded pip executable. For example, on Windows:_\n\nX:\\path\\to\\your\\comfy_portable_folder\\python_embedded\\Scripts\\pip.exe install -r requirements.txt\n\n\n# IMPORTANT UPDATE INFO\n\nThe previous versions will remain available but with \"Legacy\" prepended to their names.\n\nIf you wish to use the sampler menu shown below, you will need to install https://github.com/rgthree/rgthree-comfy (which I highly recommend you have regardless).\n\n![image](https://github.com/user-attachments/assets/b36360bb-a59e-4654-aed7-6b6f53673826)\n\nIf these menus do not show up after restarting ComfyUI and refreshing the page (hit F5, not just \"r\") verify that these menus are enabled in the rgthree settings (click the gear in the bottom left of ComfyUI, select rgthree, and ensure \"Auto Nest Subdirectories\" is checked):\n\n![image](https://github.com/user-attachments/assets/db46fc90-df1a-4d1c-b6ed-c44d26b8a9b3)\n\n\n# NEW VERSION DOCUMENTATION\n\nI have prepared a detailed explanation of many of the concepts of sampling with exmaples in this workflow. There's also many tips, explanations of parameters, and all of the most important nodes are laid out for you to see. Some new workflow-enhancing tricks like \"chainsamplers\" are demonstrated, and **regional AND temporal prompting** are explained (supporting Flux, HiDream, SD3.5, AuraFlow, and WAN - you can even change the conditioning on a frame-by-frame basis!).\n\n[[example_workflows/intro to clownsampling.json\n]((https://github.com/ClownsharkBatwing/RES4LYF/blob/main/example_workflows/intro%20to%20clownsampling.json))](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/example_workflows/intro%20to%20clownsampling.json)\n\n![intro to clownsampling](https://github.com/user-attachments/assets/40c23993-c70e-4a71-9207-4cee4b7e71e0)\n\n\n\n# STYLE TRANSFER\n\nSupported models: HiDream, Flux, Chroma, AuraFlow, SD1.5, SDXL, SD3.5, Stable Cascade, LTXV, and WAN. Also supported: Stable Cascade (and UltraPixel) which has an excellent understanding of style (https://github.com/ClownsharkBatwing/UltraCascade).\n\nCurrently, best results are with HiDream or Chroma, or Flux with a style lora (Flux Dev is very lacking with style knowledge). Include some mention of the style you wish to use in the prompt. 
(Try with the guide off to confirm the prompt is not doing the heavy lifting!)\n\n![image](https://github.com/user-attachments/assets/a62593fa-b104-4347-bf69-e1e50217ce2d)\n\n\nFor example, the prompt for the below was simply \"a gritty illustration of a japanese woman with traditional hair in traditional clothes\". Mostly you just need to make clear whether it's supposed to be a photo or an illustration, etc., so that the conditioning isn't fighting the style guide (every model has its inherent biases).\n\n![image](https://github.com/user-attachments/assets/e872e258-c786-4475-8369-c8487ee5ec72)\n\n**COMPOSITION GUIDE; OUTPUT; STYLE GUIDE**\n\n![style example](https://github.com/user-attachments/assets/4970c6ea-d142-4e4e-967a-59ff93528840)\n\n![image](https://github.com/user-attachments/assets/fb071885-48b8-4698-9288-63a2866cb67b)\n\n# KILL FLUX BLUR (and HiDream blur)\n\n**Consecutive seeds, no cherrypicking.**\n\n![antiblur](https://github.com/user-attachments/assets/5bc0e1e3-82e1-4ccc-8d39-64a939815e57)\n\n\n# REGIONAL CONDITIONING\n\nUnlimited zones! Over 10 zones have been used in one image before. \n\nCurrently supported models: HiDream, Flux, Chroma, SD3.5, SD1.5, SDXL, AuraFlow, and WAN.\n\nMasks can be drawn freely, or more traditional rigid ones may be used, such as in this example:\n\n![image](https://github.com/user-attachments/assets/edfb076a-78e2-4077-b53f-3e8bab07040a)\n\n![ComfyUI_16020_](https://github.com/user-attachments/assets/5f45cdcb-f879-43ca-bcf4-bcae60aa4bbc)\n\n![ComfyUI_12157_](https://github.com/user-attachments/assets/b9e385d2-3359-4a13-99b9-4a7243863b0d)\n\n![ComfyUI_12039_](https://github.com/user-attachments/assets/6d36ae62-ce8c-41e3-b52c-823e9c1b1d50)\n\n\n# TEMPORAL CONDITIONING\n\nUnlimited zones! Ability to change the prompt for each frame.\n\nCurrently supported models: WAN.\n\n![image](https://github.com/user-attachments/assets/743bc972-cfbf-45a8-8745-d6ca1a6b0bab)\n\n![temporal conditioning 09580](https://github.com/user-attachments/assets/eef0e04c-d1b2-49b7-a1ca-f8cb651dd3a7)\n\n# VIDEO 2 VIDEO EDITING\n\nViable with any video model; demonstrated here with WAN:\n\n![wan vid2vid compressed](https://github.com/user-attachments/assets/431c30f7-339e-4b86-8d02-6180b09b15b2)\n\n# PREVIOUS VERSION NODE DOCUMENTATION\n\nAt the heart of this repository is the \"ClownsharKSampler\", which was specifically designed to support both rectified flow and probability flow models. It features 69 different selectable samplers (44 explicit, 18 fully implicit, 7 diagonally implicit), all available in both ODE and SDE modes, with 20 noise types, 9 noise scaling modes, and options for implicit Runge-Kutta sampling refinement steps. Several new explicit samplers are implemented, most notably RES_2M, RES_3S, and RES_5S. Additionally, img2img capabilities include both latent image guidance and unsampling/resampling (via new forms of rectified noise inversion). \n\nA particular emphasis of this project has been to facilitate modulating parameters vs. time, which can yield large gains in image quality from the sampling process. To this end, a wide variety of sigma, latent, and noise manipulation nodes are included.
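\n\nAs a toy illustration of what \"modulating parameters vs. time\" means in practice, here is a minimal sketch in plain PyTorch (the helper name and the shape of the curve are illustrative assumptions, not the RES4LYF node API) that ramps the SDE \"eta\" value down over the course of a sigma schedule:\n\n```python\nimport torch\n\n# Hypothetical helper: one eta value per step, strongest early in sampling\n# and fading to zero at the end.\ndef eta_schedule(sigmas: torch.Tensor, eta_max: float = 0.5) -> torch.Tensor:\n    t = torch.linspace(0.0, 1.0, steps=len(sigmas))  # normalized time\n    return eta_max * (1.0 - t) ** 2                  # quadratic fade-out\n\nsigmas = torch.linspace(1.0, 0.0, steps=21)  # stand-in for a real scheduler\netas = eta_schedule(sigmas)                  # per-step eta for SDE sampling\n```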
\n\nMuch of this work remains experimental and is subject to further changes.\n\n# ClownSampler\n![image](https://github.com/user-attachments/assets/f787ad74-0d95-4d8f-84b6-af4c4c1ac5e5)\n\n# SharkSampler\n![image](https://github.com/user-attachments/assets/299c9285-b298-4452-b0dd-48ae425ce30a)\n\n# ClownsharKSampler\n![image](https://github.com/user-attachments/assets/430fb77a-7353-4b40-acb6-cbd33392f7fc)\n\nThis is an all-in-one sampling node designed for convenience without compromising on control or quality. \n\nThere are several key sections to the parameters, which will be explained below.\n\n## INPUTS\n![image](https://github.com/user-attachments/assets/e8fe825d-2fb1-4e93-874c-89fb73ba68f7)\n\nThe only two mandatory inputs here are \"model\" and \"latent_image\". \n\n**POSITIVE and NEGATIVE:** If you connect nothing to either of these inputs, the node will automatically generate null conditioning. If you are unsampling, you actually don't need to hook up any conditioning at all (and should set CFG = 1.0). In most cases, merely using the positive conditioning will suffice, unless you really need to use a specific negative prompt.\n\n**SIGMAS:** If a sigmas scheduler node is connected to this input, it will override the scheduler and steps settings chosen within the node.\n\n## NOISE SETTINGS\n![image](https://github.com/user-attachments/assets/caaa41a4-5afa-4c3c-8fb2-003b9a6b2578)\n\n**NOISE_TYPE_INIT:** This sets the initial noise type applied to the latent image. \n\n**NOISE_TYPE_SDE:** This sets the noise type used during SDE sampling. Note that SDE sampling is identical to ODE sampling in most ways - the difference is that noise is added after each step. It's like a form of carefully controlled continuous noise injection.\n\n**NOISE_MODE_SDE:** This determines what method is used for scaling the amount of noise to be added based on the \"eta\" setting below. The modes are listed in order of strength of the effect. \n\n**ETA:** This controls how much noise is added after each step. Note that for most of the noise modes, anything equal to or greater than 1.0 will trigger internal scaling to prevent NaN errors. The exception is the noise mode \"exp\", which allows for settings far above 1.0. \n\n**NOISE_SEED:** Largely identical to the setting in KSampler. Set to -1 to have it increment the most recently used seed (by the workflow) by 1.\n\n**CONTROL_AFTER_GENERATE:** Self-explanatory. I recommend setting to \"fixed\" or \"increment\" (as you don't have to reload the workflow to regenerate something; you can just decrement it by one).\n\n## SAMPLER SETTINGS\n![image](https://github.com/user-attachments/assets/d5ef0bef-7388-44f0-a119-220beec9883d)\n\n**SAMPLER_MODE:** In virtually all situations, use \"standard\". However, if you are unsampling, set to \"unsample\", and if you are resampling (the stage after unsampling), set to \"resample\". Both of these modes will disable noise addition within ComfyUI, which is essential for these methods to work properly. \n\n**SAMPLER_NAME:** This is used similarly to the KSampler setting. This selects the explicit sampler type. Note the use of numbers and letters at the end of each sampler name: \"2m, 3m, 2s, 3s, 5s, etc.\" \n\nSamplers that end in \"s\" use substeps between each step. One ending with \"2s\" has two stages per step and therefore costs two model calls per step (Euler costs one - model calls are what determine inference time). \"3s\" would take three model calls per step, and therefore take three times as long to run as Euler.
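\n\nSince model calls are what determine inference time, the cost relationship is simple multiplication. A quick sketch (the stage counts here are read off the sampler names, as described above):\n\n```python\n# 'Ns' samplers make N model calls per step; 'm' (multistep) samplers and\n# Euler make one. Total cost is steps * calls-per-step.\nstages = {'euler': 1, 'res_2m': 1, 'res_2s': 2, 'res_3s': 3, 'res_5s': 5}\nsteps = 20\nfor name, n in stages.items():\n    print(f'{name}: {steps * n} model calls')  # e.g. res_3s -> 60, 3x euler\n```\n\n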
However, the increase in accuracy can be very dramatic, especially when using noise (SDE sampling). The \"res\" family of samplers is particularly notable (its members are effectively refinements of the dpmpp family, with new, higher order, much more accurate versions implemented here).\n\nSamplers that end in \"m\" are \"multistep\" samplers, which instead of issuing new model calls for substeps, recycle previous steps as estimations for these substeps. They're less accurate, but all run at Euler speed (one model call per step). Sometimes this can be an advantage, as multistep samplers tend to converge more linearly toward a target image. This can be useful for img2img transformations, unsampling, or when using latent image guides.\n\n**IMPLICIT_SAMPLER_NAME:** This is very useful with SD3.5 Medium for improving coherence, reducing artifacts and mutations, etc. It may be difficult to use with a model like Flux unless you plan on setting up a queue of generations and walking away. It will use the explicit step type as a predictor for each of the implicit substeps, so if you choose a slow explicit sampler, you will be waiting a long time. Euler, res_2m, deis_2m, etc. will often suffice as a predictor for implicit sampling, though any sampler may be used. Try \"res_5s\" as your explicit sampler type, and \"gauss-legendre_5s\" as your implicit sampler, if you wish to demonstrate your commitment to climate change (and image quality).\n\nSetting this to \"none\" has the same effect as setting implicit_steps = 0.\n\n## SCHEDULER AND DENOISE SETTINGS\n![image](https://github.com/user-attachments/assets/b89d3956-1734-4368-8bb4-429b9989cd4d)\n\nThese are identical in most ways to the settings by the same name in KSampler. \n\n**SCHEDULER:** There is one extra sigma scheduler offered by default: \"beta57\", which is the beta schedule with modified parameters (alpha = 0.5, beta = 0.7).\n\n**IMPLICIT_STEPS:** This controls the number of implicit steps to run. Note that it will double, triple, etc. the runtime as you increase the step count. Typically, gains diminish quickly after 2-3 implicit steps.\n\n**DENOISE:** This is identical to the KSampler setting. Controls the amount of noise removed from the image. Note that with this method, the effect will change significantly depending on your choice of scheduler.\n\n**DENOISE_ALT:** Instead of splitting the sigma schedule like \"denoise\", this multiplies the sigmas. The results are different, but track more closely from one scheduler to another when using the same value. This can be particularly useful for img2img workflows.
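\n\nHere is a loose sketch of the difference between the two, in plain PyTorch (it mirrors the description above rather than ComfyUI's exact internal computation):\n\n```python\nimport torch\n\nfull = torch.linspace(1.0, 0.0, steps=21)  # stand-in full sigma schedule\n\n# 'denoise' keeps only the tail of the schedule, so the effect depends\n# strongly on the schedule's shape.\ndenoise = 0.5\ntail = full[-(int(denoise * (len(full) - 1)) + 1):]\n\n# 'denoise_alt' scales the whole schedule instead, which tracks more\n# consistently across schedulers for the same value.\ndenoise_alt = 0.5\nscaled = full * denoise_alt\n```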
\n\n**CFG:** This is identical to the KSampler setting. Typically, you'll set this to 1.0 (to disable it) when using Flux, if you're using Flux guidance. However, the effect is quite nice when using dedistilled models if you use \"CLIP Text Encode\" without any Flux guidance, and set CFG to 3.0.\n\nIf you've never quite understood CFG, you can think of it this way. Imagine you're walking down the street and see what looks like an enticing music festival in the distance (your positive conditioning). You're on the fence about attending, but then, suddenly, a horde of pickleshark cannibals come storming out of a nearby bar (your negative conditioning). Together, the two team up to drive you toward the music festival. That's CFG.\n\n## SHIFT SETTINGS\n![image](https://github.com/user-attachments/assets/e9a2e2d7-be5c-4b63-8647-275409600b56)\n\nThese are present for convenience as they are used in virtually every workflow.\n\n**SHIFT:** This is the same as \"shift\" for the ModelSampling nodes for SD3.5, AuraFlow, etc., and is equivalent to \"max_shift\" for Flux. Set this value to -1 to disable setting shift (or max_shift) within the node.\n\n**BASE_SHIFT:** This is only used by Flux. Set this value to -1 to disable setting base_shift within the node.\n\n**SHIFT_SCALING:** This changes how the shift values are calculated. \"exponential\" is the default used by Flux, whereas \"linear\" is the default used by SD3.5 and AuraFlow. In most cases, \"exponential\" leads to better results, though \"linear\" has some niche uses. \n\n# Sampler and noise mode list\n\n## Explicit samplers\nBolded samplers are added as options to the sampler dropdown in ComfyUI (an ODE and SDE version for each).\n\n**res_2m**\n\n**res_2/3/5s**\n\n**deis_2/3/4m**\n\nralston_2/3/4s\n\ndpmpp_2/3m\n\ndpmpp_sde_2s\n\ndpmpp_2/3s\n\nmidpoint_2s\n\nheun_2/3s\n\nhouwen-wray_3s\n\nkutta_3s\n\nssprk3_3s\n\nrk38_4s\n\nrk4_4s\n\ndormand-prince_6s\n\ndormand-prince_13s\n\nbogacki-shampine_7s\n\nddim\n\neuler\n\n## Fully Implicit Samplers\n\ngauss-legendre_2/3/4/5s\n\nradau_(i/ii)a_2/3s\n\nlobatto_iii(a/b/c/d/star)_2/3s\n\n## Diagonally Implicit Samplers\n\nkraaijevanger_spijker_2s\n\nqin_zhang_2s\n\npareschi_russo_2s\n\npareschi_russo_alt_2s\n\ncrouzeix_2/3s\n\nirk_exp_diag_2s (features an exponential integrator)\n\n# PREVIOUS FLUX WORKFLOWS\n\n## TXT2IMG:\nThis uses my amateur cell phone lora, which is freely available (https://huggingface.co/ClownsharkBatwing/CSBW_Style/blob/main/amateurphotos_1_amateurcellphonephoto_recapt2.safetensors). It significantly reduces the plastic, blurred look of Flux Dev.\n![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/txt2img%20flux.png)\n![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/txt2img%20WF%20flux.png)\n\n## INPAINTING:\n\n![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/inpainting%20flux.png)\n![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/inpainting%20WF%20flux.png)\n\n## UNSAMPLING (Dual guides with masks):\n\n![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/txt2img%20dual%20guides%20masked%20flux.png)\n![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/txt2img%20dual%20guides%20masked%20WF%20flux.png)\n\n# PREVIOUS WORKFLOWS\n**THE FOLLOWING WORKFLOWS ARE FOR A PREVIOUS VERSION OF THE NODE.** \nThese will still work! You will, however, need to manually delete and recreate the sampler and guide nodes and input the settings as they appear in the screenshots. The layout of the nodes has been changed slightly. 
To replicate their behavior precisely, add truncate_conditioning=true to the new extra_options box in ClownsharKSampler (if that setting was used in the screenshot for the node).\n\n![image](https://github.com/user-attachments/assets/a55ec484-1339-45a2-bcc4-76934f4648d4)\n\n**TXT2IMG Workflow:** \n\n![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/txt2img%20SD35M%20output.png)\n\n![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/txt2img%20SD35M.png)\n\n**TXT2IMG Workflow (Latent Image Guides):**\n![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/txt2img%20guided%20SD35M%20output.png)\n\n![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/txt2img%20guided%20SD35M.png)\n\nInput image:\nhttps://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/txt2img%20guided%20SD35M%20input.png\n\n**TXT2IMG Workflow (Dual Guides with Masking):**\n![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/txt2img%20dual%20guides%20with%20mask%20SD35M%20output.png)\n\n![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/txt2img%20dual%20guides%20with%20mask%20SD35M.png)\n\nInput images and mask:\nhttps://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/txt2img%20dual%20guides%20with%20mask%20SD35M%20input1.png\nhttps://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/txt2img%20dual%20guides%20with%20mask%20SD35M%20input2.png\nhttps://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/txt2img%20dual%20guides%20with%20mask%20SD35M%20mask.png\n\n**IMG2IMG Workflow (Unsampling):** \n\n![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/img2img%20unsampling%20SD35L%20output.png)\n\n![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/img2img%20unsampling%20SD35L.png)\n\nInput image:\nhttps://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/img2img%20unsampling%20SD35L%20input.png\n\n**IMG2IMG Workflow (Unsampling with SDXL):**\n\n![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/img2img%20unsampling%20SDXL%20output.png)\n\n![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/img2img%20unsampling%20SDXL.png)\n\nInput image:\nhttps://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/img2img%20unsampling%20SDXL%20input.png\n\n**IMG2IMG Workflow (Unsampling with latent image guide):**\n\n![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/img2img%20guided%20unsampling%20SD35M%20output.png)\n\n![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/img2img%20guided%20unsampling%20SD35M.png)\n\nInput image:\nhttps://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/img2img%20guided%20unsampling%20SD35M%20input.png\n\n**IMG2IMG Workflow (Unsampling with dual latent image guides and masking):**\n\n![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/img2img%20dual%20guided%20masked%20unsampling%20SD35M%20output.png)\n\n![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/img2img%20dual%20guided%20masked%20unsampling%20SD35M.png)\n\nInput images and 
mask:\nhttps://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/img2img%20dual%20guided%20masked%20unsampling%20SD35M%20input1.png\nhttps://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/img2img%20dual%20guided%20masked%20unsampling%20SD35M%20input2.png\nhttps://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/img2img%20dual%20guided%20masked%20unsampling%20SD35M%20mask.png\n"
  },
  {
    "path": "__init__.py",
    "content": "import importlib\r\nimport os\r\n\r\nfrom . import loaders\r\nfrom . import sigmas\r\nfrom . import conditioning\r\nfrom . import images\r\nfrom . import models\r\nfrom . import helper_sigma_preview_image_preproc\r\nfrom . import nodes_misc\r\n\r\nfrom . import nodes_latents\r\nfrom . import nodes_precision\r\n\r\n\r\nimport torch\r\nfrom math import *\r\n\r\n\r\nfrom comfy.samplers import SchedulerHandler, SCHEDULER_HANDLERS, SCHEDULER_NAMES\r\nnew_scheduler_name = \"bong_tangent\"\r\nif new_scheduler_name not in SCHEDULER_HANDLERS:\r\n    bong_tangent_handler = SchedulerHandler(handler=sigmas.bong_tangent_scheduler, use_ms=True)\r\n    SCHEDULER_HANDLERS[new_scheduler_name] = bong_tangent_handler\r\n    SCHEDULER_NAMES.append(new_scheduler_name)\r\n\r\n\r\nfrom .res4lyf import RESplain\r\n\r\n#torch.use_deterministic_algorithms(True)\r\n#torch.backends.cudnn.deterministic = True\r\n#torch.backends.cudnn.benchmark = False\r\n\r\nres4lyf.init()\r\n\r\ndiscard_penultimate_sigma_samplers = set((\r\n))\r\n\r\n\r\ndef add_samplers():\r\n    from comfy.samplers import KSampler, k_diffusion_sampling\r\n    if hasattr(KSampler, \"DISCARD_PENULTIMATE_SIGMA_SAMPLERS\"):\r\n        KSampler.DISCARD_PENULTIMATE_SIGMA_SAMPLERS |= discard_penultimate_sigma_samplers\r\n    added = 0\r\n    for sampler in extra_samplers: #getattr(self, \"sample_{}\".format(extra_samplers))\r\n        if sampler not in KSampler.SAMPLERS:\r\n            try:\r\n                idx = KSampler.SAMPLERS.index(\"uni_pc_bh2\") # *should* be last item in samplers list\r\n                KSampler.SAMPLERS.insert(idx+1, sampler) # add custom samplers (presumably) to end of list\r\n                setattr(k_diffusion_sampling, \"sample_{}\".format(sampler), extra_samplers[sampler])\r\n                added += 1\r\n            except ValueError as _err:\r\n                pass\r\n    if added > 0:\r\n        import importlib\r\n        importlib.reload(k_diffusion_sampling)\r\n\r\nextra_samplers = {}\r\n\r\nextra_samplers = dict(reversed(extra_samplers.items()))\r\n\r\nNODE_CLASS_MAPPINGS = {\r\n\r\n    \"FluxLoader\"                          : loaders.FluxLoader,\r\n    \"SD35Loader\"                          : loaders.SD35Loader,\r\n    \"ClownModelLoader\"                    : loaders.RES4LYFModelLoader,\r\n    \r\n\r\n    \"TextBox1\"                            : nodes_misc.TextBox1,\r\n    \"TextBox2\"                            : nodes_misc.TextBox2,\r\n    \"TextBox3\"                            : nodes_misc.TextBox3,\r\n    \r\n    \"TextConcatenate\"                     : nodes_misc.TextConcatenate,\r\n    \"TextBoxConcatenate\"                  : nodes_misc.TextBoxConcatenate,\r\n    \r\n    \"TextLoadFile\"                        : nodes_misc.TextLoadFile,\r\n    \"TextShuffle\"                         : nodes_misc.TextShuffle,\r\n    \"TextShuffleAndTruncate\"              : nodes_misc.TextShuffleAndTruncate,\r\n    \"TextTruncateTokens\"                  : nodes_misc.TextTruncateTokens,\r\n\r\n    \"SeedGenerator\"                       : nodes_misc.SeedGenerator,\r\n    \r\n    \"ClownRegionalConditioning\"           : conditioning.ClownRegionalConditioning,\r\n    \"ClownRegionalConditionings\"          : conditioning.ClownRegionalConditionings,\r\n    \r\n    \"ClownRegionalConditioning2\"          : conditioning.ClownRegionalConditioning2,\r\n    \"ClownRegionalConditioning3\"          : conditioning.ClownRegionalConditioning3,\r\n    \r\n    \"ClownRegionalConditioning_AB\"        : 
conditioning.ClownRegionalConditioning_AB,\r\n    \"ClownRegionalConditioning_ABC\"       : conditioning.ClownRegionalConditioning_ABC,\r\n\r\n    \"CLIPTextEncodeFluxUnguided\"          : conditioning.CLIPTextEncodeFluxUnguided,\r\n    \"ConditioningOrthoCollin\"             : conditioning.ConditioningOrthoCollin,\r\n\r\n    \"ConditioningAverageScheduler\"        : conditioning.ConditioningAverageScheduler,\r\n    \"ConditioningMultiply\"                : conditioning.ConditioningMultiply,\r\n    \"ConditioningAdd\"                     : conditioning.ConditioningAdd,\r\n    \"Conditioning Recast FP64\"            : conditioning.Conditioning_Recast64,\r\n    \"StableCascade_StageB_Conditioning64\" : conditioning.StableCascade_StageB_Conditioning64,\r\n    \"ConditioningZeroAndTruncate\"         : conditioning.ConditioningZeroAndTruncate,\r\n    \"ConditioningTruncate\"                : conditioning.ConditioningTruncate,\r\n    \"StyleModelApplyStyle\"                : conditioning.StyleModelApplyStyle,\r\n    \"CrossAttn_EraseReplace_HiDream\"      : conditioning.CrossAttn_EraseReplace_HiDream,\r\n\r\n    \"ConditioningDownsample (T5)\"         : conditioning.ConditioningDownsampleT5,\r\n\r\n    \"ConditioningToBase64\"                : conditioning.ConditioningToBase64,\r\n    \"Base64ToConditioning\"                : conditioning.Base64ToConditioning,\r\n    \r\n    \"ConditioningBatch4\"                  : conditioning.ConditioningBatch4,\r\n    \"ConditioningBatch8\"                  : conditioning.ConditioningBatch8,\r\n    \r\n    \"TemporalMaskGenerator\"               : conditioning.TemporalMaskGenerator,\r\n    \"TemporalSplitAttnMask\"               : conditioning.TemporalSplitAttnMask,\r\n    \"TemporalSplitAttnMask (Midframe)\"    : conditioning.TemporalSplitAttnMask_Midframe,\r\n    \"TemporalCrossAttnMask\"               : conditioning.TemporalCrossAttnMask,\r\n\r\n\r\n\r\n    \"Set Precision\"                       : nodes_precision.set_precision,\r\n    \"Set Precision Universal\"             : nodes_precision.set_precision_universal,\r\n    \"Set Precision Advanced\"              : nodes_precision.set_precision_advanced,\r\n    \r\n    \"LatentUpscaleWithVAE\"                : helper_sigma_preview_image_preproc.LatentUpscaleWithVAE,\r\n    \r\n    \"LatentNoised\"                        : nodes_latents.LatentNoised,\r\n    \"LatentNoiseList\"                     : nodes_latents.LatentNoiseList,\r\n    \"AdvancedNoise\"                       : nodes_latents.AdvancedNoise,\r\n\r\n    \"LatentNoiseBatch_perlin\"             : nodes_latents.LatentNoiseBatch_perlin,\r\n    \"LatentNoiseBatch_fractal\"            : nodes_latents.LatentNoiseBatch_fractal,\r\n    \"LatentNoiseBatch_gaussian\"           : nodes_latents.LatentNoiseBatch_gaussian,\r\n    \"LatentNoiseBatch_gaussian_channels\"  : nodes_latents.LatentNoiseBatch_gaussian_channels,\r\n    \r\n    \"LatentBatch_channels\"                : nodes_latents.LatentBatch_channels,\r\n    \"LatentBatch_channels_16\"             : nodes_latents.LatentBatch_channels_16,\r\n    \r\n    \"Latent Get Channel Means\"            : nodes_latents.latent_get_channel_means,\r\n    \r\n    \"Latent Match Channelwise\"            : nodes_latents.latent_channelwise_match,\r\n    \r\n    \"Latent to RawX\"                      : nodes_latents.latent_to_raw_x,\r\n    \"Latent Clear State Info\"             : nodes_latents.latent_clear_state_info,\r\n    \"Latent Replace State Info\"           : nodes_latents.latent_replace_state_info,\r\n    
\"Latent Display State Info\"           : nodes_latents.latent_display_state_info,\r\n    \"Latent Transfer State Info\"          : nodes_latents.latent_transfer_state_info,\r\n    \"Latent TrimVideo State Info\"         : nodes_latents.TrimVideoLatent_state_info,\r\n    \"Latent to Cuda\"                      : nodes_latents.latent_to_cuda,\r\n    \"Latent Batcher\"                      : nodes_latents.latent_batch,\r\n    \"Latent Normalize Channels\"           : nodes_latents.latent_normalize_channels,\r\n    \"Latent Channels From To\"             : nodes_latents.latent_mean_channels_from_to,\r\n\r\n\r\n\r\n    \"LatentPhaseMagnitude\"                : nodes_latents.LatentPhaseMagnitude,\r\n    \"LatentPhaseMagnitudeMultiply\"        : nodes_latents.LatentPhaseMagnitudeMultiply,\r\n    \"LatentPhaseMagnitudeOffset\"          : nodes_latents.LatentPhaseMagnitudeOffset,\r\n    \"LatentPhaseMagnitudePower\"           : nodes_latents.LatentPhaseMagnitudePower,\r\n    \r\n    \"MaskFloatToBoolean\"                  : nodes_latents.MaskFloatToBoolean,\r\n    \r\n    \"MaskToggle\"                          : nodes_latents.MaskToggle,\r\n    \"MaskEdge\"                            : nodes_latents.MaskEdge,\r\n    #\"MaskEdgeRatio\"                       : nodes_latents.MaskEdgeRatio,\r\n\r\n    \"Frames Masks Uninterpolate\"          : nodes_latents.Frames_Masks_Uninterpolate,\r\n    \"Frames Masks ZeroOut\"                : nodes_latents.Frames_Masks_ZeroOut,\r\n    \"Frames Latent ReverseOrder\"          : nodes_latents.Frames_Latent_ReverseOrder,\r\n\r\n    \r\n    \"EmptyLatentImage64\"                  : nodes_latents.EmptyLatentImage64,\r\n    \"EmptyLatentImageCustom\"              : nodes_latents.EmptyLatentImageCustom,\r\n    \"StableCascade_StageC_VAEEncode_Exact\": nodes_latents.StableCascade_StageC_VAEEncode_Exact,\r\n    \r\n    \r\n    \r\n    \"PrepForUnsampling\"                   : helper_sigma_preview_image_preproc.VAEEncodeAdvanced,\r\n    \"VAEEncodeAdvanced\"                   : helper_sigma_preview_image_preproc.VAEEncodeAdvanced,\r\n    \"VAEStyleTransferLatent\"              : helper_sigma_preview_image_preproc.VAEStyleTransferLatent,\r\n    \r\n    \"SigmasPreview\"                       : helper_sigma_preview_image_preproc.SigmasPreview,\r\n    \"SigmasSchedulePreview\"               : helper_sigma_preview_image_preproc.SigmasSchedulePreview,\r\n\r\n\r\n    \"TorchCompileModelFluxAdv\"            : models.TorchCompileModelFluxAdvanced,\r\n    \"TorchCompileModelAura\"               : models.TorchCompileModelAura,\r\n    \"TorchCompileModelSD35\"               : models.TorchCompileModelSD35,\r\n    \"TorchCompileModels\"                  : models.TorchCompileModels,\r\n    \"ClownpileModelWanVideo\"              : models.ClownpileModelWanVideo,\r\n\r\n\r\n    \"ModelTimestepPatcher\"                : models.ModelSamplingAdvanced,\r\n    \"ModelSamplingAdvanced\"               : models.ModelSamplingAdvanced,\r\n    \"ModelSamplingAdvancedResolution\"     : models.ModelSamplingAdvancedResolution,\r\n    \"FluxGuidanceDisable\"                 : models.FluxGuidanceDisable,\r\n\r\n    \"ReWanPatcher\"                        : models.ReWanPatcher,\r\n    \"ReFluxPatcher\"                       : models.ReFluxPatcher,\r\n    \"ReChromaPatcher\"                     : models.ReChromaPatcher,\r\n    \"ReSD35Patcher\"                       : models.ReSD35Patcher,\r\n    \"ReAuraPatcher\"                       : models.ReAuraPatcher,\r\n    \"ReLTXVPatcher\"                       
: models.ReLTXVPatcher,\r\n    \"ReHiDreamPatcher\"                    : models.ReHiDreamPatcher,\r\n    \"ReSDPatcher\"                         : models.ReSDPatcher,\r\n    \"ReReduxPatcher\"                      : models.ReReduxPatcher,\r\n    \r\n    \"ReWanPatcherAdvanced\"                : models.ReWanPatcherAdvanced,\r\n    \"ReFluxPatcherAdvanced\"               : models.ReFluxPatcherAdvanced,\r\n    \"ReChromaPatcherAdvanced\"             : models.ReChromaPatcherAdvanced,\r\n    \"ReSD35PatcherAdvanced\"               : models.ReSD35PatcherAdvanced,\r\n    \"ReAuraPatcherAdvanced\"               : models.ReAuraPatcherAdvanced,\r\n    \"ReLTXVPatcherAdvanced\"               : models.ReLTXVPatcherAdvanced,\r\n\r\n    \r\n    \"ReHiDreamPatcherAdvanced\"            : models.ReHiDreamPatcherAdvanced,\r\n    \r\n    \"LayerPatcher\"                        : loaders.LayerPatcher,\r\n    \r\n    \"FluxOrthoCFGPatcher\"                 : models.FluxOrthoCFGPatcher,\r\n\r\n    \r\n    \"UNetSave\"                            : models.UNetSave,\r\n\r\n\r\n\r\n    \"Sigmas Recast\"                       : sigmas.set_precision_sigmas,\r\n    \"Sigmas Noise Inversion\"              : sigmas.sigmas_noise_inversion,\r\n    \"Sigmas From Text\"                    : sigmas.sigmas_from_text, \r\n\r\n    \"Sigmas Variance Floor\"               : sigmas.sigmas_variance_floor,\r\n    \"Sigmas Truncate\"                     : sigmas.sigmas_truncate,\r\n    \"Sigmas Start\"                        : sigmas.sigmas_start,\r\n    \"Sigmas Split\"                        : sigmas.sigmas_split,\r\n    \"Sigmas Split Value\"                  : sigmas.sigmas_split_value,\r\n    \"Sigmas Concat\"                       : sigmas.sigmas_concatenate,\r\n    \"Sigmas Pad\"                          : sigmas.sigmas_pad,\r\n    \"Sigmas Unpad\"                        : sigmas.sigmas_unpad,\r\n    \r\n    \"Sigmas SetFloor\"                     : sigmas.sigmas_set_floor,\r\n    \"Sigmas DeleteBelowFloor\"             : sigmas.sigmas_delete_below_floor,\r\n    \"Sigmas DeleteDuplicates\"             : sigmas.sigmas_delete_consecutive_duplicates,\r\n    \"Sigmas Cleanup\"                      : sigmas.sigmas_cleanup,\r\n    \r\n    \"Sigmas Mult\"                         : sigmas.sigmas_mult,\r\n    \"Sigmas Modulus\"                      : sigmas.sigmas_modulus,\r\n    \"Sigmas Quotient\"                     : sigmas.sigmas_quotient,\r\n    \"Sigmas Add\"                          : sigmas.sigmas_add,\r\n    \"Sigmas Power\"                        : sigmas.sigmas_power,\r\n    \"Sigmas Abs\"                          : sigmas.sigmas_abs,\r\n    \r\n    \"Sigmas2 Mult\"                        : sigmas.sigmas2_mult,\r\n    \"Sigmas2 Add\"                         : sigmas.sigmas2_add,\r\n    \r\n    \"Sigmas Rescale\"                      : sigmas.sigmas_rescale,\r\n    \"Sigmas Count\"                        : sigmas.sigmas_count,\r\n    \"Sigmas Resample\"                     : sigmas.sigmas_interpolate,\r\n\r\n    \"Sigmas Math1\"                        : sigmas.sigmas_math1,\r\n    \"Sigmas Math3\"                        : sigmas.sigmas_math3,\r\n\r\n    \"Sigmas Iteration Karras\"             : sigmas.sigmas_iteration_karras,\r\n    \"Sigmas Iteration Polyexp\"            : sigmas.sigmas_iteration_polyexp,\r\n\r\n    # New Sigma Nodes\r\n    \"Sigmas Lerp\"                         : sigmas.sigmas_lerp,\r\n    \"Sigmas InvLerp\"                      : sigmas.sigmas_invlerp,\r\n    \"Sigmas ArcSine\"                      : 
sigmas.sigmas_arcsine,\r\n    \"Sigmas LinearSine\"                   : sigmas.sigmas_linearsine,\r\n    \"Sigmas Append\"                       : sigmas.sigmas_append,\r\n    \"Sigmas ArcCosine\"                    : sigmas.sigmas_arccosine,\r\n    \"Sigmas ArcTangent\"                   : sigmas.sigmas_arctangent,\r\n    \"Sigmas CrossProduct\"                 : sigmas.sigmas_crossproduct,\r\n    \"Sigmas DotProduct\"                   : sigmas.sigmas_dotproduct,\r\n    \"Sigmas Fmod\"                         : sigmas.sigmas_fmod,\r\n    \"Sigmas Frac\"                         : sigmas.sigmas_frac,\r\n    \"Sigmas If\"                           : sigmas.sigmas_if,\r\n    \"Sigmas Logarithm2\"                   : sigmas.sigmas_logarithm2,\r\n    \"Sigmas SmoothStep\"                   : sigmas.sigmas_smoothstep,\r\n    \"Sigmas SquareRoot\"                   : sigmas.sigmas_squareroot,\r\n    \"Sigmas TimeStep\"                     : sigmas.sigmas_timestep,\r\n    \"Sigmas Sigmoid\"                      : sigmas.sigmas_sigmoid,\r\n    \"Sigmas Easing\"                       : sigmas.sigmas_easing,\r\n    \"Sigmas Hyperbolic\"                   : sigmas.sigmas_hyperbolic,\r\n    \"Sigmas Gaussian\"                     : sigmas.sigmas_gaussian,\r\n    \"Sigmas Percentile\"                   : sigmas.sigmas_percentile,\r\n    \"Sigmas KernelSmooth\"                 : sigmas.sigmas_kernel_smooth,\r\n    \"Sigmas QuantileNorm\"                 : sigmas.sigmas_quantile_norm,\r\n    \"Sigmas AdaptiveStep\"                 : sigmas.sigmas_adaptive_step,\r\n    \"Sigmas Chaos\"                        : sigmas.sigmas_chaos,\r\n    \"Sigmas ReactionDiffusion\"            : sigmas.sigmas_reaction_diffusion,\r\n    \"Sigmas Attractor\"                    : sigmas.sigmas_attractor,\r\n    \"Sigmas CatmullRom\"                   : sigmas.sigmas_catmull_rom,\r\n    \"Sigmas LambertW\"                     : sigmas.sigmas_lambert_w,\r\n    \"Sigmas ZetaEta\"                      : sigmas.sigmas_zeta_eta,\r\n    \"Sigmas GammaBeta\"                    : sigmas.sigmas_gamma_beta,\r\n    \r\n    \r\n    \"Sigmas GaussianCDF\"                  : sigmas.sigmas_gaussian_cdf,\r\n    \"Sigmas StepwiseMultirate\"            : sigmas.sigmas_stepwise_multirate,\r\n    \"Sigmas HarmonicDecay\"                : sigmas.sigmas_harmonic_decay,\r\n    \"Sigmas AdaptiveNoiseFloor\"           : sigmas.sigmas_adaptive_noise_floor,\r\n    \"Sigmas CollatzIteration\"             : sigmas.sigmas_collatz_iteration,\r\n    \"Sigmas ConwaySequence\"               : sigmas.sigmas_conway_sequence,\r\n    \"Sigmas GilbreathSequence\"            : sigmas.sigmas_gilbreath_sequence,\r\n    \"Sigmas CNFInverse\"                   : sigmas.sigmas_cnf_inverse,\r\n    \"Sigmas RiemannianFlow\"               : sigmas.sigmas_riemannian_flow,\r\n    \"Sigmas LangevinDynamics\"             : sigmas.sigmas_langevin_dynamics,\r\n    \"Sigmas PersistentHomology\"           : sigmas.sigmas_persistent_homology,\r\n    \"Sigmas NormalizingFlows\"             : sigmas.sigmas_normalizing_flows,\r\n    \r\n    \"ClownScheduler\"                      : sigmas.ClownScheduler, # for modulating parameters\r\n    \"Tan Scheduler\"                       : sigmas.tan_scheduler,\r\n    \"Tan Scheduler 2\"                     : sigmas.tan_scheduler_2stage,\r\n    \"Tan Scheduler 2 Simple\"              : sigmas.tan_scheduler_2stage_simple,\r\n    \"Constant Scheduler\"                  : sigmas.constant_scheduler,\r\n    \"Linear Quadratic Advanced\"           : 
sigmas.linear_quadratic_advanced,\r\n    \r\n    \"SetImageSizeWithScale\"               : nodes_misc.SetImageSizeWithScale,\r\n    \"SetImageSize\"                        : nodes_misc.SetImageSize,\r\n    \r\n    \"Mask Bounding Box Aspect Ratio\"      : images.MaskBoundingBoxAspectRatio,\r\n    \r\n    \r\n    \"Image Get Color Swatches\"            : images.Image_Get_Color_Swatches,\r\n    \"Masks From Color Swatches\"           : images.Masks_From_Color_Swatches,\r\n    \"Masks From Colors\"                   : images.Masks_From_Colors,\r\n    \r\n    \"Masks Unpack 4\"                      : images.Masks_Unpack4,\r\n    \"Masks Unpack 8\"                      : images.Masks_Unpack8,\r\n    \"Masks Unpack 16\"                     : images.Masks_Unpack16,\r\n\r\n    \r\n    \"Image Sharpen FS\"                    : images.ImageSharpenFS,\r\n    \"Image Channels LAB\"                  : images.Image_Channels_LAB,\r\n    \"Image Median Blur\"                   : images.ImageMedianBlur,\r\n    \"Image Gaussian Blur\"                 : images.ImageGaussianBlur,\r\n\r\n    \"Image Pair Split\"                    : images.Image_Pair_Split,\r\n    \"Image Crop Location Exact\"           : images.Image_Crop_Location_Exact,\r\n    \"Film Grain\"                          : images.Film_Grain,\r\n    \"Frequency Separation Linear Light\"   : images.Frequency_Separation_Linear_Light,\r\n    \"Frequency Separation Hard Light\"     : images.Frequency_Separation_Hard_Light,\r\n    \"Frequency Separation Hard Light LAB\" : images.Frequency_Separation_Hard_Light_LAB,\r\n    \r\n    \"Frame Select\"                        : images.Frame_Select,\r\n    \"Frames Slice\"                        : images.Frames_Slice,\r\n    \"Frames Concat\"                       : images.Frames_Concat,\r\n    \r\n    \"Mask Sketch\"                         : images.MaskSketch,\r\n    \r\n    \"Image Grain Add\"                     : images.Image_Grain_Add,\r\n    \"Image Repeat Tile To Size\"           : images.ImageRepeatTileToSize,\r\n\r\n    \"Frames Concat Masks\"                 : nodes_latents.Frames_Concat_Masks,\r\n\r\n\r\n    \"Frame Select Latent\"                 : nodes_latents.Frame_Select_Latent,\r\n    \"Frames Slice Latent\"                 : nodes_latents.Frames_Slice_Latent,\r\n    \"Frames Concat Latent\"                : nodes_latents.Frames_Concat_Latent,\r\n\r\n\r\n    \"Frame Select Latent Raw\"             : nodes_latents.Frame_Select_Latent_Raw,\r\n    \"Frames Slice Latent Raw\"             : nodes_latents.Frames_Slice_Latent_Raw,\r\n    \"Frames Concat Latent Raw\"            : nodes_latents.Frames_Concat_Latent_Raw,\r\n\r\n\r\n\r\n}\r\n\r\n\r\nNODE_DISPLAY_NAME_MAPPINGS = {\r\n    \r\n}\r\n\r\n\r\nWEB_DIRECTORY = \"./web/js\"\r\n\r\n\r\n\r\nflags = {\r\n    \"zampler\"        : False,\r\n    \"beta_samplers\"  : False,\r\n    \"legacy_samplers\": False,\r\n}\r\n\r\n\r\nfile_path = os.path.join(os.path.dirname(__file__), \"zampler_test_code.txt\")\r\nif os.path.exists(file_path):\r\n    try:\r\n        from .zampler import add_zamplers\r\n        NODE_CLASS_MAPPINGS, extra_samplers = add_zamplers(NODE_CLASS_MAPPINGS, extra_samplers)\r\n        flags[\"zampler\"] = True\r\n        RESplain(\"Importing zampler.\")\r\n    except ImportError:\r\n        try:\r\n            import importlib\r\n            for module_name in [\"RES4LYF.zampler\", \"res4lyf.zampler\"]:\r\n                try:\r\n                    zampler_module = importlib.import_module(module_name)\r\n                    add_zamplers = 
zampler_module.add_zamplers\r\n                    NODE_CLASS_MAPPINGS, extra_samplers = add_zamplers(NODE_CLASS_MAPPINGS, extra_samplers)\r\n                    flags[\"zampler\"] = True\r\n                    RESplain(f\"Importing zampler via {module_name}.\")\r\n                    break\r\n                except ImportError:\r\n                    continue\r\n            else:\r\n                raise ImportError(\"Zampler module not found in any path\")\r\n        except Exception as e:\r\n            print(f\"(RES4LYF) Failed to import zamplers: {e}\")\r\n\r\n\r\n\r\ntry:\r\n    from .beta import add_beta\r\n    NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS, extra_samplers = add_beta(NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS, extra_samplers)\r\n    flags[\"beta_samplers\"] = True\r\n    RESplain(\"Importing beta samplers.\")\r\nexcept ImportError:\r\n    try:\r\n        import importlib\r\n        for module_name in [\"RES4LYF.beta\", \"res4lyf.beta\"]:\r\n            try:\r\n                beta_module = importlib.import_module(module_name)\r\n                add_beta = beta_module.add_beta\r\n                NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS, extra_samplers = add_beta(NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS, extra_samplers) # match the three-argument signature used on the primary import path above\r\n                flags[\"beta_samplers\"] = True\r\n                RESplain(f\"Importing beta samplers via {module_name}.\")\r\n                break\r\n            except ImportError:\r\n                continue\r\n        else:\r\n            raise ImportError(\"Beta module not found in any path\")\r\n    except Exception as e:\r\n        print(f\"(RES4LYF) Failed to import beta samplers: {e}\")\r\n\r\n\r\n\r\ntry:\r\n    from .legacy import add_legacy\r\n    NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS, extra_samplers = add_legacy(NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS, extra_samplers)\r\n    flags[\"legacy_samplers\"] = True\r\n    RESplain(\"Importing legacy samplers.\")\r\nexcept ImportError:\r\n    try:\r\n        import importlib\r\n        for module_name in [\"RES4LYF.legacy\", \"res4lyf.legacy\"]:\r\n            try:\r\n                legacy_module = importlib.import_module(module_name)\r\n                add_legacy = legacy_module.add_legacy\r\n                NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS, extra_samplers = add_legacy(NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS, extra_samplers) # match the three-argument signature used on the primary import path above\r\n                flags[\"legacy_samplers\"] = True\r\n                RESplain(f\"Importing legacy samplers via {module_name}.\")\r\n                break\r\n            except ImportError:\r\n                continue\r\n        else:\r\n            raise ImportError(\"Legacy module not found in any path\")\r\n    except Exception as e:\r\n        print(f\"(RES4LYF) Failed to import legacy samplers: {e}\")\r\n\r\n\r\nadd_samplers()\r\n\r\n\r\n__all__ = [\"NODE_CLASS_MAPPINGS\", \"NODE_DISPLAY_NAME_MAPPINGS\", \"WEB_DIRECTORY\"]\r\n\r\n\r\n\r\n\r\n"
  },
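  {
    "path": "examples/scheduler_registration_sketch.py",
    "content": "# Hypothetical sketch, NOT part of RES4LYF: a minimal standalone illustration of\r\n# the registration pattern __init__.py uses to expose a custom scheduler\r\n# (\"bong_tangent\") to ComfyUI. It assumes a ComfyUI environment; the scheduler\r\n# name and the linear schedule below are invented for illustration only.\r\nimport torch\r\n\r\nfrom comfy.samplers import SchedulerHandler, SCHEDULER_HANDLERS, SCHEDULER_NAMES\r\n\r\n\r\ndef demo_linear_scheduler(model_sampling, steps: int) -> torch.Tensor:\r\n    # toy schedule: descend linearly from sigma_max to 0, returning steps+1 sigmas\r\n    sigma_max = float(model_sampling.sigma_max)\r\n    return torch.linspace(sigma_max, 0.0, steps + 1)\r\n\r\n\r\nname = \"demo_linear\"\r\nif name not in SCHEDULER_HANDLERS:  # guard against double registration on reload\r\n    # use_ms=True: ComfyUI calls the handler as handler(model_sampling, steps)\r\n    SCHEDULER_HANDLERS[name] = SchedulerHandler(handler=demo_linear_scheduler, use_ms=True)\r\n    SCHEDULER_NAMES.append(name)\r\n"
  },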
  {
    "path": "attention_masks.py",
    "content": "import torch\r\nimport torch.nn.functional as F\r\n\r\nfrom torch  import Tensor\r\nfrom typing import Optional, Callable, Tuple, Dict, Any, Union, TYPE_CHECKING, TypeVar\r\n\r\nfrom einops import rearrange\r\n\r\nimport copy\r\nimport base64\r\n\r\nimport comfy.supported_models\r\nimport node_helpers\r\nimport gc\r\n\r\n\r\nfrom .sigmas  import get_sigmas\r\n\r\nfrom .helper  import initialize_or_scale, precision_tool, get_res4lyf_scheduler_list\r\nfrom .latents import get_orthogonal, get_collinear, get_edge_mask, checkerboard_variable\r\nfrom .res4lyf import RESplain\r\nfrom .beta.constants import MAX_STEPS\r\n\r\n\r\n\r\ndef fp_not(tensor):\r\n    return 1 - tensor\r\n\r\ndef fp_or(tensor1, tensor2):\r\n    return torch.maximum(tensor1, tensor2)\r\n\r\ndef fp_and(tensor1, tensor2):\r\n    return torch.minimum(tensor1, tensor2)\r\n\r\ndef fp_and2(tensor1, tensor2):\r\n    triu = torch.triu(torch.ones_like(tensor1))\r\n    tril = torch.tril(torch.ones_like(tensor2))\r\n    triu.diagonal().fill_(0.0)\r\n    tril.diagonal().fill_(0.0)\r\n    new_tensor = tensor1 * triu + tensor2 * tril\r\n    new_tensor.diagonal().fill_(1.0)\r\n    \r\n    return new_tensor\r\n\r\n\r\n\r\nclass CoreAttnMask:\r\n    def __init__(self, mask, mask_type=None, start_sigma=None, end_sigma=None, start_block=0, end_block=-1, idle_device='cpu', work_device='cuda'):\r\n        self.mask        = mask.to(idle_device)\r\n        self.start_sigma = start_sigma\r\n        self.end_sigma   = end_sigma\r\n        self.start_block = start_block\r\n        self.end_block   = end_block\r\n        self.work_device = work_device\r\n        self.idle_device = idle_device\r\n        self.mask_type   = mask_type\r\n    \r\n    def set_sigma_range(self, start_sigma, end_sigma):\r\n        self.start_sigma = start_sigma\r\n        self.end_sigma   = end_sigma\r\n        \r\n    def set_block_range(self, start_block, end_block):\r\n        self.start_block = start_block\r\n        self.end_block   = end_block\r\n\r\n    def __call__(self, weight=1.0, mask_type=None, transformer_options=None, block_idx=0):\r\n        \"\"\" \r\n        Return mask if block_idx is in range, sigma passed via transformer_options is in range, else return None. 
If no range is specified, return mask.\r\n        \"\"\"\r\n        if block_idx < self.start_block:\r\n            return None\r\n        if block_idx > self.end_block and self.end_block > 0:\r\n            return None\r\n        \r\n        mask_type = self.mask_type if mask_type is None else mask_type\r\n        \r\n        if transformer_options is None:\r\n            return self.mask.to(self.work_device) * weight if mask_type.startswith(\"gradient\") else self.mask.to(self.work_device) > 0\r\n\r\n        if self.start_sigma is not None and self.end_sigma is not None:\r\n            # only dereference start_sigma once we know a sigma range was actually set;\r\n            # it is None when no range is given, and would crash before reaching the else branch\r\n            sigma = transformer_options['sigmas'][0].to(self.start_sigma.device)\r\n            if self.start_sigma >= sigma > self.end_sigma:\r\n                return self.mask.to(self.work_device) * weight if mask_type.startswith(\"gradient\") else self.mask.to(self.work_device) > 0\r\n        else:\r\n            return self.mask.to(self.work_device) * weight if mask_type.startswith(\"gradient\") else self.mask.to(self.work_device) > 0\r\n        \r\n        return None\r\n\r\n\r\n\r\nclass BaseAttentionMask:\r\n    def __init__(self, mask_type=\"gradient\", edge_width=0, edge_width_list=None, use_self_attn_mask_list=None, dtype=torch.float16):\r\n        self.t                    = 1\r\n        self.img_len              = 0\r\n        self.text_len             = 0\r\n        self.text_off             = 0\r\n\r\n        self.h                    = 0\r\n        self.w                    = 0\r\n    \r\n        self.text_register_tokens = 0\r\n        \r\n        self.context_lens         = []\r\n        self.context_lens_list    = []\r\n        self.masks                = []\r\n        \r\n        self.num_regions          = 0\r\n        \r\n        self.attn_mask            = None\r\n        self.mask_type            = mask_type\r\n        self.edge_width           = edge_width\r\n        \r\n        self.edge_width_list      = edge_width_list\r\n        self.use_self_attn_mask_list = use_self_attn_mask_list\r\n        \r\n        if mask_type == \"gradient\":\r\n            self.dtype            = dtype\r\n        else:\r\n            self.dtype            = torch.bool\r\n\r\n\r\n    def set_latent(self, latent):\r\n        if latent.ndim == 4:\r\n            self.b, self.c, self.h, self.w = latent.shape\r\n            \r\n        elif latent.ndim == 5:\r\n            self.b, self.c, self.t, self.h, self.w = latent.shape\r\n            \r\n        #if not isinstance(self.model_config, comfy.supported_models.Stable_Cascade_C):\r\n        self.h //= 2  # 16x16 PE      patch_size = 2  1024x1024 rgb -> 128x128 16ch latent -> 64x64 img\r\n        self.w //= 2\r\n        \r\n        self.img_len = self.h * self.w        \r\n\r\n    def add_region(self, context, mask):\r\n        self.context_lens.append(context.shape[-2])\r\n        self.masks       .append(mask)\r\n        \r\n        self.text_len = sum(self.context_lens)\r\n        self.text_off = self.text_len\r\n        \r\n        self.num_regions += 1\r\n        \r\n    def add_region_sizes(self, context_size_list, mask):\r\n        \r\n        self.context_lens     .append(sum(context_size_list))\r\n        self.context_lens_list.append(    context_size_list)\r\n        self.masks            .append(mask)\r\n        \r\n        self.text_len = sum(sum(sublist) for sublist in self.context_lens_list)\r\n        self.text_off = self.text_len\r\n\r\n        self.num_regions += 1\r\n        \r\n    def add_regions(self, contexts, masks):\r\n        for 
context, mask in zip(contexts, masks):\r\n            self.add_region(context, mask)\r\n    \r\n    def clear_regions(self):\r\n        self.context_lens  = []\r\n        self.masks         = []\r\n        self.text_len      = 0\r\n        self.text_off      = 0\r\n        self.num_regions   = 0\r\n        \r\n    def generate(self):\r\n        print(\"Initializing ergosphere.\")\r\n        \r\n    def get(self, **kwargs):\r\n        return self.attn_mask(**kwargs)\r\n    \r\n    def attn_mask_recast(self, dtype):\r\n        if self.attn_mask.mask.dtype != dtype:\r\n            self.attn_mask.mask = self.attn_mask.mask.to(dtype)\r\n\r\n\r\n\r\n\r\nclass FullAttentionMask(BaseAttentionMask):\r\n    def generate(self, mask_type=None, dtype=None):\r\n        mask_type = self.mask_type if mask_type is None else mask_type\r\n        dtype     = self.dtype     if dtype     is None else dtype\r\n        text_off  = self.text_off\r\n        text_len  = self.text_len\r\n        img_len   = self.img_len\r\n        t         = self.t\r\n        h         = self.h\r\n        w         = self.w\r\n        \r\n        if self.edge_width_list is None:\r\n            self.edge_width_list = [self.edge_width] * self.num_regions\r\n        \r\n        attn_mask = torch.zeros((text_off+t*img_len, text_len+t*img_len), dtype=dtype)\r\n        \r\n        #cross_self_mask = torch.zeros((t*img_len, t*img_len), dtype=torch.float16)\r\n        \r\n        prev_len = 0\r\n        for context_len, mask in zip(self.context_lens, self.masks):\r\n            \r\n            img2txt_mask    = F.interpolate(mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, context_len)\r\n            img2txt_mask_sq = F.interpolate(mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)\r\n\r\n            curr_len = prev_len + context_len\r\n            \r\n            attn_mask[prev_len:curr_len, prev_len:curr_len] = 1.0                                         # self             TXT 2 TXT\r\n            attn_mask[prev_len:curr_len, text_len:        ] = img2txt_mask.transpose(-1, -2).repeat(1,t)  # cross            TXT 2 regional IMG    # txt2img_mask\r\n            attn_mask[text_off:        , prev_len:curr_len] = img2txt_mask.repeat(t,1)                    # cross   regional IMG 2 TXT\r\n\r\n            attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], fp_and(img2txt_mask_sq.repeat(t,t), img2txt_mask_sq.transpose(-1, -2).repeat(t,t))) # img2txt_mask_sq, txt2img_mask_sq\r\n            \r\n            #cross_self_mask[:,:] = fp_or(cross_self_mask, fp_and(img2txt_mask_sq.repeat(t,t), (1-img2txt_mask_sq).transpose(-1, -2).repeat(t,t)))\r\n            \r\n            prev_len = curr_len\r\n            \r\n        if self.mask_type.endswith(\"_masked\") or self.mask_type.endswith(\"_A\") or self.mask_type.endswith(\"_AB\") or self.mask_type.endswith(\"_AC\") or self.mask_type.endswith(\"_A,unmasked\"):\r\n            img2txt_mask_sq = F.interpolate(self.masks[0].unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)\r\n            attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)\r\n        \r\n        if self.mask_type.endswith(\"_unmasked\") or self.mask_type.endswith(\"_C\") or self.mask_type.endswith(\"_BC\") or self.mask_type.endswith(\"_AC\") or self.mask_type.endswith(\"_B,unmasked\") or 
self.mask_type.endswith(\"_A,unmasked\"):\r\n            img2txt_mask_sq = F.interpolate(self.masks[-1].unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)\r\n            attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)\r\n            \r\n        if self.mask_type.endswith(\"_B\") or self.mask_type.endswith(\"_AB\") or self.mask_type.endswith(\"_BC\") or self.mask_type.endswith(\"_B,unmasked\"):\r\n            img2txt_mask_sq = F.interpolate(self.masks[1].unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)\r\n            attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)\r\n            \r\n        if self.edge_width > 0:\r\n            edge_mask = torch.zeros_like(self.masks[0])\r\n            for mask in self.masks:\r\n                edge_mask = fp_or(edge_mask, get_edge_mask(mask, dilation=self.edge_width))\r\n                \r\n            img2txt_mask_sq = F.interpolate(edge_mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)\r\n            attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)\r\n            \r\n        elif self.edge_width_list is not None:\r\n            edge_mask = torch.zeros_like(self.masks[0])\r\n            \r\n            for mask, edge_width in zip(self.masks, self.edge_width_list):\r\n                if edge_width != 0:\r\n                    edge_mask_new = get_edge_mask(mask, dilation=abs(edge_width))\r\n                    edge_mask     = fp_or(edge_mask, fp_and(edge_mask_new, mask)) #fp_and here is to ensure edge_mask only grows into the region for current mask\r\n                    \r\n                    img2txt_mask_sq = F.interpolate(edge_mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)\r\n                    attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)\r\n            \r\n        if self.use_self_attn_mask_list is not None:\r\n            for mask, use_self_attn_mask in zip(self.masks, self.use_self_attn_mask_list):\r\n                if not use_self_attn_mask:\r\n                    img2txt_mask_sq = F.interpolate(mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)\r\n                    attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)\r\n        \r\n        \r\n        #cmask = torch.zeros((text_len+t*img_len), dtype=torch.bfloat16)\r\n        #cmask[text_len:] = cross_self_mask #cmask[text_len:] + 0.25 * cross_self_mask\r\n        \r\n        #self.cross_self_mask = CoreAttnMask(cmask[None,None,...,None],     mask_type=mask_type)   # shape: 1, 1, txt_len+img_len, 1\r\n        #self.cross_self_mask = CoreAttnMask(cross_self_mask[None,None,...,None],     mask_type=mask_type)   # shape: 1, 1, txt_len+img_len, 1\r\n        #self.cross_self_mask = CoreAttnMask(cross_self_mask[None,None,...,None],     mask_type=mask_type)   # shape: 1, 1, txt_len+img_len, 1\r\n        \r\n        \"\"\"\r\n        cross_self_mask = F.interpolate(self.masks[0].unsqueeze(0).to(torch.bfloat16), (h, w), mode='nearest-exact').to(torch.bfloat16).flatten()#.unsqueeze(1) # .repeat(1, img_len)\r\n\r\n        edge_mask = get_edge_mask(self.masks[0], 
dilation=80)\r\n        edge_mask = F.interpolate(edge_mask.unsqueeze(0).to(torch.bfloat16), (h, w), mode='nearest-exact').flatten().unsqueeze(1).repeat(1, img_len)\r\n\r\n        attn_mask[text_off:, text_len:] = F.interpolate((1-self.masks[0]).unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)\r\n        attn_mask = attn_mask.to(torch.bfloat16)\r\n\r\n        edge_mask = edge_mask.to(torch.bfloat16)\"\"\"\r\n\r\n        self.cross_self_mask = CoreAttnMask(torch.zeros_like(img2txt_mask_sq).to(torch.bfloat16).squeeze(),     mask_type=mask_type)\r\n        \r\n        self.attn_mask       = CoreAttnMask(attn_mask, mask_type=mask_type)\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nclass FullAttentionMaskHiDream(BaseAttentionMask):\r\n    def generate(self, mask_type=None, dtype=None):\r\n        mask_type = self.mask_type if mask_type is None else mask_type\r\n        dtype     = self.dtype     if dtype     is None else dtype\r\n        text_off  = self.text_off\r\n        text_len  = self.text_len\r\n        img_len   = self.img_len\r\n        t         = self.t\r\n        h         = self.h\r\n        w         = self.w\r\n        \r\n        if self.edge_width_list is None:\r\n            self.edge_width_list = [self.edge_width] * self.num_regions\r\n        \r\n        attn_mask = torch.zeros((text_off+t*img_len, text_len+t*img_len), dtype=dtype)\r\n        reg_num  = 0\r\n        prev_len = 0\r\n        for context_len, mask in zip(self.context_lens, self.masks):\r\n\r\n            img2txt_mask_sq = F.interpolate(mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)\r\n\r\n            curr_len = prev_len + context_len\r\n            \r\n            attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], fp_and(img2txt_mask_sq.repeat(t,t), img2txt_mask_sq.transpose(-1,-2).repeat(t,t))) # img2txt_mask_sq, txt2img_mask_sq\r\n            \r\n            prev_len = curr_len\r\n            reg_num += 1\r\n        \r\n        self.self_attn_mask = attn_mask[text_off:, text_len:].clone()\r\n        \r\n        if self.mask_type.endswith(\"_masked\") or self.mask_type.endswith(\"_A\") or self.mask_type.endswith(\"_AB\") or self.mask_type.endswith(\"_AC\") or self.mask_type.endswith(\"_A,unmasked\"):\r\n            img2txt_mask_sq = F.interpolate(self.masks[0].unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)\r\n            attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)\r\n        \r\n        if self.mask_type.endswith(\"_unmasked\") or self.mask_type.endswith(\"_C\") or self.mask_type.endswith(\"_BC\") or self.mask_type.endswith(\"_AC\") or self.mask_type.endswith(\"_B,unmasked\") or self.mask_type.endswith(\"_A,unmasked\"):\r\n            img2txt_mask_sq = F.interpolate(self.masks[-1].unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)\r\n            attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)\r\n            \r\n        if self.mask_type.endswith(\"_B\") or self.mask_type.endswith(\"_AB\") or self.mask_type.endswith(\"_BC\") or self.mask_type.endswith(\"_B,unmasked\"):\r\n            img2txt_mask_sq = F.interpolate(self.masks[1].unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)\r\n     
       attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)\r\n        \r\n        if   self.edge_width > 0:\r\n            edge_mask = torch.zeros_like(self.masks[0])\r\n            for mask in self.masks:\r\n                edge_mask_new = get_edge_mask(mask, dilation=abs(self.edge_width))\r\n                edge_mask = fp_or(edge_mask, edge_mask_new)\r\n                #edge_mask = fp_or(edge_mask, get_edge_mask(mask, dilation=self.edge_width))\r\n                \r\n            img2txt_mask_sq = F.interpolate(edge_mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)\r\n            attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)\r\n            \r\n        elif self.edge_width < 0: # edge masks using cross-attn too\r\n            edge_mask = torch.zeros_like(self.masks[0])\r\n            for mask in self.masks:\r\n                edge_mask = fp_or(edge_mask, get_edge_mask(mask, dilation=abs(self.edge_width)))\r\n                \r\n            img2txt_mask_sq = F.interpolate(edge_mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)\r\n            attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)\r\n        \r\n        elif self.edge_width_list is not None:\r\n            edge_mask = torch.zeros_like(self.masks[0])\r\n            \r\n            for mask, edge_width in zip(self.masks, self.edge_width_list):\r\n                if edge_width != 0:\r\n                    edge_mask_new = get_edge_mask(mask, dilation=abs(edge_width))\r\n                    edge_mask     = fp_or(edge_mask, fp_and(edge_mask_new, mask)) #fp_and here is to ensure edge_mask only grows into the region for current mask\r\n                    \r\n                    img2txt_mask_sq = F.interpolate(edge_mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)\r\n                    attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)\r\n            \r\n        if self.use_self_attn_mask_list is not None:\r\n            for mask, use_self_attn_mask in zip(self.masks, self.use_self_attn_mask_list):\r\n                if not use_self_attn_mask:\r\n                    img2txt_mask_sq = F.interpolate(mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)\r\n                    attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)\r\n\r\n        text_len_t5     = sum(sublist[0] for sublist in self.context_lens_list)\r\n        img2txt_mask_t5 = torch.empty((img_len, text_len_t5)).to(attn_mask)\r\n        offset_t5_start = 0\r\n        reg_num_slice   = 0\r\n        for context_len, mask_slice, edge_width in zip(self.context_lens, self.masks, self.edge_width_list):\r\n            if self.edge_width < 0: # edge masks using cross-attn too\r\n                mask_slice = fp_or(mask_slice, get_edge_mask(mask_slice, dilation=abs(self.edge_width)))\r\n            if edge_width < 0: # edge masks using cross-attn too\r\n                mask_slice = fp_or(mask_slice, get_edge_mask(mask_slice, dilation=abs(edge_width)))\r\n            \r\n            slice_len     = self.context_lens_list[reg_num_slice][0]\r\n            offset_t5_end = offset_t5_start + slice_len\r\n            \r\n            
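# downsample this region's (possibly edge-dilated) mask to the latent grid,\r\n            # flatten to img_len, and tile it across the region's T5 token columns\r\n            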
img2txt_mask_slice = F.interpolate(mask_slice.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, slice_len)\r\n            \r\n            img2txt_mask_t5[:, offset_t5_start:offset_t5_end] = img2txt_mask_slice\r\n            \r\n            offset_t5_start = offset_t5_end\r\n            reg_num_slice += 1\r\n        \r\n        text_len_llama     = sum(sublist[1] for sublist in self.context_lens_list)\r\n        img2txt_mask_llama = torch.empty((img_len, text_len_llama)).to(attn_mask)\r\n        offset_llama_start = 0\r\n        reg_num_slice      = 0\r\n        for context_len, mask_slice, edge_width in zip(self.context_lens, self.masks, self.edge_width_list):\r\n            if self.edge_width < 0: # edge masks using cross-attn too\r\n                mask_slice = fp_or(mask_slice, get_edge_mask(mask_slice, dilation=abs(self.edge_width)))\r\n            if edge_width < 0: # edge masks using cross-attn too\r\n                mask_slice = fp_or(mask_slice, get_edge_mask(mask_slice, dilation=abs(edge_width)))\r\n                \r\n            slice_len        = self.context_lens_list[reg_num_slice][1]\r\n            offset_llama_end = offset_llama_start + slice_len\r\n            \r\n            img2txt_mask_slice = F.interpolate(mask_slice.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, slice_len)\r\n            \r\n            img2txt_mask_llama[:, offset_llama_start:offset_llama_end] = img2txt_mask_slice\r\n            \r\n            offset_llama_start = offset_llama_end\r\n            reg_num_slice += 1\r\n        \r\n        img2txt_mask = torch.cat([img2txt_mask_t5, img2txt_mask_llama.repeat(1,2)], dim=-1)\r\n        \r\n        attn_mask[:-text_off , :-text_len ] = attn_mask[text_off:, text_len:].clone()\r\n        attn_mask[:-text_off ,  -text_len:] = img2txt_mask\r\n        attn_mask[ -text_off:, :-text_len ] = img2txt_mask.transpose(-2,-1)\r\n\r\n        attn_mask[img_len:,img_len:] = 1.0   # txt -> txt \"self-cross\" attn is critical with hidream in most cases. 
checkerboard strategies are generally poo\r\n        \r\n        # mask cross attention between text embeds\r\n        flat = [v for group in zip(*self.context_lens_list) for v in group]\r\n        checkvar = checkerboard_variable(flat)\r\n        attn_mask[img_len:, img_len:] = checkvar\r\n        \r\n        self.attn_mask = CoreAttnMask(attn_mask, mask_type=mask_type)\r\n\r\n\r\n        #flat = [v for group in zip(*self.context_lens_list) for v in group]\r\n\r\n    def gen_edge_mask(self, block_idx):\r\n        mask_type = self.mask_type\r\n        dtype     = self.dtype     \r\n        text_off  = self.text_off\r\n        text_len  = self.text_len\r\n        img_len   = self.img_len\r\n        t         = self.t\r\n        h         = self.h\r\n        w         = self.w\r\n        \r\n        if self.edge_width_list is None:\r\n            return self.attn_mask.mask\r\n        else:\r\n            #attn_mask = self.attn_mask.mask.clone()\r\n            attn_mask = torch.zeros_like(self.attn_mask.mask)\r\n            attn_mask[text_off:, text_len:] = self.self_attn_mask.clone()\r\n            edge_mask = torch.zeros_like(self.masks[0])\r\n            \r\n            for mask, edge_width in zip(self.masks, self.edge_width_list):\r\n                #edge_width *= (block_idx/48)\r\n                edge_width *= torch.rand(1).item()\r\n                edge_width = int(edge_width)\r\n                if edge_width != 0:\r\n                    #edge_width *= (block_idx/48)\r\n                    #edge_width = int(edge_width)\r\n                    edge_mask_new = get_edge_mask(mask, dilation=abs(edge_width))\r\n                    edge_mask     = fp_or(edge_mask, fp_and(edge_mask_new, mask)) #fp_and here is to ensure edge_mask only grows into the region for current mask\r\n                    \r\n                    img2txt_mask_sq = F.interpolate(edge_mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)\r\n                    \r\n                    attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)\r\n\r\n\r\n            if self.use_self_attn_mask_list is not None:\r\n                for mask, use_self_attn_mask in zip(self.masks, self.use_self_attn_mask_list):\r\n                    if not use_self_attn_mask:\r\n                        img2txt_mask_sq = F.interpolate(mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)\r\n                        attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)\r\n\r\n            text_len_t5     = sum(sublist[0] for sublist in self.context_lens_list)\r\n            img2txt_mask_t5 = torch.empty((img_len, text_len_t5)).to(attn_mask)\r\n            offset_t5_start = 0\r\n            reg_num_slice   = 0\r\n            for context_len, mask_slice, edge_width in zip(self.context_lens, self.masks, self.edge_width_list):\r\n                if self.edge_width < 0: # edge masks using cross-attn too\r\n                    mask_slice = fp_or(mask_slice, get_edge_mask(mask_slice, dilation=abs(self.edge_width)))\r\n                if edge_width < 0: # edge masks using cross-attn too\r\n                    mask_slice = fp_or(mask_slice, get_edge_mask(mask_slice, dilation=abs(edge_width)))\r\n                \r\n                slice_len     = self.context_lens_list[reg_num_slice][0]\r\n                offset_t5_end = offset_t5_start + slice_len\r\n                
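# rebuild the per-region T5 cross-attention slices with the freshly dilated masks (same pattern as generate() above)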
\r\n                img2txt_mask_slice = F.interpolate(mask_slice.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, slice_len)\r\n                \r\n                img2txt_mask_t5[:, offset_t5_start:offset_t5_end] = img2txt_mask_slice\r\n                \r\n                offset_t5_start = offset_t5_end\r\n                reg_num_slice += 1\r\n            \r\n            text_len_llama     = sum(sublist[1] for sublist in self.context_lens_list)\r\n            img2txt_mask_llama = torch.empty((img_len, text_len_llama)).to(attn_mask)\r\n            offset_llama_start = 0\r\n            reg_num_slice      = 0\r\n            for context_len, mask_slice, edge_width in zip(self.context_lens, self.masks, self.edge_width_list):\r\n                if self.edge_width < 0: # edge masks using cross-attn too\r\n                    mask_slice = fp_or(mask_slice, get_edge_mask(mask_slice, dilation=abs(self.edge_width)))\r\n                if edge_width < 0: # edge masks using cross-attn too\r\n                    mask_slice = fp_or(mask_slice, get_edge_mask(mask_slice, dilation=abs(edge_width)))\r\n                    \r\n                slice_len        = self.context_lens_list[reg_num_slice][1]\r\n                offset_llama_end = offset_llama_start + slice_len\r\n                \r\n                img2txt_mask_slice = F.interpolate(mask_slice.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, slice_len)\r\n                \r\n                img2txt_mask_llama[:, offset_llama_start:offset_llama_end] = img2txt_mask_slice\r\n                \r\n                offset_llama_start = offset_llama_end\r\n                reg_num_slice += 1\r\n            \r\n            img2txt_mask = torch.cat([img2txt_mask_t5, img2txt_mask_llama.repeat(1,2)], dim=-1)\r\n            \r\n            attn_mask[:-text_off , :-text_len ] = attn_mask[text_off:, text_len:].clone()\r\n            attn_mask[:-text_off ,  -text_len:] = img2txt_mask\r\n            attn_mask[ -text_off:, :-text_len ] = img2txt_mask.transpose(-2,-1)\r\n\r\n            attn_mask[img_len:,img_len:] = 1.0   # txt -> txt \"self-cross\" attn is critical with hidream in most cases. 
checkerboard strategies are generally poo\r\n            \r\n            # mask cross attention between text embeds\r\n            flat = [v for group in zip(*self.context_lens_list) for v in group]\r\n            checkvar = checkerboard_variable(flat)\r\n            attn_mask[img_len:, img_len:] = checkvar\r\n            \r\n            return attn_mask.to('cuda')\r\n        \r\n        \r\nclass RegionalContext:\r\n    def __init__(self, idle_device='cpu', work_device='cuda'):\r\n        self.context  = None\r\n        self.clip_fea = None\r\n        self.llama3   = None\r\n        self.context_list = []\r\n        self.clip_fea_list = []\r\n        self.clip_pooled_list = []\r\n        self.llama3_list = []\r\n        self.t5_list     = []\r\n        self.pooled_output = None\r\n        self.idle_device = idle_device\r\n        self.work_device = work_device\r\n    \r\n    def add_region(self, context, pooled_output=None, clip_fea=None):\r\n        if self.context is not None:\r\n            self.context = torch.cat([self.context, context], dim=1)\r\n        else:\r\n            self.context = context\r\n        self.context_list.append(context)\r\n            \r\n        if pooled_output is not None:\r\n            self.clip_pooled_list.append(pooled_output)\r\n        \r\n        if clip_fea is not None:\r\n            if self.clip_fea is not None:\r\n                self.clip_fea = torch.cat([self.clip_fea, clip_fea], dim=1)\r\n            else:\r\n                self.clip_fea = clip_fea\r\n            self.clip_fea_list.append(clip_fea)\r\n        \r\n\r\n\r\n    def add_region_clip_fea(self, clip_fea):\r\n        if self.clip_fea is not None:\r\n            self.clip_fea = torch.cat([self.clip_fea, clip_fea], dim=1)\r\n        else:\r\n            self.clip_fea = clip_fea\r\n        self.clip_fea_list.append(clip_fea)\r\n\r\n    def add_region_llama3(self, llama3):\r\n        if self.llama3 is not None:\r\n            self.llama3 = torch.cat([self.llama3, llama3], dim=-2)   # base shape 1,32,128,4096\r\n        else:\r\n            self.llama3 = llama3\r\n            \r\n    def add_region_hidream(self, t5, llama3):\r\n        self.t5_list    .append(t5)\r\n        self.llama3_list.append(llama3)\r\n\r\n    def clear_regions(self):\r\n        if self.context is not None:\r\n            del self.context\r\n            self.context = None\r\n        if self.clip_fea is not None:\r\n            del self.clip_fea\r\n            self.clip_fea = None\r\n        if self.llama3 is not None:\r\n            del self.llama3\r\n            self.llama3 = None\r\n            \r\n        del self.t5_list\r\n        del self.llama3_list\r\n        self.t5_list     = []\r\n        self.llama3_list = []\r\n\r\n    def get(self):\r\n        return self.context.to(self.work_device)\r\n\r\n    def get_clip_fea(self):\r\n        if self.clip_fea is not None:\r\n            return self.clip_fea.to(self.work_device)\r\n        else:\r\n            return None\r\n\r\n    def get_llama3(self):\r\n        if self.llama3 is not None:\r\n            return self.llama3.to(self.work_device)\r\n        else:\r\n            return None\r\n\r\n\r\nclass CrossAttentionMask(BaseAttentionMask):\r\n    def generate(self, mask_type=None, dtype=None):\r\n        mask_type = self.mask_type if mask_type is None else mask_type\r\n        dtype     = self.dtype     if dtype     is None else dtype\r\n        text_off  = self.text_off\r\n        text_len  = self.text_len\r\n        img_len   = self.img_len\r\n        t        
 = self.t\r\n        h         = self.h\r\n        w         = self.w\r\n        \r\n        cross_attn_mask = torch.zeros((t * img_len,    text_len), dtype=dtype)\r\n    \r\n        prev_len = 0\r\n        for context_len, mask in zip(self.context_lens, self.masks):\r\n\r\n            cross_mask, self_mask = None, None\r\n            if mask.ndim == 6:\r\n                mask.squeeze_(0)\r\n            if mask.ndim == 3:\r\n                t_mask = mask.shape[0]\r\n            elif mask.ndim == 4:\r\n                if mask.shape[0] > 1:\r\n\r\n                    cross_mask = mask[0]\r\n                    if cross_mask.shape[-3] > self.t:\r\n                        cross_mask = cross_mask[:self.t,...]\r\n                    elif cross_mask.shape[-3] < self.t:\r\n                        cross_mask = F.pad(cross_mask.permute(1,2,0), [0,self.t-cross_mask.shape[-3]], value=0).permute(2,0,1)\r\n\r\n                    t_mask = self.t\r\n                else:\r\n                    t_mask = mask.shape[-3]\r\n                    mask.squeeze_(0)\r\n            elif mask.ndim == 5:\r\n                t_mask = mask.shape[-3]\r\n            else:\r\n                t_mask = 1\r\n                mask.unsqueeze_(0)\r\n                \r\n            if cross_mask is not None:\r\n                img2txt_mask    = F.interpolate(cross_mask.unsqueeze(0).unsqueeze(0).to(torch.float16), (t_mask, h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1)\r\n            else:\r\n                img2txt_mask    = F.interpolate(      mask.unsqueeze(0).unsqueeze(0).to(torch.float16), (t_mask, h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1)\r\n            \r\n            if t_mask == 1: # ...why only if == 1?\r\n                img2txt_mask = img2txt_mask.repeat(1, context_len)   \r\n\r\n            curr_len = prev_len + context_len\r\n            \r\n            if t_mask == 1:\r\n                cross_attn_mask[:, prev_len:curr_len] = img2txt_mask.repeat(t,1)\r\n            else:\r\n                cross_attn_mask[:, prev_len:curr_len] = img2txt_mask\r\n            \r\n            prev_len = curr_len\r\n                            \r\n        self.attn_mask = CoreAttnMask(cross_attn_mask, mask_type=mask_type)\r\n\r\n\r\n\r\n\r\nclass SplitAttentionMask(BaseAttentionMask):\r\n    def generate(self, mask_type=None, dtype=None):\r\n        mask_type = self.mask_type if mask_type is None else mask_type\r\n        dtype     = self.dtype     if dtype     is None else dtype\r\n        text_off  = self.text_off\r\n        text_len  = self.text_len\r\n        img_len   = self.img_len\r\n        t         = self.t\r\n        h         = self.h\r\n        w         = self.w\r\n        \r\n        if self.edge_width_list is None:\r\n            self.edge_width_list = [self.edge_width] * self.num_regions\r\n        \r\n        cross_attn_mask = torch.zeros((t * img_len,    text_len), dtype=dtype)\r\n        self_attn_mask  = torch.zeros((t * img_len, t * img_len), dtype=dtype)\r\n    \r\n        prev_len = 0\r\n        self_masks = []\r\n        for context_len, mask in zip(self.context_lens, self.masks):\r\n\r\n            cross_mask, self_mask = None, None\r\n            if mask.ndim == 6:\r\n                mask.squeeze_(0)\r\n            if mask.ndim == 3:\r\n                t_mask = mask.shape[0]\r\n            elif mask.ndim == 4:\r\n\r\n                if mask.shape[0] > 1:\r\n                    cross_mask = mask[0]\r\n                    if cross_mask.shape[-3] > self.t:\r\n                      
  cross_mask = cross_mask[:self.t,...]\r\n                    elif cross_mask.shape[-3] < self.t:\r\n                        cross_mask = F.pad(cross_mask.permute(1,2,0), [0,self.t-cross_mask.shape[-3]], value=0).permute(2,0,1)\r\n\r\n                    self_mask = mask[1]\r\n                    if self_mask.shape[-3] > self.t:\r\n                        self_mask = self_mask[:self.t,...]\r\n                    elif self_mask.shape[-3] < self.t:\r\n                        self_mask = F.pad(self_mask.permute(1,2,0), [0,self.t-self_mask.shape[-3]], value=0).permute(2,0,1)\r\n\r\n                    t_mask = self.t\r\n                else:\r\n                    t_mask = mask.shape[-3]\r\n                    mask.squeeze_(0)\r\n            elif mask.ndim == 5:\r\n                t_mask = mask.shape[-3]\r\n            else:\r\n                t_mask = 1\r\n                mask.unsqueeze_(0)\r\n                \r\n            if cross_mask is not None:\r\n                img2txt_mask    = F.interpolate(cross_mask.unsqueeze(0).unsqueeze(0).to(torch.float16), (t_mask, h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1)\r\n            else:\r\n                img2txt_mask    = F.interpolate(      mask.unsqueeze(0).unsqueeze(0).to(torch.float16), (t_mask, h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1)\r\n            \r\n            if t_mask == 1: # ...why only if == 1?\r\n                img2txt_mask = img2txt_mask.repeat(1, context_len)   \r\n\r\n            curr_len = prev_len + context_len\r\n            \r\n            if t_mask == 1:\r\n                cross_attn_mask[:, prev_len:curr_len] = img2txt_mask.repeat(t,1)\r\n            else:\r\n                cross_attn_mask[:, prev_len:curr_len] = img2txt_mask\r\n            \r\n            if self_mask is not None:\r\n                img2txt_mask_sq = F.interpolate(self_mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, t_mask * img_len)\r\n            else:\r\n                img2txt_mask_sq = F.interpolate(     mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, t_mask * img_len)\r\n            self_masks.append(img2txt_mask_sq)\r\n            \r\n            if t_mask > 1:\r\n                self_attn_mask = fp_or(self_attn_mask, fp_and(img2txt_mask_sq, img2txt_mask_sq.transpose(-1,-2)))\r\n            else:\r\n                self_attn_mask = fp_or(self_attn_mask, fp_and(img2txt_mask_sq.repeat(t,t), img2txt_mask_sq.transpose(-1,-2).repeat(t,t))) # repeat both operands so the shapes match, as in FullAttentionMask.generate\r\n            \r\n            prev_len = curr_len\r\n\r\n        if self.mask_type.endswith(\"_masked\") or self.mask_type.endswith(\"_A\") or self.mask_type.endswith(\"_AB\") or self.mask_type.endswith(\"_AC\") or self.mask_type.endswith(\"_A,unmasked\"):\r\n            self_attn_mask = fp_or(self_attn_mask, self_masks[0])\r\n        \r\n        if self.mask_type.endswith(\"_unmasked\") or self.mask_type.endswith(\"_C\") or self.mask_type.endswith(\"_BC\") or self.mask_type.endswith(\"_AC\") or self.mask_type.endswith(\"_B,unmasked\") or self.mask_type.endswith(\"_A,unmasked\"):\r\n            self_attn_mask = fp_or(self_attn_mask, self_masks[-1])\r\n            \r\n        if self.mask_type.endswith(\"_B\") or self.mask_type.endswith(\"_AB\") or self.mask_type.endswith(\"_BC\") or self.mask_type.endswith(\"_B,unmasked\"):\r\n            self_attn_mask = fp_or(self_attn_mask, self_masks[1])\r\n            \r\n        if   self.edge_width > 0:\r\n            
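# a global positive edge width dilates every region mask; the union of the\r\n            # dilated bands is opened up to self-attention across region borders\r\n            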
edge_mask = torch.zeros_like(self.masks[0])\r\n            for mask in self.masks:\r\n                edge_mask_new = get_edge_mask(mask, dilation=abs(self.edge_width))\r\n                edge_mask = fp_or(edge_mask, edge_mask_new)\r\n                #edge_mask = fp_or(edge_mask, get_edge_mask(mask, dilation=self.edge_width))\r\n            \r\n            img2txt_mask_sq = F.interpolate(edge_mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, t_mask * img_len)\r\n            self_attn_mask = fp_or(self_attn_mask, img2txt_mask_sq)\r\n            \r\n        elif self.edge_width_list is not None:\r\n            edge_mask = torch.zeros_like(self.masks[0])\r\n            \r\n            for mask, edge_width in zip(self.masks, self.edge_width_list):\r\n                if edge_width != 0:\r\n                    edge_mask_new = get_edge_mask(mask, dilation=abs(edge_width))\r\n                    edge_mask     = fp_or(edge_mask, fp_and(edge_mask_new, mask)) #fp_and here is to ensure edge_mask only grows into the region for current mask\r\n                    \r\n                    img2txt_mask_sq = F.interpolate(edge_mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, t_mask * img_len)\r\n                    self_attn_mask = fp_or(self_attn_mask, img2txt_mask_sq)\r\n            \r\n        if self.use_self_attn_mask_list is not None:\r\n            for mask, use_self_attn_mask in zip(self.masks, self.use_self_attn_mask_list):\r\n                if not use_self_attn_mask:\r\n                    img2txt_mask_sq = F.interpolate(mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, t_mask * img_len)\r\n                    self_attn_mask = fp_or(self_attn_mask, img2txt_mask_sq)\r\n        \r\n        \r\n        attn_mask = torch.cat([cross_attn_mask, self_attn_mask], dim=1)\r\n        \r\n        self.attn_mask = CoreAttnMask(attn_mask, mask_type=mask_type)\r\n\r\n"
  },
  {
    "path": "aura/mmdit.py",
    "content": "#AuraFlow MMDiT\n#Originally written by the AuraFlow Authors\n\nimport math\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\n#from comfy.ldm.modules.attention import optimized_attention\nfrom comfy.ldm.modules.attention import attention_pytorch\n\nimport comfy.ops\nimport comfy.ldm.common_dit\n\nfrom ..helper import ExtraOptions\n\nfrom typing import Dict, Optional, Tuple, List\nfrom ..latents import slerp_tensor, interpolate_spd, tile_latent, untile_latent, gaussian_blur_2d, median_blur_2d\nfrom ..style_transfer import apply_scattersort_masked, apply_scattersort_tiled, adain_seq_inplace, adain_patchwise_row_batch_med, adain_patchwise_row_batch\nfrom einops import rearrange\ndef modulate(x, shift, scale):\n    return x * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)\n\n\ndef find_multiple(n: int, k: int) -> int:\n    if n % k == 0:\n        return n\n    return n + k - (n % k)\n\n\nclass MLP(nn.Module): # not executed directly with ReAura?\n    def __init__(self, dim, hidden_dim=None, dtype=None, device=None, operations=None) -> None:\n        super().__init__()\n        if hidden_dim is None:\n            hidden_dim = 4 * dim\n\n        n_hidden = int(2 * hidden_dim / 3)\n        n_hidden = find_multiple(n_hidden, 256)\n\n        self.c_fc1 = operations.Linear(dim, n_hidden, bias=False, dtype=dtype, device=device)\n        self.c_fc2 = operations.Linear(dim, n_hidden, bias=False, dtype=dtype, device=device)\n        self.c_proj = operations.Linear(n_hidden, dim, bias=False, dtype=dtype, device=device)\n\n    #@torch.compile(mode=\"default\", dynamic=False, fullgraph=False, backend=\"inductor\")\n    def forward(self, x: torch.Tensor) -> torch.Tensor:\n        x = F.silu(self.c_fc1(x)) * self.c_fc2(x)\n        x = self.c_proj(x)\n        return x\n\n\nclass MultiHeadLayerNorm(nn.Module):\n    def __init__(self, hidden_size=None, eps=1e-5, dtype=None, device=None):\n        # Copy pasta from https://github.com/huggingface/transformers/blob/e5f71ecaae50ea476d1e12351003790273c4b2ed/src/transformers/models/cohere/modeling_cohere.py#L78\n\n        super().__init__()\n        self.weight = nn.Parameter(torch.empty(hidden_size, dtype=dtype, device=device))\n        self.variance_epsilon = eps\n\n    #@torch.compile(mode=\"default\", dynamic=False, fullgraph=False, backend=\"inductor\")\n    def forward(self, hidden_states):\n        input_dtype   =  hidden_states.dtype\n        hidden_states =  hidden_states.to(torch.float32)\n        mean          =  hidden_states.mean(-1,                keepdim=True)\n        variance      = (hidden_states - mean).pow(2).mean(-1, keepdim=True)\n        hidden_states = (hidden_states - mean) * torch.rsqrt(\n            variance + self.variance_epsilon\n        )\n        hidden_states = self.weight.to(torch.float32) * hidden_states\n        return hidden_states.to(input_dtype)\n\nclass ReSingleAttention(nn.Module):\n    def __init__(self, dim, n_heads, mh_qknorm=False, dtype=None, device=None, operations=None):\n        super().__init__()\n\n        self.n_heads = n_heads\n        self.head_dim = dim // n_heads\n\n        # this is for cond\n        self.w1q = operations.Linear(dim, dim, bias=False, dtype=dtype, device=device)\n        self.w1k = operations.Linear(dim, dim, bias=False, dtype=dtype, device=device)\n        self.w1v = operations.Linear(dim, dim, bias=False, dtype=dtype, device=device)\n        self.w1o = operations.Linear(dim, dim, bias=False, dtype=dtype, device=device)\n\n        self.q_norm1 = (\n       
     MultiHeadLayerNorm((self.n_heads, self.head_dim), dtype=dtype, device=device)\n            if mh_qknorm\n            else operations.LayerNorm(self.head_dim, elementwise_affine=False, dtype=dtype, device=device)\n        )\n        self.k_norm1 = (\n            MultiHeadLayerNorm((self.n_heads, self.head_dim), dtype=dtype, device=device)\n            if mh_qknorm\n            else operations.LayerNorm(self.head_dim, elementwise_affine=False, dtype=dtype, device=device)\n        )\n\n    #@torch.compile(mode=\"default\", dynamic=False, fullgraph=False, backend=\"inductor\")              # c = 1,4552,3072      #operations.Linear = torch.nn.Linear with recast\n    def forward(self, c, mask=None):\n\n        bsz, seqlen1, _ = c.shape\n\n        q, k, v = self.w1q(c), self.w1k(c), self.w1v(c)\n        q = q.view(bsz, seqlen1, self.n_heads, self.head_dim)\n        k = k.view(bsz, seqlen1, self.n_heads, self.head_dim)\n        v = v.view(bsz, seqlen1, self.n_heads, self.head_dim)\n        q, k = self.q_norm1(q), self.k_norm1(k)\n\n        output = attention_pytorch(q.permute(0, 2, 1, 3), k.permute(0, 2, 1, 3), v.permute(0, 2, 1, 3), self.n_heads, skip_reshape=True, mask=mask)\n        c = self.w1o(output)\n        return c\n\n\n\nclass ReDoubleAttention(nn.Module):\n    def __init__(self, dim, n_heads, mh_qknorm=False, dtype=None, device=None, operations=None):\n        super().__init__()\n\n        self.n_heads  = n_heads\n        self.head_dim = dim // n_heads\n\n        # this is for cond   1 (one) not l (L)\n        self.w1q = operations.Linear(dim, dim, bias=False, dtype=dtype, device=device)\n        self.w1k = operations.Linear(dim, dim, bias=False, dtype=dtype, device=device)\n        self.w1v = operations.Linear(dim, dim, bias=False, dtype=dtype, device=device)\n        self.w1o = operations.Linear(dim, dim, bias=False, dtype=dtype, device=device)\n\n        # this is for x\n        self.w2q = operations.Linear(dim, dim, bias=False, dtype=dtype, device=device)\n        self.w2k = operations.Linear(dim, dim, bias=False, dtype=dtype, device=device)\n        self.w2v = operations.Linear(dim, dim, bias=False, dtype=dtype, device=device)\n        self.w2o = operations.Linear(dim, dim, bias=False, dtype=dtype, device=device)\n\n        self.q_norm1 = (\n            MultiHeadLayerNorm((self.n_heads, self.head_dim), dtype=dtype, device=device)\n            if mh_qknorm\n            else operations.LayerNorm(self.head_dim, elementwise_affine=False, dtype=dtype, device=device)\n        )\n        self.k_norm1 = (\n            MultiHeadLayerNorm((self.n_heads, self.head_dim), dtype=dtype, device=device)\n            if mh_qknorm\n            else operations.LayerNorm(self.head_dim, elementwise_affine=False, dtype=dtype, device=device)\n        )\n\n        self.q_norm2 = (\n            MultiHeadLayerNorm((self.n_heads, self.head_dim), dtype=dtype, device=device)\n            if mh_qknorm\n            else operations.LayerNorm(self.head_dim, elementwise_affine=False, dtype=dtype, device=device)\n        )\n        self.k_norm2 = (\n            MultiHeadLayerNorm((self.n_heads, self.head_dim), dtype=dtype, device=device)\n            if mh_qknorm\n            else operations.LayerNorm(self.head_dim, elementwise_affine=False, dtype=dtype, device=device)\n        )\n\n\n    #@torch.compile(mode=\"default\", dynamic=False, fullgraph=False, backend=\"inductor\")         # c.shape 1,264,3072    x.shape 1,4032,3072   \n    def forward(self, c, x, mask=None):\n\n        bsz, seqlen1, _ = c.shape\n      
  bsz, seqlen2, _ = x.shape\n\n        cq, ck, cv = self.w1q(c), self.w1k(c), self.w1v(c)\n        cq         = cq.view(bsz, seqlen1, self.n_heads, self.head_dim)\n        ck         = ck.view(bsz, seqlen1, self.n_heads, self.head_dim)\n        cv         = cv.view(bsz, seqlen1, self.n_heads, self.head_dim)\n        cq, ck     = self.q_norm1(cq), self.k_norm1(ck)\n\n        xq, xk, xv = self.w2q(x), self.w2k(x), self.w2v(x)\n        xq         = xq.view(bsz, seqlen2, self.n_heads, self.head_dim)\n        xk         = xk.view(bsz, seqlen2, self.n_heads, self.head_dim)\n        xv         = xv.view(bsz, seqlen2, self.n_heads, self.head_dim)\n        xq, xk     = self.q_norm2(xq), self.k_norm2(xk)\n\n        # concat all     q,k,v.shape 1,4299,12,256           cq 1,267,12,256   xq 1,4032,12,256      self.n_heads 12      \n        q, k, v = (\n            torch.cat([cq, xq], dim=1),\n            torch.cat([ck, xk], dim=1),\n            torch.cat([cv, xv], dim=1),\n        )\n        # attn mask would be 4299,4299\n        if mask is not None:\n            pass\n        \n        output = attention_pytorch(q.permute(0, 2, 1, 3), k.permute(0, 2, 1, 3), v.permute(0, 2, 1, 3), self.n_heads, skip_reshape=True, mask=mask)\n\n        c, x = output.split([seqlen1, seqlen2], dim=1)\n        c    = self.w1o(c)\n        x    = self.w2o(x)\n\n        return c, x\n\n\nclass ReMMDiTBlock(nn.Module):\n    def __init__(self, dim, heads=8, global_conddim=1024, is_last=False, dtype=None, device=None, operations=None):\n        super().__init__()\n\n        self.normC1 = operations.LayerNorm(dim, elementwise_affine=False, dtype=dtype, device=device)\n        self.normC2 = operations.LayerNorm(dim, elementwise_affine=False, dtype=dtype, device=device)\n        if not is_last:\n            self.mlpC = MLP(dim, hidden_dim=dim * 4, dtype=dtype, device=device, operations=operations)\n            self.modC = nn.Sequential(\n                nn.SiLU(),\n                operations.Linear(global_conddim, 6 * dim, bias=False, dtype=dtype, device=device),\n            )\n        else:\n            self.modC = nn.Sequential(\n                nn.SiLU(),\n                operations.Linear(global_conddim, 2 * dim, bias=False, dtype=dtype, device=device),\n            )\n\n        self.normX1 = operations.LayerNorm(dim, elementwise_affine=False, dtype=dtype, device=device)\n        self.normX2 = operations.LayerNorm(dim, elementwise_affine=False, dtype=dtype, device=device)\n        self.mlpX = MLP(dim, hidden_dim=dim * 4, dtype=dtype, device=device, operations=operations)\n        self.modX = nn.Sequential(\n            nn.SiLU(),\n            operations.Linear(global_conddim, 6 * dim, bias=False, dtype=dtype, device=device),\n        )\n\n        self.attn = ReDoubleAttention(dim, heads, dtype=dtype, device=device, operations=operations)\n        self.is_last = is_last\n\n    #@torch.compile(mode=\"default\", dynamic=False, fullgraph=False, backend=\"inductor\")                    # MAIN BLOCK\n    def forward(self, c, x, global_cond, mask=None, **kwargs):\n\n        cres, xres = c, x\n\n        cshift_msa, cscale_msa, cgate_msa, cshift_mlp, cscale_mlp, cgate_mlp = (\n            self.modC(global_cond).chunk(6, dim=1)\n        )\n\n        c = modulate(self.normC1(c), cshift_msa, cscale_msa)\n\n        # xpath\n        xshift_msa, xscale_msa, xgate_msa, xshift_mlp, xscale_mlp, xgate_mlp = (\n            self.modX(global_cond).chunk(6, dim=1)\n        )\n\n        x = modulate(self.normX1(x), xshift_msa, xscale_msa)\n\n        
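# adaLN-style conditioning: shift/scale were applied to both streams above; the gates are applied after attention and the MLPs below\n        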
# attention    c.shape 1,520,3072   x.shape 1,6144,3072\n        c, x = self.attn(c, x, mask=mask)\n\n        c = self.normC2(cres + cgate_msa.unsqueeze(1) * c)\n        c = cgate_mlp.unsqueeze(1) * self.mlpC(modulate(c, cshift_mlp, cscale_mlp))\n        c = cres + c\n\n        x = self.normX2(xres + xgate_msa.unsqueeze(1) * x)\n        x = xgate_mlp.unsqueeze(1) * self.mlpX(modulate(x, xshift_mlp, xscale_mlp))\n        x = xres + x\n\n        return c, x\n\nclass ReDiTBlock(nn.Module):\n    # like MMDiTBlock, but it only has X\n    def __init__(self, dim, heads=8, global_conddim=1024, dtype=None, device=None, operations=None):\n        super().__init__()\n\n        self.norm1 = operations.LayerNorm(dim, elementwise_affine=False, dtype=dtype, device=device)\n        self.norm2 = operations.LayerNorm(dim, elementwise_affine=False, dtype=dtype, device=device)\n\n        self.modCX = nn.Sequential(\n            nn.SiLU(),\n            operations.Linear(global_conddim, 6 * dim, bias=False, dtype=dtype, device=device),\n        )\n\n        self.attn = ReSingleAttention(dim, heads, dtype=dtype, device=device, operations=operations)\n        self.mlp = MLP(dim, hidden_dim=dim * 4, dtype=dtype, device=device, operations=operations)\n\n    #@torch.compile(mode=\"default\", dynamic=False, fullgraph=False, backend=\"inductor\")         # cx.shape 1,6664,3072   global_cond.shape 1,3072   mlpout.shape 1,6664,3072       float16\n    def forward(self, cx, global_cond, mask=None, **kwargs):\n        cxres = cx   \n        shift_msa, scale_msa, gate_msa, shift_mlp, scale_mlp, gate_mlp = self.modCX(\n            global_cond\n        ).chunk(6, dim=1)\n        cx = modulate(self.norm1(cx), shift_msa, scale_msa)\n        cx = self.attn(cx, mask=mask)\n        cx = self.norm2(cxres + gate_msa.unsqueeze(1) * cx)\n        mlpout = self.mlp(modulate(cx, shift_mlp, scale_mlp))\n        cx = gate_mlp.unsqueeze(1) * mlpout\n\n        cx = cxres + cx    # residual connection\n\n        return cx\n\n\n\nclass TimestepEmbedder(nn.Module):\n    def __init__(self, hidden_size, frequency_embedding_size=256, dtype=None, device=None, operations=None):\n        super().__init__()\n        self.mlp = nn.Sequential(\n            operations.Linear(frequency_embedding_size, hidden_size, dtype=dtype, device=device),\n            nn.SiLU(),\n            operations.Linear(hidden_size, hidden_size, dtype=dtype, device=device),\n        )\n        self.frequency_embedding_size = frequency_embedding_size\n\n    @staticmethod\n    def timestep_embedding(t, dim, max_period=10000):\n        half = dim // 2\n        freqs = 1000 * torch.exp(\n            -math.log(max_period) * torch.arange(start=0, end=half) / half\n        ).to(t.device)\n        args = t[:, None] * freqs[None]\n        embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)\n        if dim % 2:\n            embedding = torch.cat(\n                [embedding, torch.zeros_like(embedding[:, :1])], dim=-1\n            )\n        return embedding\n\n    #@torch.compile(mode=\"default\", dynamic=False, fullgraph=False, backend=\"inductor\")\n    def forward(self, t, dtype):\n        t_freq = self.timestep_embedding(t, self.frequency_embedding_size).to(dtype)\n        t_emb = self.mlp(t_freq)\n        return t_emb\n\n\nclass ReMMDiT(nn.Module):\n    def __init__(\n        self,\n        in_channels=4,\n        out_channels=4,\n        patch_size=2,\n        dim=3072,\n        n_layers=36,\n        n_double_layers=4,\n        n_heads=12,\n        
global_conddim=3072,\n        cond_seq_dim=2048,\n        max_seq=32 * 32,\n        device=None,\n        dtype=None,\n        operations=None,\n    ):\n        super().__init__()\n        self.dtype = dtype\n\n        self.t_embedder = TimestepEmbedder(global_conddim, dtype=dtype, device=device, operations=operations)\n\n        self.cond_seq_linear = operations.Linear(\n            cond_seq_dim, dim, bias=False, dtype=dtype, device=device\n        )  # linear for something like text sequence.\n        self.init_x_linear = operations.Linear(\n            patch_size * patch_size * in_channels, dim, dtype=dtype, device=device\n        )  # init linear for patchified image.\n\n        self.positional_encoding = nn.Parameter(torch.empty(1, max_seq, dim, dtype=dtype, device=device))\n        self.register_tokens = nn.Parameter(torch.empty(1, 8, dim, dtype=dtype, device=device))\n\n        self.double_layers = nn.ModuleList([])\n        self.single_layers = nn.ModuleList([])\n\n\n        for idx in range(n_double_layers):\n            self.double_layers.append(\n                ReMMDiTBlock(dim, n_heads, global_conddim, is_last=(idx == n_layers - 1), dtype=dtype, device=device, operations=operations)\n            )\n\n        for idx in range(n_double_layers, n_layers):\n            self.single_layers.append(\n                ReDiTBlock(dim, n_heads, global_conddim, dtype=dtype, device=device, operations=operations)\n            )\n\n\n        self.final_linear = operations.Linear(\n            dim, patch_size * patch_size * out_channels, bias=False, dtype=dtype, device=device\n        )\n\n        self.modF = nn.Sequential(\n            nn.SiLU(),\n            operations.Linear(global_conddim, 2 * dim, bias=False, dtype=dtype, device=device),\n        )\n\n        self.out_channels = out_channels\n        self.patch_size = patch_size\n        self.n_double_layers = n_double_layers\n        self.n_layers = n_layers\n\n        self.h_max = round(max_seq**0.5)\n        self.w_max = round(max_seq**0.5)\n\n    @torch.no_grad()\n    def extend_pe(self, init_dim=(16, 16), target_dim=(64, 64)):\n        # extend pe\n        pe_data = self.positional_encoding.data.squeeze(0)[: init_dim[0] * init_dim[1]]\n\n        pe_as_2d = pe_data.view(init_dim[0], init_dim[1], -1).permute(2, 0, 1)\n\n        # now we need to extend this to target_dim. 
for this we will use interpolation.\n        # we will use torch.nn.functional.interpolate\n        pe_as_2d = F.interpolate(\n            pe_as_2d.unsqueeze(0), size=target_dim, mode=\"bilinear\"\n        )\n        pe_new = pe_as_2d.squeeze(0).permute(1, 2, 0).flatten(0, 1)\n        self.positional_encoding.data = pe_new.unsqueeze(0).contiguous()\n        self.h_max, self.w_max = target_dim\n\n    def pe_selection_index_based_on_dim(self, h, w):\n        h_p, w_p            = h // self.patch_size, w // self.patch_size\n        original_pe_indexes = torch.arange(self.positional_encoding.shape[1])\n        original_pe_indexes = original_pe_indexes.view(self.h_max, self.w_max)\n        starth              =  self.h_max // 2 - h_p // 2\n        endh                = starth + h_p\n        startw              = self.w_max // 2 - w_p // 2\n        endw                = startw + w_p\n        original_pe_indexes = original_pe_indexes[\n            starth:endh, startw:endw\n        ]\n        return original_pe_indexes.flatten()\n\n    def unpatchify(self, x, h, w):\n        c = self.out_channels\n        p = self.patch_size\n\n        x = x.reshape(shape=(x.shape[0], h, w, p, p, c))\n        x = torch.einsum(\"nhwpqc->nchpwq\", x)\n        imgs = x.reshape(shape=(x.shape[0], c, h * p, w * p))\n        return imgs\n\n    def patchify(self, x):\n        B, C, H, W = x.size()\n        x = comfy.ldm.common_dit.pad_to_patch_size(x, (self.patch_size, self.patch_size))\n        x = x.view(\n            B,\n            C,\n            (H + 1) // self.patch_size,\n            self.patch_size,\n            (W + 1) // self.patch_size,\n            self.patch_size,\n        )\n        x = x.permute(0, 2, 4, 1, 3, 5).flatten(-3).flatten(1, 2)\n        return x\n\n    def apply_pos_embeds(self, x, h, w):\n        h = (h + 1) // self.patch_size\n        w = (w + 1) // self.patch_size\n        max_dim = max(h, w)\n\n        cur_dim = self.h_max\n        pos_encoding = comfy.ops.cast_to_input(self.positional_encoding.reshape(1, cur_dim, cur_dim, -1), x)\n\n        if max_dim > cur_dim:\n            pos_encoding = F.interpolate(pos_encoding.movedim(-1, 1), (max_dim, max_dim), mode=\"bilinear\").movedim(1, -1)\n            cur_dim = max_dim\n\n        from_h = (cur_dim - h) // 2\n        from_w = (cur_dim - w) // 2\n        pos_encoding = pos_encoding[:,from_h:from_h+h,from_w:from_w+w]\n        return x + pos_encoding.reshape(1, -1, self.positional_encoding.shape[-1])\n\n    def forward(self, x, timestep, context, transformer_options={}, **kwargs):\n        \n        x_orig       = x.clone()\n        context_orig = context.clone()\n        \n        SIGMA = timestep[0].unsqueeze(0) #/ 1000\n        EO = transformer_options.get(\"ExtraOptions\", ExtraOptions(\"\"))\n        if EO is not None:\n            EO.mute = True\n            \n        y0_style_pos        = transformer_options.get(\"y0_style_pos\")\n        y0_style_neg        = transformer_options.get(\"y0_style_neg\")\n\n        y0_style_pos_weight    = transformer_options.get(\"y0_style_pos_weight\", 0.0)\n        y0_style_pos_synweight = transformer_options.get(\"y0_style_pos_synweight\", 0.0)\n        y0_style_pos_synweight *= y0_style_pos_weight\n\n        y0_style_neg_weight    = transformer_options.get(\"y0_style_neg_weight\", 0.0)\n        y0_style_neg_synweight = transformer_options.get(\"y0_style_neg_synweight\", 0.0)\n        y0_style_neg_synweight *= y0_style_neg_weight\n                \n        \n        out_list = []\n        for i in 
range(len(transformer_options['cond_or_uncond'])):\n            UNCOND = transformer_options['cond_or_uncond'][i] == 1\n\n            x       = x_orig[i][None,...].clone()\n            context = context_orig.clone()\n\n            patches_replace = transformer_options.get(\"patches_replace\", {})\n            # patchify x, add PE\n            b, c, h, w = x.shape\n            \n            h_len = ((h + (self.patch_size // 2)) // self.patch_size) # h_len 96\n            w_len = ((w + (self.patch_size // 2)) // self.patch_size) # w_len 96\n\n\n            x = self.init_x_linear(self.patchify(x))  # B, T_x, D\n            x = self.apply_pos_embeds(x, h, w)\n\n            if UNCOND:\n\n                transformer_options['reg_cond_weight'] = transformer_options.get(\"regional_conditioning_weight\", 0.0) \n                transformer_options['reg_cond_floor']  = transformer_options.get(\"regional_conditioning_floor\",  0.0) \n                transformer_options['reg_cond_mask_orig'] = transformer_options.get('regional_conditioning_mask_orig')\n                \n                AttnMask   = transformer_options.get('AttnMask',   None)                    \n                RegContext = transformer_options.get('RegContext', None)\n                \n                if AttnMask is not None and transformer_options['reg_cond_weight'] > 0.0:\n                    AttnMask.attn_mask_recast(x.dtype)\n                    context_tmp = RegContext.get().to(context.dtype)\n                    #context_tmp = 0 * context_tmp.clone()\n                    \n                    # If it's not a perfect factor, repeat and slice:\n                    A = context[i][None,...].clone()\n                    B = context_tmp\n                    context_tmp = A.repeat(1, (B.shape[1] // A.shape[1]) + 1, 1)[:, :B.shape[1], :]\n\n                else:\n                    context_tmp = context[i][None,...].clone()\n                    \n            elif UNCOND == False:\n                transformer_options['reg_cond_weight'] = transformer_options.get(\"regional_conditioning_weight\", 0.0) \n                transformer_options['reg_cond_floor']  = transformer_options.get(\"regional_conditioning_floor\", 0.0) \n                transformer_options['reg_cond_mask_orig'] = transformer_options.get('regional_conditioning_mask_orig')\n                \n                AttnMask   = transformer_options.get('AttnMask',   None)                    \n                RegContext = transformer_options.get('RegContext', None)\n                \n                if AttnMask is not None and transformer_options['reg_cond_weight'] > 0.0:\n                    AttnMask.attn_mask_recast(x.dtype)\n                    context_tmp = RegContext.get().to(context.dtype)\n                else:\n                    context_tmp = context[i][None,...].clone()\n            \n            if context_tmp is None:\n                context_tmp = context[i][None,...].clone()\n                \n\n\n\n            # process conditions for MMDiT Blocks\n            #c_seq = context  # B, T_c, D_c\n            c_seq = context_tmp  # B, T_c, D_c\n\n            t = timestep\n\n            c = self.cond_seq_linear(c_seq)  # B, T_c, D         # 1,256,2048 -> \n            c = torch.cat([comfy.ops.cast_to_input(self.register_tokens, c).repeat(c.size(0), 1, 1), c], dim=1)   #1,256,3072 -> 1,264,3072\n\n            global_cond = self.t_embedder(t, x.dtype)  # B, D\n\n            global_cond = global_cond[i][None]\n\n            \n\n            weight    = 
transformer_options.get('reg_cond_weight', 0.0)\n            floor     = transformer_options.get('reg_cond_floor',  0.0)\n            \n            floor     = min(floor, weight)\n            \n            reg_cond_mask_expanded = transformer_options.get('reg_cond_mask_expanded')\n            reg_cond_mask_expanded = reg_cond_mask_expanded.to(x.dtype).to(x.device) if reg_cond_mask_expanded is not None else None\n            reg_cond_mask = None\n\n\n\n            AttnMask = transformer_options.get('AttnMask')\n            mask     = None\n            if AttnMask is not None and weight > 0:\n                mask                      = AttnMask.get(weight=weight) #mask_obj[0](transformer_options, weight.item())\n                \n                mask_type_bool = isinstance(mask[0][0].item(), bool) if mask is not None else False\n                if not mask_type_bool:\n                    mask = mask.to(x.dtype)\n                    \n                if mask_type_bool:\n                    mask = F.pad(mask, (8, 0, 8, 0), value=True)   # pad for the 8 register tokens prepended to the cond sequence\n                    #mask = F.pad(mask, (0, 8, 0, 8), value=True)\n                else:\n                    mask = F.pad(mask, (8, 0, 8, 0), value=1.0)\n                \n                text_len                  = context.shape[1] # mask_obj[0].text_len\n                \n                mask[text_len:,text_len:] = torch.clamp(mask[text_len:,text_len:], min=floor.to(mask.device))   #ORIGINAL SELF-ATTN REGION BLEED\n                reg_cond_mask = reg_cond_mask_expanded.unsqueeze(0).clone() if reg_cond_mask_expanded is not None else None\n\n            mask_type_bool = isinstance(mask[0][0].item(), bool) if mask is not None else False\n\n            total_layers = len(self.double_layers) + len(self.single_layers)\n\n            blocks_replace = patches_replace.get(\"dit\", {})       # context 1,259,2048      x 1,4032,3072\n            if len(self.double_layers) > 0:\n                for i, layer in enumerate(self.double_layers):\n                    if mask_type_bool and weight < (i / (total_layers-1)) and mask is not None:\n                        mask = mask.to(x.dtype)\n                        \n                    if (\"double_block\", i) in blocks_replace:\n                        def block_wrap(args):\n                            out = {}\n                            out[\"txt\"], out[\"img\"] = layer( args[\"txt\"],\n                                                            args[\"img\"],\n                                                            args[\"vec\"])\n                            return out\n                        out = blocks_replace[(\"double_block\", i)]({\"img\": x, \"txt\": c, \"vec\": global_cond}, {\"original_block\": block_wrap})\n                        c = out[\"txt\"]\n                        x = out[\"img\"]\n                    else:\n                        c, x = layer(c, x, global_cond, mask=mask, **kwargs)\n\n            if len(self.single_layers) > 0:\n                c_len = c.size(1)\n                cx = torch.cat([c, x], dim=1)\n                for i, layer in enumerate(self.single_layers):\n                    if mask_type_bool and weight < ((len(self.double_layers) + i) / (total_layers-1)) and mask is not None:\n                        mask = mask.to(x.dtype)\n                    \n                    if (\"single_block\", i) in blocks_replace:\n                        def block_wrap(args):\n                            out = {}\n    
                        out[\"img\"] = layer(args[\"img\"], args[\"vec\"])\n                            return out\n\n                        out = blocks_replace[(\"single_block\", i)]({\"img\": cx, \"vec\": global_cond}, {\"original_block\": block_wrap})\n                        cx = out[\"img\"]\n                    else:\n                        cx = layer(cx, global_cond, mask=mask, **kwargs)\n\n                x = cx[:, c_len:]\n\n            fshift, fscale = self.modF(global_cond).chunk(2, dim=1)\n\n            x = modulate(x, fshift, fscale)\n            x = self.final_linear(x)\n            x = self.unpatchify(x, (h + 1) // self.patch_size, (w + 1) // self.patch_size)[:,:,:h,:w]\n            \n            out_list.append(x)\n            \n        eps = torch.stack(out_list, dim=0).squeeze(dim=1)\n        \n        \n        \n        \n        \n        \n        freqsep_lowpass_method = transformer_options.get(\"freqsep_lowpass_method\")\n        freqsep_sigma          = transformer_options.get(\"freqsep_sigma\")\n        freqsep_kernel_size    = transformer_options.get(\"freqsep_kernel_size\")\n        freqsep_inner_kernel_size    = transformer_options.get(\"freqsep_inner_kernel_size\")\n        freqsep_stride    = transformer_options.get(\"freqsep_stride\")\n        \n        freqsep_lowpass_weight = transformer_options.get(\"freqsep_lowpass_weight\")\n        freqsep_highpass_weight= transformer_options.get(\"freqsep_highpass_weight\")\n        freqsep_mask           = transformer_options.get(\"freqsep_mask\")\n        \n        dtype = eps.dtype if self.style_dtype is None else self.style_dtype\n        \n        if y0_style_pos is not None:\n            y0_style_pos_weight    = transformer_options.get(\"y0_style_pos_weight\")\n            y0_style_pos_synweight = transformer_options.get(\"y0_style_pos_synweight\")\n            y0_style_pos_synweight *= y0_style_pos_weight\n            y0_style_pos_mask = transformer_options.get(\"y0_style_pos_mask\")\n            y0_style_pos_mask_edge = transformer_options.get(\"y0_style_pos_mask_edge\")\n\n            y0_style_pos = y0_style_pos.to(dtype)\n            x   = x_orig.clone().to(dtype)\n            #x   = x.to(dtype)\n            eps = eps.to(dtype)\n            eps_orig = eps.clone()\n            \n            sigma = SIGMA #t_orig[0].to(torch.float32) / 1000\n            denoised = x - sigma * eps\n            \n            denoised_embed = self.Retrojector.embed(denoised)\n            y0_adain_embed = self.Retrojector.embed(y0_style_pos)\n            \n            if transformer_options['y0_style_method'] == \"scattersort\":\n                tile_h, tile_w = transformer_options.get('y0_style_tile_height'), transformer_options.get('y0_style_tile_width')\n                pad = transformer_options.get('y0_style_tile_padding')\n                if pad is not None and tile_h is not None and tile_w is not None:\n                    \n                    denoised_spatial = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_spatial = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    \n                    if EO(\"scattersort_median_LP\"):\n                        denoised_spatial_LP = median_blur_2d(denoised_spatial, kernel_size=EO(\"scattersort_median_LP\",7))\n                        y0_adain_spatial_LP = median_blur_2d(y0_adain_spatial, kernel_size=EO(\"scattersort_median_LP\",7))\n                        \n                        
denoised_spatial_HP = denoised_spatial - denoised_spatial_LP\n                        y0_adain_spatial_HP = y0_adain_spatial - y0_adain_spatial_LP\n                        \n                        denoised_spatial_LP = apply_scattersort_tiled(denoised_spatial_LP, y0_adain_spatial_LP, tile_h, tile_w, pad)\n                        \n                        denoised_spatial = denoised_spatial_LP + denoised_spatial_HP\n                        denoised_embed = rearrange(denoised_spatial, \"b c h w -> b (h w) c\")\n                    else:\n                        denoised_spatial = apply_scattersort_tiled(denoised_spatial, y0_adain_spatial, tile_h, tile_w, pad)\n                    \n                    denoised_embed = rearrange(denoised_spatial, \"b c h w -> b (h w) c\")\n                    \n                else:\n                    denoised_embed = apply_scattersort_masked(denoised_embed, y0_adain_embed, y0_style_pos_mask, y0_style_pos_mask_edge, h_len, w_len)\n\n\n\n            elif transformer_options['y0_style_method'] == \"AdaIN\":\n                if freqsep_mask is not None:\n                    freqsep_mask = freqsep_mask.view(1, 1, *freqsep_mask.shape[-2:]).float()\n                    freqsep_mask = F.interpolate(freqsep_mask.float(), size=(h_len, w_len), mode='nearest-exact')\n                \n                if hasattr(self, \"adain_tile\"):\n                    tile_h, tile_w = self.adain_tile\n                    \n                    denoised_pretile = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_pretile = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    \n                    if self.adain_flag:\n                        h_off = tile_h // 2\n                        w_off = tile_w // 2\n                        denoised_pretile = denoised_pretile[:,:,h_off:-h_off, w_off:-w_off]\n                        self.adain_flag = False\n                    else:\n                        h_off = 0\n                        w_off = 0\n                        self.adain_flag = True\n                    \n                    tiles,    orig_shape, grid, strides = tile_latent(denoised_pretile, tile_size=(tile_h,tile_w))\n                    y0_tiles, orig_shape, grid, strides = tile_latent(y0_adain_pretile, tile_size=(tile_h,tile_w))\n                    \n                    tiles_out = []\n                    for i in range(tiles.shape[0]):\n                        tile = tiles[i].unsqueeze(0)\n                        y0_tile = y0_tiles[i].unsqueeze(0)\n                        \n                        tile    = rearrange(tile,    \"b c h w -> b (h w) c\", h=tile_h, w=tile_w)\n                        y0_tile = rearrange(y0_tile, \"b c h w -> b (h w) c\", h=tile_h, w=tile_w)\n                        \n                        tile = adain_seq_inplace(tile, y0_tile)\n                        tiles_out.append(rearrange(tile, \"b (h w) c -> b c h w\", h=tile_h, w=tile_w))\n                    \n                    tiles_out_tensor = torch.cat(tiles_out, dim=0)\n                    tiles_out_tensor = untile_latent(tiles_out_tensor, orig_shape, grid, strides)\n\n                    if h_off == 0:\n                        denoised_pretile = tiles_out_tensor\n                    else:\n                        denoised_pretile[:,:,h_off:-h_off, w_off:-w_off] = tiles_out_tensor\n                    denoised_embed = rearrange(denoised_pretile, \"b c h w -> b (h w) c\", h=h_len, w=w_len)\n\n                
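# methods ending in \"pw\" are patchwise AdaIN: statistics are matched per local window rather than globally\n                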
elif freqsep_lowpass_method is not None and freqsep_lowpass_method.endswith(\"pw\"): #EO(\"adain_pw\"):\n\n                    denoised_spatial = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_spatial = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n\n                    if   freqsep_lowpass_method == \"median_pw\":\n                        denoised_spatial_new = adain_patchwise_row_batch_med(denoised_spatial.clone(), y0_adain_spatial.clone().repeat(denoised_spatial.shape[0],1,1,1), sigma=freqsep_sigma, kernel_size=freqsep_kernel_size, use_median_blur=True, lowpass_weight=freqsep_lowpass_weight, highpass_weight=freqsep_highpass_weight)\n                    elif freqsep_lowpass_method == \"gaussian_pw\": \n                        denoised_spatial_new = adain_patchwise_row_batch(denoised_spatial.clone(), y0_adain_spatial.clone().repeat(denoised_spatial.shape[0],1,1,1), sigma=freqsep_sigma, kernel_size=freqsep_kernel_size)\n                    \n                    denoised_embed = rearrange(denoised_spatial_new, \"b c h w -> b (h w) c\", h=h_len, w=w_len)\n\n                elif freqsep_lowpass_method is not None: \n                    denoised_spatial = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_spatial = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    \n                    if   freqsep_lowpass_method == \"median\":\n                        denoised_spatial_LP = median_blur_2d(denoised_spatial, kernel_size=freqsep_kernel_size)\n                        y0_adain_spatial_LP = median_blur_2d(y0_adain_spatial, kernel_size=freqsep_kernel_size)\n                    elif freqsep_lowpass_method == \"gaussian\":\n                        denoised_spatial_LP = gaussian_blur_2d(denoised_spatial, sigma=freqsep_sigma, kernel_size=freqsep_kernel_size)\n                        y0_adain_spatial_LP = gaussian_blur_2d(y0_adain_spatial, sigma=freqsep_sigma, kernel_size=freqsep_kernel_size)\n                    \n                    denoised_spatial_HP = denoised_spatial - denoised_spatial_LP\n                    \n                    if EO(\"adain_fs_uhp\"):\n                        y0_adain_spatial_HP = y0_adain_spatial - y0_adain_spatial_LP\n                        \n                        denoised_spatial_ULP = gaussian_blur_2d(denoised_spatial, sigma=EO(\"adain_fs_uhp_sigma\", 1.0), kernel_size=EO(\"adain_fs_uhp_kernel_size\", 3))\n                        y0_adain_spatial_ULP = gaussian_blur_2d(y0_adain_spatial, sigma=EO(\"adain_fs_uhp_sigma\", 1.0), kernel_size=EO(\"adain_fs_uhp_kernel_size\", 3))\n                        \n                        denoised_spatial_UHP = denoised_spatial_HP  - denoised_spatial_ULP\n                        y0_adain_spatial_UHP = y0_adain_spatial_HP  - y0_adain_spatial_ULP\n                        \n                        #denoised_spatial_HP  = y0_adain_spatial_ULP + denoised_spatial_UHP\n                        denoised_spatial_HP  = denoised_spatial_ULP + y0_adain_spatial_UHP\n                    \n                    denoised_spatial_new = freqsep_lowpass_weight * y0_adain_spatial_LP + freqsep_highpass_weight * denoised_spatial_HP\n                    denoised_embed = rearrange(denoised_spatial_new, \"b c h w -> b (h w) c\", h=h_len, w=w_len)\n\n                else:\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    \n                for 
adain_iter in range(EO(\"style_iter\", 0)):\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    denoised_embed = self.Retrojector.embed(self.Retrojector.unembed(denoised_embed))\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n            \n            elif transformer_options['y0_style_method'] == \"WCT\":\n                self.StyleWCT.set(y0_adain_embed)\n                denoised_embed = self.StyleWCT.get(denoised_embed)\n                \n                if transformer_options.get('y0_standard_guide') is not None:\n                    y0_standard_guide = transformer_options.get('y0_standard_guide')\n                    \n                    y0_standard_guide_embed = self.Retrojector.embed(y0_standard_guide)\n                    f_cs = self.StyleWCT.get(y0_standard_guide_embed)\n                    self.y0_standard_guide = self.Retrojector.unembed(f_cs)\n\n                if transformer_options.get('y0_inv_standard_guide') is not None:\n                    y0_inv_standard_guide = transformer_options.get('y0_inv_standard_guide')\n\n                    y0_inv_standard_guide_embed = self.Retrojector.embed(y0_inv_standard_guide)\n                    f_cs = self.StyleWCT.get(y0_inv_standard_guide_embed)\n                    self.y0_inv_standard_guide = self.Retrojector.unembed(f_cs)\n\n            denoised_approx = self.Retrojector.unembed(denoised_embed)\n            \n            eps = (x - denoised_approx) / sigma\n\n            if not UNCOND:\n                if eps.shape[0] == 2:\n                    eps[1] = eps_orig[1] + y0_style_pos_weight * (eps[1] - eps_orig[1])\n                    eps[0] = eps_orig[0] + y0_style_pos_synweight * (eps[0] - eps_orig[0])\n                else:\n                    eps[0] = eps_orig[0] + y0_style_pos_weight * (eps[0] - eps_orig[0])\n            elif eps.shape[0] == 1 and UNCOND:\n                eps[0] = eps_orig[0] + y0_style_pos_synweight * (eps[0] - eps_orig[0])\n            \n            eps = eps.float()\n        \n        if y0_style_neg is not None:\n            y0_style_neg_weight    = transformer_options.get(\"y0_style_neg_weight\")\n            y0_style_neg_synweight = transformer_options.get(\"y0_style_neg_synweight\")\n            y0_style_neg_synweight *= y0_style_neg_weight\n            y0_style_neg_mask = transformer_options.get(\"y0_style_neg_mask\")\n            y0_style_neg_mask_edge = transformer_options.get(\"y0_style_neg_mask_edge\")\n            \n            y0_style_neg = y0_style_neg.to(dtype)\n            x   = x.to(dtype)\n            eps = eps.to(dtype)\n            eps_orig = eps.clone()\n            \n            sigma = SIGMA #t_orig[0].to(torch.float32) / 1000\n            denoised = x - sigma * eps\n\n            denoised_embed = self.Retrojector.embed(denoised)\n            y0_adain_embed = self.Retrojector.embed(y0_style_neg)\n            \n            if transformer_options['y0_style_method'] == \"scattersort\":\n                tile_h, tile_w = transformer_options.get('y0_style_tile_height'), transformer_options.get('y0_style_tile_width')\n                pad = transformer_options.get('y0_style_tile_padding')\n                if pad is not None and tile_h is not None and tile_w is not None:\n                    \n                    denoised_spatial = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_spatial = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, 
w=w_len)\n                    \n                    denoised_spatial = apply_scattersort_tiled(denoised_spatial, y0_adain_spatial, tile_h, tile_w, pad)\n                    \n                    denoised_embed = rearrange(denoised_spatial, \"b c h w -> b (h w) c\")\n\n                else:\n                    denoised_embed = apply_scattersort_masked(denoised_embed, y0_adain_embed, y0_style_neg_mask, y0_style_neg_mask_edge, h_len, w_len)\n            \n            \n            elif transformer_options['y0_style_method'] == \"AdaIN\":\n                denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                for adain_iter in range(EO(\"style_iter\", 0)):\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    denoised_embed = self.Retrojector.embed(self.Retrojector.unembed(denoised_embed))\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    \n            elif transformer_options['y0_style_method'] == \"WCT\":\n                self.StyleWCT.set(y0_adain_embed)\n                denoised_embed = self.StyleWCT.get(denoised_embed)\n\n            denoised_approx = self.Retrojector.unembed(denoised_embed)\n\n            if UNCOND:\n                eps = (x - denoised_approx) / sigma\n                eps[0] = eps_orig[0] + y0_style_neg_weight * (eps[0] - eps_orig[0])\n                if eps.shape[0] == 2:\n                    eps[1] = eps_orig[1] + y0_style_neg_synweight * (eps[1] - eps_orig[1])\n            elif eps.shape[0] == 1 and not UNCOND:\n                eps[0] = eps_orig[0] + y0_style_neg_synweight * (eps[0] - eps_orig[0])\n            \n            eps = eps.float()\n            \n        return eps\n\n\n\n\ndef unpatchify2(x: torch.Tensor, H: int, W: int, patch_size: int) -> torch.Tensor:\n    \"\"\"\n    Invert patchify:\n      x: (B, N, C*p*p)\n      returns: (B, C, H, W), slicing off any padding\n    \"\"\"\n    B, N, CPP = x.shape\n    p = patch_size\n    Hp = math.ceil(H / p)\n    Wp = math.ceil(W / p)\n    C = CPP // (p * p)\n    assert N == Hp * Wp, f\"Expected N={Hp*Wp} patches, got {N}\"\n\n    x = x.view(B, Hp, Wp, CPP)       \n    x = x.view(B, Hp, Wp, C, p, p)     \n    x = x.permute(0, 3, 1, 4, 2, 5)      \n    imgs = x.reshape(B, C, Hp * p, Wp * p) \n    return imgs[:, :, :H, :W]\n\n"
  },
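  {
    "path": "aura/mmdit_roundtrip_example.py",
    "content": "# Hypothetical usage sketch, not part of the original model code: demonstrates\n# the patch layout that unpatchify2 (aura/mmdit.py) inverts. The manual\n# patchify below mirrors ReMMDiT.patchify's permute/flatten order, assuming no\n# padding (H and W divisible by the patch size). It also assumes the repo root\n# is on sys.path and a ComfyUI install, since aura.mmdit imports comfy.*.\n\nimport torch\n\nfrom aura.mmdit import unpatchify2\n\nB, C, H, W, p = 1, 4, 6, 8, 2\nx = torch.randn(B, C, H, W)\n\n# (B, C, H, W) -> (B, Hp*Wp, C*p*p); the last dim unpacks as (C, p, p),\n# matching the view/permute inside unpatchify2\npatches = (\n    x.view(B, C, H // p, p, W // p, p)\n     .permute(0, 2, 4, 1, 3, 5)\n     .reshape(B, (H // p) * (W // p), C * p * p)\n)\n\n# unpatchify2 should reproduce the original latent exactly\nassert torch.allclose(unpatchify2(patches, H, W, p), x)\nprint(\"round-trip OK:\", tuple(patches.shape), \"->\", tuple(x.shape))\n"
  },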
  {
    "path": "beta/__init__.py",
    "content": "\r\nfrom . import rk_sampler_beta\r\nfrom . import samplers\r\nfrom . import samplers_extensions\r\n\r\n\r\ndef add_beta(NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS, extra_samplers):\r\n    \r\n    NODE_CLASS_MAPPINGS.update({\r\n        #\"SharkSampler\"                    : samplers.SharkSampler,\r\n        #\"SharkSamplerAdvanced_Beta\"       : samplers.SharkSampler, #SharkSamplerAdvanced_Beta,\r\n        \"SharkOptions_Beta\"               : samplers_extensions.SharkOptions_Beta,\r\n        \"ClownOptions_SDE_Beta\"           : samplers_extensions.ClownOptions_SDE_Beta,\r\n        \"ClownOptions_DetailBoost_Beta\"   : samplers_extensions.ClownOptions_DetailBoost_Beta,\r\n        \"ClownGuide_Style_Beta\"           : samplers_extensions.ClownGuide_Style_Beta,\r\n        \"ClownGuide_Style_EdgeWidth\"      : samplers_extensions.ClownGuide_Style_EdgeWidth,\r\n        \"ClownGuide_Style_TileSize\"       : samplers_extensions.ClownGuide_Style_TileSize,\r\n\r\n        \"ClownGuide_Beta\"                 : samplers_extensions.ClownGuide_Beta,\r\n        \"ClownGuides_Beta\"                : samplers_extensions.ClownGuides_Beta,\r\n        \"ClownGuidesAB_Beta\"              : samplers_extensions.ClownGuidesAB_Beta,\r\n        \r\n        \"ClownGuides_Sync\"                : samplers_extensions.ClownGuides_Sync,\r\n        \"ClownGuides_Sync_Advanced\"       : samplers_extensions.ClownGuides_Sync_Advanced,\r\n        \"ClownGuide_FrequencySeparation\"  : samplers_extensions.ClownGuide_FrequencySeparation,\r\n\r\n        \r\n        \"SharkOptions_GuiderInput\"        : samplers_extensions.SharkOptions_GuiderInput,\r\n        \"ClownOptions_ImplicitSteps_Beta\" : samplers_extensions.ClownOptions_ImplicitSteps_Beta,\r\n        \"ClownOptions_Cycles_Beta\"        : samplers_extensions.ClownOptions_Cycles_Beta,\r\n\r\n        \"SharkOptions_GuideCond_Beta\"     : samplers_extensions.SharkOptions_GuideCond_Beta,\r\n        \"SharkOptions_GuideConds_Beta\"    : samplers_extensions.SharkOptions_GuideConds_Beta,\r\n        \r\n        \"ClownOptions_Tile_Beta\"          : samplers_extensions.ClownOptions_Tile_Beta,\r\n        \"ClownOptions_Tile_Advanced_Beta\" : samplers_extensions.ClownOptions_Tile_Advanced_Beta,\r\n\r\n\r\n        \"ClownGuide_Mean_Beta\"            : samplers_extensions.ClownGuide_Mean_Beta,\r\n        \"ClownGuide_AdaIN_MMDiT_Beta\"     : samplers_extensions.ClownGuide_AdaIN_MMDiT_Beta,\r\n        \"ClownGuide_AttnInj_MMDiT_Beta\"   : samplers_extensions.ClownGuide_AttnInj_MMDiT_Beta,\r\n        \"ClownGuide_StyleNorm_Advanced_HiDream\" : samplers_extensions.ClownGuide_StyleNorm_Advanced_HiDream,\r\n\r\n        \"ClownOptions_SDE_Mask_Beta\"      : samplers_extensions.ClownOptions_SDE_Mask_Beta,\r\n        \r\n        \"ClownOptions_StepSize_Beta\"      : samplers_extensions.ClownOptions_StepSize_Beta,\r\n        \"ClownOptions_SigmaScaling_Beta\"  : samplers_extensions.ClownOptions_SigmaScaling_Beta,\r\n\r\n        \"ClownOptions_Momentum_Beta\"      : samplers_extensions.ClownOptions_Momentum_Beta,\r\n        \"ClownOptions_SwapSampler_Beta\"   : samplers_extensions.ClownOptions_SwapSampler_Beta,\r\n        \"ClownOptions_ExtraOptions_Beta\"  : samplers_extensions.ClownOptions_ExtraOptions_Beta,\r\n        \"ClownOptions_Automation_Beta\"    : samplers_extensions.ClownOptions_Automation_Beta,\r\n\r\n        \"SharkOptions_UltraCascade_Latent_Beta\"  : samplers_extensions.SharkOptions_UltraCascade_Latent_Beta,\r\n        \"SharkOptions_StartStep_Beta\"     : 
samplers_extensions.SharkOptions_StartStep_Beta,\r\n        \r\n        \"ClownOptions_Combine\"            : samplers_extensions.ClownOptions_Combine,\r\n        \"ClownOptions_Frameweights\"       : samplers_extensions.ClownOptions_Frameweights,\r\n        \"ClownOptions_FlowGuide\"          : samplers_extensions.ClownOptions_FlowGuide,\r\n        \r\n        \"ClownStyle_Block_MMDiT\"          : samplers_extensions.ClownStyle_Block_MMDiT,\r\n        \"ClownStyle_MMDiT\"                : samplers_extensions.ClownStyle_MMDiT,\r\n        \"ClownStyle_Attn_MMDiT\"           : samplers_extensions.ClownStyle_Attn_MMDiT,\r\n        \"ClownStyle_Boost\"                : samplers_extensions.ClownStyle_Boost,\r\n\r\n        \"ClownStyle_UNet\"                 : samplers_extensions.ClownStyle_UNet,\r\n        \"ClownStyle_Block_UNet\"           : samplers_extensions.ClownStyle_Block_UNet,\r\n        \"ClownStyle_Attn_UNet\"            : samplers_extensions.ClownStyle_Attn_UNet,\r\n        \"ClownStyle_ResBlock_UNet\"        : samplers_extensions.ClownStyle_ResBlock_UNet,\r\n        \"ClownStyle_SpatialBlock_UNet\"    : samplers_extensions.ClownStyle_SpatialBlock_UNet,\r\n        \"ClownStyle_TransformerBlock_UNet\": samplers_extensions.ClownStyle_TransformerBlock_UNet,\r\n\r\n\r\n        \"ClownSamplerSelector_Beta\"       : samplers_extensions.ClownSamplerSelector_Beta,\r\n\r\n        \"SharkSampler_Beta\"               : samplers.SharkSampler_Beta,\r\n        \r\n        \"SharkChainsampler_Beta\"          : samplers.SharkChainsampler_Beta,\r\n\r\n        \"ClownsharKSampler_Beta\"          : samplers.ClownsharKSampler_Beta,\r\n        \"ClownsharkChainsampler_Beta\"     : samplers.ClownsharkChainsampler_Beta,\r\n        \r\n        \"ClownSampler_Beta\"               : samplers.ClownSampler_Beta,\r\n        \"ClownSamplerAdvanced_Beta\"       : samplers.ClownSamplerAdvanced_Beta,\r\n        \r\n        \"BongSampler\"                     : samplers.BongSampler,\r\n\r\n    })\r\n\r\n    extra_samplers.update({\r\n        \"res_2m\"     : sample_res_2m,\r\n        \"res_3m\"     : sample_res_3m,\r\n        \"res_2s\"     : sample_res_2s,\r\n        \"res_3s\"     : sample_res_3s,\r\n        \"res_5s\"     : sample_res_5s,\r\n        \"res_6s\"     : sample_res_6s,\r\n        \"res_2m_ode\" : sample_res_2m_ode,\r\n        \"res_3m_ode\" : sample_res_3m_ode,\r\n        \"res_2s_ode\" : sample_res_2s_ode,\r\n        \"res_3s_ode\" : sample_res_3s_ode,\r\n        \"res_5s_ode\" : sample_res_5s_ode,\r\n        \"res_6s_ode\" : sample_res_6s_ode,\r\n\r\n        \"deis_2m\"    : sample_deis_2m,\r\n        \"deis_3m\"    : sample_deis_3m,\r\n        \"deis_2m_ode\": sample_deis_2m_ode,\r\n        \"deis_3m_ode\": sample_deis_3m_ode,\r\n        \"rk_beta\": rk_sampler_beta.sample_rk_beta,\r\n    })\r\n    \r\n    NODE_DISPLAY_NAME_MAPPINGS.update({\r\n            #\"SharkSampler\"                          : \"SharkSampler\",\r\n            #\"SharkSamplerAdvanced_Beta\"             : \"SharkSamplerAdvanced\",\r\n            \"SharkSampler_Beta\"                     : \"SharkSampler\",\r\n            \"SharkChainsampler_Beta\"                : \"SharkChainsampler\",\r\n            \"BongSampler\"                           : \"BongSampler\",\r\n            \"ClownsharKSampler_Beta\"                : \"ClownsharKSampler\",\r\n            \"ClownsharkChainsampler_Beta\"           : \"ClownsharkChainsampler\",\r\n            \"ClownSampler_Beta\"                     : \"ClownSampler\",\r\n            
\"ClownSamplerAdvanced_Beta\"             : \"ClownSamplerAdvanced\",\r\n            \"ClownGuide_Mean_Beta\"                  : \"ClownGuide Mean\",\r\n            \"ClownGuide_AdaIN_MMDiT_Beta\"           : \"ClownGuide AdaIN (HiDream)\",\r\n            \"ClownGuide_AttnInj_MMDiT_Beta\"         : \"ClownGuide AttnInj (HiDream)\",\r\n            \"ClownGuide_StyleNorm_Advanced_HiDream\" : \"ClownGuide_StyleNorm_Advanced_HiDream\",\r\n            \"ClownGuide_Style_Beta\"                 : \"ClownGuide Style\",\r\n            \"ClownGuide_Beta\"                       : \"ClownGuide\",\r\n            \"ClownGuides_Beta\"                      : \"ClownGuides\",\r\n            \"ClownGuides_Sync\"                      : \"ClownGuides Sync\",\r\n            \"ClownGuides_Sync_Advanced\"             : \"ClownGuides Sync_Advanced\",\r\n\r\n\r\n            \"ClownGuidesAB_Beta\"                    : \"ClownGuidesAB\",\r\n            \"ClownSamplerSelector_Beta\"             : \"ClownSamplerSelector\",\r\n            \"ClownOptions_SDE_Mask_Beta\"            : \"ClownOptions SDE Mask\",\r\n            \"ClownOptions_SDE_Beta\"                 : \"ClownOptions SDE\",\r\n            \"ClownOptions_StepSize_Beta\"            : \"ClownOptions Step Size\",\r\n            \"ClownOptions_DetailBoost_Beta\"         : \"ClownOptions Detail Boost\",\r\n            \"ClownOptions_SigmaScaling_Beta\"        : \"ClownOptions Sigma Scaling\",\r\n            \"ClownOptions_Momentum_Beta\"            : \"ClownOptions Momentum\",\r\n            \"ClownOptions_ImplicitSteps_Beta\"       : \"ClownOptions Implicit Steps\",\r\n            \"ClownOptions_Cycles_Beta\"              : \"ClownOptions Cycles\",\r\n            \"ClownOptions_SwapSampler_Beta\"         : \"ClownOptions Swap Sampler\",\r\n            \"ClownOptions_ExtraOptions_Beta\"        : \"ClownOptions Extra Options\",\r\n            \"ClownOptions_Automation_Beta\"          : \"ClownOptions Automation\",\r\n            \"SharkOptions_GuideCond_Beta\"           : \"SharkOptions Guide Cond\",\r\n            \"SharkOptions_GuideConds_Beta\"          : \"SharkOptions Guide Conds\",\r\n            \"SharkOptions_Beta\"                     : \"SharkOptions\",\r\n            \"SharkOptions_StartStep_Beta\"           : \"SharkOptions Start Step\",\r\n            \"SharkOptions_UltraCascade_Latent_Beta\" : \"SharkOptions UltraCascade Latent\",\r\n            \"ClownOptions_Combine\"                  : \"ClownOptions Combine\",\r\n            \"ClownOptions_Frameweights\"             : \"ClownOptions Frameweights\",\r\n            \"SharkOptions_GuiderInput\"              : \"SharkOptions Guider Input\",\r\n            \"ClownOptions_Tile_Beta\"                : \"ClownOptions Tile\",\r\n            \"ClownOptions_Tile_Advanced_Beta\"       : \"ClownOptions Tile Advanced\",\r\n\r\n    })\r\n    \r\n    return NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS, extra_samplers\r\n\r\n\r\n\r\ndef sample_res_2m(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type=\"res_2m\",)\r\ndef sample_res_3m(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type=\"res_3m\",)\r\ndef sample_res_2s(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, 
callback, disable, rk_type=\"res_2s\",)\r\ndef sample_res_3s(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type=\"res_3s\",)\r\ndef sample_res_5s(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type=\"res_5s\",)\r\ndef sample_res_6s(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type=\"res_6s\",)\r\n\r\ndef sample_res_2m_ode(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type=\"res_2m\", eta=0.0, eta_substep=0.0, )\r\ndef sample_res_3m_ode(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type=\"res_3m\", eta=0.0, eta_substep=0.0, )\r\ndef sample_res_2s_ode(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type=\"res_2s\", eta=0.0, eta_substep=0.0, )\r\ndef sample_res_3s_ode(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type=\"res_3s\", eta=0.0, eta_substep=0.0, )\r\ndef sample_res_5s_ode(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type=\"res_5s\", eta=0.0, eta_substep=0.0, )\r\ndef sample_res_6s_ode(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type=\"res_6s\", eta=0.0, eta_substep=0.0, )\r\n\r\ndef sample_deis_2m(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type=\"deis_2m\",)\r\ndef sample_deis_3m(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type=\"deis_3m\",)\r\n\r\ndef sample_deis_2m_ode(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type=\"deis_2m\", eta=0.0, eta_substep=0.0, )\r\ndef sample_deis_3m_ode(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type=\"deis_3m\", eta=0.0, eta_substep=0.0, )\r\n\r\n"
  },
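  {
    "path": "examples/sampler_wrapper_factory.py",
    "content": "# Illustrative sketch only, not part of the package: the sample_res_* and\n# sample_deis_* wrappers above differ only in rk_type and the eta arguments,\n# so they can be generated by a small factory instead of written out by hand.\n# Assumes rk_sampler_beta.sample_rk_beta with the positional call pattern used\n# above: sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type=...).\n\ndef make_beta_sampler(sample_rk_beta, rk_type, ode=False):\n    \"\"\"Build a sampler with the standard k-diffusion signature.\"\"\"\n    kwargs = {\"rk_type\": rk_type}\n    if ode:\n        kwargs.update(eta=0.0, eta_substep=0.0)  # the *_ode variants zero both etas\n\n    def sampler(model, x, sigmas, extra_args=None, callback=None, disable=None):\n        return sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, **kwargs)\n\n    return sampler\n\n# Hypothetical usage, mirroring the hand-written defs above:\n#   sample_res_2m     = make_beta_sampler(rk_sampler_beta.sample_rk_beta, \"res_2m\")\n#   sample_res_2m_ode = make_beta_sampler(rk_sampler_beta.sample_rk_beta, \"res_2m\", ode=True)\n"
  },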
  {
    "path": "beta/constants.py",
    "content": "MAX_STEPS = 10000\n\n\nIMPLICIT_TYPE_NAMES = [\n    \"rebound\",\n    \"retro-eta\",\n    \"bongmath\",\n    \"predictor-corrector\",\n]\n\n\n\n\nGUIDE_MODE_NAMES_BETA_SIMPLE = [\n    \"flow\",\n    \"sync\",\n    \"lure\",\n    \"data\",\n    \"epsilon\",\n    \"inversion\",\n    \"pseudoimplicit\",\n    \"fully_pseudoimplicit\",\n    \"none\",\n]\n\nFRAME_WEIGHTS_CONFIG_NAMES = [\n    \"frame_weights\",\n    \"frame_weights_inv\",\n    \"frame_targets\"\n]\n\nFRAME_WEIGHTS_DYNAMICS_NAMES = [\n    \"constant\",\n    \"linear\",\n    \"ease_out\",\n    \"ease_in\",\n    \"middle\",\n    \"trough\",\n]\n\n\nFRAME_WEIGHTS_SCHEDULE_NAMES = [\n    \"moderate_early\",\n    \"moderate_late\",\n    \"fast_early\",\n    \"fast_late\",\n    \"slow_early\",\n    \"slow_late\",\n]\n\n\n\nGUIDE_MODE_NAMES_PSEUDOIMPLICIT = [\n    \"pseudoimplicit\",\n    \"pseudoimplicit_cw\",\n    \"pseudoimplicit_projection\",\n    \"pseudoimplicit_projection_cw\",\n    \"fully_pseudoimplicit\",\n    \"fully_pseudoimplicit_projection\",\n    \"fully_pseudoimplicit_cw\", \n    \"fully_pseudoimplicit_projection_cw\"\n]\n"
  },
  {
    "path": "beta/deis_coefficients.py",
    "content": "# Adapted from: https://github.com/zju-pi/diff-sampler/blob/main/gits-main/solver_utils.py\n# fixed the calcs for \"rhoab\" which suffered from an off-by-one error and made some other minor corrections\n\nimport torch\nimport numpy as np\n\n# A pytorch reimplementation of DEIS (https://github.com/qsh-zh/deis).\n#############################\n### Utils for DEIS solver ###\n#############################\n#----------------------------------------------------------------------------\n# Transfer from the input time (sigma) used in EDM to that (t) used in DEIS.\n\ndef edm2t(edm_steps, epsilon_s=1e-3, sigma_min=0.002, sigma_max=80):\n    vp_sigma = lambda beta_d, beta_min: lambda t: (np.e ** (0.5 * beta_d * (t ** 2) + beta_min * t) - 1) ** 0.5\n    vp_sigma_inv = lambda beta_d, beta_min: lambda sigma: ((beta_min ** 2 + 2 * beta_d * (sigma ** 2 + 1).log()).sqrt() - beta_min) / beta_d\n    vp_beta_d = 2 * (np.log(torch.tensor(sigma_min).cpu() ** 2 + 1) / epsilon_s - np.log(torch.tensor(sigma_max).cpu() ** 2 + 1)) / (epsilon_s - 1)\n    vp_beta_min = np.log(torch.tensor(sigma_max).cpu() ** 2 + 1) - 0.5 * vp_beta_d\n    t_steps = vp_sigma_inv(vp_beta_d.clone().detach().cpu(), vp_beta_min.clone().detach().cpu())(edm_steps.clone().detach().cpu())\n    return t_steps, vp_beta_min, vp_beta_d + vp_beta_min\n\n#----------------------------------------------------------------------------\n\ndef cal_poly(prev_t, j, taus):\n    poly = 1\n    for k in range(prev_t.shape[0]):\n        if k == j:\n            continue\n        poly *= (taus - prev_t[k]) / (prev_t[j] - prev_t[k])\n    return poly\n\n#----------------------------------------------------------------------------\n# Transfer from t to alpha_t.\n\ndef t2alpha_fn(beta_0, beta_1, t):\n    return torch.exp(-0.5 * t ** 2 * (beta_1 - beta_0) - t * beta_0)\n\n#----------------------------------------------------------------------------\n\ndef cal_integrand(beta_0, beta_1, taus):\n    with torch.inference_mode(mode=False):\n        taus = taus.clone()\n        beta_0 = beta_0.clone()\n        beta_1 = beta_1.clone()\n        with torch.enable_grad():\n            taus.requires_grad_(True)\n            alpha = t2alpha_fn(beta_0, beta_1, taus)\n            log_alpha = alpha.log()\n            log_alpha.sum().backward()\n            d_log_alpha_dtau = taus.grad\n    integrand = -0.5 * d_log_alpha_dtau / torch.sqrt(alpha * (1 - alpha))\n    return integrand\n\n#----------------------------------------------------------------------------\n\ndef get_deis_coeff_list(t_steps, max_order, N=10000, deis_mode='tab'):\n    \"\"\"\n    Get the coefficient list for DEIS sampling.\n\n    Args:\n        t_steps: A pytorch tensor. The time steps for sampling.\n        max_order: A `int`. Maximum order of the solver. 1 <= max_order <= 4\n        N: A `int`. Use how many points to perform the numerical integration when deis_mode=='tab'.\n        deis_mode: A `str`. Select between 'tab' and 'rhoab'. Type of DEIS.\n    Returns:\n        A pytorch tensor. 
A batch of generated samples or sampling trajectories if return_inters=True.\n    \"\"\"\n    if deis_mode == 'tab':\n        t_steps, beta_0, beta_1 = edm2t(t_steps)\n        C = []\n        for i, (t_cur, t_next) in enumerate(zip(t_steps[:-1], t_steps[1:])):\n            order = min(i+1, max_order)\n            if order == 1:\n                C.append([])\n            else:\n                taus = torch.linspace(t_cur, t_next, N)   # split the interval for integral approximation\n                dtau = (t_next - t_cur) / N\n                prev_t = t_steps[[i - k for k in range(order)]]\n                coeff_temp = []\n                integrand = cal_integrand(beta_0, beta_1, taus)\n                for j in range(order):\n                    poly = cal_poly(prev_t, j, taus)\n                    coeff_temp.append(torch.sum(integrand * poly) * dtau)\n                C.append(coeff_temp)\n\n    elif deis_mode == 'rhoab':\n        # Analytical solution, second order\n        def get_def_integral_2(a, b, start, end, c):\n            coeff = (end**3 - start**3) / 3 - (end**2 - start**2) * (a + b) / 2 + (end - start) * a * b\n            return coeff / ((c - a) * (c - b))\n\n        # Analytical solution, third order\n        def get_def_integral_3(a, b, c, start, end, d):\n            coeff = (end**4 - start**4) / 4 - (end**3 - start**3) * (a + b + c) / 3 \\\n                    + (end**2 - start**2) * (a*b + a*c + b*c) / 2 - (end - start) * a * b * c\n            return coeff / ((d - a) * (d - b) * (d - c))\n\n        C = []\n        for i, (t_cur, t_next) in enumerate(zip(t_steps[:-1], t_steps[1:])):\n            order = min(i+1, max_order) #fixed order calcs\n            if order == 1:\n                C.append([])\n            else:\n                prev_t = t_steps[[i - k for k in range(order+1)]]\n                if order == 2:\n                    coeff_cur = ((t_next - prev_t[1])**2 - (t_cur - prev_t[1])**2) / (2 * (t_cur - prev_t[1]))\n                    coeff_prev1 = (t_next - t_cur)**2 / (2 * (prev_t[1] - t_cur))\n                    coeff_temp = [coeff_cur, coeff_prev1]\n                elif order == 3:\n                    coeff_cur = get_def_integral_2(prev_t[1], prev_t[2], t_cur, t_next, t_cur)\n                    coeff_prev1 = get_def_integral_2(t_cur, prev_t[2], t_cur, t_next, prev_t[1])\n                    coeff_prev2 = get_def_integral_2(t_cur, prev_t[1], t_cur, t_next, prev_t[2])\n                    coeff_temp = [coeff_cur, coeff_prev1, coeff_prev2]\n                elif order == 4:\n                    coeff_cur = get_def_integral_3(prev_t[1], prev_t[2], prev_t[3], t_cur, t_next, t_cur)\n                    coeff_prev1 = get_def_integral_3(t_cur, prev_t[2], prev_t[3], t_cur, t_next, prev_t[1])\n                    coeff_prev2 = get_def_integral_3(t_cur, prev_t[1], prev_t[3], t_cur, t_next, prev_t[2])\n                    coeff_prev3 = get_def_integral_3(t_cur, prev_t[1], prev_t[2], t_cur, t_next, prev_t[3])\n                    coeff_temp = [coeff_cur, coeff_prev1, coeff_prev2, coeff_prev3]\n                C.append(coeff_temp)\n \n    return C\n\n"
  },
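  {
    "path": "examples/deis_coefficients_demo.py",
    "content": "# Illustrative sketch only, not part of the package: computes DEIS\n# coefficients for a toy sigma schedule with beta/deis_coefficients.py.\n# Assumes torch is installed and that the script is run from the repository\n# root; the import path below is implied by the file layout, not guaranteed.\n\nimport torch\n\nfrom beta.deis_coefficients import get_deis_coeff_list  # hypothetical path\n\n# A short descending sigma schedule (sigma_max -> sigma_min), EDM style.\nt_steps = torch.linspace(14.6146, 0.0292, 10)\n\n# 'rhoab' uses the analytical integrals; 'tab' instead integrates numerically\n# over N points per interval, e.g.:\n#   get_deis_coeff_list(t_steps, max_order=3, N=1000, deis_mode='tab')\ncoeffs = get_deis_coeff_list(t_steps, max_order=3, deis_mode='rhoab')\n\nfor i, c in enumerate(coeffs):\n    # First-order steps store an empty list; higher orders store one\n    # coefficient per retained model output (current step + history).\n    print(f\"step {i}: {[float(v) for v in c]}\")\n"
  },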
  {
    "path": "beta/noise_classes.py",
    "content": "import torch\r\nimport torch.nn.functional as F\r\n\r\nfrom torch               import nn, Tensor, Generator, lerp\r\nfrom torch.nn.functional import unfold\r\nfrom torch.distributions import StudentT, Laplace\r\n\r\nimport numpy as np\r\nimport pywt\r\nimport functools\r\n\r\nfrom typing import Callable, Tuple\r\nfrom math   import pi\r\n\r\nfrom comfy.k_diffusion.sampling import BrownianTreeNoiseSampler\r\n\r\nfrom ..res4lyf import RESplain\r\n\r\n# Set this to \"True\" if you have installed OpenSimplex. Recommended to install without dependencies due to conflicting packages: pip3 install opensimplex --no-deps \r\nOPENSIMPLEX_ENABLE = False\r\n\r\nif OPENSIMPLEX_ENABLE:\r\n    from opensimplex import OpenSimplex\r\n\r\nclass PrecisionTool:\r\n    def __init__(self, cast_type='fp64'):\r\n        self.cast_type = cast_type\r\n\r\n    def cast_tensor(self, func):\r\n        @functools.wraps(func)\r\n        def wrapper(*args, **kwargs):\r\n            if self.cast_type not in ['fp64', 'fp32', 'fp16']:\r\n                return func(*args, **kwargs)\r\n\r\n            target_device = None\r\n            for arg in args:\r\n                if torch.is_tensor(arg):\r\n                    target_device = arg.device\r\n                    break\r\n            if target_device is None:\r\n                for v in kwargs.values():\r\n                    if torch.is_tensor(v):\r\n                        target_device = v.device\r\n                        break\r\n            \r\n        # recursively zs_recast tensors in nested dictionaries\r\n            def cast_and_move_to_device(data):\r\n                if torch.is_tensor(data):\r\n                    if self.cast_type == 'fp64':\r\n                        return data.to(torch.float64).to(target_device)\r\n                    elif self.cast_type == 'fp32':\r\n                        return data.to(torch.float32).to(target_device)\r\n                    elif self.cast_type == 'fp16':\r\n                        return data.to(torch.float16).to(target_device)\r\n                elif isinstance(data, dict):\r\n                    return {k: cast_and_move_to_device(v) for k, v in data.items()}\r\n                return data\r\n\r\n            new_args = [cast_and_move_to_device(arg) for arg in args]\r\n            new_kwargs = {k: cast_and_move_to_device(v) for k, v in kwargs.items()}\r\n            \r\n            return func(*new_args, **new_kwargs)\r\n        return wrapper\r\n\r\n    def set_cast_type(self, new_value):\r\n        if new_value in ['fp64', 'fp32', 'fp16']:\r\n            self.cast_type = new_value\r\n        else:\r\n            self.cast_type = 'fp64'\r\n\r\nprecision_tool = PrecisionTool(cast_type='fp64')\r\n\r\n\r\ndef noise_generator_factory(cls, **fixed_params):\r\n    def create_instance(**kwargs):\r\n        params = {**fixed_params, **kwargs}\r\n        return cls(**params)\r\n    return create_instance\r\n\r\ndef like(x):\r\n    return {'size': x.shape, 'dtype': x.dtype, 'layout': x.layout, 'device': x.device}\r\n\r\ndef scale_to_range(x, scaled_min = -1.73, scaled_max = 1.73): #1.73 is roughly the square root of 3\r\n    return scaled_min + (x - x.min()) * (scaled_max - scaled_min) / (x.max() - x.min())\r\n\r\ndef normalize(x):\r\n    return (x - x.mean())/ x.std()\r\n\r\nclass NoiseGenerator:\r\n    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None):\r\n        self.seed = seed\r\n\r\n        if x is not None:\r\n          
  self.x      = x\r\n            self.size   = x.shape\r\n            self.dtype  = x.dtype\r\n            self.layout = x.layout\r\n            self.device = x.device\r\n        else:   \r\n            self.x      = torch.zeros(size, dtype=dtype, layout=layout, device=device) # dtype/layout/device are keyword-only in torch.zeros\r\n            self.size   = self.x.shape # derive attributes from the new tensor so they are always set, even when no overrides are given\r\n            self.dtype  = self.x.dtype\r\n            self.layout = self.x.layout\r\n            self.device = self.x.device\r\n\r\n        # allow overriding parameters imported from latent 'x' if specified\r\n        if size is not None:\r\n            self.size   = size\r\n        if dtype is not None:\r\n            self.dtype  = dtype\r\n        if layout is not None:\r\n            self.layout = layout\r\n        if device is not None:\r\n            self.device = device\r\n\r\n        self.sigma_max = sigma_max.to(device) if isinstance(sigma_max, torch.Tensor) else sigma_max\r\n        self.sigma_min = sigma_min.to(device) if isinstance(sigma_min, torch.Tensor) else sigma_min\r\n\r\n        self.last_seed = seed #- 1 #adapt for update being called during initialization, which increments last_seed\r\n        \r\n        if generator is None:\r\n            self.generator = torch.Generator(device=self.device).manual_seed(seed)\r\n        else:\r\n            self.generator = generator\r\n\r\n    def __call__(self):\r\n        raise NotImplementedError(\"This method got clownsharked!\")\r\n    \r\n    def update(self, **kwargs):\r\n        \r\n        #if not isinstance(self, BrownianNoiseGenerator):\r\n        #    self.last_seed += 1\r\n                    \r\n        updated_values = []\r\n        for attribute_name, value in kwargs.items():\r\n            if value is not None:\r\n                setattr(self, attribute_name, value)\r\n            updated_values.append(getattr(self, attribute_name))\r\n        return tuple(updated_values)\r\n\r\n\r\n\r\nclass BrownianNoiseGenerator(NoiseGenerator):\r\n    def __call__(self, *, sigma=None, sigma_next=None, **kwargs):\r\n        return BrownianTreeNoiseSampler(self.x, self.sigma_min, self.sigma_max, seed=self.seed, cpu = self.device.type=='cpu')(sigma, sigma_next)\r\n\r\n\r\n\r\nclass FractalNoiseGenerator(NoiseGenerator):\r\n    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, \r\n                alpha=0.0, k=1.0, scale=0.1): \r\n        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)\r\n        self.update(alpha=alpha, k=k, scale=scale)\r\n\r\n    def __call__(self, *, alpha=None, k=None, scale=None, **kwargs):\r\n        self.update(alpha=alpha, k=k, scale=scale)\r\n        self.last_seed += 1\r\n        \r\n        if len(self.size) == 5:\r\n            b, c, t, h, w = self.size\r\n        else:\r\n            b, c, h, w = self.size\r\n        \r\n        noise = torch.normal(mean=0.0, std=1.0, size=self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator)\r\n        \r\n        y_freq = torch.fft.fftfreq(h, 1/h, device=self.device)\r\n        x_freq = torch.fft.fftfreq(w, 1/w, device=self.device)\r\n\r\n        if len(self.size) == 5:\r\n            t_freq = torch.fft.fftfreq(t, 1/t, device=self.device)\r\n            freq = torch.sqrt(t_freq[:, None, None]**2 + y_freq[None, :, None]**2 + x_freq[None, None, :]**2).clamp(min=1e-10)\r\n        else:\r\n            freq = torch.sqrt(y_freq[:, None]**2 + x_freq[None, :]**2).clamp(min=1e-10)\r\n        \r\n        spectral_density = self.k / torch.pow(freq, self.alpha * self.scale)\r\n        spectral_density[0, 0] = 0\r\n\r\n        noise_fft = torch.fft.fftn(noise)\r\n        modified_fft = 
noise_fft * spectral_density\r\n        noise = torch.fft.ifftn(modified_fft).real\r\n\r\n        return noise / torch.std(noise)\r\n    \r\n    \r\n\r\nclass SimplexNoiseGenerator(NoiseGenerator):\r\n    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, \r\n                scale=0.01):\r\n        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)\r\n        self.noise = OpenSimplex(seed=seed)\r\n        self.scale = scale\r\n        \r\n    def __call__(self, *, scale=None, **kwargs):\r\n        self.update(scale=scale)\r\n        self.last_seed += 1\r\n        \r\n        if len(self.size) == 5:\r\n            b, c, t, h, w = self.size\r\n        else:\r\n            b, c, h, w = self.size\r\n\r\n        noise_array = self.noise.noise3array(np.arange(w),np.arange(h),np.arange(c))\r\n        self.noise = OpenSimplex(seed=self.noise.get_seed()+1)\r\n        \r\n        noise_tensor = torch.from_numpy(noise_array).to(self.device)\r\n        noise_tensor = torch.unsqueeze(noise_tensor, dim=0)\r\n        if len(self.size) == 5:\r\n            noise_tensor = torch.unsqueeze(noise_tensor, dim=0)\r\n        \r\n        return noise_tensor / noise_tensor.std()\r\n        #return normalize(scale_to_range(noise_tensor))\r\n\r\n\r\n\r\nclass HiresPyramidNoiseGenerator(NoiseGenerator):\r\n    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, \r\n                discount=0.7, mode='nearest-exact'):\r\n        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)\r\n        self.update(discount=discount, mode=mode)\r\n\r\n    def __call__(self, *, discount=None, mode=None, **kwargs):\r\n        self.update(discount=discount, mode=mode)\r\n        self.last_seed += 1\r\n\r\n        if len(self.size) == 5:\r\n            b, c, t, h, w = self.size\r\n            orig_h, orig_w, orig_t = h, w, t\r\n            u = nn.Upsample(size=(orig_t, orig_h, orig_w), mode=self.mode).to(self.device) # 5D NCDHW input: Upsample size is (D, H, W), i.e. (t, h, w)\r\n        else:\r\n            b, c, h, w = self.size\r\n            orig_h, orig_w = h, w\r\n            orig_t = t = 1\r\n            u = nn.Upsample(size=(orig_h, orig_w), mode=self.mode).to(self.device)\r\n\r\n        noise = ((torch.rand(size=self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator) - 0.5) * 2 * 1.73)\r\n\r\n        for i in range(4):\r\n            r = torch.rand(1, device=self.device, generator=self.generator).item() * 2 + 2\r\n            h, w = min(orig_h * 15, int(h * (r ** i))), min(orig_w * 15, int(w * (r ** i)))\r\n            if len(self.size) == 5:\r\n                t = min(orig_t * 15, int(t * (r ** i)))\r\n                new_noise = torch.randn((b, c, t, h, w), dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator)\r\n            else:\r\n                new_noise = torch.randn((b, c, h, w), dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator)\r\n\r\n            upsampled_noise = u(new_noise)\r\n            noise += upsampled_noise * self.discount ** i\r\n            \r\n            if h >= orig_h * 15 or w >= orig_w * 15 or t >= orig_t * 15:\r\n                break  # if resolution is too high\r\n        \r\n        return noise / noise.std()\r\n\r\n\r\n\r\nclass PyramidNoiseGenerator(NoiseGenerator):\r\n    def __init__(self, x=None, size=None, dtype=None, 
layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, \r\n                discount=0.8, mode='nearest-exact'):\r\n        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)\r\n        self.update(discount=discount, mode=mode)\r\n\r\n    def __call__(self, *, discount=None, mode=None, **kwargs):\r\n        self.update(discount=discount, mode=mode)\r\n        self.last_seed += 1\r\n\r\n        x = torch.zeros(self.size, dtype=self.dtype, layout=self.layout, device=self.device)\r\n\r\n        if len(self.size) == 5:\r\n            b, c, t, h, w = self.size\r\n            orig_h, orig_w, orig_t = h, w, t\r\n        else:\r\n            b, c, h, w = self.size\r\n            orig_h, orig_w = h, w\r\n\r\n        r = 1\r\n        for i in range(5):\r\n            r *= 2\r\n\r\n            if len(self.size) == 5:\r\n                scaledSize = (b, c, t * r, h * r, w * r)\r\n                origSize = (orig_h, orig_w, orig_t)\r\n            else:\r\n                scaledSize = (b, c, h * r, w * r)\r\n                origSize = (orig_h, orig_w)\r\n\r\n            x += torch.nn.functional.interpolate(\r\n                torch.normal(mean=0, std=0.5 ** i, size=scaledSize, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator),\r\n                size=origSize, mode=self.mode\r\n            ) * self.discount ** i\r\n        return x / x.std()\r\n\r\n\r\n\r\nclass InterpolatedPyramidNoiseGenerator(NoiseGenerator):\r\n    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, \r\n                discount=0.7, mode='nearest-exact'):\r\n        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)\r\n        self.update(discount=discount, mode=mode)\r\n\r\n    def __call__(self, *, discount=None, mode=None, **kwargs):\r\n        self.update(discount=discount, mode=mode)\r\n        self.last_seed += 1\r\n\r\n        if len(self.size) == 5:\r\n            b, c, t, h, w = self.size\r\n            orig_t, orig_h, orig_w = t, h, w\r\n        else:\r\n            b, c, h, w = self.size\r\n            orig_h, orig_w = h, w\r\n            t = orig_t = 1\r\n\r\n        noise = ((torch.rand(size=self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator) - 0.5) * 2 * 1.73)\r\n        multipliers = [1]\r\n\r\n        for i in range(4):\r\n            r = torch.rand(1, device=self.device, generator=self.generator).item() * 2 + 2\r\n            h, w = min(orig_h * 15, int(h * (r ** i))), min(orig_w * 15, int(w * (r ** i)))\r\n            \r\n            if len(self.size) == 5:\r\n                t = min(orig_t * 15, int(t * (r ** i)))\r\n                new_noise = torch.randn((b, c, t, h, w), dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator)\r\n                upsampled_noise = nn.functional.interpolate(new_noise, size=(orig_t, orig_h, orig_w), mode=self.mode)\r\n            else:\r\n                new_noise = torch.randn((b, c, h, w), dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator)\r\n                upsampled_noise = nn.functional.interpolate(new_noise, size=(orig_h, orig_w), mode=self.mode)\r\n\r\n            noise += upsampled_noise * self.discount ** i\r\n            multipliers.append(        self.discount ** i)\r\n            \r\n            if h >= orig_h * 15 or w >= orig_w * 15 or (len(self.size) == 
5 and t >= orig_t * 15):\r\n                break  # if resolution is too high\r\n        \r\n        noise = noise / sum([m ** 2 for m in multipliers]) ** 0.5 \r\n        return noise / noise.std()\r\n\r\n\r\n\r\nclass CascadeBPyramidNoiseGenerator(NoiseGenerator):\r\n    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, \r\n                levels=10, mode='nearest', size_range=[1,16]):\r\n        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)\r\n        self.update(epsilon=x, levels=levels, mode=mode, size_range=size_range)\r\n\r\n    def __call__(self, *, levels=10, mode='nearest', size_range=[1,16], **kwargs):\r\n        self.update(levels=levels, mode=mode)\r\n        if len(self.size) == 5:\r\n            raise NotImplementedError(\"CascadeBPyramidNoiseGenerator is not implemented for 5D tensors (eg. video).\") \r\n        self.last_seed += 1\r\n\r\n        b, c, h, w = self.size\r\n\r\n        epsilon = torch.randn(self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator)\r\n        multipliers = [1]\r\n        for i in range(1, levels):\r\n            m = 0.75 ** i\r\n\r\n            h, w = int(epsilon.size(-2) // (2 ** i)), int(epsilon.size(-2) // (2 ** i))\r\n            if size_range is None or (size_range[0] <= h <= size_range[1] or size_range[0] <= w <= size_range[1]):\r\n                offset = torch.randn(epsilon.size(0), epsilon.size(1), h, w, device=self.device, generator=self.generator)\r\n                epsilon = epsilon + torch.nn.functional.interpolate(offset, size=epsilon.shape[-2:], mode=self.mode) * m\r\n                multipliers.append(m)\r\n\r\n            if h <= 1 or w <= 1:\r\n                break\r\n        epsilon = epsilon / sum([m ** 2 for m in multipliers]) ** 0.5 #divides the epsilon tensor by the square root of the sum of the squared multipliers.\r\n\r\n        return epsilon\r\n\r\n\r\nclass UniformNoiseGenerator(NoiseGenerator):\r\n    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, \r\n                mean=0.0, scale=1.73):\r\n        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)\r\n        self.update(mean=mean, scale=scale)\r\n\r\n    def __call__(self, *, mean=None, scale=None, **kwargs):\r\n        self.update(mean=mean, scale=scale)\r\n        self.last_seed += 1\r\n\r\n        noise = torch.rand(self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator)\r\n\r\n        return self.scale * 2 * (noise - 0.5) + self.mean\r\n\r\nclass GaussianNoiseGenerator(NoiseGenerator):\r\n    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, \r\n                mean=0.0, std=1.0):\r\n        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)\r\n        self.update(mean=mean, std=std)\r\n\r\n    def __call__(self, *, mean=None, std=None, **kwargs):\r\n        self.update(mean=mean, std=std)\r\n        self.last_seed += 1\r\n\r\n        noise = torch.randn(self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator)\r\n\r\n        return (noise - noise.mean()) / noise.std()\r\n\r\nclass GaussianBackwardsNoiseGenerator(NoiseGenerator):\r\n    def __init__(self, x=None, size=None, dtype=None, 
layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, \r\n                mean=0.0, std=1.0):\r\n        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)\r\n        self.update(mean=mean, std=std)\r\n\r\n    def __call__(self, *, mean=None, std=None, **kwargs):\r\n        self.update(mean=mean, std=std)\r\n        self.last_seed += 1\r\n        RESplain(\"GaussianBackwards last seed:\", self.generator.initial_seed())\r\n        self.generator.manual_seed(self.generator.initial_seed() - 1)\r\n        noise = torch.randn(self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator)\r\n\r\n        return (noise - noise.mean()) / noise.std()\r\n\r\nclass LaplacianNoiseGenerator(NoiseGenerator):\r\n    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, \r\n                loc=0, scale=1.0):\r\n        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)\r\n        self.update(loc=loc, scale=scale)\r\n\r\n    def __call__(self, *, loc=None, scale=None, **kwargs):\r\n        self.update(loc=loc, scale=scale)\r\n        self.last_seed += 1\r\n\r\n        # b, c, h, w = self.size\r\n        # orig_h, orig_w = h, w\r\n\r\n        noise = torch.randn(self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator) / 4.0\r\n\r\n        rng_state = torch.random.get_rng_state()\r\n        torch.manual_seed(self.generator.initial_seed())\r\n        laplacian_noise = Laplace(loc=self.loc, scale=self.scale).rsample(self.size).to(self.device)\r\n        self.generator.manual_seed(self.generator.initial_seed() + 1)\r\n        torch.random.set_rng_state(rng_state)\r\n\r\n        noise += laplacian_noise\r\n        return noise / noise.std()\r\n\r\nclass StudentTNoiseGenerator(NoiseGenerator):\r\n    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, \r\n                loc=0, scale=0.2, df=1):\r\n        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)\r\n        self.update(loc=loc, scale=scale, df=df)\r\n\r\n    def __call__(self, *, loc=None, scale=None, df=None, **kwargs):\r\n        self.update(loc=loc, scale=scale, df=df)\r\n        self.last_seed += 1\r\n\r\n        # b, c, h, w = self.size\r\n        # orig_h, orig_w = h, w\r\n\r\n        rng_state = torch.random.get_rng_state()\r\n        torch.manual_seed(self.generator.initial_seed())\r\n\r\n        noise = StudentT(loc=self.loc, scale=self.scale, df=self.df).rsample(self.size)\r\n        if not isinstance(self, BrownianNoiseGenerator):\r\n            self.last_seed += 1\r\n                    \r\n        s = torch.quantile(noise.flatten(start_dim=1).abs(), 0.75, dim=-1)\r\n        \r\n        if len(self.size) == 5:\r\n            s = s.reshape(*s.shape, 1, 1, 1, 1)\r\n        else:\r\n            s = s.reshape(*s.shape, 1, 1, 1)\r\n\r\n        noise = noise.clamp(-s, s)\r\n\r\n        noise_latent = torch.copysign(torch.pow(torch.abs(noise), 0.5), noise).to(self.device)\r\n\r\n        self.generator.manual_seed(self.generator.initial_seed() + 1)\r\n        torch.random.set_rng_state(rng_state)\r\n        return (noise_latent - noise_latent.mean()) / noise_latent.std()\r\n\r\nclass WaveletNoiseGenerator(NoiseGenerator):\r\n    def __init__(self, x=None, size=None, dtype=None, 
layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, \r\n                wavelet='haar'):\r\n        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)\r\n        self.update(wavelet=wavelet)\r\n\r\n    def __call__(self, *, wavelet=None, **kwargs):\r\n        self.update(wavelet=wavelet)\r\n        self.last_seed += 1\r\n\r\n        # b, c, h, w = self.size\r\n        # orig_h, orig_w = h, w\r\n\r\n        # noise for spatial dimensions only\r\n        coeffs = pywt.wavedecn(torch.randn(self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator).to('cpu'), wavelet=self.wavelet, mode='periodization')\r\n        noise = pywt.waverecn(coeffs, wavelet=self.wavelet, mode='periodization')\r\n        noise_tensor = torch.tensor(noise, dtype=self.dtype, device=self.device)\r\n\r\n        noise_tensor = (noise_tensor - noise_tensor.mean()) / noise_tensor.std()\r\n        return noise_tensor\r\n\r\nclass PerlinNoiseGenerator(NoiseGenerator):\r\n    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, \r\n                detail=0.0):\r\n        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)\r\n        self.update(detail=detail)\r\n\r\n    @staticmethod\r\n    def get_positions(block_shape: Tuple[int, int]) -> Tensor:\r\n        bh, bw = block_shape\r\n        positions = torch.stack(\r\n            torch.meshgrid(\r\n                [(torch.arange(b) + 0.5) / b for b in (bw, bh)],\r\n                indexing=\"xy\",\r\n            ),\r\n            -1,\r\n        ).view(1, bh, bw, 1, 1, 2)\r\n        return positions\r\n\r\n    @staticmethod\r\n    def unfold_grid(vectors: Tensor) -> Tensor:\r\n        batch_size, _, gpy, gpx = vectors.shape\r\n        return (\r\n            unfold(vectors, (2, 2))\r\n            .view(batch_size, 2, 4, -1)\r\n            .permute(0, 2, 3, 1)\r\n            .view(batch_size, 4, gpy - 1, gpx - 1, 2)\r\n        )\r\n\r\n    @staticmethod\r\n    def smooth_step(t: Tensor) -> Tensor:\r\n        return t * t * (3.0 - 2.0 * t)\r\n\r\n    @staticmethod\r\n    def perlin_noise_tensor(\r\n        self,\r\n        vectors: Tensor, positions: Tensor, step: Callable = None\r\n    ) -> Tensor:\r\n        if step is None:\r\n            step = self.smooth_step\r\n\r\n        batch_size = vectors.shape[0]\r\n        # grid height, grid width\r\n        gh, gw = vectors.shape[2:4]\r\n        # block height, block width\r\n        bh, bw = positions.shape[1:3]\r\n\r\n        for i in range(2):\r\n            if positions.shape[i + 3] not in (1, vectors.shape[i + 2]):\r\n                raise Exception(\r\n                    f\"Blocks shapes do not match: vectors ({vectors.shape[1]}, {vectors.shape[2]}), positions {gh}, {gw})\"\r\n                )\r\n\r\n        if positions.shape[0] not in (1, batch_size):\r\n            raise Exception(\r\n                f\"Batch sizes do not match: vectors ({vectors.shape[0]}), positions ({positions.shape[0]})\"\r\n            )\r\n\r\n        vectors = vectors.view(batch_size, 4, 1, gh * gw, 2)\r\n        positions = positions.view(positions.shape[0], bh * bw, -1, 2)\r\n\r\n        step_x = step(positions[..., 0])\r\n        step_y = step(positions[..., 1])\r\n\r\n        row0 = lerp(\r\n            (vectors[:, 0] * positions).sum(dim=-1),\r\n            (vectors[:, 1] * (positions - positions.new_tensor((1, 
0)))).sum(dim=-1),\r\n            step_x,\r\n        )\r\n        row1 = lerp(\r\n            (vectors[:, 2] * (positions - positions.new_tensor((0, 1)))).sum(dim=-1),\r\n            (vectors[:, 3] * (positions - positions.new_tensor((1, 1)))).sum(dim=-1),\r\n            step_x,\r\n        )\r\n        noise = lerp(row0, row1, step_y)\r\n        return (\r\n            noise.view(\r\n                batch_size,\r\n                bh,\r\n                bw,\r\n                gh,\r\n                gw,\r\n            )\r\n            .permute(0, 3, 1, 4, 2)\r\n            .reshape(batch_size, gh * bh, gw * bw)\r\n        )\r\n\r\n    def perlin_noise(\r\n        self,\r\n        grid_shape: Tuple[int, int],\r\n        out_shape: Tuple[int, int],\r\n        batch_size: int = 1,\r\n        generator: Generator = None,\r\n        *args,\r\n        **kwargs,\r\n    ) -> Tensor:\r\n        gh, gw = grid_shape         # grid height and width\r\n        oh, ow = out_shape        # output height and width\r\n        bh, bw = oh // gh, ow // gw        # block height and width\r\n\r\n        if oh != bh * gh:\r\n            raise Exception(f\"Output height {oh} must be divisible by grid height {gh}\")\r\n        if ow != bw * gw != 0:\r\n            raise Exception(f\"Output width {ow} must be divisible by grid width {gw}\")\r\n\r\n        angle = torch.empty(\r\n            [batch_size] + [s + 1 for s in grid_shape], device=self.device, *args, **kwargs\r\n        ).uniform_(to=2.0 * pi, generator=self.generator)\r\n        # random vectors on grid points\r\n        vectors = self.unfold_grid(torch.stack((torch.cos(angle), torch.sin(angle)), dim=1))\r\n        # positions inside grid cells [0, 1)\r\n        positions = self.get_positions((bh, bw)).to(vectors)\r\n        return self.perlin_noise_tensor(self, vectors, positions).squeeze(0)\r\n\r\n    def __call__(self, *, detail=None, **kwargs):\r\n        self.update(detail=detail) #currently unused\r\n        self.last_seed += 1\r\n        if len(self.size) == 5:\r\n            b, c, t, h, w = self.size\r\n            noise = torch.randn(self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator) / 2.0\r\n            \r\n            for tt in range(t):\r\n                for i in range(2):\r\n                    perlin_slice = self.perlin_noise((h, w), (h, w), batch_size=c, generator=self.generator).to(self.device)\r\n                    perlin_expanded = perlin_slice.unsqueeze(0).unsqueeze(2)\r\n                    time_slice = noise[:, :, tt:tt+1, :, :]\r\n                    noise[:, :, tt:tt+1, :, :] += perlin_expanded\r\n        else:\r\n            b, c, h, w = self.size\r\n            #orig_h, orig_w = h, w\r\n\r\n            noise = torch.randn(self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator) / 2.0\r\n            for i in range(2):\r\n                noise += self.perlin_noise((h, w), (h, w), batch_size=c, generator=self.generator).to(self.device)\r\n                \r\n        return noise / noise.std()\r\n    \r\nfrom functools import partial\r\n\r\nNOISE_GENERATOR_CLASSES = {\r\n    \"fractal\"               :                         FractalNoiseGenerator,\r\n    \"gaussian\"              :                         GaussianNoiseGenerator,\r\n    \"gaussian_backwards\"    :                         GaussianBackwardsNoiseGenerator,\r\n    \"uniform\"               :                         UniformNoiseGenerator,\r\n    \"pyramid-cascade_B\"     :                   
      CascadeBPyramidNoiseGenerator,\r\n    \"pyramid-interpolated\"  :                         InterpolatedPyramidNoiseGenerator,\r\n    \"pyramid-bilinear\"      : noise_generator_factory(PyramidNoiseGenerator,      mode='bilinear'),\r\n    \"pyramid-bicubic\"       : noise_generator_factory(PyramidNoiseGenerator,      mode='bicubic'),   \r\n    \"pyramid-nearest\"       : noise_generator_factory(PyramidNoiseGenerator,      mode='nearest'),  \r\n    \"hires-pyramid-bilinear\": noise_generator_factory(HiresPyramidNoiseGenerator, mode='bilinear'),\r\n    \"hires-pyramid-bicubic\" : noise_generator_factory(HiresPyramidNoiseGenerator, mode='bicubic'),   \r\n    \"hires-pyramid-nearest\" : noise_generator_factory(HiresPyramidNoiseGenerator, mode='nearest'),  \r\n    \"brownian\"              :                         BrownianNoiseGenerator,\r\n    \"laplacian\"             :                         LaplacianNoiseGenerator,\r\n    \"studentt\"              :                         StudentTNoiseGenerator,\r\n    \"wavelet\"               :                         WaveletNoiseGenerator,\r\n    \"perlin\"                :                         PerlinNoiseGenerator,\r\n}\r\n\r\n\r\nNOISE_GENERATOR_CLASSES_SIMPLE = {\r\n    \"none\"                  :                         GaussianNoiseGenerator,\r\n    \"brownian\"              :                         BrownianNoiseGenerator,\r\n    \"gaussian\"              :                         GaussianNoiseGenerator,\r\n    \"gaussian_backwards\"    :                         GaussianBackwardsNoiseGenerator,\r\n    \"laplacian\"             :                         LaplacianNoiseGenerator,\r\n    \"perlin\"                :                         PerlinNoiseGenerator,\r\n    \"studentt\"              :                         StudentTNoiseGenerator,\r\n    \"uniform\"               :                         UniformNoiseGenerator,\r\n    \"wavelet\"               :                         WaveletNoiseGenerator,\r\n    \"brown\"                 : noise_generator_factory(FractalNoiseGenerator,      alpha=2.0),\r\n    \"pink\"                  : noise_generator_factory(FractalNoiseGenerator,      alpha=1.0),\r\n    \"white\"                 : noise_generator_factory(FractalNoiseGenerator,      alpha=0.0),\r\n    \"blue\"                  : noise_generator_factory(FractalNoiseGenerator,      alpha=-1.0),\r\n    \"violet\"                : noise_generator_factory(FractalNoiseGenerator,      alpha=-2.0),\r\n    \"ultraviolet_A\"         : noise_generator_factory(FractalNoiseGenerator,      alpha=-3.0),\r\n    \"ultraviolet_B\"         : noise_generator_factory(FractalNoiseGenerator,      alpha=-4.0),\r\n    \"ultraviolet_C\"         : noise_generator_factory(FractalNoiseGenerator,      alpha=-5.0),\r\n\r\n    \"hires-pyramid-bicubic\" : noise_generator_factory(HiresPyramidNoiseGenerator, mode='bicubic'),   \r\n    \"hires-pyramid-bilinear\": noise_generator_factory(HiresPyramidNoiseGenerator, mode='bilinear'),\r\n    \"hires-pyramid-nearest\" : noise_generator_factory(HiresPyramidNoiseGenerator, mode='nearest'),  \r\n    \"pyramid-bicubic\"       : noise_generator_factory(PyramidNoiseGenerator,      mode='bicubic'),   \r\n    \"pyramid-bilinear\"      : noise_generator_factory(PyramidNoiseGenerator,      mode='bilinear'),\r\n    \"pyramid-nearest\"       : noise_generator_factory(PyramidNoiseGenerator,      mode='nearest'),  \r\n    \"pyramid-interpolated\"  :                         InterpolatedPyramidNoiseGenerator,\r\n    \"pyramid-cascade_B\"     :      
                   CascadeBPyramidNoiseGenerator,\r\n}                        \r\n\r\nif OPENSIMPLEX_ENABLE:\r\n    NOISE_GENERATOR_CLASSES.update({\r\n        \"simplex\": SimplexNoiseGenerator,\r\n    })\r\n\r\nNOISE_GENERATOR_NAMES = tuple(NOISE_GENERATOR_CLASSES.keys())\r\nNOISE_GENERATOR_NAMES_SIMPLE = tuple(NOISE_GENERATOR_CLASSES_SIMPLE.keys())\r\n\r\n\r\n@precision_tool.cast_tensor\r\ndef prepare_noise(latent_image, seed, noise_type, noise_inds=None, alpha=1.0, k=1.0): # adapted from comfy/sample.py: https://github.com/comfyanonymous/ComfyUI\r\n    #optional arg skip can be used to skip and discard x number of noise generations for a given seed\r\n    noise_func = NOISE_GENERATOR_CLASSES.get(noise_type)(x=latent_image, seed=seed, sigma_min=0.0291675, sigma_max=14.614642)                                          # WARNING: HARDCODED SDXL SIGMA RANGE!\r\n\r\n    if noise_type == \"fractal\":\r\n        noise_func.alpha = alpha\r\n        noise_func.k = k\r\n\r\n    # from here until return is very similar to comfy/sample.py \r\n    if noise_inds is None:\r\n        return noise_func(sigma=14.614642, sigma_next=0.0291675)\r\n\r\n    unique_inds, inverse = np.unique(noise_inds, return_inverse=True)\r\n    noises = []\r\n    for i in range(unique_inds[-1]+1):\r\n        noise = noise_func(size = [1] + list(latent_image.size())[1:], dtype=latent_image.dtype, layout=latent_image.layout, device=latent_image.device)\r\n        if i in unique_inds:\r\n            noises.append(noise)\r\n    noises = [noises[i] for i in inverse]\r\n    noises = torch.cat(noises, axis=0)\r\n    return noises\r\n"
  },
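  {
    "path": "examples/noise_generators_demo.py",
    "content": "# Illustrative sketch only, not part of the package: draws latent-shaped\n# noise from a couple of the generators in beta/noise_classes.py. That module\n# imports comfy.k_diffusion, so this assumes a ComfyUI environment; the import\n# path below is illustrative (inside ComfyUI the module lives under the\n# custom node package directory).\n\nimport torch\n\nfrom beta.noise_classes import NOISE_GENERATOR_CLASSES_SIMPLE  # hypothetical path\n\nlatent = torch.zeros(1, 4, 64, 64)  # stand-in for an SD-style latent\n\n# \"pink\" is FractalNoiseGenerator preconfigured with alpha=1.0 (1/f noise).\npink = NOISE_GENERATOR_CLASSES_SIMPLE[\"pink\"](x=latent, seed=42)\nnoise = pink()  # each call advances last_seed and draws a fresh sample\nprint(noise.shape, float(noise.std()))  # normalized to unit std\n\n# Brownian noise needs the sigma range up front and per-step sigmas at call time.\nbrownian = NOISE_GENERATOR_CLASSES_SIMPLE[\"brownian\"](\n    x=latent, seed=42, sigma_min=0.0292, sigma_max=14.6146)\nstep_noise = brownian(sigma=torch.tensor(14.6146), sigma_next=torch.tensor(10.0))\nprint(step_noise.shape)\n"
  },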
  {
    "path": "beta/phi_functions.py",
    "content": "import torch\r\nimport math\r\nfrom typing import Optional\r\n\r\n\r\n# Remainder solution\r\ndef _phi(j, neg_h):\r\n    remainder = torch.zeros_like(neg_h)\r\n\r\n    for k in range(j): \r\n        remainder += (neg_h)**k / math.factorial(k)\r\n    phi_j_h = ((neg_h).exp() - remainder) / (neg_h)**j\r\n\r\n    return phi_j_h\r\n\r\ndef calculate_gamma(c2, c3):\r\n    return (3*(c3**3) - 2*c3) / (c2*(2 - 3*c2))\r\n\r\n# Exact analytic solution originally calculated by Clybius. https://github.com/Clybius/ComfyUI-Extra-Samplers/tree/main\r\ndef _gamma(n: int,) -> int:\r\n    \"\"\"\r\n    https://en.wikipedia.org/wiki/Gamma_function\r\n    for every positive integer n,\r\n    Γ(n) = (n-1)!\r\n    \"\"\"\r\n    return math.factorial(n-1)\r\n\r\ndef _incomplete_gamma(s: int, x: float, gamma_s: Optional[int] = None) -> float:\r\n    \"\"\"\r\n    https://en.wikipedia.org/wiki/Incomplete_gamma_function#Special_values\r\n    if s is a positive integer,\r\n    Γ(s, x) = (s-1)!*∑{k=0..s-1}(x^k/k!)\r\n    \"\"\"\r\n    if gamma_s is None:\r\n        gamma_s = _gamma(s)\r\n\r\n    sum_: float = 0\r\n    # {k=0..s-1} inclusive\r\n    for k in range(s):\r\n        numerator: float = x**k\r\n        denom: int = math.factorial(k)\r\n        quotient: float = numerator/denom\r\n        sum_ += quotient\r\n    incomplete_gamma_: float = sum_ * math.exp(-x) * gamma_s\r\n    return incomplete_gamma_\r\n\r\ndef phi(j: int, neg_h: float, ):\r\n    \"\"\"\r\n    For j={1,2,3}: you could alternatively use Kat's phi_1, phi_2, phi_3 which perform fewer steps\r\n\r\n    Lemma 1\r\n    https://arxiv.org/abs/2308.02157\r\n    ϕj(-h) = 1/h^j*∫{0..h}(e^(τ-h)*(τ^(j-1))/((j-1)!)dτ)\r\n\r\n    https://www.wolframalpha.com/input?i=integrate+e%5E%28%CF%84-h%29*%28%CF%84%5E%28j-1%29%2F%28j-1%29%21%29d%CF%84\r\n    = 1/h^j*[(e^(-h)*(-τ)^(-j)*τ(j))/((j-1)!)]{0..h}\r\n    https://www.wolframalpha.com/input?i=integrate+e%5E%28%CF%84-h%29*%28%CF%84%5E%28j-1%29%2F%28j-1%29%21%29d%CF%84+between+0+and+h\r\n    = 1/h^j*((e^(-h)*(-h)^(-j)*h^j*(Γ(j)-Γ(j,-h)))/(j-1)!)\r\n    = (e^(-h)*(-h)^(-j)*h^j*(Γ(j)-Γ(j,-h))/((j-1)!*h^j)\r\n    = (e^(-h)*(-h)^(-j)*(Γ(j)-Γ(j,-h))/(j-1)!\r\n    = (e^(-h)*(-h)^(-j)*(Γ(j)-Γ(j,-h))/Γ(j)\r\n    = (e^(-h)*(-h)^(-j)*(1-Γ(j,-h)/Γ(j))\r\n\r\n    requires j>0\r\n    \"\"\"\r\n    assert j > 0\r\n    gamma_: float = _gamma(j)\r\n    incomp_gamma_: float = _incomplete_gamma(j, neg_h, gamma_s=gamma_)\r\n    phi_: float = math.exp(neg_h) * neg_h**-j * (1-incomp_gamma_/gamma_)\r\n    return phi_\r\n\r\n\r\n\r\nfrom mpmath import mp, mpf, factorial, exp\r\n\r\n\r\nmp.dps = 80   # e.g. 
80 decimal digits (~ float256)\r\n\r\ndef phi_mpmath_series(j: int, neg_h: float) -> float:\r\n    \"\"\"\r\n    Arbitrary‐precision phi_j(-h) via the remainder‐series definition,\r\n    using mpmath’s mpf and factorial.\r\n    \"\"\"\r\n    j = int(j)\r\n    z = mpf(float(neg_h))    \r\n    S = mp.mpf('0')    # S = sum_{k=0..j-1} z^k / k!\r\n    for k in range(j):\r\n        S += (z**k) / factorial(k)\r\n    phi_val = (exp(z) - S) / (z**j)\r\n    return float(phi_val)\r\n\r\n\r\n\r\nclass Phi:\r\n    def __init__(self, h, c, analytic_solution=False): \r\n        self.h = h\r\n        self.c = c\r\n        self.cache = {}  \r\n        if analytic_solution:\r\n            #self.phi_f = superphi\r\n            self.phi_f = phi_mpmath_series\r\n            self.h = mpf(float(h))\r\n            self.c = [mpf(c_val) for c_val in c]\r\n            #self.c = c\r\n            #self.phi_f = phi\r\n        else:\r\n            self.phi_f = phi\r\n            #self.phi_f = _phi  # remainder method\r\n\r\n    def __call__(self, j, i=-1):\r\n        if (j, i) in self.cache:\r\n            return self.cache[(j, i)]\r\n\r\n        if i < 0:\r\n            c = 1\r\n        else:\r\n            c = self.c[i - 1]\r\n            if c == 0:\r\n                self.cache[(j, i)] = 0\r\n                return 0\r\n\r\n        if j == 0 and type(c) in {float, torch.Tensor}:\r\n            result = math.exp(float(-self.h * c))\r\n        else:\r\n            result = self.phi_f(j, -self.h * c)\r\n\r\n        self.cache[(j, i)] = result\r\n\r\n        return result\r\n\r\n\r\n\r\nfrom mpmath import mp, mpf, gamma, gammainc\r\n\r\ndef superphi(j: int, neg_h: float, ):\r\n    gamma_: float = gamma(j)\r\n    incomp_gamma_: float = gamma_ - gammainc(j, 0, float(neg_h))\r\n    phi_: float = float(math.exp(float(neg_h)) * neg_h**-j) * (1-incomp_gamma_/gamma_)\r\n    return float(phi_)\r\n\r\n"
  },
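  {
    "path": "examples/phi_functions_demo.py",
    "content": "# Illustrative sketch only, not part of the package: numerically checks the\n# phi functions in beta/phi_functions.py against the closed form\n# phi_1(z) = (e^z - 1)/z, and shows the cached Phi helper. Assumes torch and\n# mpmath are installed and the script is run from the repository root; the\n# import path below is implied by the file layout, not guaranteed.\n\nimport math\n\nfrom beta.phi_functions import Phi, phi, phi_mpmath_series  # hypothetical path\n\nz = -0.5  # plays the role of neg_h = -h*c in the solver code\n\nclosed_form = (math.exp(z) - 1.0) / z  # phi_1 in closed form, ~0.786939\nprint(phi(1, z), phi_mpmath_series(1, z), closed_form)  # all three should agree\n\n# Phi caches phi_j(-h*c_i) per (j, i); i=-1 (the default) uses c=1, i.e. the\n# full step h. The stage index is 1-based: i=1 refers to c[0].\nphi_cache = Phi(h=0.5, c=[0.5, 1.0])\nprint(phi_cache(1))     # phi_1(-h),     same as phi(1, -0.5)\nprint(phi_cache(2, 1))  # phi_2(-h*c_1), same as phi(2, -0.25)\n"
  },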
  {
    "path": "beta/rk_coefficients_beta.py",
    "content": "import torch\r\nfrom torch import Tensor\r\n\r\nimport copy\r\nimport math\r\nfrom mpmath import mp, mpf, factorial, exp\r\nmp.dps = 80\r\nfrom typing          import Optional, Callable, Tuple, Dict, Any, Union, TYPE_CHECKING, TypeVar\r\n\r\nfrom .deis_coefficients import get_deis_coeff_list\r\nfrom .phi_functions import phi, Phi, calculate_gamma\r\n\r\nfrom ..helper import ExtraOptions, get_extra_options_kv, extra_options_flag\r\n\r\n\r\nfrom itertools import permutations, combinations\r\nimport random\r\n\r\nfrom einops import rearrange, einsum\r\nfrom ..res4lyf import get_display_sampler_category\r\n\r\n# Samplers with free parameters (c1, c2, c3)\r\n# 1 2 3       \r\n#   X   res_2s\r\n#   X X res_3s\r\n#   X   res_3s_alt\r\n#   X   res_3s_strehmel_weiner\r\n#   X   dpmpp_2s                  (dpmpp_sde_2s has c2=1.0)\r\n#   X X dpmpp_3s\r\n# X X   irk_exp_diag_2s\r\n\r\nRK_EXPONENTIAL_PREFIXES = (\r\n    \"res\", \r\n    \"dpmpp\", \r\n    \"ddim\", \r\n    \"pec\", \r\n    \"etdrk\", \r\n    \"lawson\", \r\n    \"abnorsett\",\r\n    )\r\n\r\ndef is_exponential(rk_type:str) -> bool:\r\n    return rk_type.startswith(RK_EXPONENTIAL_PREFIXES)\r\n\r\nRK_SAMPLER_NAMES_BETA_FOLDERS = [\"none\",\r\n                    \"multistep/res_2m\",\r\n                    \"multistep/res_3m\",\r\n                    \r\n                    \"multistep/dpmpp_2m\",\r\n                    \"multistep/dpmpp_3m\",\r\n\r\n                    \"multistep/abnorsett_2m\",\r\n                    \"multistep/abnorsett_3m\",\r\n                    \"multistep/abnorsett_4m\",\r\n\r\n                    \"multistep/deis_2m\",\r\n                    \"multistep/deis_3m\", \r\n                    \"multistep/deis_4m\",\r\n                    \r\n                    \"exponential/res_2s_rkmk2e\", \r\n                    \"exponential/res_2s\", \r\n                    \"exponential/res_2s_stable\", \r\n                    \"exponential/res_3s\",\r\n                    \"exponential/res_3s_non-monotonic\",\r\n                    \"exponential/res_3s_alt\",\r\n                    \"exponential/res_3s_cox_matthews\",\r\n                    \"exponential/res_3s_lie\",\r\n                    \"exponential/res_3s_sunstar\",\r\n\r\n                    \"exponential/res_3s_strehmel_weiner\",\r\n                    \"exponential/res_4s_krogstad\",\r\n                    \"exponential/res_4s_krogstad_alt\",\r\n                    \"exponential/res_4s_strehmel_weiner\",\r\n                    \"exponential/res_4s_strehmel_weiner_alt\",\r\n                    \"exponential/res_4s_cox_matthews\",\r\n                    \"exponential/res_4s_cfree4\",\r\n                    \"exponential/res_4s_friedli\",\r\n                    \"exponential/res_4s_minchev\",\r\n                    \"exponential/res_4s_munthe-kaas\",\r\n\r\n                    \"exponential/res_5s\",\r\n                    \"exponential/res_5s_hochbruck-ostermann\",\r\n                    \"exponential/res_6s\",\r\n                    \"exponential/res_8s\",\r\n                    \"exponential/res_8s_alt\",\r\n\r\n                    \"exponential/res_10s\",\r\n                    \"exponential/res_15s\",\r\n                    \"exponential/res_16s\",\r\n                    \r\n                    \"exponential/etdrk2_2s\",\r\n                    \"exponential/etdrk3_a_3s\",\r\n                    \"exponential/etdrk3_b_3s\",\r\n                    \"exponential/etdrk4_4s\",\r\n                    \"exponential/etdrk4_4s_alt\",\r\n                    
\r\n                    \"exponential/dpmpp_2s\",\r\n                    \"exponential/dpmpp_sde_2s\",\r\n                    \"exponential/dpmpp_3s\",\r\n                    \r\n                    \"exponential/lawson2a_2s\",\r\n                    \"exponential/lawson2b_2s\",\r\n\r\n                    \"exponential/lawson4_4s\",\r\n                    \"exponential/lawson41-gen_4s\",\r\n                    \"exponential/lawson41-gen-mod_4s\",\r\n\r\n                    \"exponential/ddim\",\r\n                    \r\n                    \"hybrid/pec423_2h2s\",\r\n                    \"hybrid/pec433_2h3s\",\r\n                    \r\n                    \"hybrid/abnorsett2_1h2s\",\r\n                    \"hybrid/abnorsett3_2h2s\",\r\n                    \"hybrid/abnorsett4_3h2s\",\r\n                    \r\n                    \r\n                    \"hybrid/lawson42-gen-mod_1h4s\",\r\n                    \"hybrid/lawson43-gen-mod_2h4s\",\r\n                    \"hybrid/lawson44-gen-mod_3h4s\",\r\n                    \"hybrid/lawson45-gen-mod_4h4s\",\r\n                    \r\n                    \"linear/ralston_2s\",\r\n                    \"linear/ralston_3s\",\r\n                    \"linear/ralston_4s\", \r\n                    \r\n\r\n                    \r\n                    \"linear/midpoint_2s\",\r\n                    \"linear/heun_2s\", \r\n                    \"linear/heun_3s\", \r\n                    \r\n                    \"linear/houwen-wray_3s\",\r\n                    \"linear/kutta_3s\", \r\n                    \"linear/ssprk3_3s\",\r\n                    \"linear/ssprk4_4s\",\r\n                    \r\n                    \"linear/rk38_4s\",\r\n                    \"linear/rk4_4s\", \r\n                    \"linear/rk5_7s\",\r\n                    \"linear/rk6_7s\",\r\n\r\n                    \"linear/bogacki-shampine_4s\",\r\n                    \"linear/bogacki-shampine_7s\",\r\n\r\n                    \"linear/dormand-prince_6s\", \r\n                    \"linear/dormand-prince_13s\", \r\n\r\n                    \"linear/tsi_7s\",\r\n                    #\"verner_robust_16s\",\r\n\r\n                    \"linear/euler\",\r\n                    \r\n                    \"diag_implicit/irk_exp_diag_2s\",\r\n                    \r\n                    \"diag_implicit/kraaijevanger_spijker_2s\",\r\n                    \"diag_implicit/qin_zhang_2s\",\r\n                    \r\n                    \"diag_implicit/pareschi_russo_2s\",\r\n                    \"diag_implicit/pareschi_russo_alt_2s\",\r\n                    \r\n                    \"diag_implicit/crouzeix_2s\",\r\n                    \"diag_implicit/crouzeix_3s\",\r\n                    \"diag_implicit/crouzeix_3s_alt\",\r\n                    \r\n                    \"fully_implicit/gauss-legendre_2s\",\r\n                    \"fully_implicit/gauss-legendre_3s\", \r\n                    \"fully_implicit/gauss-legendre_4s\",\r\n                    \"fully_implicit/gauss-legendre_4s_alternating_a\",\r\n                    \"fully_implicit/gauss-legendre_4s_ascending_a\",\r\n                    \"fully_implicit/gauss-legendre_4s_alt\",\r\n                    \"fully_implicit/gauss-legendre_5s\",                    \r\n                    \"fully_implicit/gauss-legendre_5s_ascending\",\r\n                    #\"gauss-legendre_diag_8s\",\r\n\r\n                    \r\n                    \"fully_implicit/radau_ia_2s\",\r\n                    \"fully_implicit/radau_ia_3s\",\r\n\r\n                    
\"fully_implicit/radau_iia_2s\",\r\n                    \"fully_implicit/radau_iia_3s\",\r\n                    \"fully_implicit/radau_iia_3s_alt\",\r\n                    \"fully_implicit/radau_iia_5s\",\r\n                    \"fully_implicit/radau_iia_7s\",\r\n                    \"fully_implicit/radau_iia_9s\",\r\n                    \"fully_implicit/radau_iia_11s\",\r\n                    \r\n                    \"fully_implicit/lobatto_iiia_2s\",\r\n                    \"fully_implicit/lobatto_iiia_3s\", \r\n                    \"fully_implicit/lobatto_iiia_4s\",\r\n                    \"fully_implicit/lobatto_iiib_2s\",\r\n                    \"fully_implicit/lobatto_iiib_3s\",\r\n                    \"fully_implicit/lobatto_iiib_4s\",\r\n\r\n                    \"fully_implicit/lobatto_iiic_2s\",\r\n                    \"fully_implicit/lobatto_iiic_3s\",\r\n                    \"fully_implicit/lobatto_iiic_4s\",\r\n\r\n                    \"fully_implicit/lobatto_iiic_star_2s\",\r\n                    \"fully_implicit/lobatto_iiic_star_3s\",\r\n                    \"fully_implicit/lobatto_iiid_2s\",\r\n                    \"fully_implicit/lobatto_iiid_3s\",\r\n                    \r\n                    ]\r\n\r\n\r\n\r\nRK_SAMPLER_NAMES_BETA_NO_FOLDERS = []\r\nfor orig_sampler_name in RK_SAMPLER_NAMES_BETA_FOLDERS[1:]:\r\n    sampler_name = orig_sampler_name.split(\"/\")[-1] if \"/\" in orig_sampler_name else orig_sampler_name\r\n    RK_SAMPLER_NAMES_BETA_NO_FOLDERS.append(sampler_name)\r\n\r\nIRK_SAMPLER_NAMES_BETA_FOLDERS = [\"none\", \"use_explicit\"]\r\nfor orig_sampler_name in RK_SAMPLER_NAMES_BETA_FOLDERS[1:]:\r\n    if \"implicit\" in orig_sampler_name and \"/\" in orig_sampler_name:\r\n        IRK_SAMPLER_NAMES_BETA_FOLDERS.append(orig_sampler_name)\r\n\r\nIRK_SAMPLER_NAMES_BETA_NO_FOLDERS = []\r\nfor orig_sampler_name in IRK_SAMPLER_NAMES_BETA_FOLDERS[1:]:\r\n    sampler_name = orig_sampler_name.split(\"/\")[-1] if \"/\" in orig_sampler_name else orig_sampler_name\r\n    IRK_SAMPLER_NAMES_BETA_NO_FOLDERS.append(sampler_name)\r\n\r\nRK_SAMPLER_FOLDER_MAP = {}\r\nfor orig_sampler_name in RK_SAMPLER_NAMES_BETA_FOLDERS:\r\n    if \"/\" in orig_sampler_name:\r\n        folder, sampler_name = orig_sampler_name.rsplit(\"/\", 1)\r\n    else:\r\n        folder = \"\"\r\n        sampler_name = orig_sampler_name\r\n    RK_SAMPLER_FOLDER_MAP[sampler_name] = folder\r\n\r\nIRK_SAMPLER_FOLDER_MAP = {}\r\nfor orig_sampler_name in IRK_SAMPLER_NAMES_BETA_FOLDERS:\r\n    if \"/\" in orig_sampler_name:\r\n        folder, sampler_name = orig_sampler_name.rsplit(\"/\", 1)\r\n    else:\r\n        folder = \"\"\r\n        sampler_name = orig_sampler_name\r\n    IRK_SAMPLER_FOLDER_MAP[sampler_name] = folder\r\n\r\nclass DualFormatList(list):\r\n    \"\"\"list that can match items with or without category prefixes.\"\"\"\r\n    def __contains__(self, item):\r\n        if super().__contains__(item):\r\n            return True\r\n\r\n        if isinstance(item, str) and \"/\" in item:\r\n            base_name = item.split(\"/\")[-1]\r\n            return any(name.endswith(base_name) for name in self)\r\n\r\n        return any(isinstance(opt, str) and opt.endswith(\"/\" + item) for opt in self)\r\n\r\ndef get_sampler_name_list(nameOnly = False) -> list:\r\n    sampler_name_list = []\r\n    for sampler_name in RK_SAMPLER_FOLDER_MAP:\r\n        if get_display_sampler_category() and not nameOnly:\r\n            folder_name = RK_SAMPLER_FOLDER_MAP[sampler_name]\r\n            full_sampler_name = 
f\"{folder_name}/{sampler_name}\"\r\n        else:\r\n            full_sampler_name = sampler_name\r\n        if full_sampler_name[0] == \"/\":\r\n            full_sampler_name = full_sampler_name[1:]\r\n        sampler_name_list.append(full_sampler_name)\r\n    return DualFormatList(sampler_name_list)\r\n\r\ndef get_default_sampler_name(nameOnly = False) -> str:\r\n    default_sampler_name = \"res_2m\"\r\n    #find the key associated with the default value\r\n    for sampler_name in RK_SAMPLER_FOLDER_MAP:\r\n        if sampler_name == default_sampler_name:\r\n            if get_display_sampler_category() and not nameOnly:\r\n                folder_name = RK_SAMPLER_FOLDER_MAP[sampler_name]\r\n                return f\"{folder_name}/{default_sampler_name}\"\r\n            else:\r\n                return default_sampler_name\r\n    return default_sampler_name\r\n\r\ndef get_implicit_sampler_name_list(nameOnly = False) -> list:\r\n    implicit_sampler_name_list = []\r\n    for sampler_name in IRK_SAMPLER_FOLDER_MAP:\r\n        if get_display_sampler_category() and not nameOnly:\r\n            folder_name = IRK_SAMPLER_FOLDER_MAP[sampler_name]\r\n            full_sampler_name = f\"{folder_name}/{sampler_name}\"\r\n        else:\r\n            full_sampler_name = sampler_name\r\n        if full_sampler_name[0] == \"/\":\r\n            full_sampler_name = full_sampler_name[1:]\r\n        implicit_sampler_name_list.append(full_sampler_name)\r\n    return DualFormatList(implicit_sampler_name_list)\r\n\r\ndef get_default_implicit_sampler_name(nameOnly = False) -> str:\r\n    default_sampler_value = \"explicit_diagonal\"\r\n    #find the key associated with the default value\r\n    for sampler_name in IRK_SAMPLER_FOLDER_MAP:\r\n        if sampler_name == default_sampler_value:\r\n            if get_display_sampler_category() and not nameOnly:\r\n                folder_name = IRK_SAMPLER_FOLDER_MAP[sampler_name]\r\n                return f\"{folder_name}/{default_sampler_value}\"\r\n            else:\r\n                return default_sampler_value\r\n    return default_sampler_value\r\n\r\ndef get_full_sampler_name(sampler_name_in: str) -> str:\r\n    if \"/\" in sampler_name_in and sampler_name_in[0] != \"/\":\r\n        return sampler_name_in\r\n    for sampler_name in RK_SAMPLER_FOLDER_MAP:\r\n        if sampler_name == sampler_name_in:\r\n            folder_name = RK_SAMPLER_FOLDER_MAP[sampler_name]\r\n            return f\"{folder_name}/{sampler_name}\"\r\n    return sampler_name\r\n\r\ndef process_sampler_name(sampler_name_in):\r\n    processed_name = sampler_name_in.split(\"/\")[-1] if \"/\" in sampler_name_in else sampler_name_in\r\n    full_sampler_name = get_full_sampler_name(sampler_name_in)\r\n\r\n    if sampler_name_in.startswith(\"fully_implicit\") or sampler_name_in.startswith(\"diag_implicit\"):\r\n        implicit_sampler_name = processed_name\r\n        sampler_name = \"euler\"\r\n    else:\r\n        sampler_name = processed_name\r\n        implicit_sampler_name = \"use_explicit\"\r\n\r\n    return sampler_name, implicit_sampler_name\r\n\r\n\r\n\r\n\r\nalpha_crouzeix = (2/(3**0.5)) * math.cos(math.pi / 18)\r\ngamma_crouzeix = (1/(3**0.5)) * math.cos(math.pi / 18) + 1/2 # Crouzeix & Raviart 1980; A-stable; pg 100 in Solving Ordinary Differential Equations II\r\ndelta_crouzeix = 1 / (6 * (2 * gamma_crouzeix - 1)**2)       # Crouzeix & Raviart 1980; A-stable; pg 100 in Solving Ordinary Differential Equations II\r\n\r\nrk_coeff = {\r\n    \"gauss-legendre_diag_8s\": ( # 
https://github.com/SciML/IRKGaussLegendre.jl/blob/master/src/IRKCoefficients.jl Antoñana, M., Makazaga, J., Murua, Ander. \"Reducing and monitoring round-off error propagation for symplectic implicit Runge-Kutta schemes.\" Numerical Algorithms. 2017.\r\n    [\r\n        [\r\n            0.5,\r\n            0,0,0,0,0,0,0,\r\n        ],\r\n        [\r\n            1.0818949631055814971365081647359309e00,\r\n            0.5,\r\n            0,0,0,0,0,0,\r\n        ],\r\n        [\r\n            9.5995729622205494766003095439844678e-01,\r\n            1.0869589243008327233290709646162480e00,\r\n            0.5,\r\n            0,0,0,0,0,\r\n        ],\r\n        [\r\n            1.0247213458032003748680445816450829e00,\r\n            9.5505887369737431186016905653386876e-01,\r\n            1.0880938387323083134422138713913203e00,\r\n            0.5,\r\n            0,0,0,0,\r\n        ],\r\n        [\r\n            9.8302382676362890697311829123888390e-01,\r\n            1.0287597754747493109782305570410685e00,\r\n            9.5383453518519996588326911440754302e-01,\r\n            1.0883471611098277842507073806008045e00,\r\n            0.5,\r\n            0,0,0,\r\n        ],\r\n        [\r\n            1.0122259141132982060539425317219435e00,\r\n            9.7998287236359129082628958290257329e-01,\r\n            1.0296038730649779374630125982121223e00,\r\n            9.5383453518519996588326911440754302e-01,\r\n            1.0880938387323083134422138713913203e00,\r\n            0.5,\r\n            0,0,\r\n        ],\r\n        [\r\n            9.9125143323080263118822334698608777e-01,\r\n            1.0140743558891669291459735166525994e00,\r\n            9.7998287236359129082628958290257329e-01,\r\n            1.0287597754747493109782305570410685e00,\r\n            9.5505887369737431186016905653386876e-01,\r\n            1.0869589243008327233290709646162480e00,\r\n            0.5,\r\n            0,\r\n        ],\r\n        [\r\n            1.0054828082532158826793409353214951e00,\r\n            9.9125143323080263118822334698608777e-01,\r\n            1.0122259141132982060539425317219435e00,\r\n            9.8302382676362890697311829123888390e-01,\r\n            1.0247213458032003748680445816450829e00,\r\n            9.5995729622205494766003095439844678e-01,\r\n            1.0818949631055814971365081647359309e00,\r\n            0.5,\r\n        ],\r\n    ],\r\n    [\r\n        [\r\n        5.0614268145188129576265677154981094e-02, \r\n        1.1119051722668723527217799721312045e-01,\r\n        1.5685332293894364366898110099330067e-01,\r\n        1.8134189168918099148257522463859781e-01,\r\n        1.8134189168918099148257522463859781e-01,\r\n        1.5685332293894364366898110099330067e-01,\r\n        1.1119051722668723527217799721312045e-01,\r\n        5.0614268145188129576265677154981094e-02,]\r\n    ],\r\n    [\r\n        1.9855071751231884158219565715263505e-02, # 0.019855071751231884158219565715263505\r\n        1.0166676129318663020422303176208480e-01,\r\n        2.3723379504183550709113047540537686e-01,\r\n        4.0828267875217509753026192881990801e-01,\r\n        5.9171732124782490246973807118009203e-01,\r\n        7.6276620495816449290886952459462321e-01,\r\n        8.9833323870681336979577696823791522e-01,\r\n        9.8014492824876811584178043428473653e-01,\r\n    ]\r\n    ),\r\n    \r\n    \r\n    \"gauss-legendre_5s\": (\r\n    [\r\n        [4563950663 / 32115191526, \r\n         (310937500000000 / 2597974476091533 + 45156250000 * (739**0.5) / 8747388808389), \r\n         
(310937500000000 / 2597974476091533 - 45156250000 * (739**0.5) / 8747388808389),\r\n         (5236016175 / 88357462711 + 709703235 * (739**0.5) / 353429850844),\r\n         (5236016175 / 88357462711 - 709703235 * (739**0.5) / 353429850844)],\r\n        \r\n        [(4563950663 / 32115191526 - 38339103 * (739**0.5) / 6250000000),\r\n         (310937500000000 / 2597974476091533 + 9557056475401 * (739**0.5) / 3498955523355600000),\r\n         (310937500000000 / 2597974476091533 - 14074198220719489 * (739**0.5) / 3498955523355600000),\r\n         (5236016175 / 88357462711 + 5601362553163918341 * (739**0.5) / 2208936567775000000000),\r\n         (5236016175 / 88357462711 - 5040458465159165409 * (739**0.5) / 2208936567775000000000)],\r\n        \r\n        [(4563950663 / 32115191526 + 38339103 * (739**0.5) / 6250000000),\r\n         (310937500000000 / 2597974476091533 + 14074198220719489 * (739**0.5) / 3498955523355600000),\r\n         (310937500000000 / 2597974476091533 - 9557056475401 * (739**0.5) / 3498955523355600000),\r\n         (5236016175 / 88357462711 + 5040458465159165409 * (739**0.5) / 2208936567775000000000),\r\n         (5236016175 / 88357462711 - 5601362553163918341 * (739**0.5) / 2208936567775000000000)],\r\n        \r\n        [(4563950663 / 32115191526 - 38209 * (739**0.5) / 7938810),\r\n         (310937500000000 / 2597974476091533 - 359369071093750 * (739**0.5) / 70145310854471391),\r\n         (310937500000000 / 2597974476091533 - 323282178906250 * (739**0.5) / 70145310854471391),\r\n         (5236016175 / 88357462711 - 470139 * (739**0.5) / 1413719403376),\r\n         (5236016175 / 88357462711 - 44986764863 * (739**0.5) / 21205791050640)],\r\n        \r\n        [(4563950663 / 32115191526 + 38209 * (739**0.5) / 7938810),\r\n         (310937500000000 / 2597974476091533 + 359369071093750 * (739**0.5) / 70145310854471391),\r\n         (310937500000000 / 2597974476091533 + 323282178906250 * (739**0.5) / 70145310854471391),\r\n         (5236016175 / 88357462711 + 44986764863 * (739**0.5) / 21205791050640),\r\n         (5236016175 / 88357462711 + 470139 * (739**0.5) / 1413719403376)],\r\n    ],\r\n    [\r\n        \r\n        [\r\n        4563950663 / 16057595763,\r\n        621875000000000 / 2597974476091533,\r\n        621875000000000 / 2597974476091533,\r\n        10472032350 / 88357462711,\r\n        10472032350 / 88357462711]\r\n    ],\r\n    [\r\n        1 / 2,\r\n        1 / 2 - 99 * (739**0.5) / 10000,  # smallest  # 0.06941899716778028758987101075583196       \r\n        1 / 2 + 99 * (739**0.5) / 10000,  # largest\r\n        1 / 2 - (739**0.5) / 60,\r\n        1 / 2 + (739**0.5) / 60\r\n    ]\r\n    ),\r\n    \r\n    \"gauss-legendre_5s_ascending\": (\r\n    [\r\n        [(4563950663 / 32115191526 - 38339103 * (739**0.5) / 6250000000),\r\n         (310937500000000 / 2597974476091533 + 9557056475401 * (739**0.5) / 3498955523355600000),\r\n         (310937500000000 / 2597974476091533 - 14074198220719489 * (739**0.5) / 3498955523355600000),\r\n         (5236016175 / 88357462711 + 5601362553163918341 * (739**0.5) / 2208936567775000000000),\r\n         (5236016175 / 88357462711 - 5040458465159165409 * (739**0.5) / 2208936567775000000000)],\r\n        \r\n        \r\n        [(4563950663 / 32115191526 - 38209 * (739**0.5) / 7938810),\r\n         (310937500000000 / 2597974476091533 - 359369071093750 * (739**0.5) / 70145310854471391),\r\n         (310937500000000 / 2597974476091533 - 323282178906250 * (739**0.5) / 70145310854471391),\r\n         (5236016175 / 88357462711 - 470139 
* (739**0.5) / 1413719403376),\r\n         (5236016175 / 88357462711 - 44986764863 * (739**0.5) / 21205791050640)],\r\n        \r\n        [4563950663 / 32115191526, \r\n         (310937500000000 / 2597974476091533 + 45156250000 * (739**0.5) / 8747388808389), \r\n         (310937500000000 / 2597974476091533 - 45156250000 * (739**0.5) / 8747388808389),\r\n         (5236016175 / 88357462711 + 709703235 * (739**0.5) / 353429850844),\r\n         (5236016175 / 88357462711 - 709703235 * (739**0.5) / 353429850844)],\r\n\r\n        \r\n        [(4563950663 / 32115191526 + 38209 * (739**0.5) / 7938810),\r\n         (310937500000000 / 2597974476091533 + 359369071093750 * (739**0.5) / 70145310854471391),\r\n         (310937500000000 / 2597974476091533 + 323282178906250 * (739**0.5) / 70145310854471391),\r\n         (5236016175 / 88357462711 + 44986764863 * (739**0.5) / 21205791050640),\r\n         (5236016175 / 88357462711 + 470139 * (739**0.5) / 1413719403376)],\r\n        \r\n        \r\n        [(4563950663 / 32115191526 + 38339103 * (739**0.5) / 6250000000),\r\n         (310937500000000 / 2597974476091533 + 14074198220719489 * (739**0.5) / 3498955523355600000),\r\n         (310937500000000 / 2597974476091533 - 9557056475401 * (739**0.5) / 3498955523355600000),\r\n         (5236016175 / 88357462711 + 5040458465159165409 * (739**0.5) / 2208936567775000000000),\r\n         (5236016175 / 88357462711 - 5601362553163918341 * (739**0.5) / 2208936567775000000000)],\r\n    ],\r\n    [\r\n        \r\n        [621875000000000 / 2597974476091533,\r\n        10472032350 / 88357462711,\r\n\r\n        4563950663 / 16057595763,\r\n        \r\n        10472032350 / 88357462711,\r\n        621875000000000 / 2597974476091533,]\r\n    ],\r\n    [\r\n        1 / 2 - 99 * (739**0.5) / 10000,  # smallest  # 0.06941899716778028758987101075583196       \r\n        1 / 2 - (739**0.5) / 60,\r\n        1 / 2,\r\n\r\n\r\n        1 / 2 + (739**0.5) / 60,\r\n        \r\n        1 / 2 + 99 * (739**0.5) / 10000,  # largest\r\n    ]\r\n    ),\r\n    \"gauss-legendre_4s_alt\": ( # https://ijstre.com/Publish/072016/371428231.pdf Four Point Gauss Quadrature Runge – Kuta Method Of Order 8 For Ordinary Differential Equations\r\n        [\r\n            [1633/18780 - 71*206**0.5/96717000,\r\n            134689/939000 - 927*206**0.5/78250,\r\n            171511/939000 - 927*206**0.5/78250,\r\n            1633/18780 - 121979*206**0.5/19343400,],            \r\n            [7623/78250 - 1629507*206**0.5/257912000,\r\n            347013/21284000,\r\n            -118701/4256800,\r\n            7623/78250 + 1629507*206**0.5/257912000,],          \r\n            [8978/117375 + 1629507*206**0.5/257912000,\r\n            4520423/12770400,\r\n            10410661/63852000,\r\n            8978/117375 + 1629507*206**0.5/257912000,],            \r\n            [1633/18780 + 121979*206**0.5/19343400,\r\n            134689/939000 + 927*206**0.5/78250,\r\n            171511/939000 + 927*206**0.5/78250,\r\n            1633/18780 + 71*206**0.5/96717000,],       \r\n        ],\r\n        [    \r\n            [1633/9390,\r\n            1531/4695,\r\n            1531/4695,\r\n            1633/9390,]                                        \r\n        ],\r\n        [\r\n            1/2 - 3*206**0.5 / 100,  # 0.06941899716778028758987101075583196                          \r\n            33/100,                                         \r\n            67/100,                                        \r\n            1/2 + 3*206**0.5 / 100,                          
               \r\n        ]\r\n    ),\r\n    \"gauss-legendre_4s\": (\r\n        [\r\n            [1/4, 1/4 - 15**0.5 / 6, 1/4 + 15**0.5 / 6, 1/4],            \r\n            [1/4 + 15**0.5 / 6, 1/4, 1/4 - 15**0.5 / 6, 1/4],          \r\n            [1/4, 1/4 + 15**0.5 / 6, 1/4, 1/4 - 15**0.5 / 6],            \r\n            [1/4 - 15**0.5 / 6, 1/4, 1/4 + 15**0.5 / 6, 1/4],       \r\n        ],\r\n        [    \r\n            [\r\n            1/8, \r\n            3/8, \r\n            3/8, \r\n            1/8,]                                        \r\n        ],\r\n        [\r\n            1/2 - 15**0.5 / 10,      # 0.11270166537925831148207346002176004                               \r\n            1/2 + 15**0.5 / 10,                                         \r\n            1/2 + 15**0.5 / 10,                                        \r\n            1/2 - 15**0.5 / 10                                         \r\n        ]\r\n    ),\r\n    \"gauss-legendre_4s_alternating_a\": (\r\n        [\r\n            [1/4, 1/4 - 15**0.5 / 6, 1/4 + 15**0.5 / 6, 1/4],            \r\n            [1/4 + 15**0.5 / 6, 1/4, 1/4 - 15**0.5 / 6, 1/4],   \r\n            [1/4 - 15**0.5 / 6, 1/4, 1/4 + 15**0.5 / 6, 1/4],       \r\n            [1/4, 1/4 + 15**0.5 / 6, 1/4, 1/4 - 15**0.5 / 6],            \r\n        ],\r\n        [    \r\n            [\r\n            1/8, \r\n            3/8, \r\n            1/8, \r\n            3/8,]                                        \r\n        ],\r\n        [\r\n            1/2 - 15**0.5 / 10,      # 0.11270166537925831148207346002176004                               \r\n            1/2 + 15**0.5 / 10,     \r\n            1/2 - 15**0.5 / 10,                                         \r\n            1/2 + 15**0.5 / 10,                                        \r\n        ]\r\n    ),\r\n    \"gauss-legendre_4s_ascending_a\": (\r\n        [\r\n            [1/4 - 15**0.5 / 6,   1/4,               1/4 + 15**0.5 / 6, 1/4],              \r\n            [1/4,                 1/4 - 15**0.5 / 6, 1/4 + 15**0.5 / 6, 1/4],            \r\n            [1/4,                 1/4 + 15**0.5 / 6, 1/4,               1/4 - 15**0.5 / 6],    \r\n            [1/4 + 15**0.5 / 6,   1/4,               1/4 - 15**0.5 / 6, 1/4],          \r\n\r\n        ],\r\n        [\r\n            [\r\n            1/8, \r\n            3/8, \r\n            1/8,\r\n            3/8,]                                        \r\n        ],\r\n        [\r\n            1/2 - 15**0.5 / 10,                                         \r\n            1/2 - 15**0.5 / 10,                                     \r\n            1/2 + 15**0.5 / 10,                                         \r\n            1/2 + 15**0.5 / 10,                                        \r\n        ]\r\n    ),\r\n\r\n    \"gauss-legendre_3s\": ( # Kunzmann-Butcher, IRK, order 6 https://www.math.umd.edu/~mariakc/SymplecticMethods.pdf\r\n        [\r\n            [5/36, 2/9 - 15**0.5 / 15, 5/36 - 15**0.5 / 30],\r\n            [5/36 + 15**0.5 / 24, 2/9, 5/36 - 15**0.5 / 24],\r\n            [5/36 + 15**0.5 / 30, 2/9 + 15**0.5 / 15, 5/36],\r\n        ],\r\n        [\r\n            [5/18, 4/9, 5/18]\r\n        ],\r\n        [1/2 - 15**0.5 / 10, 1/2, 1/2 + 15**0.5 / 10]   # 0.11270166537925831148207346002176004\r\n    ),\r\n    \"gauss-legendre_2s\": ( # Hammer-Hollingsworth, IRK, order 4 https://www.math.umd.edu/~mariakc/SymplecticMethods.pdf\r\n        [\r\n            [1/4, 1/4 - 3**0.5 / 6],\r\n            [1/4 + 3**0.5 / 6, 1/4],\r\n        ],\r\n        [\r\n            [1/2, 
1/2],\r\n        ],\r\n        [1/2 - 3**0.5 / 6, 1/2 + 3**0.5 / 6] # 0.21132486540518711774542560974902127    # 1/2 - (1/2)*(1/3**0.5)       1/2 + (1/2)*(1/3**0.5)\r\n    ),\r\n\r\n    \"radau_iia_4s\": (\r\n        [    \r\n            [],\r\n            [],\r\n            [],\r\n            [],\r\n        ],\r\n        [\r\n            [1/4, 1/4, 1/4, 1/4],\r\n        ],\r\n        [(1/11)*(4-6**0.5), (1/11)*(4+6**0.5), 1/2, 1]\r\n    ),\r\n    \r\n    \"radau_iia_11s\": ( # https://github.com/ryanelandt/Radau.jl\r\n        [    \r\n            [0.015280520789530369, -0.0057824996781311875, 0.00438010324638053, -0.0036210375473319026, 0.003092977042211754, -0.0026728314041491816, 0.0023050911672361017, -0.001955651803123845, 0.001593873849612843, -0.0011728625554916522, 0.00046993032567176855],\r\n            [0.03288397668119629, 0.03451351173940448, -0.009285420023734383, 0.00641324617083941, -0.005095455838865143, 0.0042460913690415955, -0.0035876743372353984, 0.003006834900018004, -0.0024326697483255453, 0.0017827773828584467, -0.0007131464180496306],\r\n            [0.029332502147155125, 0.0741624250777296, 0.0511486756872502, -0.012005023334430185, 0.00777794727524923, -0.005944695307870806, 0.004802655736401176, -0.003923600687657003, 0.003127328539609814, -0.0022731432208609507, 0.0009063777304940358],\r\n            [0.03111455337650569, 0.06578995121943092, 0.10929962691877611, 0.06381051663919307, -0.013853591907177828, 0.008557435524870741, -0.0063076358492939275, 0.004913357548166058, -0.0038139969541068734, 0.0027334306074068546, -0.0010839711153145738],\r\n            [0.03005269275666326, 0.07011284530154153, 0.09714692306747527, 0.1353916024839275, 0.07147107644479529, -0.014710238851905252, 0.008733191499420551, -0.00619941303527863, 0.004591640852897801, -0.003213330884490774, 0.001262857250740274],\r\n            [0.030728073929609766, 0.06751925856657341, 0.10334060375222286, 0.12083525997663601, 0.1503267876654705, 0.07350931976920085, -0.014512880052768446, 0.008296645645701008, -0.0056128275038367864, 0.003766229774466616, -0.001457705807615146],\r\n            [0.030292022376401242, 0.06914472100762357, 0.09972096441656238, 0.12801064060853223, 0.13493180383303127, 0.15289670039157693, 0.06975993047996924, -0.013274545709987746, 0.007258767272883859, -0.0044843888202694155, 0.0016878458203415244],\r\n            [0.03056654381836576, 0.06813851028407998, 0.10188107030389015, 0.12403361149690655, 0.14211431622263265, 0.13829395377418516, 0.14289135336320447, 0.06052636121446275, -0.011077739682117822, 0.005598667203856668, -0.0019877269625674446],\r\n            [0.030406629901865028, 0.06871880785022819, 0.10066095698900927, 0.12619527453091425, 0.13848875677027936, 0.14450773783254642, 0.13065188915037962, 0.1211140113707743, 0.046555483263607714, -0.008026200095719123, 0.002437640226261747],\r\n            [0.030484119381553945, 0.06843924691254653, 0.10124184869598654, 0.1251873187759311, 0.14011843430039864, 0.14190386755377057, 0.13500342651951197, 0.11262869537051934, 0.08930604389562254, 0.028969664972192485, -0.0033116985395201413],\r\n            [0.03046254890606557, 0.06851684106660112, 0.10108155427001221, 0.1254626888485642, 0.13968066655169153, 0.14258278197050367, 0.1339335430948421, 0.11443306192448831, 0.08565880960332992, 0.04992304095398403, 0.008264462809917356],\r\n        ],\r\n        [\r\n            [0.03046254890606557, 0.06851684106660112, 0.10108155427001221, 0.1254626888485642, 0.13968066655169153, 0.14258278197050367, 
0.1339335430948421, 0.11443306192448831, 0.08565880960332992, 0.04992304095398403, 0.008264462809917356],\r\n        ],\r\n        [0.011917613432415597, 0.061732071877148124, 0.14711144964307024, 0.26115967600845624, 0.39463984688578685, 0.5367387657156606, 0.6759444616766651, 0.8009789210368988, 0.9017109877901468, 0.9699709678385136, 1.0]\r\n    ),\r\n    \r\n    \"radau_iia_9s\": ( # https://github.com/ryanelandt/Radau.jl\r\n        [    \r\n            [0.022788378793458776, -0.008589639752938945, 0.0064510291769951465, -0.00525752869975012, 0.004388833809361376, -0.0036512155536904674, 0.0029404882137526148, -0.002149274163882554, 0.0008588433240576261],\r\n            [0.04890795244749932, 0.05070205048082808, -0.013523807196021316, 0.009209373774305071, -0.0071557133175369604, 0.005747246699432309, -0.004542582976394536, 0.003288161681791406, -0.0013090736941094112],\r\n            [0.04374276009157137, 0.10830189290274023, 0.07291956593742897, -0.016879877210016055, 0.010704551844802781, -0.007901946479238777, 0.005991406942179993, -0.0042480244399873135, 0.0016781498061495626],\r\n            [0.04624923745394712, 0.09656073072680009, 0.1542987697900386, 0.0867193693031384, -0.018451639643617873, 0.011036658729835513, -0.007673280940281649, 0.005228224999889903, -0.00203590583647778],\r\n            [0.044834436586910234, 0.10230684968594175, 0.13821763419236816, 0.18126393468214014, 0.09043360059943564, -0.018085063366782478, 0.010193387903855565, -0.006405265418866323, 0.0024271699384239612],\r\n            [0.045658755719323395, 0.09914547048938806, 0.14574704049699233, 0.16364828123387398, 0.18594458734451902, 0.08361326023153276, -0.015809936146309538, 0.00813825269404473, -0.002910469207795258],\r\n            [0.045200600187797244, 0.10085370671832047, 0.1419422367945749, 0.17118947183876332, 0.1697833861700019, 0.16776829117327952, 0.06707903432249304, -0.011792230536025322, 0.0036092462886493657],\r\n            [0.045416516657427734, 0.10006040244594375, 0.143652840987038, 0.16801908098069296, 0.17556076841841367, 0.15588627045003361, 0.12889391351650395, 0.04281082602522101, -0.004934574771244536],\r\n            [0.04535725246164146, 0.10027664901227598, 0.1431933481786156, 0.16884698348796479, 0.1741365013864833, 0.158421887835219, 0.12359468910229653, 0.0738270095231577, 0.012345679012345678],\r\n        ],\r\n        [\r\n            [0.04535725246164146, 0.10027664901227598, 0.1431933481786156, 0.16884698348796479, 0.1741365013864833, 0.158421887835219, 0.12359468910229653, 0.0738270095231577, 0.012345679012345678],\r\n        ],\r\n        [0.01777991514736345, 0.09132360789979396, 0.21430847939563075, 0.37193216458327233, 0.5451866848034267, 0.7131752428555694, 0.8556337429578544, 0.9553660447100302, 1.0]\r\n    ),\r\n    \r\n    \"radau_iia_7s\": ( # https://github.com/ryanelandt/Radau.jl\r\n        [    \r\n            [0.03754626499392133, -0.0140393345564604, 0.0103527896007423, -0.008158322540275011, 0.006388413879534685, -0.004602326779148656, 0.0018289425614706437],\r\n            [0.08014759651561897, 0.08106206398589154, -0.021237992120711036, 0.014000291238817119, -0.010234185730090163, 0.0071534651513645905, -0.0028126393724067235],\r\n            [0.0720638469418819, 0.17106835498388662, 0.10961456404007211, -0.024619871728984055, 0.014760377043950817, -0.009575259396791401, 0.0036726783971383057],\r\n            [0.07570512581982441, 0.15409015514217114, 0.2271077366732024, 0.11747818703702478, -0.023810827153044174, 0.012709985533661206, 
-0.004608844281289633],\r\n            [0.07391234216319184, 0.16135560761594242, 0.2068672415521042, 0.23700711534269422, 0.10308679353381345, -0.018854139152580447, 0.0058589009748887914],\r\n            [0.07470556205979623, 0.1583072238724687, 0.21415342326720002, 0.21987784703186003, 0.19875212168063527, 0.06926550160550914, -0.00811600819772829],\r\n            [0.07449423555601031, 0.15910211573365074, 0.21235188950297781, 0.22355491450728324, 0.19047493682211558, 0.1196137446126562, 0.02040816326530612],\r\n        ],\r\n        [\r\n            [0.07449423555601031, 0.15910211573365074, 0.21235188950297781, 0.22355491450728324, 0.19047493682211558, 0.1196137446126562, 0.02040816326530612],\r\n        ],\r\n        [0.029316427159784893, 0.1480785996684843, 0.3369846902811543, 0.5586715187715501, 0.7692338620300545, 0.9269456713197411, 1.0]\r\n    ),\r\n    \r\n    \"radau_iia_5s\": ( # https://github.com/ryanelandt/Radau.jl\r\n        [    \r\n            [0.07299886431790333, -0.02673533110794557, 0.018676929763984353, -0.01287910609330644, 0.005042839233882015],\r\n            [0.15377523147918246, 0.14621486784749352, -0.03644456890512809, 0.02123306311930472, -0.007935579902728777],\r\n            [0.14006304568480987, 0.29896712949128346, 0.16758507013524895, -0.03396910168661774, 0.010944288744192253],\r\n            [0.14489430810953477, 0.2765000687601592, 0.32579792291042103, 0.12875675325490976, -0.015708917378805327],\r\n            [0.14371356079122594, 0.28135601514946207, 0.31182652297574126, 0.22310390108357075, 0.04],\r\n        ],\r\n        [\r\n            [0.14371356079122594, 0.28135601514946207, 0.31182652297574126, 0.22310390108357075, 0.04],\r\n        ],\r\n        [0.05710419611451768, 0.2768430136381238, 0.5835904323689168, 0.8602401356562195, 1.0]\r\n    ),\r\n    \"radau_iia_3s\": ( \r\n        [    \r\n            [11/45 - 7*6**0.5 / 360, 37/225 - 169*6**0.5 / 1800, -2/225 + 6**0.5 / 75],\r\n            [37/225 + 169*6**0.5 / 1800, 11/45 + 7*6**0.5 / 360, -2/225 - 6**0.5 / 75],\r\n            [4/9 - 6**0.5 / 36, 4/9 + 6**0.5 / 36, 1/9],\r\n        ],\r\n        [\r\n            [4/9 - 6**0.5 / 36, 4/9 + 6**0.5 / 36, 1/9],\r\n        ],\r\n        [2/5 - 6**0.5 / 10, 2/5 + 6**0.5 / 10, 1.]\r\n    ),\r\n    \"radau_iia_3s_alt\": ( # https://www.unige.ch/~hairer/preprints/coimbra.pdf (page 7) Ehle [Eh69] and Axelsson [Ax69]\r\n        [    \r\n            [(88 - 7*6**0.5) / 360, (296 - 169*6**0.5) / 1800, (-2 + 3 * 6**0.5) / 225],\r\n            [(296 + 169*6**0.5) / 1800, (88 + 7*6**0.5) / 360, (-2 - 3*6**0.5) / 225],\r\n            [(16 - 6**0.5) / 36, (16 + 6**0.5) / 36, 1/9],\r\n        ],\r\n        [\r\n            [\r\n            (16 - 6**0.5) / 36, \r\n            (16 + 6**0.5) / 36, \r\n            1/9],\r\n        ],\r\n        [\r\n        (4 - 6**0.5) / 10, \r\n        (4 + 6**0.5) / 10, \r\n        1.]\r\n    ),\r\n    \"radau_iia_2s\": (\r\n        [    \r\n            [5/12, -1/12],\r\n            [3/4, 1/4],\r\n        ],\r\n        [\r\n            [3/4, 1/4],\r\n        ],\r\n        [1/3, 1]\r\n    ),\r\n    \"radau_ia_3s\": (\r\n        [    \r\n            [1/9, (-1-6**0.5)/18, (-1+6**0.5)/18],\r\n            [1/9, 11/45 + 7*6**0.5/360, 11/45-43*6**0.5/360],\r\n            [1/9, 11/45-43*6**0.5/360, 11/45 + 7*6**0.5/360],\r\n        ],\r\n        [\r\n            [1/9, 4/9 + 6**0.5/36, 4/9 - 6**0.5/36],\r\n        ],\r\n        [0, 3/5-6**0.5/10, 3/5+6**0.5/10]\r\n    ),\r\n    \"radau_ia_2s\": (\r\n        [    \r\n            
[1/4, -1/4],\r\n            [1/4, 5/12],\r\n        ],\r\n        [\r\n            [1/4, 3/4],\r\n        ],\r\n        [0, 2/3]\r\n    ),\r\n    \"lobatto_iiia_4s\": ( #6th order\r\n        [    \r\n            [0, 0, 0, 0],\r\n            [(11+5**0.5)/120,   (25-5**0.5)/120, (25-13*5**0.5)/120, (-1+5**0.5)/120],\r\n            [(11-5**0.5)/120,   (25+13*5**0.5)/120, (25+5**0.5)/120, (-1-5**0.5)/120],\r\n            [1/12, 5/12, 5/12, 1/12],\r\n        ],\r\n        [\r\n            [1/12, 5/12, 5/12, 1/12],\r\n        ],\r\n        [0, (5-5**0.5)/10, (5+5**0.5)/10, 1]\r\n    ),\r\n    \"lobatto_iiib_4s\": ( #6th order\r\n        [    \r\n            [1/12, (-1-5**0.5)/24, (-1+5**0.5)/24, 0],\r\n            [1/12,   (25+5**0.5)/120, (25-13*5**0.5)/120, 0],\r\n            [1/12,   (25+13*5**0.5)/120, (25-5**0.5)/120, 0],\r\n            [1/12, (11-5**0.5)/24, (11+5**0.5)/24, 0],\r\n        ],\r\n        [\r\n            [1/12, 5/12, 5/12, 1/12],\r\n        ],\r\n        [0, (5-5**0.5)/10, (5+5**0.5)/10, 1]\r\n    ),\r\n    \"lobatto_iiic_4s\": ( #6th order\r\n        [    \r\n            [1/12, (-5**0.5)/12, (5**0.5)/12, -1/12],\r\n            [1/12,   1/4, (10-7*5**0.5)/60, (5**0.5)/60],\r\n            [1/12,   (10+7*5**0.5)/60, 1/4, (-5**0.5)/60],\r\n            [1/12, 5/12, 5/12, 1/12],\r\n        ],\r\n        [\r\n            [1/12, 5/12, 5/12, 1/12],\r\n        ],\r\n        [0, (5-5**0.5)/10, (5+5**0.5)/10, 1]\r\n    ),\r\n    \"lobatto_iiia_3s\": (\r\n        [    \r\n            [0, 0, 0],\r\n            [5/24, 1/3, -1/24],\r\n            [1/6, 2/3, 1/6],\r\n        ],\r\n        [\r\n            [1/6, 2/3, 1/6],\r\n        ],\r\n        [0, 1/2, 1]\r\n    ),\r\n    \"lobatto_iiia_2s\": (\r\n        [    \r\n            [0, 0],\r\n            [1/2, 1/2],\r\n        ],\r\n        [\r\n            [1/2, 1/2],\r\n        ],\r\n        [0, 1]\r\n    ),\r\n    \r\n    \r\n    \r\n    \"lobatto_iiib_3s\": (\r\n        [    \r\n            [1/6, -1/6, 0],\r\n            [1/6, 1/3, 0],\r\n            [1/6, 5/6, 0],\r\n        ],\r\n        [\r\n            [1/6, 2/3, 1/6],\r\n        ],\r\n        [0, 1/2, 1]\r\n    ),\r\n    \"lobatto_iiib_2s\": (\r\n        [    \r\n            [1/2, 0],\r\n            [1/2, 0],\r\n        ],\r\n        [\r\n            [1/2, 1/2],\r\n        ],\r\n        [0, 1]\r\n    ),\r\n\r\n    \"lobatto_iiic_3s\": (\r\n        [    \r\n            [1/6, -1/3, 1/6],\r\n            [1/6, 5/12, -1/12],\r\n            [1/6, 2/3, 1/6],\r\n        ],\r\n        [\r\n            [1/6, 2/3, 1/6],\r\n        ],\r\n        [0, 1/2, 1]\r\n    ),\r\n    \"lobatto_iiic_2s\": (\r\n        [    \r\n            [1/2, -1/2],\r\n            [1/2, 1/2],\r\n        ],\r\n        [\r\n            [1/2, 1/2],\r\n        ],\r\n        [0, 1]\r\n    ),\r\n    \r\n\r\n    \"lobatto_iiic_star_3s\": (\r\n        [    \r\n            [0, 0, 0],\r\n            [1/4, 1/4, 0],\r\n            [0, 1, 0],\r\n        ],\r\n        [\r\n            [1/6, 2/3, 1/6],\r\n        ],\r\n        [0, 1/2, 1]\r\n    ),\r\n    \"lobatto_iiic_star_2s\": (\r\n        [    \r\n            [0, 0],\r\n            [1, 0],\r\n        ],\r\n        [\r\n            [1/2, 1/2],\r\n        ],\r\n        [0, 1]\r\n    ),\r\n    \r\n    \"lobatto_iiid_3s\": (\r\n        [    \r\n            [1/6, 0, -1/6],\r\n            [1/12, 5/12, 0],\r\n            [1/2, 1/3, 1/6],\r\n        ],\r\n        [\r\n            [1/6, 2/3, 1/6],\r\n        ],\r\n        [0, 1/2, 1]\r\n    ),\r\n    \"lobatto_iiid_2s\": (\r\n        [    
\r\n            [1/2, 1/2],\r\n            [-1/2, 1/2],\r\n        ],\r\n        [\r\n            [1/2, 1/2],\r\n        ],\r\n        [0, 1]\r\n    ),\r\n\r\n    \"kraaijevanger_spijker_2s\": ( #overshoots step\r\n        [    \r\n            [1/2, 0],\r\n            [-1/2, 2],\r\n        ],\r\n        [\r\n            [-1/2, 3/2],\r\n        ],\r\n        [1/2, 3/2]\r\n    ),\r\n    \r\n    \"qin_zhang_2s\": (\r\n        [    \r\n            [1/4, 0],\r\n            [1/2, 1/4],\r\n        ],\r\n        [\r\n            [1/2, 1/2],\r\n        ],\r\n        [1/4, 3/4]\r\n    ),\r\n\r\n    \"pareschi_russo_2s\": (\r\n        [    \r\n            [(1-2**0.5/2), 0],\r\n            [1-2*(1-2**0.5/2), (1-2**0.5/2)],\r\n        ],\r\n        [\r\n            [1/2, 1/2],\r\n        ],\r\n        [(1-2**0.5/2), 1-(1-2**0.5/2)]\r\n    ),\r\n\r\n    \"pareschi_russo_alt_2s\": (\r\n        [    \r\n            [(1-2**0.5/2), 0],\r\n            [1-(1-2**0.5/2), (1-2**0.5/2)],\r\n        ],\r\n        [\r\n            [1-(1-2**0.5/2), (1-2**0.5/2)],\r\n        ],\r\n        [(1-2**0.5/2), 1]\r\n    ),\r\n\r\n    \"crouzeix_3s_alt\": ( # Crouzeix & Raviart 1980; A-stable; pg 100 in Solving Ordinary Differential Equations II\r\n        [\r\n            [gamma_crouzeix, 0, 0],\r\n            [1/2 - gamma_crouzeix, gamma_crouzeix, 0],\r\n            [2*gamma_crouzeix, 1-4*gamma_crouzeix, gamma_crouzeix],\r\n        ],\r\n        [\r\n            [delta_crouzeix, 1-2*delta_crouzeix, delta_crouzeix],\r\n        ],\r\n        [gamma_crouzeix,   1/2,   1-gamma_crouzeix],\r\n    ),\r\n    \r\n    \"crouzeix_3s\": (\r\n        [\r\n            [(1+alpha_crouzeix)/2, 0, 0],\r\n            [-alpha_crouzeix/2, (1+alpha_crouzeix)/2, 0],\r\n            [1+alpha_crouzeix, -(1+2*alpha_crouzeix), (1+alpha_crouzeix)/2],\r\n        ],\r\n        [\r\n            [1/(6*alpha_crouzeix**2), 1-(1/(3*alpha_crouzeix**2)), 1/(6*alpha_crouzeix**2)],\r\n        ],\r\n        [(1+alpha_crouzeix)/2,   1/2,   (1-alpha_crouzeix)/2],\r\n    ),\r\n    \r\n    \"crouzeix_2s\": (\r\n        [\r\n            [1/2 + 3**0.5 / 6, 0],\r\n            [-(3**0.5 / 3), 1/2 + 3**0.5 / 6]\r\n        ],\r\n        [\r\n            [1/2, 1/2],\r\n        ],\r\n        [1/2 + 3**0.5 / 6, 1/2 - 3**0.5 / 6],\r\n    ),\r\n    \"verner_13s\": ( #verner9. 
some values are missing, need to revise\r\n        [\r\n            [],\r\n        ],\r\n        [\r\n            [],\r\n        ],\r\n        [\r\n            0.03462,\r\n            0.09702435063878045,\r\n            0.14553652595817068,\r\n            0.561,\r\n            0.22900791159048503,\r\n            0.544992088409515,\r\n            0.645,\r\n            0.48375,\r\n            0.06757,\r\n            0.25,\r\n            0.6590650618730999,\r\n            0.8206,\r\n            0.9012,\r\n        ]\r\n    ),\r\n    \"verner_robust_16s\": (\r\n        [\r\n            [],\r\n            [0.04],\r\n            [-0.01988527319182291, 0.11637263332969652],\r\n            [0.0361827600517026, 0, 0.10854828015510781],\r\n            [2.272114264290177, 0, -8.526886447976398, 6.830772183686221],\r\n            [0.050943855353893744, 0, 0, 0.1755865049809071, 0.007022961270757467],\r\n            [0.1424783668683285, 0, 0, -0.3541799434668684, 0.07595315450295101, 0.6765157656337123],\r\n            [0.07111111111111111, 0, 0, 0, 0, 0.3279909287605898, 0.24089796012829906],\r\n            [0.07125, 0, 0, 0, 0, 0.32688424515752457, 0.11561575484247544, -0.03375],\r\n            [0.0482267732246581, 0, 0, 0, 0, 0.039485599804954, 0.10588511619346581, -0.021520063204743093, -0.10453742601833482],\r\n            [-0.026091134357549235, 0, 0, 0, 0, 0.03333333333333333, -0.1652504006638105, 0.03434664118368617, 0.1595758283215209, 0.21408573218281934],\r\n            [-0.03628423396255658, 0, 0, 0, 0, -1.0961675974272087, 0.1826035504321331, 0.07082254444170683, -0.02313647018482431, 0.2711204726320933, 1.3081337494229808],\r\n            [-0.5074635056416975, 0, 0, 0, 0, -6.631342198657237, -0.2527480100908801, -0.49526123800360955, 0.2932525545253887, 1.440108693768281, 6.237934498647056, 0.7270192054526988],\r\n            [0.6130118256955932, 0, 0, 0, 0, 9.088803891640463, -0.40737881562934486, 1.7907333894903747, 0.714927166761755, -1.4385808578417227, -8.26332931206474, -1.537570570808865, 0.34538328275648716],\r\n            [-1.2116979103438739, 0, 0, 0, 0, -19.055818715595954, 1.263060675389875, -6.913916969178458, -0.6764622665094981, 3.367860445026608, 18.00675164312591, 6.83882892679428, -1.0315164519219504, 0.4129106232130623],\r\n            [2.1573890074940536, 0, 0, 0, 0, 23.807122198095804, 0.8862779249216555, 13.139130397598764, -2.604415709287715, -5.193859949783872, -20.412340711541507, -12.300856252505723, 1.5215530950085394],\r\n        ],\r\n        [\r\n            0.014588852784055396, 0, 0, 0, 0, 0, 0, 0.0020241978878893325, 0.21780470845697167,\r\n            0.12748953408543898, 0.2244617745463132, 0.1787254491259903, 0.07594344758096556,\r\n            0.12948458791975614, 0.029477447612619417, 0\r\n        ],\r\n        [\r\n            0, 0.04, 0.09648736013787361, 0.1447310402068104, 0.576, 0.2272326564618766,\r\n            0.5407673435381234, 0.64, 0.48, 0.06754, 0.25, 0.6770920153543243, 0.8115,\r\n            0.906, 1, 1\r\n        ],\r\n    ),\r\n\r\n    \"dormand-prince_13s\": ( #non-monotonic\r\n        [\r\n            [],\r\n            [1/18],\r\n            [1/48, 1/16],\r\n            [1/32, 0, 3/32],\r\n            [5/16, 0, -75/64, 75/64],\r\n            [3/80, 0, 0, 3/16, 3/20],\r\n            [29443841/614563906, 0, 0, 77736538/692538347, -28693883/1125000000, 23124283/1800000000],\r\n            [16016141/946692911, 0, 0, 61564180/158732637, 22789713/633445777, 545815736/2771057229, -180193667/1043307555],\r\n            [39632708/573591083, 
0, 0, -433636366/683701615, -421739975/2616292301, 100302831/723423059, 790204164/839813087, 800635310/3783071287],\r\n            [246121993/1340847787, 0, 0, -37695042795/15268766246, -309121744/1061227803, -12992083/490766935, 6005943493/2108947869, 393006217/1396673457, 123872331/1001029789],\r\n            [-1028468189/846180014, 0, 0, 8478235783/508512852, 1311729495/1432422823, -10304129995/1701304382, -48777925059/3047939560, 15336726248/1032824649, -45442868181/3398467696, 3065993473/597172653],\r\n            [185892177/718116043, 0, 0, -3185094517/667107341, -477755414/1098053517, -703635378/230739211, 5731566787/1027545527, 5232866602/850066563, -4093664535/808688257, 3962137247/1805957418, 65686358/487910083],\r\n            [403863854/491063109, 0, 0, -5068492393/434740067, -411421997/543043805, 652783627/914296604, 11173962825/925320556, -13158990841/6184727034, 3936647629/1978049680, -160528059/685178525, 248638103/1413531060],\r\n        ],\r\n        [\r\n            [14005451/335480064, 0, 0, 0, 0, -59238493/1068277825, 181606767/758867731, 561292985/797845732, -1041891430/1371343529, 760417239/1151165299, 118820643/751138087, -528747749/2220607170, 1/4],\r\n        ],\r\n        [0, 1/18, 1/12, 1/8, 5/16, 3/8, 59/400, 93/200, 5490023248 / 9719169821, 13/20, 1201146811 / 1299019798, 1, 1],\r\n    ),\r\n    \"dormand-prince_6s\": (\r\n        [\r\n            [],\r\n            [1/5],\r\n            [3/40, 9/40],\r\n            [44/45, -56/15, 32/9],\r\n            [19372/6561, -25360/2187, 64448/6561, -212/729],\r\n            [9017/3168, -355/33, 46732/5247, 49/176, -5103/18656],\r\n        ],\r\n        [\r\n            [35/384, 0, 500/1113, 125/192, -2187/6784, 11/84],\r\n        ],\r\n        [0, 1/5, 3/10, 4/5, 8/9, 1],\r\n    ),\r\n    \"bogacki-shampine_7s\": ( #5th order\r\n        [\r\n            [],\r\n            [1/6],\r\n            [2/27, 4/27],\r\n            [183/1372, -162/343, 1053/1372],\r\n            [68/297, -4/11, 42/143, 1960/3861],\r\n            [597/22528, 81/352, 63099/585728, 58653/366080, 4617/20480],\r\n            [174197/959244, -30942/79937, 8152137/19744439, 666106/1039181, -29421/29068, 482048/414219],\r\n        ],\r\n        [\r\n            [587/8064, 0, 4440339/15491840, 24353/124800, 387/44800, 2152/5985, 7267/94080],\r\n        ],\r\n        [0, 1/6, 2/9, 3/7, 2/3, 3/4, 1] \r\n    ),\r\n    \"bogacki-shampine_4s\": ( #5th order\r\n        [\r\n            [],\r\n            [1/2],\r\n            [0, 3/4],\r\n            [2/9, 1/3, 4/9],\r\n        ],\r\n        [\r\n            [2/9, 1/3, 4/9, 0],\r\n        ],\r\n        [0, 1/2, 3/4, 1] \r\n    ),\r\n    \"tsi_7s\": ( #5th order \r\n        [\r\n            [],\r\n            [0.161],\r\n            [-0.008480655492356989, 0.335480655492357],\r\n            [2.8971530571054935, -6.359448489975075, 4.3622954328695815],\r\n            [5.325864828439257, -11.748883564062828, 7.4955393428898365, -0.09249506636175525],\r\n            [5.86145544294642, -12.92096931784711, 8.159367898576159, -0.071584973281401, -0.02826905039406838],\r\n            [0.09646076681806523, 0.01, 0.4798896504144996, 1.379008574103742, -3.290069515436081, 2.324710524099774],\r\n        ],\r\n        [\r\n            [0.09646076681806523, 0.01, 0.4798896504144996, 1.379008574103742, -3.290069515436081, 2.324710524099774, 0.0],\r\n        ],\r\n        [0.0, 0.161, 0.327, 0.9, 0.9800255409045097, 1.0, 1.0],\r\n    ),\r\n    \"rk6_7s\": ( #non-monotonic #5th order\r\n        [\r\n            [],\r\n        
    [1/3],\r\n            [0, 2/3],\r\n            [1/12, 1/3, -1/12],\r\n            [-1/16, 9/8, -3/16, -3/8],\r\n            [0, 9/8, -3/8, -3/4, 1/2],\r\n            [9/44, -9/11, 63/44, 18/11, 0, -16/11],\r\n        ],\r\n        [\r\n            [11/120, 0, 27/40, 27/40, -4/15, -4/15, 11/120],\r\n        ],\r\n        [0, 1/3, 2/3, 1/3, 1/2, 1/2, 1],\r\n    ),\r\n    \"rk5_7s\": ( #5th order\r\n        [\r\n            [],\r\n            [1/5],\r\n            [3/40, 9/40],\r\n            [44/45, -56/15, 32/9],\r\n            [19372/6561, -25360/2187, 64448/6561, 212/729], #flipped 212 sign\r\n            [-9017/3168, -355/33, 46732/5247, 49/176, -5103/18656],\r\n            [35/384, 0, 500/1113, 125/192, -2187/6784, 11/84],\r\n        ],\r\n        [\r\n            [5179/57600, 0, 7571/16695, 393/640, -92097/339200, 187/2100, 1/40],\r\n        ],\r\n        [0, 1/5, 3/10, 4/5, 8/9, 1, 1],\r\n    ),\r\n    \"ssprk4_4s\": ( #non-monotonic #https://link.springer.com/article/10.1007/s41980-022-00731-x\r\n        [ \r\n            [],\r\n            [1/2],\r\n            [1/2, 1/2],\r\n            [1/6, 1/6, 1/6],\r\n        ],\r\n        [\r\n            [1/6, 1/6, 1/6, 1/2],\r\n        ],\r\n        [0, 1/2, 1, 1/2],\r\n    ),\r\n    \"rk4_4s\": (\r\n        [\r\n            [],\r\n            [1/2],\r\n            [0, 1/2],\r\n            [0, 0, 1],\r\n        ],\r\n        [\r\n            [1/6, 1/3, 1/3, 1/6],\r\n        ],\r\n        [0, 1/2, 1/2, 1],\r\n    ),\r\n    \"rk38_4s\": (\r\n        [\r\n            [],\r\n            [1/3],\r\n            [-1/3, 1],\r\n            [1, -1, 1],\r\n        ],\r\n        [\r\n            [1/8, 3/8, 3/8, 1/8],\r\n        ],\r\n        [0, 1/3, 2/3, 1],\r\n    ),\r\n    \"ralston_4s\": (\r\n        [\r\n            [],\r\n            [2/5],\r\n            [(-2889+1428 * 5**0.5)/1024,   (3785-1620 * 5**0.5)/1024],\r\n            [(-3365+2094 * 5**0.5)/6040,   (-975-3046 * 5**0.5)/2552,  (467040+203968*5**0.5)/240845],\r\n        ],\r\n        [\r\n            [(263+24*5**0.5)/1812, (125-1000*5**0.5)/3828, (3426304+1661952*5**0.5)/5924787, (30-4*5**0.5)/123],\r\n        ],\r\n        [0, 2/5, (14-3 * 5**0.5)/16, 1],\r\n    ),\r\n    \"heun_3s\": (\r\n        [\r\n            [],\r\n            [1/3],\r\n            [0, 2/3],\r\n        ],\r\n        [\r\n            [1/4, 0, 3/4],\r\n        ],\r\n        [0, 1/3, 2/3],\r\n    ),\r\n    \"kutta_3s\": (\r\n        [\r\n            [],\r\n            [1/2],\r\n            [-1, 2],\r\n        ],\r\n        [\r\n            [1/6, 2/3, 1/6],\r\n        ],\r\n        [0, 1/2, 1],\r\n    ),\r\n    \"ralston_3s\": (\r\n        [\r\n            [],\r\n            [1/2],\r\n            [0, 3/4],\r\n        ],\r\n        [\r\n            [2/9, 1/3, 4/9],\r\n        ],\r\n        [0, 1/2, 3/4],\r\n    ),\r\n    \"houwen-wray_3s\": (\r\n        [\r\n            [],\r\n            [8/15],\r\n            [1/4, 5/12],\r\n        ],\r\n        [\r\n            [1/4, 0, 3/4],\r\n        ],\r\n        [0, 8/15, 2/3],\r\n    ),\r\n    \"ssprk3_3s\": ( #non-monotonic\r\n        [\r\n            [],\r\n            [1],\r\n            [1/4, 1/4],\r\n        ],\r\n        [\r\n            [1/6, 1/6, 2/3],\r\n        ],\r\n        [0, 1, 1/2], \r\n    ),\r\n    \"midpoint_2s\": (\r\n        [\r\n            [],\r\n            [1/2],\r\n        ],\r\n        [\r\n            [0, 1],\r\n        ],\r\n        [0, 1/2],\r\n    ),\r\n    \"heun_2s\": (\r\n        [\r\n            [],\r\n            [1],\r\n        ],\r\n     
   [\r\n            [1/2, 1/2],\r\n        ],\r\n        [0, 1],\r\n    ),\r\n    \"ralston_2s\": (\r\n        [\r\n            [],\r\n            [2/3],\r\n        ],\r\n        [\r\n            [1/4, 3/4],\r\n        ],\r\n        [0, 2/3],\r\n    ),\r\n    \"euler\": (\r\n        [\r\n            [],\r\n        ],\r\n        [\r\n            [1],\r\n        ],\r\n        [0],\r\n    ),\r\n}\r\n\r\n\r\n\r\n
def get_rk_methods_beta(rk_type       : str,\r\n                        h             : Tensor,\r\n                        c1            : float            = 0.0,\r\n                        c2            : float            = 0.5,\r\n                        c3            : float            = 1.0,\r\n                        h_prev        : Optional[Tensor] = None,\r\n                        step          : int              = 0,\r\n                        sigmas        : Optional[Tensor] = None,\r\n                        sigma         : Optional[Tensor] = None,\r\n                        sigma_next    : Optional[Tensor] = None,\r\n                        sigma_down    : Optional[Tensor] = None,\r\n                        extra_options : Optional[str]    = None\r\n                        ):\r\n    \r\n
    FSAL             = False\r\n    multistep_stages = 0\r\n    hybrid_stages    = 0\r\n    u                = None\r\n    v                = None\r\n    \r\n
    EO                            = ExtraOptions(extra_options)\r\n    use_analytic_solution         = not EO(\"disable_analytic_solution\")\r\n    multistep_initial_sampler     = EO(\"multistep_initial_sampler\", \"\", debugMode=1)\r\n    multistep_fallback_sampler    = EO(\"multistep_fallback_sampler\", \"\")\r\n    multistep_extra_initial_steps = EO(\"multistep_extra_initial_steps\", 1)\r\n    \r\n
    # exponential integrators (res/dpmpp/ddim/pec/etdrk/lawson) use log-sigma step sizes; all others use linear sigma differences\r\n    #if RK_Method_Beta.is_exponential(rk_type): \r\n    if rk_type.startswith((\"res\", \"dpmpp\", \"ddim\", \"pec\", \"etdrk\", \"lawson\")): \r\n        h_no_eta = -torch.log(sigma_next/sigma)\r\n        h_prev1_no_eta = -torch.log(sigmas[step]/sigmas[step-1]) if step >= 1 else None\r\n        h_prev2_no_eta = -torch.log(sigmas[step]/sigmas[step-2]) if step >= 2 else None\r\n        h_prev3_no_eta = -torch.log(sigmas[step]/sigmas[step-3]) if step >= 3 else None\r\n        h_prev4_no_eta = -torch.log(sigmas[step]/sigmas[step-4]) if step >= 4 else None\r\n\r\n
    else:\r\n        h_no_eta = sigma_next - sigma\r\n        h_prev1_no_eta = sigmas[step] - sigmas[step-1] if step >= 1 else None\r\n        h_prev2_no_eta = sigmas[step] - sigmas[step-2] if step >= 2 else None\r\n        h_prev3_no_eta = sigmas[step] - sigmas[step-3] if step >= 3 else None\r\n        h_prev4_no_eta = sigmas[step] - sigmas[step-4] if step >= 4 else None\r\n        \r\n
    if isinstance(c1, torch.Tensor):\r\n        c1 = c1.item()\r\n    if isinstance(c2, torch.Tensor):\r\n        c2 = c2.item()\r\n    if isinstance(c3, torch.Tensor):\r\n        c3 = c3.item()\r\n\r\n
    # a c value of -1 selects a random node in (0, 1)\r\n    if c1 == -1:\r\n        c1 = random.uniform(0, 1)\r\n    if c2 == -1:\r\n        c2 = random.uniform(0, 1)\r\n    if c3 == -1:\r\n        c3 = random.uniform(0, 1)\r\n        \r\n
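    # DEIS is multistep: for the first (order + multistep_extra_initial_steps) steps there\r\n    # is not yet enough stored model-call history, so a single-step Runge-Kutta method of\r\n    # comparable order is substituted as a warm-up (overridable via multistep_initial_sampler).\r\n
    if rk_type[:4] == \"deis\": \r\n        order = int(rk_type[-2])\r\n        if step < order + multistep_extra_initial_steps:\r\n            if order == 4:\r\n                #rk_type = \"res_4s_strehmel_weiner\"\r\n                rk_type = \"ralston_4s\"\r\n                rk_type = multistep_initial_sampler if multistep_initial_sampler else rk_type\r\n                order = 3\r\n            elif order == 3:\r\n                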
#rk_type = \"res_3s\"\r\n                rk_type = \"ralston_3s\"\r\n                rk_type = multistep_initial_sampler if multistep_initial_sampler else rk_type\r\n            elif order == 2:\r\n                #rk_type = \"res_2s\"\r\n                rk_type = \"ralston_2s\"\r\n                rk_type = multistep_initial_sampler if multistep_initial_sampler else rk_type\r\n        else:\r\n            rk_type = \"deis\"\r\n            multistep_stages = order-1\r\n    \r\n    if rk_type[-2:] == \"2m\": #multistep method\r\n        rk_type = rk_type[:-2] + \"2s\"\r\n        #if h_prev is not None and step >= 1: \r\n        if h_no_eta < 1.0:\r\n            if step >= 1 + multistep_extra_initial_steps:\r\n                multistep_stages = 1\r\n                c2 = (-h_prev1_no_eta / h_no_eta).item()\r\n            else:\r\n                rk_type = multistep_initial_sampler if multistep_initial_sampler else rk_type\r\n            if rk_type.startswith(\"abnorsett\"):\r\n                rk_type = \"res_2s\"\r\n                rk_type = multistep_initial_sampler if multistep_initial_sampler else rk_type\r\n        else:\r\n            #rk_type = \"res_2s\"\r\n            rk_type = \"euler\" if sigma < 0.1 else \"res_2s\"\r\n            rk_type = multistep_fallback_sampler if multistep_fallback_sampler else rk_type\r\n            \r\n    if rk_type[-2:] == \"3m\": #multistep method\r\n        rk_type = rk_type[:-2] + \"3s\"\r\n        #if h_prev2 is not None and step >= 2: \r\n        if h_no_eta < 1.0:\r\n            if step >= 2 + multistep_extra_initial_steps:\r\n                multistep_stages = 2\r\n\r\n                c2 = (-h_prev1_no_eta / h_no_eta).item()\r\n                c3 = (-h_prev2_no_eta / h_no_eta).item()      \r\n            else:\r\n                rk_type = multistep_initial_sampler if multistep_initial_sampler else rk_type \r\n            if rk_type.startswith(\"abnorsett\"):\r\n                rk_type = \"res_3s\"\r\n                rk_type = multistep_initial_sampler if multistep_initial_sampler else rk_type\r\n        else:\r\n            #rk_type = \"res_3s\"\r\n            rk_type = \"euler\" if sigma < 0.1 else \"res_3s\"\r\n            rk_type = multistep_fallback_sampler if multistep_fallback_sampler else rk_type\r\n            \r\n    if rk_type[-2:] == \"4m\": #multistep method\r\n        rk_type = rk_type[:-2] + \"4s\"\r\n        #if h_prev2 is not None and step >= 2: \r\n        if h_no_eta < 1.0:\r\n            if step >= 3 + multistep_extra_initial_steps:\r\n                multistep_stages = 3\r\n\r\n                c2 = (-h_prev1_no_eta / h_no_eta).item()\r\n                c3 = (-h_prev2_no_eta / h_no_eta).item()\r\n                # WOULD NEED A C4 (POW) TO IMPLEMENT RES_4M IF IT EXISTED\r\n            else:\r\n                rk_type = multistep_initial_sampler if multistep_initial_sampler else rk_type\r\n            if rk_type == \"res_4s\":\r\n                rk_type = \"res_4s_strehmel_weiner\"\r\n                rk_type = multistep_initial_sampler if multistep_initial_sampler else rk_type\r\n            if rk_type.startswith(\"abnorsett\"):\r\n                rk_type = \"res_4s_strehmel_weiner\"\r\n                rk_type = multistep_initial_sampler if multistep_initial_sampler else rk_type\r\n        else:\r\n            #rk_type = \"res_4s_strehmel_weiner\"\r\n            rk_type = \"euler\" if sigma < 0.1 else \"res_4s_strehmel_weiner\"\r\n            rk_type = multistep_fallback_sampler if multistep_fallback_sampler else rk_type\r\n         
               \r\n    if rk_type[-3] == \"h\" and rk_type[-1] == \"s\": #hybrid method \r\n        if step < int(rk_type[-4]) + multistep_extra_initial_steps:\r\n            rk_type = \"res_\" + rk_type[-2:]\r\n            rk_type = multistep_initial_sampler if multistep_initial_sampler else rk_type\r\n        else:\r\n            hybrid_stages = int(rk_type[-4])  #+1 adjustment needed?\r\n        if rk_type == \"res_4s\":\r\n            rk_type = \"res_4s_strehmel_weiner\"\r\n            rk_type = multistep_initial_sampler if multistep_initial_sampler else rk_type\r\n        if rk_type == \"res_1s\":\r\n            rk_type = \"res_2s\"\r\n            rk_type = multistep_initial_sampler if multistep_initial_sampler else rk_type\r\n\r\n    if rk_type in rk_coeff:\r\n        a, b, ci = copy.deepcopy(rk_coeff[rk_type])\r\n        \r\n        a = [row + [0] * (len(ci) - len(row)) for row in a]\r\n\r\n    match rk_type:\r\n        case \"deis\": \r\n            coeff_list = get_deis_coeff_list(sigmas, multistep_stages+1, deis_mode=\"rhoab\")\r\n            coeff_list = [[elem / h for elem in inner_list] for inner_list in coeff_list]\r\n            if multistep_stages == 1:\r\n                b1, b2 = coeff_list[step]\r\n                a = [\r\n                        [0, 0],\r\n                        [0, 0],\r\n                ]\r\n                b = [\r\n                        [b1, b2],\r\n                ]\r\n                ci = [0, 0]\r\n            if multistep_stages == 2:\r\n                b1, b2, b3 = coeff_list[step]\r\n                a = [\r\n                        [0, 0, 0],\r\n                        [0, 0, 0],\r\n                        [0, 0, 0],\r\n                ]\r\n                b = [\r\n                        [b1, b2, b3],\r\n                ]\r\n                ci = [0, 0, 0]\r\n            if multistep_stages == 3:\r\n                b1, b2, b3, b4 = coeff_list[step]\r\n                a = [\r\n                        [0, 0, 0, 0],\r\n                        [0, 0, 0, 0],\r\n                        [0, 0, 0, 0],\r\n                        [0, 0, 0, 0],\r\n                ]\r\n                b = [\r\n                    [b1, b2, b3, b4],\r\n                ]\r\n                ci = [0, 0, 0, 0]\r\n            if multistep_stages > 0:\r\n                for i in range(len(b[0])): \r\n                    b[0][i] *= ((sigma_down - sigma) / (sigma_next - sigma))\r\n\r\n        case \"dormand-prince_6s\":\r\n            FSAL = True\r\n\r\n        case \"ddim\":\r\n            b1 = phi(1, -h)\r\n            a = [\r\n                    [0],\r\n            ]\r\n            b = [\r\n                    [b1],\r\n            ]\r\n            ci = [0]\r\n\r\n        case \"res_2s\":\r\n            c2 = float(get_extra_options_kv(\"c2\", str(c2), extra_options))\r\n\r\n            ci = [0, c2]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n            \r\n            a2_1 = c2 * φ(1,2)\r\n            b2 = φ(2)/c2\r\n            b1 = φ(1) - b2\r\n\r\n            a = [\r\n                    [0,0],\r\n                    [a2_1, 0],\r\n            ]\r\n            b = [\r\n                    [b1, b2],\r\n            ]\r\n\r\n        case \"res_2s_stable\":\r\n            c2 = 1.0 #float(get_extra_options_kv(\"c2\", str(c2), extra_options))\r\n\r\n            ci = [0, c2]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n            \r\n            a2_1 = c2 * φ(1,2)\r\n            b2 = φ(2)/c2\r\n            b1 = φ(1) - b2\r\n\r\n            a = [\r\n           
         [0,0],\r\n                    [a2_1, 0],\r\n            ]\r\n            b = [\r\n                    [b1, b2],\r\n            ]\r\n\r\n        case \"res_2s_rkmk2e\":\r\n\r\n            ci = [0, 1]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n            \r\n            b2 = φ(2)\r\n\r\n            a = [\r\n                    [0,0],\r\n                    [0, 0],\r\n            ]\r\n            b = [\r\n                    [0, b2],\r\n            ]\r\n\r\n            gen_first_col_exp(a, b, ci, φ)\r\n\r\n\r\n\r\n        case \"abnorsett2_1h2s\":\r\n\r\n            c1, c2 = 0, 1\r\n            ci = [c1, c2]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n\r\n            b1 = φ(1) #+ φ(2)\r\n\r\n            a = [\r\n                    [0, 0],\r\n                    [0, 0],\r\n            ]\r\n            b = [\r\n                    [0, 0],\r\n            ]\r\n\r\n            if extra_options_flag(\"h_prev_h_h_no_eta\", extra_options):\r\n                φ1 = Phi(h_prev1_no_eta * h/h_no_eta, ci)\r\n            elif extra_options_flag(\"h_only\", extra_options):\r\n                φ1 = Phi(h, ci, use_analytic_solution)\r\n            else:\r\n                φ1 = Phi(h_prev1_no_eta, ci)\r\n\r\n            u1 = -φ1(2)\r\n            v1 = -φ1(2)\r\n\r\n            u = [\r\n                    [0, 0],\r\n                    [u1, 0],\r\n            ]\r\n            v = [\r\n                    [v1, 0],\r\n            ]\r\n\r\n            gen_first_col_exp_uv(a, b, ci, u, v, φ) \r\n\r\n\r\n\r\n        case \"abnorsett_2m\":\r\n\r\n            c1, c2 = 0, 1\r\n            ci = [c1, c2]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n\r\n            a = [\r\n                    [0, 0],\r\n                    [0, 0],\r\n            ]\r\n            b = [\r\n                    [0, -φ(2)],\r\n            ]\r\n\r\n            gen_first_col_exp(a, b, ci, φ) \r\n\r\n\r\n        case \"abnorsett_3m\":\r\n\r\n            c1, c2, c3 = 0, 0, 1\r\n            ci = [c1, c2, c3]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n\r\n            a = [\r\n                    [0, 0, 0],\r\n                    [0, 0, 0],\r\n                    [0, 0, 0],\r\n            ]\r\n            b = [\r\n                    [0, -2*φ(2) - 2*φ(3), (1/2)*φ(2) + φ(3)],\r\n            ]\r\n\r\n            gen_first_col_exp(a, b, ci, φ) \r\n\r\n\r\n\r\n        case \"abnorsett_4m\":\r\n\r\n            c1, c2, c3, c4 = 0, 0, 0, 1\r\n            ci = [c1, c2, c3, c4]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n\r\n            a = [\r\n                    [0, 0, 0, 0],\r\n                    [0, 0, 0, 0],\r\n                    [0, 0, 0, 0],\r\n                    [0, 0, 0, 0],\r\n            ]\r\n            b = [\r\n                    [0, \r\n                    -3*φ(2) - 5*φ(3) - 3*φ(4),\r\n                    (3/2)*φ(2) + 4*φ(3) + 3*φ(4),\r\n                    (-1/3)*φ(2) - φ(3) - φ(4),\r\n                    ],\r\n            ]\r\n\r\n            gen_first_col_exp(a, b, ci, φ) \r\n\r\n\r\n        case \"abnorsett3_2h2s\":\r\n            \r\n            c1,c2 = 0,1\r\n            ci = [c1, c2]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n            \r\n            b2 = 0\r\n\r\n            a = [\r\n                    [0, 0],\r\n                    [0, 0],\r\n            ]\r\n            b = [\r\n                    [0, 0],\r\n            ]\r\n            \r\n            if extra_options_flag(\"h_prev_h_h_no_eta\", extra_options):\r\n                φ1 = 
Phi(h_prev1_no_eta * h/h_no_eta, ci)\r\n                φ2 = Phi(h_prev2_no_eta * h/h_no_eta, ci)\r\n            elif extra_options_flag(\"h_only\", extra_options):\r\n                φ1 = Phi(h, ci, use_analytic_solution)\r\n                φ2 = Phi(h, ci, use_analytic_solution)\r\n            else:\r\n                φ1 = Phi(h_prev1_no_eta, ci)\r\n                φ2 = Phi(h_prev2_no_eta, ci)\r\n                \r\n            u2_1 = -2*φ1(2) - 2*φ1(3)\r\n            u2_2 = (1/2)*φ2(2) + φ2(3)\r\n            \r\n            v1 = u2_1 # -φ1(2) + φ1(3) + 3*φ1(4)\r\n            v2 = u2_2 # (1/6)*φ2(2) - φ2(4)\r\n            \r\n            u = [\r\n                    [   0,    0],\r\n                    [u2_1, u2_2],\r\n            ]\r\n            v = [\r\n                    [v1, v2],\r\n            ]\r\n            \r\n            gen_first_col_exp_uv(a, b, ci, u, v, φ)\r\n            \r\n\r\n\r\n        case \"pec423_2h2s\": #https://ora.ox.ac.uk/objects/uuid:cc001282-4285-4ca2-ad06-31787b540c61/files/m611df1a355ca243beb09824b70e5e774\r\n            \r\n            c1,c2 = 0,1\r\n            ci = [c1, c2]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n            \r\n            b2 = (1/3)*φ(2) + φ(3) + φ(4)\r\n\r\n            a = [\r\n                    [0, 0],\r\n                    [0, 0],\r\n            ]\r\n            b = [\r\n                    [0, b2],\r\n            ]\r\n            \r\n            if extra_options_flag(\"h_prev_h_h_no_eta\", extra_options):\r\n                φ1 = Phi(h_prev1_no_eta * h/h_no_eta, ci)\r\n                φ2 = Phi(h_prev2_no_eta * h/h_no_eta, ci)\r\n            elif extra_options_flag(\"h_only\", extra_options):\r\n                φ1 = Phi(h, ci, use_analytic_solution)\r\n                φ2 = Phi(h, ci, use_analytic_solution)\r\n            else:\r\n                φ1 = Phi(h_prev1_no_eta, ci)\r\n                φ2 = Phi(h_prev2_no_eta, ci)\r\n                \r\n            u2_1 = -2*φ1(2) - 2*φ1(3)\r\n            u2_2 = (1/2)*φ2(2) + φ2(3)\r\n            \r\n            v1 = -φ1(2) + φ1(3) + 3*φ1(4)\r\n            v2 = (1/6)*φ2(2) - φ2(4)\r\n            \r\n            u = [\r\n                    [   0,    0],\r\n                    [u2_1, u2_2],\r\n            ]\r\n            v = [\r\n                    [v1, v2],\r\n            ]\r\n            \r\n            gen_first_col_exp_uv(a, b, ci, u, v, φ)\r\n            \r\n\r\n\r\n\r\n        case \"pec433_2h3s\": #https://ora.ox.ac.uk/objects/uuid:cc001282-4285-4ca2-ad06-31787b540c61/files/m611df1a355ca243beb09824b70e5e774\r\n            \r\n            c1,c2,c3 = 0, 1, 1\r\n            ci = [c1,c2,c3]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n            \r\n            a3_2 = (1/3)*φ(2) + φ(3) + φ(4)\r\n            \r\n            b2 = 0\r\n            b3 = (1/3)*φ(2) + φ(3) + φ(4)\r\n\r\n            a = [\r\n                    [0,    0, 0],\r\n                    [0,    0, 0],\r\n                    [0, a3_2, 0],\r\n            ]\r\n            b = [\r\n                    [0, b2, b3],\r\n            ]\r\n            \r\n            if extra_options_flag(\"h_prev_h_h_no_eta\", extra_options):\r\n                φ1 = Phi(h_prev1_no_eta * h/h_no_eta, ci)\r\n                φ2 = Phi(h_prev2_no_eta * h/h_no_eta, ci)\r\n            elif extra_options_flag(\"h_only\", extra_options):\r\n                φ1 = Phi(h, ci, use_analytic_solution)\r\n                φ2 = Phi(h, ci, use_analytic_solution)\r\n            else:\r\n                φ1 = Phi(h_prev1_no_eta, ci)\r\n   
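             # φ1/φ2: phi-functions evaluated over the spans back one and two steps (multistep history terms)\r\n   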
             φ2 = Phi(h_prev2_no_eta, ci)\r\n                \r\n            u2_1 = -2*φ1(2) - 2*φ1(3)\r\n            u3_1 =   -φ1(2) +   φ1(3) + 3*φ1(4)\r\n            v1   =   -φ1(2) +   φ1(3) + 3*φ1(4)\r\n            \r\n            u2_2 = (1/2)*φ2(2) + φ2(3)\r\n            u3_2 = (1/6)*φ2(2) - φ2(4)\r\n            v2   = (1/6)*φ2(2) - φ2(4)\r\n\r\n            \r\n            u = [\r\n                    [   0,    0, 0],\r\n                    [u2_1, u2_2, 0],\r\n                    [u3_1, u3_2, 0],\r\n            ]\r\n            v = [\r\n                    [v1, v2, 0],\r\n            ]\r\n            \r\n            gen_first_col_exp_uv(a, b, ci, u, v, φ)\r\n\r\n\r\n            \r\n        case \"res_3s\":\r\n            c2 = float(get_extra_options_kv(\"c2\", str(c2), extra_options))\r\n            c3 = float(get_extra_options_kv(\"c3\", str(c3), extra_options))\r\n            \r\n            ci = [0,c2,c3]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n            \r\n            gamma = calculate_gamma(c2, c3)\r\n\r\n            a3_2 = gamma * c2 * φ(2,2) + (c3 ** 2 / c2) * φ(2, 3)\r\n            \r\n            b3 = (1 / (gamma * c2 + c3)) * φ(2)   \r\n            b2 = gamma * b3  #simplified version of: b2 = (gamma / (gamma * c2 + c3)) * phi_2_h  \r\n            \r\n            a = [\r\n                    [0, 0, 0],\r\n                    [0, 0, 0],\r\n                    [0, a3_2, 0],\r\n            ]\r\n            b = [\r\n                    [0, b2, b3],\r\n            ]\r\n            \r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n            \r\n        case \"res_3s_non-monotonic\":\r\n            c2 = float(get_extra_options_kv(\"c2\", \"1.0\", extra_options))\r\n            c3 = float(get_extra_options_kv(\"c3\", \"0.5\", extra_options))\r\n            \r\n            ci = [0,c2,c3]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n            \r\n            gamma = calculate_gamma(c2, c3)\r\n\r\n            a3_2 = gamma * c2 * φ(2,2) + (c3 ** 2 / c2) * φ(2, 3)\r\n            \r\n            b3 = (1 / (gamma * c2 + c3)) * φ(2)   \r\n            b2 = gamma * b3  #simplified version of: b2 = (gamma / (gamma * c2 + c3)) * phi_2_h  \r\n            \r\n            a = [\r\n                    [0, 0, 0],\r\n                    [0, 0, 0],\r\n                    [0, a3_2, 0],\r\n            ]\r\n            b = [\r\n                    [0, b2, b3],\r\n            ]\r\n            \r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n            \r\n            \r\n        case \"res_3s_alt\":\r\n            c2 = 1/3\r\n            c2 = float(get_extra_options_kv(\"c2\", str(c2), extra_options))\r\n            \r\n            c1,c2,c3 = 0, c2, 2/3\r\n            ci = [c1,c2,c3]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n            \r\n            a = [\r\n                    [0, 0,                   0],\r\n                    [0, 0,                   0],\r\n                    [0, (4/(9*c2)) * φ(2,3), 0],\r\n            ]\r\n            b = [\r\n                    [0, 0, (1/c3)*φ(2)],\r\n            ]\r\n            \r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n            \r\n        case \"res_3s_strehmel_weiner\": # \r\n            c2 = 1/2\r\n            c2 = float(get_extra_options_kv(\"c2\", str(c2), extra_options))\r\n\r\n            ci = [0,c2,1]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n            \r\n            a = [\r\n                    [0, 0, 0],\r\n                    [0, 0, 0],\r\n                    
[0, (1/c2) * φ(2,3), 0],\r\n            ]\r\n            b = [\r\n                    [0, 0, φ(2)],\r\n            ]\r\n            \r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n            \r\n            \r\n        case \"res_3s_cox_matthews\": # Cox & Matthews; known as ETD3RK\r\n            c2 = 1/2 # must be 1/2\r\n            ci = [0,c2,1]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n            \r\n            a = [\r\n                    [0, 0, 0],\r\n                    [0, 0, 0],\r\n                    [0, (1/c2) * φ(1,3), 0],  # paper said 2 * φ(1,3), but this is the same and more consistent with res_3s_strehmel_weiner\r\n            ]\r\n            b = [\r\n                    [0, \r\n                    -8*φ(3) + 4*φ(2),\r\n                    4*φ(3) - φ(2)],\r\n            ]\r\n            \r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n            \r\n        case \"res_3s_lie\": # Lie; known as ETD2CF3\r\n            c1,c2,c3 = 0, 1/3, 2/3\r\n            ci = [c1,c2,c3]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n            \r\n            a = [\r\n                    [0, 0, 0],\r\n                    [0, 0, 0],\r\n                    [0, (4/3)*φ(2,3), 0],  # paper said 2 * φ(1,3), but this is the same and more consistent with res_3s_strehmel_weiner\r\n            ]\r\n            b = [\r\n                    [0, \r\n                    6*φ(2) - 18*φ(3),\r\n                    (-3/2)*φ(2) + 9*φ(3)],\r\n            ]\r\n            \r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n        \r\n        case \"res_3s_sunstar\": # https://arxiv.org/pdf/2410.00498 pg 5 (tableau 2.7)\r\n            c1,c2,c3 = 0, 1/3, 2/3\r\n            ci = [c1,c2,c3]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n            \r\n            a = [\r\n                    [0, 0, 0],\r\n                    [0, 0, 0],\r\n                    [0, (8/9)*φ(2,3), 0],  # paper said 2 * φ(1,3), but this is the same and more consistent with res_3s_strehmel_weiner\r\n            ]\r\n            b = [\r\n                    [0, \r\n                    0,\r\n                    (3/2)*φ(2)],\r\n            ]\r\n            \r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n        \r\n        \r\n        case \"res_4s_cox_matthews\": # weak 4th order, Cox & Matthews; unresolved issue, see below\r\n            c1,c2,c3,c4 = 0, 1/2, 1/2, 1\r\n            ci = [c1,c2,c3,c4]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n            \r\n            a2_1 = c2 * φ(1,2)\r\n            a3_2 = c3 * φ(1,3)\r\n            a4_1 = (1/2) * φ(1,3) * (φ(0,3) - 1) # φ(0,3) == torch.exp(-h*c3)\r\n            a4_3 = φ(1,3)\r\n\r\n            b1 = φ(1) - 3*φ(2) + 4*φ(3)\r\n\r\n            b2 = 2*φ(2) - 4*φ(3)\r\n            b3 = 2*φ(2) - 4*φ(3)\r\n            b4 = 4*φ(3) - φ(2)\r\n\r\n            a = [\r\n                    [0,    0,0,0],\r\n                    [a2_1, 0,0,0],\r\n                    [0, a3_2,0,0],\r\n                    [a4_1, 0, a4_3,0],\r\n            ]\r\n            b = [\r\n                    [b1, b2, b3, b4],\r\n            ]\r\n            \r\n            \r\n        case \"res_4s_cfree4\": # weak 4th order, Cox & Matthews; unresolved issue, see below\r\n            c1,c2,c3,c4 = 0, 1/2, 1/2, 1\r\n            ci = [c1,c2,c3,c4]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n            \r\n            a2_1 = c2 * φ(1,2)\r\n            a3_2 = c3 * φ(1,2)\r\n            a4_1 = (1/2) * φ(1,2) * (φ(0,2) - 1) # φ(0,3) == 
torch.exp(-h*c3)\r\n            a4_3 = φ(1,2)\r\n\r\n            b1 = (1/2)*φ(1) - (1/3)*φ(1,2)\r\n\r\n            b2 = (1/3)*φ(1)\r\n            b3 = (1/3)*φ(1)\r\n            b4 = -(1/6)*φ(1) + (1/3)*φ(1,2)\r\n\r\n            a = [\r\n                    [0,    0,0,0],\r\n                    [a2_1, 0,0,0],\r\n                    [0, a3_2,0,0],\r\n                    [a4_1, 0, a4_3,0],\r\n            ]\r\n            b = [\r\n                    [b1, b2, b3, b4],\r\n            ]\r\n\r\n        case \"res_4s_friedli\": # https://ora.ox.ac.uk/objects/uuid:cc001282-4285-4ca2-ad06-31787b540c61/files/m611df1a355ca243beb09824b70e5e774\r\n            c1,c2,c3,c4 = 0, 1/2, 1/2, 1\r\n            ci = [c1,c2,c3,c4]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n            \r\n            a3_2 = 2*φ(2,2)\r\n            a4_2 = -(26/25)*φ(1) +  (2/25)*φ(2)\r\n            a4_3 =  (26/25)*φ(1) + (48/25)*φ(2)\r\n\r\n\r\n            b2 = 0\r\n            b3 = 4*φ(2) - 8*φ(3)\r\n            b4 =  -φ(2) + 4*φ(3)\r\n\r\n            a = [\r\n                    [0, 0,0,0],\r\n                    [0, 0,0,0],\r\n                    [0, a3_2,0,0],\r\n                    [0, a4_2, a4_3,0],\r\n            ]\r\n            b = [\r\n                    [0, b2, b3, b4],\r\n            ]\r\n\r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n\r\n        case \"res_4s_munthe-kaas\": # unstable RKMK4t\r\n            c1,c2,c3,c4 = 0, 1/2, 1/2, 1\r\n            ci = [c1,c2,c3,c4]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n\r\n            a = [\r\n                    [0, 0,      0,        0],\r\n                    [c2*φ(1,2), 0,      0,        0],\r\n                    [(h/8)*φ(1,2), (1/2)*(1-h/4)*φ(1,2), 0,        0],\r\n                    [0, 0,      φ(1), 0],\r\n            ]\r\n            b = [\r\n                    [\r\n                    (1/6)*φ(1)*(1+h/2),\r\n                    (1/3)*φ(1),\r\n                    (1/3)*φ(1),\r\n                    (1/6)*φ(1)*(1-h/2)\r\n                    ],\r\n            ]\r\n\r\n        case \"res_4s_krogstad\": # weak 4th order, Krogstad\r\n            c1,c2,c3,c4 = 0, 1/2, 1/2, 1\r\n            ci = [c1,c2,c3,c4]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n            \r\n            a = [\r\n                    [0, 0,      0,        0],\r\n                    [0, 0,      0,        0],\r\n                    [0, φ(2,3), 0,        0],\r\n                    [0, 0,      2*φ(2,4), 0],\r\n            ]\r\n            b = [\r\n                    [\r\n                    0, \r\n                    2*φ(2) - 4*φ(3),\r\n                    2*φ(2) - 4*φ(3),\r\n                    -φ(2)  + 4*φ(3)\r\n                    ],\r\n            ]\r\n            \r\n            #a = [row + [0] * (len(ci) - len(row)) for row in a]\r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n            \r\n        case \"res_4s_krogstad_alt\": # weak 4th order, Krogstad https://ora.ox.ac.uk/objects/uuid:cc001282-4285-4ca2-ad06-31787b540c61/files/m611df1a355ca243beb09824b70e5e774\r\n            c1,c2,c3,c4 = 0, 1/2, 1/2, 1\r\n            ci = [c1,c2,c3,c4]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n            \r\n            a = [\r\n                    [0, 0,        0,      0],\r\n                    [0, 0,        0,      0],\r\n                    [0, 4*φ(2,2), 0,      0],\r\n                    [0, 0,        2*φ(2), 0],\r\n            ]\r\n            b = [\r\n                    [\r\n                    0, \r\n                    2*φ(2) - 
4*φ(3),\r\n                    2*φ(2) - 4*φ(3),\r\n                    -φ(2)  + 4*φ(3)\r\n                    ],\r\n            ]\r\n            \r\n            #a = [row + [0] * (len(ci) - len(row)) for row in a]\r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n            \r\n        case \"res_4s_minchev\": # https://ora.ox.ac.uk/objects/uuid:cc001282-4285-4ca2-ad06-31787b540c61/files/m611df1a355ca243beb09824b70e5e774\r\n            c1,c2,c3,c4 = 0, 1/2, 1/2, 1\r\n            ci = [c1,c2,c3,c4]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n            \r\n            a3_2 = (4/25)*φ(1,2) + (24/25)*φ(2,2)\r\n            a4_2 = (21/5)*φ(2) - (108/5)*φ(3)\r\n            a4_3 = (1/20)*φ(1) - (33/10)*φ(2) + (123/5)*φ(3)\r\n\r\n\r\n            b2 = -(1/10)*φ(1) +  (1/5)*φ(2) - 4*φ(3) + 12*φ(4)\r\n            b3 =  (1/30)*φ(1) + (23/5)*φ(2) - 8*φ(3) -  4*φ(4)\r\n            b4 =  (1/30)*φ(1) -  (7/5)*φ(2) + 6*φ(3) -  4*φ(4)\r\n\r\n            a = [\r\n                    [0, 0,0,0],\r\n                    [0, 0,0,0],\r\n                    [0, a3_2,0,0],\r\n                    [0, 0, a4_3,0],\r\n            ]\r\n            b = [\r\n                    [0, b2, b3, b4],\r\n            ]\r\n\r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n            \r\n        case \"res_4s_strehmel_weiner\": # weak 4th order, Strehmel & Weiner\r\n            c1,c2,c3,c4 = 0, 1/2, 1/2, 1\r\n            ci = [c1,c2,c3,c4]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n            \r\n            a = [\r\n                    [0, 0,         0,        0],\r\n                    [0, 0,         0,        0],\r\n                    [0, c3*φ(2,3), 0,        0],\r\n                    [0, -2*φ(2,4), 4*φ(2,4), 0],\r\n            ]\r\n            b = [\r\n                    [\r\n                    0, \r\n                    0,\r\n                    4*φ(2) - 8*φ(3), \r\n                    -φ(2) +  4*φ(3)\r\n                    ],\r\n            ]\r\n            \r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n            \r\n        case \"res_4s_strehmel_weiner_alt\": # weak 4th order, Strehmel & Weiner https://ora.ox.ac.uk/objects/uuid:cc001282-4285-4ca2-ad06-31787b540c61/files/m611df1a355ca243beb09824b70e5e774\r\n            c1,c2,c3,c4 = 0, 1/2, 1/2, 1\r\n            ci = [c1,c2,c3,c4]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n            \r\n            a = [\r\n                    [0, 0,        0,      0],\r\n                    [0, 0,        0,      0],\r\n                    [0, 2*φ(2,2), 0,      0],\r\n                    [0,  -2*φ(2), 4*φ(2), 0],\r\n            ]\r\n            b = [\r\n                    [\r\n                    0, \r\n                    0,\r\n                    4*φ(2) - 8*φ(3), \r\n                    -φ(2) +  4*φ(3)\r\n                    ],\r\n            ]\r\n            \r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n            \r\n            \r\n        case \"lawson2a_2s\": # based on midpoint rule, stiff order 1 https://cds.cern.ch/record/848126/files/cer-002531460.pdf\r\n            c1,c2 = 0,1/2\r\n            ci = [c1, c2]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n            \r\n            a2_1 = c2 * φ(0,2)\r\n            b2 = φ(0,2)\r\n            b1 = 0\r\n\r\n            a = [\r\n                    [0,0],\r\n                    [a2_1, 0],\r\n            ]\r\n            b = [\r\n                    [b1, b2],\r\n            ]\r\n\r\n        case \"lawson2b_2s\": # based on trapezoidal rule, stiff order 
1 https://cds.cern.ch/record/848126/files/cer-002531460.pdf\r\n            c1,c2 = 0,1\r\n            ci = [c1, c2]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n            \r\n            a2_1 = φ(0)\r\n            b2 = 1/2\r\n            b1 = (1/2)*φ(0)\r\n\r\n            a = [\r\n                    [0,0],\r\n                    [a2_1, 0],\r\n            ]\r\n            b = [\r\n                    [b1, b2],\r\n            ]\r\n\r\n\r\n        case \"lawson4_4s\": \r\n            c1,c2,c3,c4 = 0, 1/2, 1/2, 1\r\n            ci = [c1,c2,c3,c4]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n            \r\n            a2_1 = c2 * φ(0,2)\r\n            a3_2 = 1/2\r\n            a4_3 = φ(0,2)\r\n            \r\n            b1 = (1/6) * φ(0)\r\n            b2 = (1/3) * φ(0,2)\r\n            b3 = (1/3) * φ(0,2)\r\n            b4 = 1/6\r\n\r\n            a = [\r\n                    [0,    0,    0,    0],\r\n                    [a2_1, 0,    0,    0],\r\n                    [0,    a3_2, 0,    0],\r\n                    [0,    0,    a4_3, 0],\r\n            ]\r\n            b = [\r\n                    [b1,b2,b3,b4],\r\n            ]\r\n\r\n        case \"lawson41-gen_4s\": # GenLawson4 https://ora.ox.ac.uk/objects/uuid:cc001282-4285-4ca2-ad06-31787b540c61/files/m611df1a355ca243beb09824b70e5e774\r\n            c1,c2,c3,c4 = 0, 1/2, 1/2, 1\r\n            ci = [c1,c2,c3,c4]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n\r\n\r\n            a3_2 = 1/2\r\n            a4_3 = φ(0,2)\r\n            \r\n            b2 = (1/3) * φ(0,2)\r\n            b3 = (1/3) * φ(0,2)\r\n            b4 = 1/6\r\n\r\n            a = [\r\n                    [0, 0,        0, 0],\r\n                    [0, 0,          0,        0],\r\n                    [0, a3_2, 0,        0],\r\n                    [0, 0, a4_3, 0],\r\n            ]\r\n            b = [\r\n                    [0,\r\n                    b2,\r\n                    b3,\r\n                    b4,],\r\n            ]\r\n\r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n\r\n        case \"lawson41-gen-mod_4s\": # GenLawson4 https://ora.ox.ac.uk/objects/uuid:cc001282-4285-4ca2-ad06-31787b540c61/files/m611df1a355ca243beb09824b70e5e774\r\n            c1,c2,c3,c4 = 0, 1/2, 1/2, 1\r\n            ci = [c1,c2,c3,c4]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n\r\n\r\n            a3_2 = 1/2\r\n            a4_3 = φ(0,2)\r\n            \r\n            b2 = (1/3) * φ(0,2)\r\n            b3 = (1/3) * φ(0,2)\r\n            b4 = φ(2) - (1/3)*φ(0,2)\r\n\r\n            a = [\r\n                    [0, 0,        0, 0],\r\n                    [0, 0,          0,        0],\r\n                    [0, a3_2, 0,        0],\r\n                    [0, 0, a4_3, 0],\r\n            ]\r\n            b = [\r\n                    [0,\r\n                    b2,\r\n                    b3,\r\n                    b4,],\r\n            ]\r\n\r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n\r\n\r\n\r\n        case \"lawson42-gen-mod_1h4s\": # GenLawson4 https://ora.ox.ac.uk/objects/uuid:cc001282-4285-4ca2-ad06-31787b540c61/files/m611df1a355ca243beb09824b70e5e774\r\n            c1,c2,c3,c4 = 0, 1/2, 1/2, 1\r\n            ci = [c1,c2,c3,c4]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n\r\n            a3_2 = 1/2\r\n            a4_3 = φ(0,2)\r\n            \r\n            b2 = (1/3) * φ(0,2)\r\n            b3 = (1/3) * φ(0,2)\r\n            b4 = (1/2)*φ(2) + φ(3) - (1/4)*φ(0,2)\r\n\r\n            a = [\r\n                    [0, 0,    0, 
0],\r\n                    [0, 0,    0, 0],\r\n                    [0, a3_2, 0, 0],\r\n                    [0, 0, a4_3, 0],\r\n            ]\r\n            b = [\r\n                    [0, b2, b3, b4,],\r\n            ]\r\n\r\n            if extra_options_flag(\"h_prev_h_h_no_eta\", extra_options):\r\n                φ1 = Phi(h_prev1_no_eta * h/h_no_eta, ci, use_analytic_solution)\r\n            elif extra_options_flag(\"h_only\", extra_options):\r\n                φ1 = Phi(h, ci, use_analytic_solution)\r\n            else:\r\n                φ1 = Phi(h_prev1_no_eta, ci, use_analytic_solution)\r\n\r\n            u2_1 = -φ1(2,2)\r\n            u3_1 = -φ1(2,2) + 1/4\r\n            u4_1 = -φ1(2) + (1/2)*φ1(0,2)\r\n            v1 = -(1/2)*φ1(2) + φ1(3) + (1/12)*φ1(0,2)\r\n\r\n            u = [\r\n                    [   0, 0, 0, 0],\r\n                    [u2_1, 0, 0, 0],\r\n                    [u3_1, 0, 0, 0],\r\n                    [u4_1, 0, 0, 0],\r\n            ]\r\n            v = [\r\n                    [v1, 0, 0, 0,],\r\n            ]\r\n\r\n            a, b = gen_first_col_exp_uv(a,b,ci,u,v,φ)\r\n\r\n\r\n\r\n        case \"lawson43-gen-mod_2h4s\": # GenLawson4 https://ora.ox.ac.uk/objects/uuid:cc001282-4285-4ca2-ad06-31787b540c61/files/m611df1a355ca243beb09824b70e5e774\r\n            c1,c2,c3,c4 = 0, 1/2, 1/2, 1\r\n            ci = [c1,c2,c3,c4]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n\r\n            a3_2 = 1/2\r\n            a4_3 = φ(0,2)\r\n            \r\n            b3 = b2 = (1/3) * a4_3\r\n            b4 = (1/3)*φ(2) + φ(3) + φ(4) - (5/24)*φ(0,2)\r\n\r\n            a = [\r\n                    [0, 0,    0, 0],\r\n                    [0, 0,    0, 0],\r\n                    [0, a3_2, 0, 0],\r\n                    [0, 0, a4_3, 0],\r\n            ]\r\n            b = [\r\n                    [0, b2, b3, b4,],\r\n            ]\r\n\r\n            if extra_options_flag(\"h_prev_h_h_no_eta\", extra_options):\r\n                φ1 = Phi(h_prev1_no_eta * h/h_no_eta, ci, use_analytic_solution)\r\n                φ2 = Phi(h_prev2_no_eta * h/h_no_eta, ci, use_analytic_solution)\r\n            elif extra_options_flag(\"h_only\", extra_options):\r\n                φ1 = Phi(h, ci, use_analytic_solution)\r\n                φ2 = Phi(h, ci, use_analytic_solution)\r\n            else:\r\n                φ1 = Phi(h_prev1_no_eta, ci, use_analytic_solution)\r\n                φ2 = Phi(h_prev2_no_eta, ci, use_analytic_solution)\r\n\r\n            u2_1 = -2*φ1(2,2) - 2*φ1(3,2)\r\n            u3_1 = -2*φ1(2,2) - 2*φ1(3,2) + 5/8\r\n            u4_1 = -2*φ1(2) - 2*φ1(3) + (5/4)*φ1(0,2)\r\n            v1 = -φ1(2) + φ1(3) + 3*φ1(4) + (5/24)*φ1(0,2)\r\n            \r\n            u2_2 = -(1/2)*φ2(2,2) + φ2(3,2)\r\n            u3_2 = (1/2)*φ2(2,2) + φ2(3,2) - 3/16\r\n            u4_2 = (1/2)*φ2(2) + φ2(3) - (3/8)*φ2(0,2)\r\n            v2 = (1/6)*φ2(2) - φ2(4) - (1/24)*φ2(0,2)\r\n            \r\n            u = [\r\n                    [   0,    0, 0, 0],\r\n                    [u2_1, u2_2, 0, 0],\r\n                    [u3_1, u3_2, 0, 0],\r\n                    [u4_1, u4_2, 0, 0],\r\n            ]\r\n            v = [\r\n                    [v1, v2, 0, 0,],\r\n            ]\r\n\r\n            a, b = gen_first_col_exp_uv(a,b,ci,u,v,φ)\r\n\r\n\r\n        case \"lawson44-gen-mod_3h4s\": # GenLawson4 https://ora.ox.ac.uk/objects/uuid:cc001282-4285-4ca2-ad06-31787b540c61/files/m611df1a355ca243beb09824b70e5e774\r\n            c1,c2,c3,c4 = 0, 1/2, 1/2, 1\r\n            ci = [c1,c2,c3,c4]\r\n         
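   # φ1..φ3 below are φ-functions evaluated at the previous step sizes; they feed\r\n            # the multistep history weights u (per-stage) and v (output), which\r\n            # gen_first_col_exp_uv subtracts from the first column so that each row\r\n            # still sums to c[i]*φ(1,i+1).\r\n         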
   φ = Phi(h, ci, use_analytic_solution)\r\n\r\n            a3_2 = 1/2\r\n            a4_3 = φ(0,2)\r\n            \r\n            b3 = b2 = (1/3) * a4_3\r\n            b4 = (1/4)*φ(2) + (11/12)*φ(3) + (3/2)*φ(4) + φ(5) - (35/192)*φ(0,2)\r\n\r\n            a = [\r\n                    [0, 0,    0, 0],\r\n                    [0, 0,    0, 0],\r\n                    [0, a3_2, 0, 0],\r\n                    [0, 0, a4_3, 0],\r\n            ]\r\n            b = [\r\n                    [0, b2, b3, b4,],\r\n            ]\r\n\r\n            if extra_options_flag(\"h_prev_h_h_no_eta\", extra_options):\r\n                φ1 = Phi(h_prev1_no_eta * h/h_no_eta, ci, use_analytic_solution)\r\n                φ2 = Phi(h_prev2_no_eta * h/h_no_eta, ci, use_analytic_solution)\r\n                φ3 = Phi(h_prev3_no_eta * h/h_no_eta, ci, use_analytic_solution)\r\n            elif extra_options_flag(\"h_only\", extra_options):\r\n                φ1 = Phi(h, ci, use_analytic_solution)\r\n                φ2 = Phi(h, ci, use_analytic_solution)\r\n                φ3 = Phi(h, ci, use_analytic_solution)\r\n            else:\r\n                φ1 = Phi(h_prev1_no_eta, ci, use_analytic_solution)\r\n                φ2 = Phi(h_prev2_no_eta, ci, use_analytic_solution)\r\n                φ3 = Phi(h_prev3_no_eta, ci, use_analytic_solution)\r\n                \r\n            u2_1 = -3*φ1(2,2) - 5*φ1(3,2) - 3*φ1(4,2)\r\n            u3_1 = u2_1 + 35/32\r\n            u4_1 = -3*φ1(2) - 5*φ1(3) - 3*φ1(4) + (35/16)*φ1(0,2)\r\n            v1 = -(3/2)*φ1(2) + (1/2)*φ1(3) + 6*φ1(4) + 6*φ1(5) + (35/96)*φ1(0,2)\r\n            \r\n            u2_2 = (3/2)*φ2(2,2) + 4*φ2(3,2) + 3*φ2(4,2)\r\n            u3_2 = u2_2 - 21/32\r\n            u4_2 = (3/2)*φ2(2) + 4*φ2(3) + 3*φ2(4) - (21/16)*φ2(0,2)\r\n            v2 = (1/2)*φ2(2) + (1/3)*φ2(3) - 3*φ2(4) - 4*φ2(5) - (7/48)*φ2(0,2)\r\n            \r\n            u2_3 = (-1/3)*φ3(2,2) - φ3(3,2) - φ3(4,2)\r\n            u3_3 = u2_3 + 5/32\r\n            u4_3 = -(1/3)*φ3(2) - φ3(3) - φ3(4) + (5/16)*φ3(0,2)\r\n            v3 = -(1/12)*φ3(2) - (1/12)*φ3(3) + (1/2)*φ3(4) + φ3(5) + (5/192)*φ3(0,2)\r\n            \r\n            u = [\r\n                    [   0,    0,    0, 0],\r\n                    [u2_1, u2_2, u2_3, 0],\r\n                    [u3_1, u3_2, u3_3, 0],\r\n                    [u4_1, u4_2, u4_3, 0],\r\n            ]\r\n            v = [\r\n                    [v1, v2, v3, 0,],\r\n            ]\r\n\r\n            a, b = gen_first_col_exp_uv(a,b,ci,u,v,φ)\r\n\r\n\r\n\r\n        case \"lawson45-gen-mod_4h4s\": # GenLawson4 https://ora.ox.ac.uk/objects/uuid:cc001282-4285-4ca2-ad06-31787b540c61/files/m611df1a355ca243beb09824b70e5e774\r\n            c1,c2,c3,c4 = 0, 1/2, 1/2, 1\r\n            ci = [c1,c2,c3,c4]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n\r\n            a3_2 = 1/2\r\n            a4_3 = φ(0,2)\r\n            \r\n            b2 = (1/3) * φ(0,2)\r\n            b3 = (1/3) * φ(0,2)\r\n            b4 = (12/59)*φ(2) + (50/59)*φ(3) + (105/59)*φ(4) + (120/59)*φ(5) - (60/59)*φ(6) - (157/944)*φ(0,2)\r\n\r\n            a = [\r\n                    [0, 0,    0, 0],\r\n                    [0, 0,    0, 0],\r\n                    [0, a3_2, 0, 0],\r\n                    [0, 0, a4_3, 0],\r\n            ]\r\n            b = [\r\n                    [0, b2, b3, b4,],\r\n            ]\r\n\r\n            if extra_options_flag(\"h_prev_h_h_no_eta\", extra_options):\r\n                φ1 = Phi(h_prev1_no_eta * h/h_no_eta, ci, use_analytic_solution)\r\n                φ2 = 
Phi(h_prev2_no_eta * h/h_no_eta, ci, use_analytic_solution)\r\n                φ3 = Phi(h_prev3_no_eta * h/h_no_eta, ci, use_analytic_solution)\r\n                φ4 = Phi(h_prev4_no_eta * h/h_no_eta, ci, use_analytic_solution)\r\n            elif extra_options_flag(\"h_only\", extra_options):\r\n                φ1 = Phi(h, ci, use_analytic_solution)\r\n                φ2 = Phi(h, ci, use_analytic_solution)\r\n                φ3 = Phi(h, ci, use_analytic_solution)\r\n                φ4 = Phi(h, ci, use_analytic_solution)\r\n            else:\r\n                φ1 = Phi(h_prev1_no_eta, ci, use_analytic_solution)\r\n                φ2 = Phi(h_prev2_no_eta, ci, use_analytic_solution)\r\n                φ3 = Phi(h_prev3_no_eta, ci, use_analytic_solution)\r\n                φ4 = Phi(h_prev4_no_eta, ci, use_analytic_solution)\r\n                \r\n            u2_1 = -4*φ1(2,2) - (26/3)*φ1(3,2) - 9*φ1(4,2) - 4*φ1(5,2)\r\n            u3_1 = u2_1 + 105/64\r\n            u4_1 = -4*φ1(2) - (26/3)*φ1(3) - 9*φ1(4) - 4*φ1(5) + (105/32)*φ1(0,2)\r\n            v1 = -(116/59)*φ1(2) -  (34/177)*φ1(3) + (519/59)*φ1(4) + (964/59)*φ1(5) - (600/59)*φ1(6) +   (495/944)*φ1(0,2)\r\n            \r\n            u2_2 = 3*φ2(2,2) + (19/2)*φ2(3,2) + 12*φ2(4,2) + 6*φ2(5,2)\r\n            u3_2 = u2_2 - 189/128\r\n            u4_2 = 3*φ2(2) + (19/2)*φ2(3) + 12*φ2(4) + 6*φ2(5) - (189/64)*φ2(0,2)\r\n            v2 =  (57/59)*φ2(2) + (121/118)*φ2(3) - (342/59)*φ2(4) - (846/59)*φ2(5) + (600/59)*φ2(6) -  (577/1888)*φ2(0,2)\r\n            \r\n            u2_3 = -(4/3)*φ3(2,2) - (14/3)*φ3(3,2) - 7*φ3(4,2) - 4*φ3(5,2)\r\n            u3_3 = u2_3 + 45/64\r\n            u4_3 = -(4/3)*φ3(2) - (14/3)*φ3(3) - 7*φ3(4) - 4*φ3(5) +(45/32)*φ3(0,2)\r\n            v3 = -(56/177)*φ3(2) -  (76/177)*φ3(3) + (112/59)*φ3(4) + (364/59)*φ3(5) - (300/59)*φ3(6) +    (25/236)*φ3(0,2)\r\n            \r\n            u2_4 = (1/4)*φ4(2,2) + (88/96)*φ4(3,2) + (3/2)*φ4(4,2) + φ4(5,2)\r\n            u3_4 = u2_4 - 35/256\r\n            u4_4 = (1/4)*φ4(2) + (11/12)*φ4(3) + (3/2)*φ4(4) + φ4(5) - (35/128)*φ4(0,2)\r\n            v4 =  (11/236)*φ4(2) +  (49/708)*φ4(3) - (33/118)*φ4(4) -  (61/59)*φ4(5) + ( 60/59)*φ4(6) - (181/11328)*φ4(0,2)\r\n\r\n            u = [\r\n                    [   0,    0,    0,    0],\r\n                    [u2_1, u2_2, u2_3, u2_4],\r\n                    [u3_1, u3_2, u3_3, u3_4],\r\n                    [u4_1, u4_2, u4_3, u4_4],\r\n            ]\r\n            v = [\r\n                    [v1, v2, v3, v4,],\r\n            ]\r\n\r\n            a, b = gen_first_col_exp_uv(a,b,ci,u,v,φ)\r\n\r\n\r\n\r\n        case \"etdrk2_2s\": # https://arxiv.org/pdf/2402.15142v1\r\n            c1,c2 = 0, 1\r\n            ci = [c1,c2]\r\n            φ = Phi(h, ci, use_analytic_solution)   \r\n            \r\n            a = [\r\n                    [0, 0],\r\n                    [φ(1), 0],\r\n            ]\r\n            b = [\r\n                    [φ(1)-φ(2), φ(2)],\r\n            ]\r\n\r\n        case \"etdrk3_a_3s\": #non-monotonic # https://arxiv.org/pdf/2402.15142v1\r\n            c1,c2,c3 = 0, 1, 2/3\r\n            ci = [c1,c2,c3]\r\n            φ = Phi(h, ci, use_analytic_solution)   \r\n            \r\n            a2_1 = c2*φ(1)\r\n            a3_2 = (4/9)*φ(2,3)\r\n            a3_1 = c3*φ(1,3) - a3_2\r\n            \r\n            b2 = φ(2) - (1/2)*φ(1)\r\n            b3 = (3/4) * φ(1)\r\n            b1 = φ(1) - b2 - b3 \r\n            \r\n            a = [\r\n                    [0, 0, 0],\r\n                    [a2_1, 0, 0],\r\n              
      [a3_1, a3_2, 0 ]\r\n            ]\r\n            b = [\r\n                    [b1, b2, b3],\r\n            ]\r\n\r\n        case \"etdrk3_b_3s\": # https://arxiv.org/pdf/2402.15142v1\r\n            c1,c2,c3 = 0, 4/9, 2/3\r\n            ci = [c1,c2,c3]\r\n            φ = Phi(h, ci, use_analytic_solution)   \r\n            \r\n            a2_1 = c2*φ(1,2)\r\n            a3_2 = φ(2,3)\r\n            a3_1 = c3*φ(1,3) - a3_2\r\n            \r\n            b2 = 0\r\n            b3 = (3/2) * φ(2)\r\n            b1 = φ(1) - b2 - b3 \r\n            \r\n            a = [\r\n                    [0, 0, 0],\r\n                    [a2_1, 0, 0],\r\n                    [a3_1, a3_2, 0 ]\r\n            ]\r\n            b = [\r\n                    [b1, b2, b3],\r\n            ]\r\n\r\n        case \"etdrk4_4s\": # https://ora.ox.ac.uk/objects/uuid:cc001282-4285-4ca2-ad06-31787b540c61/files/m611df1a355ca243beb09824b70e5e774\r\n            c1,c2,c3,c4 = 0, 1/2, 1/2, 1\r\n            ci = [c1,c2,c3,c4]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n            \r\n            a3_2 =   φ(1,2)\r\n            a4_3 = 2*φ(1,2)\r\n\r\n            b2 = 2*φ(2) - 4*φ(3)\r\n            b3 = 2*φ(2) - 4*φ(3)\r\n            b4 =  -φ(2) + 4*φ(3)\r\n\r\n            a = [\r\n                    [0, 0,0,0],\r\n                    [0, 0,0,0],\r\n                    [0, a3_2,0,0],\r\n                    [0, 0, a4_3,0],\r\n            ]\r\n            b = [\r\n                    [0, b2, b3, b4],\r\n            ]\r\n\r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n\r\n\r\n        case \"etdrk4_4s_alt\": # pg 70 col 1 computed with (4.9) https://ora.ox.ac.uk/objects/uuid:cc001282-4285-4ca2-ad06-31787b540c61/files/m611df1a355ca243beb09824b70e5e774\r\n            c1,c2,c3,c4 = 0, 1/2, 1/2, 1\r\n            ci = [c1,c2,c3,c4]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n            \r\n            a2_1 = φ(1,2)  #unsure about this, looks bad and is pretty different from col #1 implementations for everything else except the other 4s alt and 5s ostermann??? 
from the link\r\n            a3_1 = 0\r\n            a4_1 = φ(1) - 2*φ(1,2)\r\n            \r\n            a3_2 =   φ(1,2)\r\n            a4_3 = 2*φ(1,2)\r\n\r\n            b1 = φ(1) - 3*φ(2) + 4*φ(3)\r\n            b2 = 2*φ(2) - 4*φ(3)\r\n            b3 = 2*φ(2) - 4*φ(3)\r\n            b4 =  -φ(2) + 4*φ(3)\r\n\r\n            a = [\r\n                    [   0, 0,      0,0],\r\n                    [a2_1, 0,      0,0],\r\n                    [a3_1, a3_2,   0,0],\r\n                    [a4_1, 0,   a4_3,0],\r\n            ]\r\n            b = [\r\n                    [0, b2, b3, b4],\r\n            ]\r\n\r\n            #a, b = gen_first_col_exp(a,b,ci,φ)\r\n\r\n            \r\n        case \"dpmpp_2s\":\r\n            c2 = float(get_extra_options_kv(\"c2\", str(c2), extra_options))\r\n            \r\n            ci = [0,c2]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n            \r\n            b2 = (1/(2*c2)) * φ(1)\r\n\r\n            a = [\r\n                    [0, 0],\r\n                    [0, 0],\r\n            ]\r\n            b = [\r\n                    [0, b2],\r\n            ]\r\n            \r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n            \r\n        case \"dpmpp_sde_2s\":\r\n            c2 = 1.0 #hardcoded to 1.0 to more closely emulate the configuration for k-diffusion's implementation\r\n            \r\n            ci = [0,c2]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n            \r\n            b2 = (1/(2*c2)) * φ(1)\r\n\r\n            a = [\r\n                    [0, 0],\r\n                    [0, 0],\r\n            ]\r\n            b = [\r\n                    [0, b2],\r\n            ]\r\n            \r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n\r\n        case \"dpmpp_3s\":\r\n            c2 = float(get_extra_options_kv(\"c2\", str(c2), extra_options))\r\n            c3 = float(get_extra_options_kv(\"c3\", str(c3), extra_options))\r\n            \r\n            ci = [0,c2,c3]\r\n            φ = Phi(h, ci, use_analytic_solution)   \r\n            \r\n            a3_2 = (c3**2 / c2) * φ(2,3)\r\n            b3 = (1/c3) * φ(2)\r\n\r\n            a = [\r\n                    [0, 0, 0],\r\n                    [0, 0, 0],\r\n                    [0, a3_2, 0],  \r\n            ]\r\n            b = [\r\n                    [0, 0, b3],\r\n            ]\r\n            \r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n            \r\n        case \"res_5s\": #non-monotonic #4th order\r\n                \r\n            c1, c2, c3, c4, c5 = 0, 1/2, 1/2, 1, 1/2\r\n            ci = [c1,c2,c3,c4,c5]\r\n            φ = Phi(h, ci, use_analytic_solution)   \r\n\r\n            a3_2 = φ(2,3)\r\n            a4_2 = φ(2,4)\r\n            a5_2 = (1/2)*φ(2,5) - φ(3,4) + (1/4)*φ(2,4) - (1/2)*φ(3,5)\r\n            \r\n            a4_3 = a4_2\r\n            a5_3 = a5_2\r\n            \r\n            a5_4 = (1/4)*φ(2,5) - a5_2\r\n            \r\n            b4 = -φ(2) + 4*φ(3)\r\n            b5 = 4*φ(2) - 8*φ(3)\r\n            \r\n            a = [\r\n                    [0, 0, 0, 0, 0],\r\n                    [0, 0, 0, 0, 0],\r\n                    [0, a3_2, 0, 0, 0],\r\n                    [0, a4_2, a4_3, 0, 0],\r\n                    [0, a5_2, a5_3, a5_4, 0],\r\n            ]\r\n            b = [\r\n                    [0, 0, 0, b4, b5],\r\n            ]\r\n            \r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n            \r\n        case \"res_5s_hochbruck-ostermann\": #non-monotonic #4th order\r\n                \r\n            c1, 
c2, c3, c4, c5 = 0, 1/2, 1/2, 1, 1/2\r\n            ci = [c1,c2,c3,c4,c5]\r\n            φ = Phi(h, ci, use_analytic_solution)   \r\n            \r\n            a3_2 = 4*φ(2,2)\r\n            a4_2 = φ(2)\r\n            a5_2 = (1/4)*φ(2) - φ(3) + 2*φ(2,2) - 4*φ(3,2)\r\n            \r\n            a4_3 = φ(2)\r\n            a5_3 = a5_2\r\n            \r\n            a5_4 = φ(2,2) - a5_2\r\n            \r\n            b4 =  -φ(2) + 4*φ(3)\r\n            b5 = 4*φ(2) - 8*φ(3)\r\n\r\n            a = [\r\n                    [0, 0   , 0   , 0   , 0],\r\n                    [0, 0   , 0   , 0   , 0],\r\n                    [0, a3_2, 0   , 0   , 0],\r\n                    [0, a4_2, a4_3, 0   , 0],\r\n                    [0, a5_2, a5_3, a5_4, 0],\r\n            ]\r\n            b = [\r\n                    [0, 0, 0, b4, b5],\r\n            ]\r\n            \r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n\r\n            \r\n        case \"res_6s\": #non-monotonic #4th order\r\n                \r\n            c1, c2, c3, c4, c5, c6 = 0, 1/2, 1/2, 1/3, 1/3, 5/6\r\n            ci = [c1, c2, c3, c4, c5, c6]\r\n            φ = Phi(h, ci, use_analytic_solution)\r\n            \r\n            a2_1 = c2 * φ(1,2)\r\n            \r\n            a3_1 = 0\r\n            a3_2 = (c3**2 / c2) * φ(2,3)\r\n            \r\n            a4_1 = 0\r\n            a4_2 = (c4**2 / c2) * φ(2,4)\r\n            a4_3 = (c4**2 * φ(2,4) - a4_2 * c2) / c3\r\n            \r\n            a5_1 = 0\r\n            a5_2 = 0 #zero\r\n            a5_3 = (-c4 * c5**2 * φ(2,5) + 2*c5**3 * φ(3,5))   /   (c3 * (c3 - c4))\r\n            a5_4 = (-c3 * c5**2 * φ(2,5) + 2*c5**3 * φ(3,5))   /   (c4 * (c4 - c3))\r\n            \r\n            a6_1 = 0\r\n            a6_2 = 0 #zero\r\n            a6_3 = (-c4 * c6**2 * φ(2,6) + 2*c6**3 * φ(3,6))   /   (c3 * (c3 - c4))\r\n            a6_4 = (-c3 * c6**2 * φ(2,6) + 2*c6**3 * φ(3,6))   /   (c4 * (c4 - c3))\r\n            a6_5 = (c6**2 * φ(2,6) - a6_3*c3 - a6_4*c4)   /   c5\r\n            #a6_5_alt = (2*c6**3 * φ(3,6) - a6_3*c3**2 - a6_4*c4**2)   /   c5**2\r\n                    \r\n            b1 = 0\r\n            b2 = 0\r\n            b3 = 0\r\n            b4 = 0\r\n            b5 = (-c6*φ(2) + 2*φ(3)) / (c5 * (c5 - c6))\r\n            b6 = (-c5*φ(2) + 2*φ(3)) / (c6 * (c6 - c5))\r\n\r\n            a = [\r\n                    [0, 0, 0, 0, 0, 0],\r\n                    [0, 0, 0, 0, 0, 0],\r\n                    [0, a3_2, 0, 0, 0, 0],\r\n                    [0, a4_2, a4_3, 0, 0, 0],\r\n                    [0, a5_2, a5_3, a5_4, 0, 0],\r\n                    [0, a6_2, a6_3, a6_4, a6_5, 0],\r\n            ]\r\n            b = [\r\n                    [0, b2, b3, b4, b5, b6],\r\n            ]\r\n            \r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n\r\n        case \"res_8s\": #non-monotonic # this is not EXPRK5S8 https://ora.ox.ac.uk/objects/uuid:cc001282-4285-4ca2-ad06-31787b540c61/files/m611df1a355ca243beb09824b70e5e774\r\n                \r\n            c1, c2, c3, c4, c5, c6, c7, c8 = 0, 1/2, 1/2, 1/4,    1/2, 1/5, 2/3, 1\r\n            ci = [c1, c2, c3, c4, c5, c6, c7, c8]\r\n            #φ = Phi(h, ci, analytic_solution=use_analytic_solution)\r\n            \r\n            ci = [mpf(c_val) for c_val in ci]\r\n            c1, c2, c3, c4, c5, c6, c7, c8 = [c_val for c_val in ci]\r\n\r\n            φ = Phi(mpf(h.item()), ci, analytic_solution=use_analytic_solution)\r\n            \r\n            a3_2 = (1/2) * φ(2,3)\r\n            \r\n            a4_3 = (1/8) * φ(2,4)\r\n\r\n            
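# The a5..a8 rows below combine large φ-terms with alternating signs; this case\r\n            # evaluates them with mpmath (mpf) above, presumably to avoid catastrophic\r\n            # cancellation, then casts a and b back to float at the end.\r\n            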
a5_3 = (-1/2) * φ(2,5) + 2 * φ(3,5)\r\n            a5_4 =      2 * φ(2,5) - 4 * φ(3,5)\r\n            \r\n            a6_4 = (8/25) * φ(2,6) - (32/125) * φ(3,6)\r\n            a6_5 = (2/25) * φ(2,6) -  (1/2)   * a6_4\r\n            \r\n            a7_4 = (-125/162)  * a6_4\r\n            a7_5 =  (125/1944) * a6_4 -  (16/27) * φ(2,7) + (320/81) * φ(3,7)\r\n            a7_6 = (3125/3888) * a6_4 + (100/27) * φ(2,7) - (800/81) * φ(3,7)\r\n            \r\n            Φ = (5/32)*a6_4 - (1/28)*φ(2,6) + (36/175)*φ(2,7) - (48/25)*φ(3,7) + (6/175)*φ(4,6) + (192/35)*φ(4,7) + 6*φ(4,8)\r\n            \r\n            a8_5 =  (208/3)*φ(3,8) -  (16/3) *φ(2,8) -      40*Φ\r\n            a8_6 = (-250/3)*φ(3,8) + (250/21)*φ(2,8) + (250/7)*Φ\r\n            a8_7 =      -27*φ(3,8) +  (27/14)*φ(2,8) + (135/7)*Φ\r\n            \r\n            b6 = (125/14)*φ(2) - (625/14)*φ(3) + (1125/14)*φ(4)\r\n            b7 = (-27/14)*φ(2) + (162/7) *φ(3) -  (405/7) *φ(4)\r\n            b8 =   (1/2) *φ(2) -  (13/2) *φ(3) +   (45/2) *φ(4)\r\n            \r\n            b1   =    φ(1)   - b6 - b7 - b8\r\n            \r\n            a = [\r\n                    [0 , 0   , 0   , 0   , 0   , 0   , 0   , 0],\r\n                    [0 , 0   , 0   , 0   , 0   , 0   , 0   , 0],\r\n                    \r\n                    [0 , a3_2, 0   , 0   , 0   , 0   , 0   , 0],\r\n                    [0 , 0   , a4_3, 0   , 0   , 0   , 0   , 0],\r\n                    \r\n                    [0 , 0   , a5_3, a5_4, 0   , 0   , 0   , 0],\r\n                    [0 , 0   , 0   , a6_4, a6_5, 0   , 0   , 0],\r\n                    \r\n                    [0 , 0   , 0   , a7_4, a7_5, a7_6, 0   , 0],\r\n                    [0 , 0   , 0   , 0   , a8_5, a8_6, a8_7, 0],\r\n            ]\r\n            b = [\r\n                    [0, 0, 0, 0, 0, b6, b7, b8],\r\n            ]\r\n            \r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n\r\n            a = [[float(val) for val in row] for row in a]\r\n            b = [[float(val) for val in row] for row in b]\r\n            ci = [c1, c2, c3, c4, c5, c6, c7, c8]\r\n\r\n\r\n\r\n        case \"res_8s_alt\": # this is EXPRK5S8 https://ora.ox.ac.uk/objects/uuid:cc001282-4285-4ca2-ad06-31787b540c61/files/m611df1a355ca243beb09824b70e5e774\r\n                \r\n            c1, c2, c3, c4, c5, c6, c7, c8 = 0, 1/2, 1/2, 1/4,    1/2, 1/5, 2/3, 1\r\n            #ci = [c1, c2, c3, c4, c5, c6, c7, c8]\r\n            #φ = Phi(h, ci, analytic_solution=use_analytic_solution)\r\n            \r\n            ci = [mpf(c_val) for c_val in ci]\r\n            c1, c2, c3, c4, c5, c6, c7, c8 = [c_val for c_val in ci]\r\n\r\n            φ = Phi(mpf(h.item()), ci, analytic_solution=use_analytic_solution)\r\n            \r\n            a3_2 = 2*φ(2,2)\r\n            \r\n            a4_3 = 2*φ(2,4)\r\n\r\n            a5_3 = -2*φ(2,2) + 16*φ(3,2)\r\n            a5_4 =  8*φ(2,2) - 32*φ(3,2)\r\n            \r\n            a6_4 =  8*φ(2,6) - 32*φ(3,6)\r\n            a6_5 = -2*φ(2,6) + 16*φ(3,6)\r\n            \r\n            a7_4 = (-125/162)  * a6_4\r\n            a7_5 =  (125/1944) * a6_4 -  (4/3) * φ(2,7) +  (40/3)*φ(3,7)\r\n            a7_6 = (3125/3888) * a6_4 + (25/3) * φ(2,7) - (100/3)*φ(3,7)\r\n            \r\n            Φ = (5/32)*a6_4 - (25/28)*φ(2,6) + (81/175)*φ(2,7) - (162/25)*φ(3,7) + (150/7)*φ(4,6) + (972/35)*φ(4,7) + 6*φ(4)\r\n            \r\n            a8_5 =  -(16/3)*φ(2) + (208/3)*φ(3) -      40*Φ\r\n            a8_6 = (250/21)*φ(2) - (250/3)*φ(3) + (250/7)*Φ\r\n            a8_7 =  (27/14)*φ(2) -      27*φ(3) + 
(135/7)*Φ\r\n            \r\n            b6 = (125/14)*φ(2) - (625/14)*φ(3) + (1125/14)*φ(4)\r\n            b7 = (-27/14)*φ(2) + (162/7) *φ(3) -  (405/7) *φ(4)\r\n            b8 =   (1/2) *φ(2) -  (13/2) *φ(3) +   (45/2) *φ(4)\r\n            \r\n            a = [\r\n                    [0 , 0   , 0   , 0   , 0   , 0   , 0   , 0],\r\n                    [0 , 0   , 0   , 0   , 0   , 0   , 0   , 0],\r\n                    \r\n                    [0 , a3_2, 0   , 0   , 0   , 0   , 0   , 0],\r\n                    [0 , 0   , a4_3, 0   , 0   , 0   , 0   , 0],\r\n                    \r\n                    [0 , 0   , a5_3, a5_4, 0   , 0   , 0   , 0],\r\n                    [0 , 0   , 0   , a6_4, a6_5, 0   , 0   , 0],\r\n                    \r\n                    [0 , 0   , 0   , a7_4, a7_5, a7_6, 0   , 0],\r\n                    [0 , 0   , 0   , 0   , a8_5, a8_6, a8_7, 0],\r\n            ]\r\n            b = [\r\n                    [0, 0, 0, 0, 0, b6, b7, b8],\r\n            ]\r\n\r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n\r\n            a = [[float(val) for val in row] for row in a]\r\n            b = [[float(val) for val in row] for row in b]\r\n            ci = [c1, c2, c3, c4, c5, c6, c7, c8]\r\n\r\n\r\n        case \"res_10s\":\r\n                \r\n            c1, c2, c3, c4, c5, c6, c7, c8, c9, c10 = 0, 1/2, 1/2, 1/3, 1/2,     1/3, 1/4, 3/10, 3/4, 1\r\n            ci = [c1, c2, c3, c4, c5, c6, c7, c8, c9, c10]\r\n            #φ = Phi(h, ci, analytic_solution=use_analytic_solution)\r\n            \r\n            ci = [mpf(c_val) for c_val in ci]\r\n            c1, c2, c3, c4, c5, c6, c7, c8, c9, c10 = [c_val for c_val in ci]\r\n\r\n            φ = Phi(mpf(h.item()), ci, analytic_solution=use_analytic_solution)\r\n\r\n            a3_2 = (c3**2 / c2) * φ(2,3)\r\n            a4_2 = (c4**2 / c2) * φ(2,4)\r\n                        \r\n            b8 =  (c9*c10*φ(2) - 2*(c9+c10)*φ(3) + 6*φ(4))   /   (c8 * (c8-c9) * (c8-c10))\r\n            b9 =  (c8*c10*φ(2) - 2*(c8+c10)*φ(3) + 6*φ(4))   /   (c9 * (c9-c8) * (c9-c10))\r\n            \r\n            b10 = (c8*c9*φ(2)  - 2*(c8+c9) *φ(3) + 6*φ(4))   /   (c10 * (c10-c8) * (c10-c9))\r\n            \r\n            a = [\r\n                    [0, 0, 0, 0, 0,      0, 0, 0, 0, 0],\r\n                    [0, 0, 0, 0, 0,      0, 0, 0, 0, 0],\r\n                    [0, a3_2, 0, 0, 0,      0, 0, 0, 0, 0],\r\n                    [0, a4_2, 0, 0, 0,      0, 0, 0, 0, 0],\r\n                    [0, 0, 0, 0, 0,      0, 0, 0, 0, 0],\r\n                    \r\n                    [0, 0, 0, 0, 0,      0, 0, 0, 0, 0],\r\n                    [0, 0, 0, 0, 0,      0, 0, 0, 0, 0],\r\n                    [0, 0, 0, 0, 0,      0, 0, 0, 0, 0],\r\n                    [0, 0, 0, 0, 0,      0, 0, 0, 0, 0],\r\n                    [0, 0, 0, 0, 0,      0, 0, 0, 0, 0],\r\n            ]\r\n            b = [\r\n                    [0, 0, 0, 0, 0,      0, 0, b8, b9, b10],\r\n            ]\r\n            \r\n            # a5_3, a5_4\r\n            # a6_3, a6_4\r\n            # a7_3, a7_4\r\n            for i in range(5, 8): # i=5,6,7   j,k ∈ {3, 4}, j != k\r\n                jk = [(3, 4), (4, 3)]\r\n                jk = list(permutations([3, 4], 2)) \r\n                for j,k in jk:\r\n                    a[i-1][j-1] = (-ci[i-1]**2 * ci[k-1] * φ(2,i)    +   2*ci[i-1]**3 * φ(3,i))   /   (ci[j-1] * (ci[j-1] - ci[k-1]))\r\n                \r\n            for i in range(8, 11): # i=8,9,10   j,k,l ∈ {5, 6, 7}, j != k != l      [    (5, 6, 7), (5, 7, 6),    (6, 5, 7), (6, 7, 5), 
   (7, 5, 6), (7, 6, 5)]    6 total coeff\r\n                jkl = list(permutations([5, 6, 7], 3)) \r\n                for j,k,l in jkl:\r\n                    a[i-1][j-1] = (ci[i-1]**2 * ci[k-1] * ci[l-1] * φ(2,i)   -   2*ci[i-1]**3 * (ci[k-1] + ci[l-1]) * φ(3,i)   +   6*ci[i-1]**4 * φ(4,i))    /    (ci[j-1] * (ci[j-1] - ci[k-1]) * (ci[j-1] - ci[l-1]))\r\n            \r\n            gen_first_col_exp(a, b, ci, φ)\r\n            \r\n            a = [[float(val) for val in row] for row in a]\r\n            b = [[float(val) for val in row] for row in b]\r\n            c1, c2, c3, c4, c5, c6, c7, c8, c9, c10 = 0, 1/2, 1/2, 1/3, 1/2,     1/3, 1/4, 3/10, 3/4, 1\r\n            ci = [c1, c2, c3, c4, c5, c6, c7, c8, c9, c10]\r\n            \r\n\r\n        case \"res_15s\":\r\n                \r\n            c1,c2,c3,c4,c5,c6,c7,c8,c9,c10,c11,c12,c13,c14,c15 = 0, 1/2, 1/2, 1/3, 1/2,    1/5, 1/4, 18/25, 1/3, 3/10,    1/6, 90/103, 1/3, 3/10, 1/5\r\n            c1 = 0\r\n            c2 = c3 = c5 = 1/2\r\n            c4 = c9 = c13 = 1/3\r\n            c6 = c15 = 1/5\r\n            c7 = 1/4\r\n            c8 = 18/25\r\n            c10 = c14 = 3/10\r\n            c11 = 1/6\r\n            c12 = 90/103\r\n            c15 = 1/5\r\n            ci = [c1, c2, c3, c4, c5, c6, c7, c8, c9, c10, c11, c12, c13, c14, c15]\r\n            ci = [mpf(c_val) for c_val in ci]\r\n\r\n            φ = Phi(mpf(h.item()), ci, analytic_solution=use_analytic_solution)\r\n\r\n            a = [[mpf(0) for _ in range(15)] for _ in range(15)]\r\n            b = [[mpf(0) for _ in range(15)]]\r\n\r\n            for i in range(3, 5): # i=3,4     j=2\r\n                j=2\r\n                a[i-1][j-1] = (ci[i-1]**2 / ci[j-1]) * φ(j,i)\r\n            \r\n            \r\n            for i in range(5, 8): # i=5,6,7   j,k ∈ {3, 4}, j != k\r\n                jk = list(permutations([3, 4], 2)) \r\n                for j,k in jk:\r\n                    a[i-1][j-1] = (-ci[i-1]**2 * ci[k-1] * φ(2,i)    +   2*ci[i-1]**3 * φ(3,i))   /   prod_diff(ci[j-1], ci[k-1])\r\n\r\n            for i in range(8, 12): # i=8,9,10,11  j,k,l ∈ {5, 6, 7}, j != k != l      [    (5, 6, 7), (5, 7, 6),    (6, 5, 7), (6, 7, 5),    (7, 5, 6), (7, 6, 5)]    6 total coeff\r\n                jkl = list(permutations([5, 6, 7], 3)) \r\n                for j,k,l in jkl:\r\n                    a[i-1][j-1] = (ci[i-1]**2 * ci[k-1] * ci[l-1] * φ(2,i)   -   2*ci[i-1]**3 * (ci[k-1] + ci[l-1]) * φ(3,i)   +   6*ci[i-1]**4 * φ(4,i))    /    (ci[j-1] * (ci[j-1] - ci[k-1]) * (ci[j-1] - ci[l-1]))\r\n\r\n            for i in range(12,16): # i=12,13,14,15\r\n                jkld = list(permutations([8,9,10,11], 4)) \r\n                for j,k,l,d in jkld:\r\n                    numerator = -ci[i-1]**2  *  ci[d-1]*ci[k-1]*ci[l-1]  *  φ(2,i)     +     2*ci[i-1]**3  *  (ci[d-1]*ci[k-1] + ci[d-1]*ci[l-1] + ci[k-1]*ci[l-1])  *  φ(3,i)     -     6*ci[i-1]**4  *  (ci[d-1] + ci[k-1] + ci[l-1])  *  φ(4,i)     +     24*ci[i-1]**5  *  φ(5,i)\r\n                    a[i-1][j-1] = numerator / prod_diff(ci[j-1], ci[k-1], ci[l-1], ci[d-1])\r\n\r\n            \"\"\"ijkl = list(permutations([12,13,14,15], 4)) \r\n            for i,j,k,l in ijkl:\r\n                #numerator = -ci[j-1]*ci[k-1]*ci[l-1]*φ(2)   +   2*(ci[j-1]*ci[k-1]   +   ci[j-1]*ci[l-1]   +   ci[k-1]*ci[l-1])*φ(3)   -   6*(ci[j-1] + ci[k-1]   +   ci[l-1])*φ(4)   +   24*φ(5)\r\n                #b[0][i-1] = numerator / prod_diff(ci[i-1], ci[j-1], ci[k-1], ci[l-1])\r\n                for jjj in range (2, 6): # 2,3,4,5\r\n                    
b[0][i-1] += mu_numerator(jjj, ci[j-1], ci[i-1], ci[k-1], ci[l-1]) * φ(jjj) \r\n                b[0][i-1] /= prod_diff(ci[i-1], ci[j-1], ci[k-1], ci[l-1])\"\"\"\r\n                    \r\n            ijkl = list(permutations([12,13,14,15], 4)) \r\n            for i,j,k,l in ijkl:\r\n                numerator = 0\r\n                for jjj in range(2, 6):  # 2, 3, 4, 5\r\n                    numerator += mu_numerator(jjj, ci[j-1], ci[i-1], ci[k-1], ci[l-1]) * φ(jjj)\r\n                #print(i,j,k,l)\r\n\r\n                b[0][i-1] = numerator / prod_diff(ci[i-1], ci[j-1], ci[k-1], ci[l-1])\r\n            \r\n            \r\n            ijkl = list(permutations([12, 13, 14, 15], 4))\r\n            selected_permutations = {} \r\n            sign = 1  \r\n\r\n            for i in range(12, 16):\r\n                results = []\r\n                for j, k, l, d in ijkl:\r\n                    if i != j and i != k and i != l and i != d:\r\n                        numerator = 0\r\n                        for jjj in range(2, 6):  # 2, 3, 4, 5\r\n                            numerator += mu_numerator(jjj, ci[j-1], ci[i-1], ci[k-1], ci[l-1]) * φ(jjj)\r\n                        theta_value = numerator / prod_diff(ci[i-1], ci[j-1], ci[k-1], ci[l-1])\r\n                        results.append((theta_value, (i, j, k, l, d)))\r\n\r\n                results.sort(key=lambda x: abs(x[0]))\r\n\r\n                for theta_value, permutation in results:\r\n                    if sign == 1 and theta_value > 0:\r\n                        selected_permutations[i] = (theta_value, permutation)\r\n                        sign *= -1  \r\n                        break\r\n                    elif sign == -1 and theta_value < 0:  \r\n                        selected_permutations[i] = (theta_value, permutation)\r\n                        sign *= -1 \r\n                        break\r\n\r\n            for i in range(12, 16):\r\n                if i in selected_permutations:\r\n                    theta_value, (i, j, k, l, d) = selected_permutations[i]\r\n                    b[0][i-1] = theta_value  \r\n                    \r\n            for i in selected_permutations:\r\n                theta_value, permutation = selected_permutations[i]\r\n                print(f\"i={i}\")\r\n                print(f\"  Selected Theta: {theta_value:.6f}, Permutation: {permutation}\")\r\n            \r\n            \r\n            gen_first_col_exp(a, b, ci, φ)\r\n\r\n            a = [[float(val) for val in row] for row in a]\r\n            b = [[float(val) for val in row] for row in b]\r\n            ci = [c1, c2, c3, c4, c5, c6, c7, c8, c9, c10, c11, c12, c13, c14, c15]\r\n            \r\n\r\n        case \"res_16s\": # 6th order without weakened order conditions\r\n                \r\n            c1 = 0\r\n            c2 = c3 = c5 = c8 = c12 = 1/2\r\n            c4 = c11 = c15 = 1/3\r\n            c6 = c9 = c13 = 1/5\r\n            c7 = c10 = c14 = 1/4\r\n            c16 = 1\r\n            ci = [c1, c2, c3, c4, c5, c6, c7, c8, c9, c10, c11, c12, c13, c14, c15, c16]\r\n            ci = [mpf(c_val) for c_val in ci]\r\n            φ = Phi(mpf(h.item()), ci, analytic_solution=use_analytic_solution)\r\n            \r\n            a3_2 = (1/2) * φ(2,3)\r\n\r\n            a = [[mpf(0) for _ in range(16)] for _ in range(16)]\r\n            b = [[mpf(0) for _ in range(16)]]\r\n\r\n            for i in range(3, 5): # i=3,4     j=2\r\n                j=2\r\n                a[i-1][j-1] = (ci[i-1]**2 / ci[j-1]) * φ(j,i)\r\n            \r\n          
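  # As in res_15s, each band of stages below is filled by interpolatory quadrature\r\n            # over the nodes named in the loop header: the numerators are symmetric\r\n            # polynomials in the c's against φ(2,i)..φ(5,i), and the denominators are\r\n            # the divided-difference products from prod_diff.\r\n          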
  for i in range(5, 8): # i=5,6,7   j,k ∈ {3, 4}, j != k\r\n                jk = list(permutations([3, 4], 2)) \r\n                for j,k in jk:\r\n                    a[i-1][j-1] = (-ci[i-1]**2 * ci[k-1] * φ(2,i)    +   2*ci[i-1]**3 * φ(3,i))   /   prod_diff(ci[j-1], ci[k-1])\r\n                    \r\n            for i in range(8, 12): # i=8,9,10,11  j,k,l ∈ {5, 6, 7}, j != k != l      [    (5, 6, 7), (5, 7, 6),    (6, 5, 7), (6, 7, 5),    (7, 5, 6), (7, 6, 5)]    6 total coeff\r\n                jkl = list(permutations([5, 6, 7], 3)) \r\n                for j,k,l in jkl:\r\n                    a[i-1][j-1] = (ci[i-1]**2 * ci[k-1] * ci[l-1] * φ(2,i)   -   2*ci[i-1]**3 * (ci[k-1] + ci[l-1]) * φ(3,i)   +   6*ci[i-1]**4 * φ(4,i))    /    (ci[j-1] * (ci[j-1] - ci[k-1]) * (ci[j-1] - ci[l-1]))\r\n\r\n            for i in range(12,17): # i=12,13,14,15,16\r\n                jkld = list(permutations([8,9,10,11], 4)) \r\n                for j,k,l,d in jkld:\r\n                    numerator = -ci[i-1]**2  *  ci[d-1]*ci[k-1]*ci[l-1]  *  φ(2,i)     +     2*ci[i-1]**3  *  (ci[d-1]*ci[k-1] + ci[d-1]*ci[l-1] + ci[k-1]*ci[l-1])  *  φ(3,i)     -     6*ci[i-1]**4  *  (ci[d-1] + ci[k-1] + ci[l-1])  *  φ(4,i)     +     24*ci[i-1]**5  *  φ(5,i)\r\n                    a[i-1][j-1] = numerator / prod_diff(ci[j-1], ci[k-1], ci[l-1], ci[d-1])\r\n            \r\n            \"\"\"ijdkl = list(permutations([12,13,14,15,16], 5)) \r\n            for i,j,d,k,l in ijdkl:\r\n                #numerator = -ci[j-1]*ci[k-1]*ci[l-1]*φ(2)   +   2*(ci[j-1]*ci[k-1]   +   ci[j-1]*ci[l-1]   +   ci[k-1]*ci[l-1])*φ(3)   -   6*(ci[j-1] + ci[k-1]   +   ci[l-1])*φ(4)   +   24*φ(5)\r\n                b[0][i-1] = theta(2, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1]) * φ(2)   +  theta(3, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1])*φ(3)   +   theta(4, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1])*φ(4)   +   theta(5, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1])*φ(5)    +    theta(6, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1]) * φ(6)\r\n                #b[0][i-1] = numerator / prod_diff(ci[i-1], ci[j-1], ci[k-1], ci[l-1])\"\"\"\r\n                    \r\n                \r\n            ijdkl = list(permutations([12,13,14,15,16], 5)) \r\n            for i,j,d,k,l in ijdkl:\r\n                #numerator = -ci[j-1]*ci[k-1]*ci[l-1]*φ(2)   +   2*(ci[j-1]*ci[k-1]   +   ci[j-1]*ci[l-1]   +   ci[k-1]*ci[l-1])*φ(3)   -   6*(ci[j-1] + ci[k-1]   +   ci[l-1])*φ(4)   +   24*φ(5)\r\n                #numerator = theta_numerator(2, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1]) * φ(2)   +  theta_numerator(3, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1])*φ(3)   +   theta_numerator(4, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1])*φ(4)   +   theta_numerator(5, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1])*φ(5)    +    theta_numerator(6, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1]) * φ(6)\r\n                #b[0][i-1] = numerator / (ci[i-1] *, ci[d-1], ci[j-1], ci[k-1], ci[l-1])\r\n                #b[0][i-1] = numerator / denominator(ci[i-1], ci[d-1], ci[j-1], ci[k-1], ci[l-1])\r\n                b[0][i-1] = theta(2, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1]) * φ(2)   +  theta(3, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1])*φ(3)   +   theta(4, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1])*φ(4)   +   theta(5, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1])*φ(5)    +    theta(6, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1]) * φ(6)\r\n\r\n            \r\n            ijdkl = list(permutations([12,13,14,15,16], 5)) \r\n            for i,j,d,k,l in ijdkl:\r\n                
numerator = 0\r\n                for jjj in range(2, 7):  # 2, 3, 4, 5, 6\r\n                    numerator += theta_numerator(jjj, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1]) * φ(jjj)\r\n                #print(i,j,d,k,l)\r\n                b[0][i-1] = numerator / denominator(ci[i-1], ci[j-1], ci[k-1], ci[l-1], ci[d-1])\r\n\r\n            gen_first_col_exp(a, b, ci, φ)\r\n\r\n            a = [[float(val) for val in row] for row in a]\r\n            b = [[float(val) for val in row] for row in b]\r\n            ci = [c1, c2, c3, c4, c5, c6, c7, c8, c9, c10, c11, c12, c13, c14, c15, c16]\r\n            \r\n        case \"irk_exp_diag_2s\":\r\n            c1 = 1/3\r\n            c2 = 2/3\r\n            c1 = float(get_extra_options_kv(\"c1\", str(c1), extra_options))\r\n            c2 = float(get_extra_options_kv(\"c2\", str(c2), extra_options))\r\n            \r\n            lam = (1 - torch.exp(-c1 * h)) / h\r\n            a2_1 = ( torch.exp(c2*h) - torch.exp(c1*h))    /    (h * torch.exp(2*c1*h))\r\n            b1 =  (1 + c2*h + torch.exp(h) * (-1 + h - c2*h)) / ((c1-c2) * h**2 * torch.exp(c1*h))\r\n            b2 = -(1 + c1*h - torch.exp(h) * ( 1 - h + c1*h)) / ((c1-c2) * h**2 * torch.exp(c2*h))\r\n\r\n            a = [\r\n                    [lam, 0],\r\n                    [a2_1, lam],\r\n            ]\r\n            b = [\r\n                    [b1, b2],\r\n            ]\r\n            ci = [c1, c2]\r\n\r\n    ci = ci[:]\r\n    #if rk_type.startswith(\"lob\") == False:\r\n    ci.append(1)\r\n    \r\n    if EO(\"exp2lin_override_coeff\") and is_exponential(rk_type):\r\n        a = scale_all(a, -sigma.item())\r\n        b = scale_all(b, -sigma.item())\r\n    \r\n    return a, b, u, v, ci, multistep_stages, hybrid_stages, FSAL\r\n\r\n\r\ndef scale_all(data, scalar):\r\n    if isinstance(data, torch.Tensor):\r\n        return data * scalar\r\n    elif isinstance(data, list):\r\n        return [scale_all(x, scalar) for x in data]\r\n    elif isinstance(data, (float, int)):\r\n        return data * scalar\r\n    else:\r\n        return data  # passthrough unscaled if unknown type... 
\r\ndef gen_first_col_exp(a, b, c, φ):\r\n    # Consistency fill-in: row i of a must sum to c_i * φ_1(c_i h), and the b row must sum to φ_1(h).\r\n    for i in range(len(c)): \r\n        a[i][0] = c[i] * φ(1,i+1) - sum(a[i])\r\n    for i in range(len(b)): \r\n        b[i][0] =        φ(1)     - sum(b[i])\r\n    return a, b\r\n\r\ndef gen_first_col_exp_uv(a, b, c, u, v, φ):\r\n    # Same consistency fill-in, including the multistep couplings u and v in the row sums.\r\n    for i in range(len(c)): \r\n        a[i][0] = c[i] * φ(1,i+1) - sum(a[i]) - sum(u[i])\r\n    for i in range(len(b)): \r\n        b[i][0] =        φ(1)     - sum(b[i]) - sum(v[i])\r\n    return a, b\r\n\r\ndef rho(j, ci, ck, cl):\r\n    if j == 2:\r\n        numerator = ck*cl\r\n    elif j == 3:\r\n        numerator = -2 * (ck + cl)\r\n    elif j == 4:\r\n        numerator = 6\r\n    else:\r\n        raise ValueError(f\"rho: unsupported order j={j}\")\r\n    return numerator / denominator(ci, ck, cl)\r\n\r\n\r\ndef mu(j, cd, ci, ck, cl):\r\n    return mu_numerator(j, cd, ci, ck, cl) / denominator(ci, cd, ck, cl)\r\n\r\ndef mu_numerator(j, cd, ci, ck, cl):\r\n    if j == 2:\r\n        numerator = -cd * ck * cl\r\n    elif j == 3:\r\n        numerator = 2 * (cd * ck + cd * cl + ck * cl)\r\n    elif j == 4:\r\n        numerator = -6 * (cd + ck + cl)\r\n    elif j == 5:\r\n        numerator = 24\r\n    else:\r\n        raise ValueError(f\"mu_numerator: unsupported order j={j}\")\r\n    return numerator\r\n\r\n\r\ndef theta_numerator(j, cd, ci, ck, cj, cl):\r\n    if j == 2:\r\n        numerator = -cj * cd * ck * cl\r\n    elif j == 3:\r\n        numerator = 2 * (cj*ck*cd + cj*ck*cl + ck*cd*cl + cd*cl*cj)\r\n    elif j == 4:\r\n        numerator = -6 * (cj*ck + cj*cd + cj*cl + ck*cd + ck*cl + cd*cl)\r\n    elif j == 5:\r\n        numerator = 24 * (cj + ck + cl + cd)\r\n    elif j == 6:\r\n        numerator = -120\r\n    else:\r\n        raise ValueError(f\"theta_numerator: unsupported order j={j}\")\r\n    return numerator\r\n\r\n\r\ndef theta(j, cd, ci, ck, cj, cl):\r\n    return theta_numerator(j, cd, ci, ck, cj, cl) / denominator(ci, cj, ck, cl, cd)\r\n\r\n\r\ndef prod_diff(cj, ck, cl=None, cd=None):\r\n    if cl is None and cd is None:\r\n        return cj * (cj - ck)\r\n    if cd is None:\r\n        return cj * (cj - ck) * (cj - cl)\r\n    return cj * (cj - ck) * (cj - cl) * (cj - cd)\r\n\r\ndef denominator(ci, *args):\r\n    result = ci\r\n    for arg in args:\r\n        result *= (ci - arg)\r\n    return result\r\n\r\n\r\ndef check_condition_4_2(nodes):\r\n    # Fourth-order condition on the four late-stage nodes: the alternating sum of scaled\r\n    # elementary symmetric polynomials below must equal 1/6 (to within 1e-6).\r\n    c12, c13, c14, c15 = nodes\r\n\r\n    term_1 = (1 / 5) * (c12 + c13 + c14 + c15)\r\n    term_2 = (1 / 4) * (c12 * c13 + c12 * c14 + c12 * c15 + c13 * c14 + c13 * c15 + c14 * c15)\r\n    term_3 = (1 / 3) * (c12 * c13 * c14 + c12 * c13 * c15 + c12 * c14 * c15 + c13 * c14 * c15)\r\n    term_4 = (1 / 2) * (c12 * c13 * c14 * c15)\r\n\r\n    result = term_1 - term_2 + term_3 - term_4\r\n\r\n    return abs(result - (1 / 6)) < 1e-6\r\n"
  },
  {
    "path": "beta/rk_guide_func_beta.py",
    "content": "import torch\r\nimport torch.nn.functional as F\r\nfrom torch import Tensor\r\n\r\nimport itertools\r\nimport copy\r\n\r\nfrom typing          import Optional, Callable, Tuple, Dict, Any, Union, TYPE_CHECKING, TypeVar\r\n\r\nif TYPE_CHECKING:\r\n    from .noise_classes import NoiseGenerator\r\n    NoiseGeneratorSubclass = TypeVar(\"NoiseGeneratorSubclass\", bound=\"NoiseGenerator\") \r\n\r\nfrom einops          import rearrange\r\n\r\nfrom ..sigmas        import get_sigmas\r\nfrom ..helper        import ExtraOptions, FrameWeightsManager, initialize_or_scale, is_video_model\r\nfrom ..latents       import normalize_zscore, get_collinear, get_orthogonal, get_cosine_similarity, get_pearson_similarity, \\\r\n                            get_slerp_weight_for_cossim, normalize_latent, hard_light_blend, slerp_tensor, get_orthogonal_noise_from_channelwise, get_edge_mask\r\n\r\nfrom .rk_method_beta import RK_Method_Beta\r\nfrom .constants      import MAX_STEPS\r\n\r\nfrom ..models import PRED\r\n\r\n\r\n#from ..latents import hard_light_blend, normalize_latent\r\n\r\n\r\n\r\nclass LatentGuide:\r\n    def __init__(self,\r\n                model,\r\n                sigmas               : Tensor,\r\n                UNSAMPLE             : bool,\r\n                VE_MODEL             : bool,\r\n                LGW_MASK_RESCALE_MIN : bool,\r\n                extra_options        : str,\r\n                device               : str = 'cpu',\r\n                dtype                : torch.dtype = torch.float64,\r\n                frame_weights_mgr    : FrameWeightsManager = None,\r\n                ):\r\n        \r\n        self.dtype                    = dtype\r\n        self.device                   = device\r\n        self.model                    = model\r\n\r\n        if hasattr(model, \"model\"):\r\n            model_sampling = model.model.model_sampling\r\n        elif hasattr(model, \"inner_model\"):\r\n            model_sampling = model.inner_model.inner_model.model_sampling\r\n        \r\n        self.sigma_min                 = model_sampling.sigma_min.to(dtype=dtype, device=device)\r\n        self.sigma_max                 = model_sampling.sigma_max.to(dtype=dtype, device=device)\r\n        self.sigmas                    = sigmas                  .to(dtype=dtype, device=device)\r\n        self.UNSAMPLE                  = UNSAMPLE\r\n        self.VE_MODEL                  = VE_MODEL\r\n        self.VIDEO                     = is_video_model(model)\r\n        self.SAMPLE                    = (sigmas[0] > sigmas[1])    # type torch.bool\r\n        self.y0                        = None\r\n        self.y0_inv                    = None\r\n        self.y0_mean                   = None\r\n        self.y0_adain                  = None\r\n        self.y0_attninj                = None\r\n        self.y0_style_pos              = None\r\n        self.y0_style_neg              = None\r\n\r\n        self.guide_mode                = \"\"\r\n        self.max_steps                 = MAX_STEPS\r\n        self.mask                      = None\r\n        self.mask_inv                  = None\r\n        self.mask_sync                 = None\r\n        self.mask_drift_x              = None\r\n        self.mask_drift_y              = None\r\n        self.mask_lure_x               = None\r\n        self.mask_lure_y               = None\r\n        self.mask_mean                 = None\r\n        self.mask_adain                = None\r\n        self.mask_attninj              = None\r\n        
self.mask_style_pos            = None\r\n        self.mask_style_neg            = None\r\n        self.x_lying_                  = None\r\n        self.s_lying_                  = None\r\n        \r\n        self.LGW_MASK_RESCALE_MIN      = LGW_MASK_RESCALE_MIN\r\n        self.HAS_LATENT_GUIDE          = False\r\n        self.HAS_LATENT_GUIDE_INV      = False\r\n        self.HAS_LATENT_GUIDE_MEAN     = False\r\n        self.HAS_LATENT_GUIDE_ADAIN    = False\r\n        self.HAS_LATENT_GUIDE_ATTNINJ  = False\r\n        self.HAS_LATENT_GUIDE_STYLE_POS= False\r\n        self.HAS_LATENT_GUIDE_STYLE_NEG= False\r\n        \r\n        self.lgw                       = torch.full_like(sigmas, 0., dtype=dtype) \r\n        self.lgw_inv                   = torch.full_like(sigmas, 0., dtype=dtype)\r\n        self.lgw_mean                  = torch.full_like(sigmas, 0., dtype=dtype)\r\n        self.lgw_adain                 = torch.full_like(sigmas, 0., dtype=dtype)\r\n        self.lgw_attninj               = torch.full_like(sigmas, 0., dtype=dtype)\r\n        self.lgw_style_pos             = torch.full_like(sigmas, 0., dtype=dtype)\r\n        self.lgw_style_neg             = torch.full_like(sigmas, 0., dtype=dtype)\r\n        \r\n        self.cossim_tgt                = torch.full_like(sigmas, 0., dtype=dtype) \r\n        self.cossim_tgt_inv            = torch.full_like(sigmas, 0., dtype=dtype) \r\n        \r\n        self.guide_cossim_cutoff_          = 1.0\r\n        self.guide_bkg_cossim_cutoff_      = 1.0\r\n        self.guide_mean_cossim_cutoff_     = 1.0\r\n        self.guide_adain_cossim_cutoff_    = 1.0\r\n        self.guide_attninj_cossim_cutoff_  = 1.0\r\n        self.guide_style_pos_cossim_cutoff_= 1.0\r\n        self.guide_style_neg_cossim_cutoff_= 1.0\r\n\r\n        self.frame_weights_mgr        = frame_weights_mgr\r\n        self.frame_weights            = None\r\n        self.frame_weights_inv        = None\r\n        \r\n        #self.freqsep_lowpass_method   = \"none\"\r\n        #self.freqsep_sigma            = 0.\r\n        #self.freqsep_kernel_size      = 0 \r\n        \r\n        self.extra_options            = extra_options\r\n        self.EO                       = ExtraOptions(extra_options)\r\n\r\n\r\n    def init_guides(self,\r\n            x             : Tensor,\r\n            RK_IMPLICIT   : bool,\r\n            guides        : Optional[Tensor]                   = None,\r\n            noise_sampler : Optional[\"NoiseGeneratorSubclass\"] = None,\r\n            batch_num     : int                                = 0,\r\n            sigma_init                                         = None,\r\n            guide_inversion_y0                                 = None,\r\n            guide_inversion_y0_inv                             = None,\r\n        ) -> Tensor:\r\n        \r\n        latent_guide_weight              = 0.0\r\n        latent_guide_weight_inv          = 0.0\r\n        latent_guide_weight_sync         = 0.0\r\n        latent_guide_weight_sync_inv     = 0.0\r\n        latent_guide_weight_drift_x      = 0.0\r\n        latent_guide_weight_drift_x_inv  = 0.0\r\n        latent_guide_weight_drift_y      = 0.0\r\n        latent_guide_weight_drift_y_inv  = 0.0\r\n        latent_guide_weight_lure_x       = 0.0\r\n        latent_guide_weight_lure_x_inv   = 0.0\r\n        latent_guide_weight_lure_y       = 0.0\r\n        latent_guide_weight_lure_y_inv   = 0.0\r\n        \r\n        latent_guide_weight_mean         = 0.0\r\n        latent_guide_weight_adain        = 0.0\r\n        
latent_guide_weight_attninj      = 0.0\r\n        latent_guide_weight_style_pos    = 0.0\r\n        latent_guide_weight_style_neg    = 0.0\r\n\r\n        latent_guide_weights             = torch.zeros_like(self.sigmas, dtype=self.dtype, device=self.device)\r\n        latent_guide_weights_inv         = torch.zeros_like(self.sigmas, dtype=self.dtype, device=self.device)\r\n        latent_guide_weights_sync        = torch.zeros_like(self.sigmas, dtype=self.dtype, device=self.device)\r\n        latent_guide_weights_sync_inv    = torch.zeros_like(self.sigmas, dtype=self.dtype, device=self.device)\r\n        latent_guide_weights_drift_x     = torch.zeros_like(self.sigmas, dtype=self.dtype, device=self.device)\r\n        latent_guide_weights_drift_x_inv = torch.zeros_like(self.sigmas, dtype=self.dtype, device=self.device)\r\n        latent_guide_weights_drift_y     = torch.zeros_like(self.sigmas, dtype=self.dtype, device=self.device)\r\n        latent_guide_weights_drift_y_inv = torch.zeros_like(self.sigmas, dtype=self.dtype, device=self.device)\r\n        latent_guide_weights_lure_x      = torch.zeros_like(self.sigmas, dtype=self.dtype, device=self.device)\r\n        latent_guide_weights_lure_x_inv  = torch.zeros_like(self.sigmas, dtype=self.dtype, device=self.device)\r\n        latent_guide_weights_lure_y      = torch.zeros_like(self.sigmas, dtype=self.dtype, device=self.device)\r\n        latent_guide_weights_lure_y_inv  = torch.zeros_like(self.sigmas, dtype=self.dtype, device=self.device)\r\n        latent_guide_weights_mean        = torch.zeros_like(self.sigmas, dtype=self.dtype, device=self.device)\r\n        latent_guide_weights_adain       = torch.zeros_like(self.sigmas, dtype=self.dtype, device=self.device)\r\n        latent_guide_weights_attninj     = torch.zeros_like(self.sigmas, dtype=self.dtype, device=self.device)\r\n        latent_guide_weights_style_pos   = torch.zeros_like(self.sigmas, dtype=self.dtype, device=self.device)\r\n        latent_guide_weights_style_neg   = torch.zeros_like(self.sigmas, dtype=self.dtype, device=self.device)\r\n\r\n        latent_guide           = None\r\n        latent_guide_inv       = None\r\n        latent_guide_mean      = None\r\n        latent_guide_adain     = None\r\n        latent_guide_attninj   = None\r\n        latent_guide_style_pos = None\r\n        latent_guide_style_neg = None\r\n        \r\n        self.drift_x_data  = 0.0\r\n        self.drift_x_sync  = 0.0\r\n        self.drift_y_data  = 0.0\r\n        self.drift_y_sync  = 0.0\r\n        self.drift_y_guide = 0.0\r\n        \r\n        if guides is not None:\r\n            self.guide_mode                 = guides.get(\"guide_mode\", \"none\")\r\n            \r\n            if self.guide_mode.startswith(\"inversion\"):\r\n                self.guide_mode = self.guide_mode.replace(\"inversion\", \"epsilon\", 1)\r\n            else:\r\n                self.SAMPLE   = True\r\n                self.UNSAMPLE = False\r\n            \r\n            latent_guide_weight              = guides.get(\"weight_masked\",           0.)\r\n            latent_guide_weight_inv          = guides.get(\"weight_unmasked\",         0.)\r\n            latent_guide_weight_sync         = guides.get(\"weight_masked_sync\",      0.)\r\n            latent_guide_weight_sync_inv     = guides.get(\"weight_unmasked_sync\",    0.)\r\n            latent_guide_weight_drift_x      = guides.get(\"weight_masked_drift_x\",   0.)\r\n            latent_guide_weight_drift_x_inv  = guides.get(\"weight_unmasked_drift_x\", 0.)\r\n     
       latent_guide_weight_drift_y      = guides.get(\"weight_masked_drift_y\",   0.)\r\n            latent_guide_weight_drift_y_inv  = guides.get(\"weight_unmasked_drift_y\", 0.)\r\n            latent_guide_weight_lure_x       = guides.get(\"weight_masked_lure_x\",    0.)\r\n            latent_guide_weight_lure_x_inv   = guides.get(\"weight_unmasked_lure_x\",  0.)\r\n            latent_guide_weight_lure_y       = guides.get(\"weight_masked_lure_y\",    0.)\r\n            latent_guide_weight_lure_y_inv   = guides.get(\"weight_unmasked_lure_y\",  0.)\r\n            latent_guide_weight_mean         = guides.get(\"weight_mean\",             0.)\r\n            latent_guide_weight_adain        = guides.get(\"weight_adain\",            0.)\r\n            latent_guide_weight_attninj      = guides.get(\"weight_attninj\",          0.)\r\n            latent_guide_weight_style_pos    = guides.get(\"weight_style_pos\",        0.)\r\n            latent_guide_weight_style_neg    = guides.get(\"weight_style_neg\",        0.)\r\n            #latent_guide_synweight_style_pos = guides.get(\"synweight_style_pos\", 0.)\r\n            #latent_guide_synweight_style_neg = guides.get(\"synweight_style_neg\", 0.)\r\n            \r\n            self.drift_x_data                = guides.get(\"drift_x_data\", 0.)\r\n            self.drift_x_sync                = guides.get(\"drift_x_sync\", 0.)\r\n            self.drift_y_data                = guides.get(\"drift_y_data\", 0.)\r\n            self.drift_y_sync                = guides.get(\"drift_y_sync\", 0.)\r\n            self.drift_y_guide               = guides.get(\"drift_y_guide\", 0.)\r\n\r\n            latent_guide_weights             = guides.get(\"weights_masked\")\r\n            latent_guide_weights_inv         = guides.get(\"weights_unmasked\")\r\n            latent_guide_weights_sync        = guides.get(\"weights_masked_sync\")\r\n            latent_guide_weights_sync_inv    = guides.get(\"weights_unmasked_sync\")\r\n            latent_guide_weights_drift_x     = guides.get(\"weights_masked_drift_x\")\r\n            latent_guide_weights_drift_x_inv = guides.get(\"weights_unmasked_drift_x\")\r\n            latent_guide_weights_drift_y     = guides.get(\"weights_masked_drift_y\")\r\n            latent_guide_weights_drift_y_inv = guides.get(\"weights_unmasked_drift_y\")\r\n            latent_guide_weights_lure_x      = guides.get(\"weights_masked_lure_x\")\r\n            latent_guide_weights_lure_x_inv  = guides.get(\"weights_unmasked_lure_x\")\r\n            latent_guide_weights_lure_y      = guides.get(\"weights_masked_lure_y\")\r\n            latent_guide_weights_lure_y_inv  = guides.get(\"weights_unmasked_lure_y\")\r\n            latent_guide_weights_mean        = guides.get(\"weights_mean\")\r\n            latent_guide_weights_adain       = guides.get(\"weights_adain\")\r\n            latent_guide_weights_attninj     = guides.get(\"weights_attninj\")\r\n            latent_guide_weights_style_pos   = guides.get(\"weights_style_pos\")\r\n            latent_guide_weights_style_neg   = guides.get(\"weights_style_neg\")\r\n            #latent_guide_synweights_style_p os = guides.get(\"synweights_style_pos\")\r\n            #latent_guide_synweights_style_neg = guides.get(\"synweights_style_neg\")\r\n\r\n            latent_guide                     = guides.get(\"guide_masked\")\r\n            latent_guide_inv                 = guides.get(\"guide_unmasked\")\r\n            latent_guide_mean                = guides.get(\"guide_mean\")\r\n            
latent_guide_adain               = guides.get(\"guide_adain\")\r\n            latent_guide_attninj             = guides.get(\"guide_attninj\")\r\n            latent_guide_style_pos           = guides.get(\"guide_style_pos\")\r\n            latent_guide_style_neg           = guides.get(\"guide_style_neg\")\r\n\r\n            self.mask                        = guides.get(\"mask\")\r\n            self.mask_inv                    = guides.get(\"unmask\")\r\n            self.mask_sync                   = guides.get(\"mask_sync\")\r\n            self.mask_drift_x                = guides.get(\"mask_drift_x\")\r\n            self.mask_drift_y                = guides.get(\"mask_drift_y\")\r\n            self.mask_lure_x                 = guides.get(\"mask_lure_x\")\r\n            self.mask_lure_y                 = guides.get(\"mask_lure_y\")\r\n            self.mask_mean                   = guides.get(\"mask_mean\")\r\n            self.mask_adain                  = guides.get(\"mask_adain\")\r\n            self.mask_attninj                = guides.get(\"mask_attninj\")\r\n            self.mask_style_pos              = guides.get(\"mask_style_pos\")\r\n            self.mask_style_neg              = guides.get(\"mask_style_neg\")\r\n\r\n            scheduler_                       = guides.get(\"weight_scheduler_masked\")\r\n            scheduler_inv_                   = guides.get(\"weight_scheduler_unmasked\")\r\n            scheduler_sync_                  = guides.get(\"weight_scheduler_masked_sync\")\r\n            scheduler_sync_inv_              = guides.get(\"weight_scheduler_unmasked_sync\")\r\n            scheduler_drift_x_               = guides.get(\"weight_scheduler_masked_drift_x\")\r\n            scheduler_drift_x_inv_           = guides.get(\"weight_scheduler_unmasked_drift_x\")\r\n            scheduler_drift_y_               = guides.get(\"weight_scheduler_masked_drift_y\")\r\n            scheduler_drift_y_inv_           = guides.get(\"weight_scheduler_unmasked_drift_y\")\r\n            scheduler_lure_x_                = guides.get(\"weight_scheduler_masked_lure_x\")\r\n            scheduler_lure_x_inv_            = guides.get(\"weight_scheduler_unmasked_lure_x\")\r\n            scheduler_lure_y_                = guides.get(\"weight_scheduler_masked_lure_y\")\r\n            scheduler_lure_y_inv_            = guides.get(\"weight_scheduler_unmasked_lure_y\")\r\n            scheduler_mean_                  = guides.get(\"weight_scheduler_mean\")\r\n            scheduler_adain_                 = guides.get(\"weight_scheduler_adain\")\r\n            scheduler_attninj_               = guides.get(\"weight_scheduler_attninj\")\r\n            scheduler_style_pos_             = guides.get(\"weight_scheduler_style_pos\")\r\n            scheduler_style_neg_             = guides.get(\"weight_scheduler_style_neg\")\r\n\r\n            start_steps_                     = guides.get(\"start_step_masked\",   0)\r\n            start_steps_inv_                 = guides.get(\"start_step_unmasked\", 0)\r\n            start_steps_sync_                = guides.get(\"start_step_masked_sync\",   0)\r\n            start_steps_sync_inv_            = guides.get(\"start_step_unmasked_sync\", 0)\r\n            start_steps_drift_x_             = guides.get(\"start_step_masked_drift_x\",   0)\r\n            start_steps_drift_x_inv_         = guides.get(\"start_step_unmasked_drift_x\", 0)\r\n            start_steps_drift_y_             = guides.get(\"start_step_masked_drift_y\",   0)\r\n            
start_steps_drift_y_inv_         = guides.get(\"start_step_unmasked_drift_y\", 0)\r\n            start_steps_lure_x_              = guides.get(\"start_step_masked_lure_x\",   0)\r\n            start_steps_lure_x_inv_          = guides.get(\"start_step_unmasked_lure_x\", 0)\r\n            start_steps_lure_y_              = guides.get(\"start_step_masked_lure_y\",   0)\r\n            start_steps_lure_y_inv_          = guides.get(\"start_step_unmasked_lure_y\", 0)\r\n            start_steps_mean_                = guides.get(\"start_step_mean\",     0)\r\n            start_steps_adain_               = guides.get(\"start_step_adain\",    0)\r\n            start_steps_attninj_             = guides.get(\"start_step_attninj\",  0)\r\n            start_steps_style_pos_           = guides.get(\"start_step_style_pos\", 0)\r\n            start_steps_style_neg_           = guides.get(\"start_step_style_neg\", 0)\r\n\r\n            steps_                           = guides.get(\"end_step_masked\",     1)\r\n            steps_inv_                       = guides.get(\"end_step_unmasked\",   1)\r\n            steps_sync_                      = guides.get(\"end_step_masked_sync\",     1)\r\n            steps_sync_inv_                  = guides.get(\"end_step_unmasked_sync\",   1)\r\n            steps_drift_x_                   = guides.get(\"end_step_masked_drift_x\",     1)\r\n            steps_drift_x_inv_               = guides.get(\"end_step_unmasked_drift_x\",   1)\r\n            steps_drift_y_                   = guides.get(\"end_step_masked_drift_y\",     1)\r\n            steps_drift_y_inv_               = guides.get(\"end_step_unmasked_drift_y\",   1)\r\n            steps_lure_x_                    = guides.get(\"end_step_masked_lure_x\",     1)\r\n            steps_lure_x_inv_                = guides.get(\"end_step_unmasked_lure_x\",   1)\r\n            steps_lure_y_                    = guides.get(\"end_step_masked_lure_y\",     1)\r\n            steps_lure_y_inv_                = guides.get(\"end_step_unmasked_lure_y\",   1)\r\n            \r\n            steps_mean_                      = guides.get(\"end_step_mean\",       1)\r\n            steps_adain_                     = guides.get(\"end_step_adain\",      1)\r\n            steps_attninj_                   = guides.get(\"end_step_attninj\",    1)\r\n            steps_style_pos_                 = guides.get(\"end_step_style_pos\",  1)\r\n            steps_style_neg_                 = guides.get(\"end_step_style_neg\",  1)\r\n\r\n            self.guide_cossim_cutoff_           = guides.get(\"cutoff_masked\",       1.)\r\n            self.guide_bkg_cossim_cutoff_       = guides.get(\"cutoff_unmasked\",     1.)\r\n            self.guide_mean_cossim_cutoff_      = guides.get(\"cutoff_mean\",         1.)\r\n            self.guide_adain_cossim_cutoff_     = guides.get(\"cutoff_adain\",        1.)\r\n            self.guide_attninj_cossim_cutoff_   = guides.get(\"cutoff_attninj\",      1.)\r\n            self.guide_style_pos_cossim_cutoff_ = guides.get(\"cutoff_style_pos\",  1.)\r\n            self.guide_style_neg_cossim_cutoff_ = guides.get(\"cutoff_style_neg\",  1.)\r\n            \r\n            self.sync_lure_iter                 = guides.get(\"sync_lure_iter\",  0)\r\n            self.sync_lure_sequence             = guides.get(\"sync_lure_sequence\")\r\n            \r\n            #self.SYNC_SEPARATE = False\r\n            #if scheduler_sync_ is not None:\r\n            #    self.SYNC_SEPARATE = True\r\n            self.SYNC_SEPARATE = 
True\r\n            if scheduler_sync_ is None and scheduler_ is not None:\r\n\r\n                latent_guide_weight_sync      = latent_guide_weight\r\n                latent_guide_weight_sync_inv  = latent_guide_weight_inv\r\n                latent_guide_weights_sync     = latent_guide_weights\r\n                latent_guide_weights_sync_inv = latent_guide_weights_inv\r\n                \r\n                scheduler_sync_               = scheduler_\r\n                scheduler_sync_inv_           = scheduler_inv_\r\n                \r\n                start_steps_sync_             = start_steps_\r\n                start_steps_sync_inv_         = start_steps_inv_\r\n                \r\n                steps_sync_                   = steps_\r\n                steps_sync_inv_               = steps_inv_\r\n            \r\n            self.SYNC_drift_X = True\r\n            if scheduler_drift_x_ is None and scheduler_ is not None:\r\n                self.SYNC_drift_X = False\r\n\r\n                latent_guide_weight_drift_x      = latent_guide_weight\r\n                latent_guide_weight_drift_x_inv  = latent_guide_weight_inv\r\n                latent_guide_weights_drift_x     = latent_guide_weights\r\n                latent_guide_weights_drift_x_inv = latent_guide_weights_inv\r\n                \r\n                scheduler_drift_x_               = scheduler_\r\n                scheduler_drift_x_inv_           = scheduler_inv_\r\n                \r\n                start_steps_drift_x_             = start_steps_\r\n                start_steps_drift_x_inv_         = start_steps_inv_\r\n                \r\n                steps_drift_x_                   = steps_\r\n                steps_drift_x_inv_               = steps_inv_\r\n                \r\n            self.SYNC_drift_Y = True\r\n            if scheduler_drift_y_ is None and scheduler_ is not None:\r\n                self.SYNC_drift_Y = False\r\n\r\n                latent_guide_weight_drift_y      = latent_guide_weight\r\n                latent_guide_weight_drift_y_inv  = latent_guide_weight_inv\r\n                latent_guide_weights_drift_y     = latent_guide_weights\r\n                latent_guide_weights_drift_y_inv = latent_guide_weights_inv\r\n                \r\n                scheduler_drift_y_               = scheduler_\r\n                scheduler_drift_y_inv_           = scheduler_inv_\r\n                \r\n                start_steps_drift_y_             = start_steps_\r\n                start_steps_drift_y_inv_         = start_steps_inv_\r\n                \r\n                steps_drift_y_                   = steps_\r\n                steps_drift_y_inv_               = steps_inv_\r\n            \r\n            self.SYNC_LURE_X = True\r\n            if scheduler_lure_x_ is None and scheduler_ is not None:\r\n                self.SYNC_LURE_X = False\r\n\r\n                latent_guide_weight_lure_x      = latent_guide_weight\r\n                latent_guide_weight_lure_x_inv  = latent_guide_weight_inv\r\n                latent_guide_weights_lure_x     = latent_guide_weights\r\n                latent_guide_weights_lure_x_inv = latent_guide_weights_inv\r\n                \r\n                scheduler_lure_x_               = scheduler_\r\n                scheduler_lure_x_inv_           = scheduler_inv_\r\n                \r\n                start_steps_lure_x_             = start_steps_\r\n                start_steps_lure_x_inv_         = start_steps_inv_\r\n                \r\n                steps_lure_x_                 
  = steps_\r\n                steps_lure_x_inv_               = steps_inv_\r\n                \r\n            self.SYNC_LURE_Y = True\r\n            if scheduler_lure_y_ is None and scheduler_ is not None:\r\n                self.SYNC_LURE_Y = False\r\n\r\n                latent_guide_weight_lure_y      = latent_guide_weight\r\n                latent_guide_weight_lure_y_inv  = latent_guide_weight_inv\r\n                latent_guide_weights_lure_y     = latent_guide_weights\r\n                latent_guide_weights_lure_y_inv = latent_guide_weights_inv\r\n                \r\n                scheduler_lure_y_               = scheduler_\r\n                scheduler_lure_y_inv_           = scheduler_inv_\r\n                \r\n                start_steps_lure_y_             = start_steps_\r\n                start_steps_lure_y_inv_         = start_steps_inv_\r\n                \r\n                steps_lure_y_                   = steps_\r\n                steps_lure_y_inv_               = steps_inv_\r\n\r\n            if self.mask         is not None and self.mask.shape    [0] > 1 and self.VIDEO     is False:\r\n                self.mask         = self.mask    [batch_num].unsqueeze(0)\r\n            if self.mask_inv     is not None and self.mask_inv.shape[0] > 1 and self.VIDEO     is False:\r\n                self.mask_inv     = self.mask_inv[batch_num].unsqueeze(0)\r\n            if self.mask_sync    is not None and self.mask_sync.shape[0] > 1 and self.VIDEO    is False:\r\n                self.mask_sync    = self.mask_sync[batch_num].unsqueeze(0)\r\n            if self.mask_drift_x is not None and self.mask_drift_x.shape[0] > 1 and self.VIDEO is False:\r\n                self.mask_drift_x = self.mask_drift_x[batch_num].unsqueeze(0)\r\n            if self.mask_drift_y is not None and self.mask_drift_y.shape[0] > 1 and self.VIDEO is False:\r\n                self.mask_drift_y = self.mask_drift_y[batch_num].unsqueeze(0)\r\n            if self.mask_lure_x  is not None and self.mask_lure_x.shape[0] > 1 and self.VIDEO  is False:\r\n                self.mask_lure_x  = self.mask_lure_x[batch_num].unsqueeze(0)\r\n            if self.mask_lure_y  is not None and self.mask_lure_y.shape[0] > 1 and self.VIDEO  is False:\r\n                self.mask_lure_y  = self.mask_lure_y[batch_num].unsqueeze(0)\r\n                \r\n            if self.guide_mode.startswith(\"fully_\") and not RK_IMPLICIT:\r\n                self.guide_mode = self.guide_mode[6:]   # fully_pseudoimplicit is only supported for implicit samplers, default back to pseudoimplicit\r\n\r\n            guide_sigma_shift = self.EO(\"guide_sigma_shift\", 0.0)                                                                         # effectively hardcoding shift to 0 !!!!!!\r\n            \r\n            if latent_guide_weights is None and scheduler_ is not None:\r\n                total_steps                     = steps_ - start_steps_\r\n                latent_guide_weights            = get_sigmas(self.model, scheduler_, total_steps, 1.0, shift=guide_sigma_shift).to(dtype=self.dtype, device=self.device) / self.sigma_max\r\n                prepend                         = torch.zeros(start_steps_,                               dtype=self.dtype, device=self.device)\r\n                latent_guide_weights            = torch.cat((prepend, latent_guide_weights.to(self.device)), dim=0)\r\n                \r\n            if latent_guide_weights_inv is None and scheduler_inv_ is not None:\r\n                total_steps                     = steps_inv_ - 
start_steps_inv_\r\n                latent_guide_weights_inv        = get_sigmas(self.model, scheduler_inv_, total_steps, 1.0, shift=guide_sigma_shift).to(dtype=self.dtype, device=self.device) / self.sigma_max\r\n                prepend                         = torch.zeros(start_steps_inv_,                               dtype=self.dtype, device=self.device) \r\n                latent_guide_weights_inv        = torch.cat((prepend, latent_guide_weights_inv.to(self.device)), dim=0)\r\n\r\n            if latent_guide_weights_sync is None and scheduler_sync_ is not None:\r\n                total_steps                     = steps_sync_ - start_steps_sync_\r\n                latent_guide_weights_sync       = get_sigmas(self.model, scheduler_sync_, total_steps, 1.0, shift=guide_sigma_shift).to(dtype=self.dtype, device=self.device) / self.sigma_max\r\n                prepend                         = torch.zeros(start_steps_sync_,                               dtype=self.dtype, device=self.device)\r\n                latent_guide_weights_sync       = torch.cat((prepend, latent_guide_weights_sync.to(self.device)), dim=0)\r\n                \r\n            if latent_guide_weights_sync_inv is None and scheduler_sync_inv_ is not None:\r\n                total_steps                     = steps_sync_inv_ - start_steps_sync_inv_\r\n                latent_guide_weights_sync_inv   = get_sigmas(self.model, scheduler_sync_inv_, total_steps, 1.0, shift=guide_sigma_shift).to(dtype=self.dtype, device=self.device) / self.sigma_max\r\n                prepend                         = torch.zeros(start_steps_sync_inv_,                               dtype=self.dtype, device=self.device) \r\n                latent_guide_weights_sync_inv   = torch.cat((prepend, latent_guide_weights_sync_inv.to(self.device)), dim=0)\r\n                \r\n            if latent_guide_weights_drift_x is None and scheduler_drift_x_ is not None:\r\n                total_steps                     = steps_drift_x_ - start_steps_drift_x_\r\n                latent_guide_weights_drift_x    = get_sigmas(self.model, scheduler_drift_x_, total_steps, 1.0, shift=guide_sigma_shift).to(dtype=self.dtype, device=self.device) / self.sigma_max\r\n                prepend                         = torch.zeros(start_steps_drift_x_,                               dtype=self.dtype, device=self.device)\r\n                latent_guide_weights_drift_x    = torch.cat((prepend, latent_guide_weights_drift_x.to(self.device)), dim=0)\r\n                \r\n            if latent_guide_weights_drift_x_inv is None and scheduler_drift_x_inv_ is not None:\r\n                total_steps                      = steps_drift_x_inv_ - start_steps_drift_x_inv_\r\n                latent_guide_weights_drift_x_inv = get_sigmas(self.model, scheduler_drift_x_inv_, total_steps, 1.0, shift=guide_sigma_shift).to(dtype=self.dtype, device=self.device) / self.sigma_max\r\n                prepend                          = torch.zeros(start_steps_drift_x_inv_,                               dtype=self.dtype, device=self.device) \r\n                latent_guide_weights_drift_x_inv = torch.cat((prepend, latent_guide_weights_drift_x_inv.to(self.device)), dim=0)\r\n                \r\n            if latent_guide_weights_drift_y is None and scheduler_drift_y_ is not None:\r\n                total_steps                     = steps_drift_y_ - start_steps_drift_y_\r\n                latent_guide_weights_drift_y    = get_sigmas(self.model, scheduler_drift_y_, total_steps, 1.0, 
shift=guide_sigma_shift).to(dtype=self.dtype, device=self.device) / self.sigma_max\r\n                prepend                         = torch.zeros(start_steps_drift_y_,                               dtype=self.dtype, device=self.device)\r\n                latent_guide_weights_drift_y    = torch.cat((prepend, latent_guide_weights_drift_y.to(self.device)), dim=0)\r\n                \r\n            if latent_guide_weights_drift_y_inv is None and scheduler_drift_y_inv_ is not None:\r\n                total_steps                      = steps_drift_y_inv_ - start_steps_drift_y_inv_\r\n                latent_guide_weights_drift_y_inv = get_sigmas(self.model, scheduler_drift_y_inv_, total_steps, 1.0, shift=guide_sigma_shift).to(dtype=self.dtype, device=self.device) / self.sigma_max\r\n                prepend                          = torch.zeros(start_steps_drift_y_inv_,                               dtype=self.dtype, device=self.device) \r\n                latent_guide_weights_drift_y_inv = torch.cat((prepend, latent_guide_weights_drift_y_inv.to(self.device)), dim=0)\r\n                \r\n            if latent_guide_weights_lure_x is None and scheduler_lure_x_ is not None:\r\n                total_steps                     = steps_lure_x_ - start_steps_lure_x_\r\n                latent_guide_weights_lure_x     = get_sigmas(self.model, scheduler_lure_x_, total_steps, 1.0, shift=guide_sigma_shift).to(dtype=self.dtype, device=self.device) / self.sigma_max\r\n                prepend                         = torch.zeros(start_steps_lure_x_,                               dtype=self.dtype, device=self.device)\r\n                latent_guide_weights_lure_x     = torch.cat((prepend, latent_guide_weights_lure_x.to(self.device)), dim=0)\r\n                \r\n            if latent_guide_weights_lure_x_inv is None and scheduler_lure_x_inv_ is not None:\r\n                total_steps                     = steps_lure_x_inv_ - start_steps_lure_x_inv_\r\n                latent_guide_weights_lure_x_inv = get_sigmas(self.model, scheduler_lure_x_inv_, total_steps, 1.0, shift=guide_sigma_shift).to(dtype=self.dtype, device=self.device) / self.sigma_max\r\n                prepend                         = torch.zeros(start_steps_lure_x_inv_,                               dtype=self.dtype, device=self.device) \r\n                latent_guide_weights_lure_x_inv = torch.cat((prepend, latent_guide_weights_lure_x_inv.to(self.device)), dim=0)\r\n                \r\n            if latent_guide_weights_lure_y is None and scheduler_lure_y_ is not None:\r\n                total_steps                     = steps_lure_y_ - start_steps_lure_y_\r\n                latent_guide_weights_lure_y     = get_sigmas(self.model, scheduler_lure_y_, total_steps, 1.0, shift=guide_sigma_shift).to(dtype=self.dtype, device=self.device) / self.sigma_max\r\n                prepend                         = torch.zeros(start_steps_lure_y_,                               dtype=self.dtype, device=self.device)\r\n                latent_guide_weights_lure_y     = torch.cat((prepend, latent_guide_weights_lure_y.to(self.device)), dim=0)\r\n                \r\n            if latent_guide_weights_lure_y_inv is None and scheduler_lure_y_inv_ is not None:\r\n                total_steps                     = steps_lure_y_inv_ - start_steps_lure_y_inv_\r\n                latent_guide_weights_lure_y_inv = get_sigmas(self.model, scheduler_lure_y_inv_, total_steps, 1.0, shift=guide_sigma_shift).to(dtype=self.dtype, device=self.device) / self.sigma_max\r\n        
        prepend                         = torch.zeros(start_steps_lure_y_inv_,                               dtype=self.dtype, device=self.device) \r\n                latent_guide_weights_lure_y_inv = torch.cat((prepend, latent_guide_weights_lure_y_inv.to(self.device)), dim=0)\r\n                \r\n\r\n            if latent_guide_weights_mean is None and scheduler_mean_ is not None:\r\n                total_steps                     = steps_mean_ - start_steps_mean_\r\n                latent_guide_weights_mean       = get_sigmas(self.model, scheduler_mean_, total_steps, 1.0, shift=guide_sigma_shift).to(dtype=self.dtype, device=self.device) / self.sigma_max\r\n                prepend                         = torch.zeros(start_steps_mean_,                                                        dtype=self.dtype, device=self.device) \r\n                latent_guide_weights_mean       = torch.cat((prepend, latent_guide_weights_mean.to(self.device)), dim=0)\r\n            \r\n            if latent_guide_weights_adain is None and scheduler_adain_ is not None:\r\n                total_steps                     = steps_adain_ - start_steps_adain_\r\n                latent_guide_weights_adain      = get_sigmas(self.model, scheduler_adain_, total_steps, 1.0, shift=guide_sigma_shift).to(dtype=self.dtype, device=self.device) / self.sigma_max\r\n                prepend                         = torch.zeros(start_steps_adain_,                                                         dtype=self.dtype, device=self.device) \r\n                latent_guide_weights_adain      = torch.cat((prepend, latent_guide_weights_adain.to(self.device)), dim=0)\r\n            \r\n            if latent_guide_weights_attninj is None and scheduler_attninj_ is not None:\r\n                total_steps                     = steps_attninj_ - start_steps_attninj_\r\n                latent_guide_weights_attninj    = get_sigmas(self.model, scheduler_attninj_, total_steps, 1.0, shift=guide_sigma_shift).to(dtype=self.dtype, device=self.device) / self.sigma_max\r\n                prepend                         = torch.zeros(start_steps_attninj_,                                                         dtype=self.dtype, device=self.device) \r\n                latent_guide_weights_attninj    = torch.cat((prepend, latent_guide_weights_attninj.to(self.device)), dim=0)\r\n            \r\n            if latent_guide_weights_style_pos is None and scheduler_style_pos_ is not None:\r\n                total_steps                     = steps_style_pos_ - start_steps_style_pos_\r\n                latent_guide_weights_style_pos  = get_sigmas(self.model, scheduler_style_pos_, total_steps, 1.0, shift=guide_sigma_shift).to(dtype=self.dtype, device=self.device) / self.sigma_max\r\n                prepend                         = torch.zeros(start_steps_style_pos_,                                                         dtype=self.dtype, device=self.device) \r\n                latent_guide_weights_style_pos  = torch.cat((prepend, latent_guide_weights_style_pos.to(self.device)), dim=0)\r\n            \r\n            if latent_guide_weights_style_neg is None and scheduler_style_neg_ is not None:\r\n                total_steps                     = steps_style_neg_ - start_steps_style_neg_\r\n                latent_guide_weights_style_neg  = get_sigmas(self.model, scheduler_style_neg_, total_steps, 1.0, shift=guide_sigma_shift).to(dtype=self.dtype, device=self.device) / self.sigma_max\r\n                prepend                         = 
torch.zeros(start_steps_style_neg_,                                                         dtype=self.dtype, device=self.device) \r\n                latent_guide_weights_style_neg  = torch.cat((prepend, latent_guide_weights_style_neg.to(self.device)), dim=0)\r\n            \r\n            if scheduler_ != \"constant\":\r\n                latent_guide_weights            = initialize_or_scale(latent_guide_weights,      latent_guide_weight,      self.max_steps)\r\n            if scheduler_inv_ != \"constant\":\r\n                latent_guide_weights_inv        = initialize_or_scale(latent_guide_weights_inv,  latent_guide_weight_inv,  self.max_steps)\r\n            if scheduler_sync_ != \"constant\":\r\n                latent_guide_weights_sync       = initialize_or_scale(latent_guide_weights_sync,      latent_guide_weight_sync,      self.max_steps)\r\n            if scheduler_sync_inv_ != \"constant\": \r\n                latent_guide_weights_sync_inv   = initialize_or_scale(latent_guide_weights_sync_inv,  latent_guide_weight_sync_inv,  self.max_steps)\r\n                \r\n            latent_guide_weights_sync     = 1 - latent_guide_weights_sync     if latent_guide_weights_sync     is not None else latent_guide_weights\r\n            latent_guide_weights_sync_inv = 1 - latent_guide_weights_sync_inv if latent_guide_weights_sync_inv is not None else latent_guide_weights_inv\r\n            latent_guide_weight_sync      = 1 - latent_guide_weight_sync\r\n            latent_guide_weight_sync_inv  = 1 - latent_guide_weight_sync_inv# these are more intuitive to use if these are reversed... so that sync weight = 1.0 means \"maximum guide strength\"\r\n            \r\n            \r\n            if scheduler_drift_x_ != \"constant\": \r\n                latent_guide_weights_drift_x     = initialize_or_scale(latent_guide_weights_drift_x,      latent_guide_weight_drift_x,      self.max_steps)\r\n            if scheduler_drift_x_inv_ != \"constant\": \r\n                latent_guide_weights_drift_x_inv = initialize_or_scale(latent_guide_weights_drift_x_inv,  latent_guide_weight_drift_x_inv,  self.max_steps)\r\n            if scheduler_drift_y_ != \"constant\": \r\n                latent_guide_weights_drift_y     = initialize_or_scale(latent_guide_weights_drift_y,      latent_guide_weight_drift_y,      self.max_steps)\r\n            if scheduler_drift_y_inv_ != \"constant\": \r\n                latent_guide_weights_drift_y_inv = initialize_or_scale(latent_guide_weights_drift_y_inv,  latent_guide_weight_drift_y_inv,  self.max_steps)\r\n            if scheduler_lure_x_ != \"constant\": \r\n                latent_guide_weights_lure_x      = initialize_or_scale(latent_guide_weights_lure_x,      latent_guide_weight_lure_x,      self.max_steps)\r\n            if scheduler_lure_x_inv_ != \"constant\": \r\n                latent_guide_weights_lure_x_inv  = initialize_or_scale(latent_guide_weights_lure_x_inv,  latent_guide_weight_lure_x_inv,  self.max_steps)\r\n            if scheduler_lure_y_ != \"constant\": \r\n                latent_guide_weights_lure_y      = initialize_or_scale(latent_guide_weights_lure_y,      latent_guide_weight_lure_y,      self.max_steps)\r\n            if scheduler_lure_y_inv_ != \"constant\": \r\n                latent_guide_weights_lure_y_inv  = initialize_or_scale(latent_guide_weights_lure_y_inv,  latent_guide_weight_lure_y_inv,  self.max_steps)\r\n            if scheduler_mean_ != \"constant\": \r\n                latent_guide_weights_mean        = 
initialize_or_scale(latent_guide_weights_mean, latent_guide_weight_mean, self.max_steps)\r\n            if scheduler_adain_ != \"constant\": \r\n                latent_guide_weights_adain       = initialize_or_scale(latent_guide_weights_adain, latent_guide_weight_adain, self.max_steps)\r\n            if scheduler_attninj_ != \"constant\": \r\n                latent_guide_weights_attninj     = initialize_or_scale(latent_guide_weights_attninj, latent_guide_weight_attninj, self.max_steps)\r\n            if scheduler_style_pos_ != \"constant\": \r\n                latent_guide_weights_style_pos   = initialize_or_scale(latent_guide_weights_style_pos, latent_guide_weight_style_pos, self.max_steps)\r\n            if scheduler_style_neg_ != \"constant\": \r\n                latent_guide_weights_style_neg   = initialize_or_scale(latent_guide_weights_style_neg, latent_guide_weight_style_neg, self.max_steps)\r\n\r\n            latent_guide_weights            [steps_            :] = 0\r\n            latent_guide_weights_inv        [steps_inv_        :] = 0\r\n            latent_guide_weights_sync       [steps_sync_       :] = 1 #one\r\n            latent_guide_weights_sync_inv   [steps_sync_inv_   :] = 1 #one\r\n            latent_guide_weights_drift_x    [steps_drift_x_    :] = 0\r\n            latent_guide_weights_drift_x_inv[steps_drift_x_inv_:] = 0\r\n            latent_guide_weights_drift_y    [steps_drift_y_    :] = 0\r\n            latent_guide_weights_drift_y_inv[steps_drift_y_inv_:] = 0\r\n            latent_guide_weights_lure_x     [steps_lure_x_     :] = 0\r\n            latent_guide_weights_lure_x_inv [steps_lure_x_inv_ :] = 0\r\n            latent_guide_weights_lure_y     [steps_lure_y_     :] = 0\r\n            latent_guide_weights_lure_y_inv [steps_lure_y_inv_ :] = 0\r\n            latent_guide_weights_mean       [steps_mean_       :] = 0\r\n            latent_guide_weights_adain      [steps_adain_      :] = 0\r\n            latent_guide_weights_attninj    [steps_attninj_    :] = 0\r\n            latent_guide_weights_style_pos  [steps_style_pos_  :] = 0\r\n            latent_guide_weights_style_neg  [steps_style_neg_  :] = 0\r\n        \r\n        self.lgw             = F.pad(latent_guide_weights,             (0, self.max_steps), value=0.0)\r\n        self.lgw_inv         = F.pad(latent_guide_weights_inv,         (0, self.max_steps), value=0.0)\r\n        self.lgw_sync        = F.pad(latent_guide_weights_sync,        (0, self.max_steps), value=1.0) #one\r\n        self.lgw_sync_inv    = F.pad(latent_guide_weights_sync_inv,    (0, self.max_steps), value=1.0) #one\r\n        self.lgw_drift_x     = F.pad(latent_guide_weights_drift_x,     (0, self.max_steps), value=0.0)\r\n        self.lgw_drift_x_inv = F.pad(latent_guide_weights_drift_x_inv, (0, self.max_steps), value=0.0)\r\n        self.lgw_drift_y     = F.pad(latent_guide_weights_drift_y,     (0, self.max_steps), value=0.0)\r\n        self.lgw_drift_y_inv = F.pad(latent_guide_weights_drift_y_inv, (0, self.max_steps), value=0.0)\r\n        self.lgw_lure_x      = F.pad(latent_guide_weights_lure_x,      (0, self.max_steps), value=0.0)\r\n        self.lgw_lure_x_inv  = F.pad(latent_guide_weights_lure_x_inv,  (0, self.max_steps), value=0.0)\r\n        self.lgw_lure_y      = F.pad(latent_guide_weights_lure_y,      (0, self.max_steps), value=0.0)\r\n        self.lgw_lure_y_inv  = F.pad(latent_guide_weights_lure_y_inv,  (0, self.max_steps), value=0.0)\r\n        self.lgw_mean        = F.pad(latent_guide_weights_mean,        (0, self.max_steps), 
value=0.0)\r\n        self.lgw_adain       = F.pad(latent_guide_weights_adain,       (0, self.max_steps), value=0.0)\r\n        self.lgw_attninj     = F.pad(latent_guide_weights_attninj,     (0, self.max_steps), value=0.0)\r\n        self.lgw_style_pos   = F.pad(latent_guide_weights_style_pos,   (0, self.max_steps), value=0.0)\r\n        self.lgw_style_neg   = F.pad(latent_guide_weights_style_neg,   (0, self.max_steps), value=0.0)\r\n        \r\n        mask, self.LGW_MASK_RESCALE_MIN = prepare_mask(x, self.mask, self.LGW_MASK_RESCALE_MIN)\r\n        self.mask = mask.to(dtype=self.dtype, device=self.device)\r\n\r\n        if self.mask_inv is not None:\r\n            mask_inv, self.LGW_MASK_RESCALE_MIN = prepare_mask(x, self.mask_inv, self.LGW_MASK_RESCALE_MIN)\r\n            self.mask_inv = mask_inv.to(dtype=self.dtype, device=self.device)\r\n        else:\r\n            self.mask_inv = (1-self.mask)\r\n            \r\n        if self.mask_sync is not None:\r\n            mask_sync, self.LGW_MASK_RESCALE_MIN = prepare_mask(x, self.mask_sync, self.LGW_MASK_RESCALE_MIN)\r\n            self.mask_sync = mask_sync.to(dtype=self.dtype, device=self.device)\r\n        else:\r\n            self.mask_sync = self.mask\r\n            \r\n        if self.mask_drift_x is not None:\r\n            mask_drift_x, self.LGW_MASK_RESCALE_MIN = prepare_mask(x, self.mask_drift_x, self.LGW_MASK_RESCALE_MIN)\r\n            self.mask_drift_x = mask_drift_x.to(dtype=self.dtype, device=self.device)\r\n        else:\r\n            self.mask_drift_x = self.mask\r\n            \r\n        if self.mask_drift_y is not None:\r\n            mask_drift_y, self.LGW_MASK_RESCALE_MIN = prepare_mask(x, self.mask_drift_y, self.LGW_MASK_RESCALE_MIN)\r\n            self.mask_drift_y = mask_drift_y.to(dtype=self.dtype, device=self.device)\r\n        else:\r\n            self.mask_drift_y = self.mask\r\n            \r\n        if self.mask_lure_x is not None:\r\n            mask_lure_x, self.LGW_MASK_RESCALE_MIN = prepare_mask(x, self.mask_lure_x, self.LGW_MASK_RESCALE_MIN)\r\n            self.mask_lure_x = mask_lure_x.to(dtype=self.dtype, device=self.device)\r\n        else:\r\n            self.mask_lure_x = self.mask\r\n            \r\n        if self.mask_lure_y is not None:\r\n            mask_lure_y, self.LGW_MASK_RESCALE_MIN = prepare_mask(x, self.mask_lure_y, self.LGW_MASK_RESCALE_MIN)\r\n            self.mask_lure_y = mask_lure_y.to(dtype=self.dtype, device=self.device)\r\n        else:\r\n            self.mask_lure_y = self.mask\r\n            \r\n        mask_style_pos, self.LGW_MASK_RESCALE_MIN = prepare_mask(x, self.mask_style_pos, self.LGW_MASK_RESCALE_MIN)\r\n        self.mask_style_pos = mask_style_pos.to(dtype=self.dtype, device=self.device)\r\n\r\n        \r\n        mask_style_neg, self.LGW_MASK_RESCALE_MIN = prepare_mask(x, self.mask_style_neg, self.LGW_MASK_RESCALE_MIN)\r\n        self.mask_style_neg = mask_style_neg.to(dtype=self.dtype, device=self.device)\r\n    \r\n        if latent_guide is not None:\r\n            self.HAS_LATENT_GUIDE = True\r\n            if type(latent_guide) is dict:\r\n                if latent_guide    ['samples'].shape[0] > 1:\r\n                    latent_guide['samples']     = latent_guide    ['samples'][batch_num].unsqueeze(0)\r\n                latent_guide_samples = self.model.inner_model.inner_model.process_latent_in(latent_guide['samples']).clone().to(dtype=self.dtype, device=self.device)\r\n            elif type(latent_guide) is torch.Tensor:\r\n                
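# plain tensors are used as-is; dict guides were run through process_latent_in above\r\n                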
latent_guide_samples = latent_guide.to(dtype=self.dtype, device=self.device)\r\n            else:\r\n                raise ValueError(f\"Invalid latent type: {type(latent_guide)}\")\r\n\r\n            if self.VIDEO and latent_guide_samples.shape[2] == 1:\r\n                latent_guide_samples = latent_guide_samples.repeat(1, 1, x.shape[2], 1, 1)\r\n\r\n            if self.SAMPLE:\r\n                self.y0 = latent_guide_samples\r\n            elif sigma_init != 0.0:\r\n                pass\r\n            elif self.UNSAMPLE: # and self.mask is not None:\r\n                mask = self.mask.to(x.device)\r\n                x = (1-mask) * x + mask * latent_guide_samples.to(x.device)\r\n            else:\r\n                x = latent_guide_samples.to(x.device)\r\n        else:\r\n            self.y0 = torch.zeros_like(x, dtype=self.dtype, device=self.device)\r\n\r\n        if latent_guide_inv is not None:\r\n            self.HAS_LATENT_GUIDE_INV = True\r\n            if type(latent_guide_inv) is dict:\r\n                if latent_guide_inv['samples'].shape[0] > 1:\r\n                    latent_guide_inv['samples'] = latent_guide_inv['samples'][batch_num].unsqueeze(0)\r\n                latent_guide_inv_samples = self.model.inner_model.inner_model.process_latent_in(latent_guide_inv['samples']).clone().to(dtype=self.dtype, device=self.device)\r\n            elif type(latent_guide_inv) is torch.Tensor:\r\n                latent_guide_inv_samples = latent_guide_inv.to(dtype=self.dtype, device=self.device)\r\n            else:\r\n                raise ValueError(f\"Invalid latent type: {type(latent_guide_inv)}\")\r\n\r\n            if self.VIDEO and latent_guide_inv_samples.shape[2] == 1:\r\n                latent_guide_inv_samples = latent_guide_inv_samples.repeat(1, 1, x.shape[2], 1, 1)\r\n\r\n            if self.SAMPLE:\r\n                self.y0_inv = latent_guide_inv_samples\r\n            elif sigma_init != 0.0:\r\n                pass\r\n            elif self.UNSAMPLE: # and self.mask is not None:\r\n                mask_inv = self.mask_inv.to(x.device)\r\n                x = (1-mask_inv) * x + mask_inv * latent_guide_inv_samples.to(x.device) #fixed old approach, which was mask, (1-mask)\r\n            else:\r\n                x = latent_guide_inv_samples.to(x.device)   #THIS COULD LEAD TO WEIRD BEHAVIOR! 
overwrites x with latent_guide_inv after x may already have been set from latent_guide above.\r\n        else:\r\n            self.y0_inv = torch.zeros_like(x, dtype=self.dtype, device=self.device)\r\n\r\n
        # the remaining guides (mean, adain, attninj, style_pos, style_neg) all share the same\r\n        # loading steps: select the batch entry, run process_latent_in for dict latents, and\r\n        # broadcast single-frame guides across the video frame dimension\r\n
        def load_guide_samples(guide):\r\n            if type(guide) is dict:\r\n                if guide['samples'].shape[0] > 1:\r\n                    guide['samples'] = guide['samples'][batch_num].unsqueeze(0)\r\n                guide_samples = self.model.inner_model.inner_model.process_latent_in(guide['samples']).clone().to(dtype=self.dtype, device=self.device)\r\n            elif type(guide) is torch.Tensor:\r\n                guide_samples = guide.to(dtype=self.dtype, device=self.device)\r\n            else:\r\n                raise ValueError(f\"Invalid latent type: {type(guide)}\")\r\n\r\n            if self.VIDEO and guide_samples.shape[2] == 1:\r\n                guide_samples = guide_samples.repeat(1, 1, x.shape[2], 1, 1)\r\n            return guide_samples\r\n\r\n
        if latent_guide_mean is not None:\r\n            self.HAS_LATENT_GUIDE_MEAN = True\r\n            self.y0_mean = load_guide_samples(latent_guide_mean)\r\n        else:\r\n            self.y0_mean = torch.zeros_like(x, dtype=self.dtype, device=self.device)\r\n\r\n
        if latent_guide_adain is not None:\r\n            self.HAS_LATENT_GUIDE_ADAIN = True\r\n            self.y0_adain = load_guide_samples(latent_guide_adain)\r\n        else:\r\n            self.y0_adain = torch.zeros_like(x, dtype=self.dtype, device=self.device)\r\n\r\n
        if latent_guide_attninj is not None:\r\n            self.HAS_LATENT_GUIDE_ATTNINJ = True\r\n            self.y0_attninj = load_guide_samples(latent_guide_attninj)\r\n        else:\r\n            self.y0_attninj = torch.zeros_like(x, dtype=self.dtype, device=self.device)\r\n\r\n
        if latent_guide_style_pos is not None:\r\n            self.HAS_LATENT_GUIDE_STYLE_POS = True\r\n            self.y0_style_pos = load_guide_samples(latent_guide_style_pos)\r\n        else:\r\n            self.y0_style_pos = torch.zeros_like(x, dtype=self.dtype, device=self.device)\r\n\r\n
        if latent_guide_style_neg is not None:\r\n            self.HAS_LATENT_GUIDE_STYLE_NEG = True\r\n            self.y0_style_neg = load_guide_samples(latent_guide_style_neg)\r\n        else:\r\n            self.y0_style_neg = torch.zeros_like(x, dtype=self.dtype, device=self.device)\r\n\r\n
        if self.UNSAMPLE and not self.SAMPLE:   # TODO: VERIFY APPROACH FOR INVERSION\r\n            if guide_inversion_y0 is not None:\r\n                self.y0 = guide_inversion_y0\r\n            else:\r\n                self.y0     = noise_sampler(sigma=self.sigma_max, sigma_next=self.sigma_min).to(dtype=self.dtype, device=self.device)\r\n                self.y0     = normalize_zscore(self.y0,     channelwise=True, inplace=True)\r\n                self.y0    *= self.sigma_max\r\n\r\n            if guide_inversion_y0_inv is not None:\r\n                self.y0_inv = guide_inversion_y0_inv\r\n            else:\r\n                self.y0_inv = noise_sampler(sigma=self.sigma_max, sigma_next=self.sigma_min).to(dtype=self.dtype, device=self.device)\r\n                self.y0_inv = normalize_zscore(self.y0_inv, channelwise=True, inplace=True)\r\n                self.y0_inv *= self.sigma_max\r\n\r\n
        if self.VIDEO and self.frame_weights_mgr is not None:\r\n            num_frames = x.shape[2]\r\n            self.frame_weights     = self.frame_weights_mgr.get_frame_weights_by_name('frame_weights', num_frames)\r\n            self.frame_weights_inv = self.frame_weights_mgr.get_frame_weights_by_name('frame_weights_inv', num_frames)\r\n\r\n
        x, self.y0, self.y0_inv = self.normalize_inputs(x, self.y0, self.y0_inv)       # ???\r\n\r\n        return x\r\n\r\n
    def prepare_weighted_masks(self, step:int, lgw_type=\"default\") -> Tuple[Tensor, Tensor]:\r\n        if lgw_type == \"sync\":\r\n            lgw_     = self.lgw_sync    [step]\r\n            lgw_inv_ = self.lgw_sync_inv[step]\r\n            mask     = torch.ones_like (self.y0) if self.mask_sync is None else   self.mask_sync\r\n            mask_inv = torch.zeros_like(self.y0) if self.mask_sync is None else 1-self.mask_sync\r\n        elif lgw_type == \"drift_x\":\r\n            lgw_     = self.lgw_drift_x    [step]\r\n            lgw_inv_ = self.lgw_drift_x_inv[step]\r\n            mask     = torch.ones_like (self.y0) if self.mask_drift_x is None else   self.mask_drift_x\r\n            mask_inv = torch.zeros_like(self.y0) if self.mask_drift_x is None else 1-self.mask_drift_x\r\n        elif lgw_type == \"drift_y\":\r\n            lgw_     = self.lgw_drift_y    [step]\r\n            lgw_inv_ = self.lgw_drift_y_inv[step]\r\n            mask     = torch.ones_like (self.y0) if self.mask_drift_y is None else   self.mask_drift_y\r\n            mask_inv = torch.zeros_like(self.y0) if self.mask_drift_y is None else 1-self.mask_drift_y\r\n        elif lgw_type == \"lure_x\":\r\n            lgw_     = self.lgw_lure_x    [step]\r\n            lgw_inv_ = self.lgw_lure_x_inv[step]\r\n            mask     = torch.ones_like (self.y0) if self.mask_lure_x is None else   self.mask_lure_x\r\n            mask_inv = torch.zeros_like(self.y0) if self.mask_lure_x is None else 1-self.mask_lure_x\r\n        elif lgw_type == \"lure_y\":\r\n            lgw_     = self.lgw_lure_y    [step]\r\n            lgw_inv_ = self.lgw_lure_y_inv[step]\r\n            mask     = torch.ones_like (self.y0) if self.mask_lure_y is None else   self.mask_lure_y\r\n            mask_inv = torch.zeros_like(self.y0) if self.mask_lure_y is None else 1-self.mask_lure_y\r\n        else:\r\n            lgw_     = self.lgw    [step]\r\n            
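# default schedule: per-step weight from self.lgw / self.lgw_inv, masked by self.mask / self.mask_inv\r\n            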
lgw_inv_ = self.lgw_inv[step]\r\n            mask     = torch.ones_like (self.y0) if self.mask     is None else self.mask\r\n            mask_inv = torch.zeros_like(self.y0) if self.mask_inv is None else self.mask_inv\r\n\r\n        if self.LGW_MASK_RESCALE_MIN: \r\n            lgw_mask     =    mask  * (1-lgw_)     + lgw_\r\n            lgw_mask_inv = (1-mask) * (1-lgw_inv_) + lgw_inv_\r\n        else:\r\n            if self.HAS_LATENT_GUIDE:\r\n                lgw_mask = mask * lgw_\r\n            else:\r\n                lgw_mask = torch.zeros_like(mask)\r\n            \r\n            if self.HAS_LATENT_GUIDE_INV:\r\n                if mask_inv is not None:\r\n                    lgw_mask_inv = torch.minimum(mask_inv, (1-mask) * lgw_inv_)\r\n                    #lgw_mask_inv = torch.minimum(1-mask_inv, (1-mask) * lgw_inv_)\r\n                else:\r\n                    lgw_mask_inv = (1-mask) * lgw_inv_\r\n            else:\r\n                lgw_mask_inv = torch.zeros_like(mask)\r\n\r\n        return lgw_mask, lgw_mask_inv\r\n\r\n\r\n    def get_masks_for_step(self, step:int, lgw_type=\"default\") -> Tuple[Tensor, Tensor]:\r\n        lgw_mask, lgw_mask_inv = self.prepare_weighted_masks(step, lgw_type=lgw_type)\r\n        normalize_frame_weights_per_step = self.EO(\"normalize_frame_weights_per_step\")\r\n        normalize_frame_weights_per_step_inv = self.EO(\"normalize_frame_weights_per_step_inv\")\r\n\r\n        if self.VIDEO and self.frame_weights_mgr:\r\n            num_frames = lgw_mask.shape[2]\r\n            if self.HAS_LATENT_GUIDE:\r\n                frame_weights = self.frame_weights_mgr.get_frame_weights_by_name('frame_weights', num_frames, step)\r\n                apply_frame_weights(lgw_mask, frame_weights, normalize_frame_weights_per_step)\r\n            if self.HAS_LATENT_GUIDE_INV:\r\n                frame_weights_inv = self.frame_weights_mgr.get_frame_weights_by_name('frame_weights_inv', num_frames, step)\r\n                apply_frame_weights(lgw_mask_inv, frame_weights_inv, normalize_frame_weights_per_step_inv)\r\n\r\n        return lgw_mask.to(self.device), lgw_mask_inv.to(self.device)\r\n\r\n\r\n\r\n    def get_cossim_adjusted_lgw_masks(self, data:Tensor, step:int) -> Tuple[Tensor, Tensor, Tensor, Tensor]:\r\n        \r\n        if self.HAS_LATENT_GUIDE:\r\n            y0     = self.y0.clone()\r\n        else:\r\n            y0     = torch.zeros_like(data)\r\n            \r\n        if self.HAS_LATENT_GUIDE_INV:\r\n            y0_inv = self.y0_inv.clone()\r\n        else:\r\n            y0_inv = torch.zeros_like(data)\r\n\r\n        if y0.shape[0] > 1:                                    # this is for changing the guide on a per-step basis\r\n            y0 = y0[min(step, y0.shape[0]-1)].unsqueeze(0)\r\n        \r\n        lgw_mask, lgw_mask_inv = self.get_masks_for_step(step)\r\n        \r\n        y0_cossim, y0_cossim_inv  = 1.0, 1.0\r\n        if self.HAS_LATENT_GUIDE:\r\n            y0_cossim     = get_pearson_similarity(data, y0,     mask=lgw_mask)\r\n        if self.HAS_LATENT_GUIDE_INV:\r\n            y0_cossim_inv = get_pearson_similarity(data, y0_inv, mask=lgw_mask_inv)\r\n        \r\n        #if y0_cossim < self.guide_cossim_cutoff_ or y0_cossim_inv < self.guide_bkg_cossim_cutoff_:\r\n        if y0_cossim     >= self.guide_cossim_cutoff_:\r\n            lgw_mask     *= 0\r\n        if y0_cossim_inv >= self.guide_bkg_cossim_cutoff_:\r\n            lgw_mask_inv *= 0\r\n        \r\n        return y0, y0_inv, lgw_mask, 
lgw_mask_inv\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n    @torch.no_grad\r\n    def process_pseudoimplicit_guides_substep(self,\r\n                                            x_0                         : Tensor,\r\n                                            x_                          : Tensor,\r\n                                            eps_                        : Tensor,\r\n                                            eps_prev_                   : Tensor,\r\n                                            data_                       : Tensor,\r\n                                            denoised_prev               : Tensor,\r\n                                            row                         : int,\r\n                                            step                        : int,\r\n                                            step_sched                  : int,\r\n                                            sigmas                      : Tensor,\r\n                                            NS                                  ,\r\n                                            RK                                  ,\r\n                                            pseudoimplicit_row_weights  : Tensor,\r\n                                            pseudoimplicit_step_weights : Tensor,\r\n                                            full_iter                   : int,\r\n                                            BONGMATH                    : bool,\r\n                                            ):\r\n        \r\n        if \"pseudoimplicit\" not in self.guide_mode or (self.lgw[step_sched] == 0 and self.lgw_inv[step_sched] == 0):\r\n            return x_0, x_, eps_, None, None\r\n        \r\n        sigma = sigmas[step]\r\n\r\n        if self.s_lying_ is not None:\r\n            if row >= len(self.s_lying_):\r\n                return x_0, x_, eps_, None, None\r\n        \r\n        if self.guide_mode.startswith(\"fully_\"):\r\n            data_cossim_test = denoised_prev\r\n        else:\r\n            data_cossim_test = data_[row]\r\n            \r\n        y0, y0_inv, lgw_mask, lgw_mask_inv = self.get_cossim_adjusted_lgw_masks(data_cossim_test, step_sched)\r\n        \r\n        if not (lgw_mask.any() != 0 or lgw_mask_inv.any() != 0):  # cossim score too similar! 
deactivate guide for this step\r\n            return x_0, x_, eps_, None, None\r\n\r\n\r\n        if \"fully_pseudoimplicit\" in self.guide_mode:\r\n            if self.x_lying_ is None:\r\n                return x_0, x_, eps_, None, None        \r\n            else:\r\n                x_row_pseudoimplicit     = self.x_lying_[row]\r\n                sub_sigma_pseudoimplicit = self.s_lying_[row]\r\n        \r\n        \r\n        \r\n        if RK.IMPLICIT:\r\n            x_ = RK.update_substep(x_0,\r\n                                    x_,\r\n                                    eps_,\r\n                                    eps_prev_,\r\n                                    row,\r\n                                    RK.row_offset,\r\n                                    NS.h_new,\r\n                                    NS.h_new_orig,\r\n                                    )\r\n            \r\n            x_[row] = NS.rebound_overshoot_substep(x_0, x_[row])\r\n            \r\n            if row > 0:\r\n                x_[row] = NS.swap_noise_substep(x_0, x_[row])\r\n                if BONGMATH and step < sigmas.shape[0]-1 and not self.EO(\"disable_pseudoimplicit_bongmath\"):\r\n                    x_0, x_, eps_ = RK.bong_iter(x_0,\r\n                                                x_,\r\n                                                eps_,\r\n                                                eps_prev_,\r\n                                                data_,\r\n                                                sigma,\r\n                                                NS.s_,\r\n                                                row,\r\n                                                RK.row_offset,\r\n                                                NS.h,\r\n                                                step,\r\n                                                step_sched,\r\n                                                )\r\n        else:\r\n            eps_[row] = RK.get_epsilon(x_0, x_[row], denoised_prev, sigma, NS.s_[row])\r\n            \r\n        if self.EO(\"pseudoimplicit_denoised_prev\"):\r\n            eps_[row] = RK.get_epsilon(x_0, x_[row], denoised_prev, sigma, NS.s_[row])\r\n\r\n        eps_substep_guide     = torch.zeros_like(x_0)\r\n        eps_substep_guide_inv = torch.zeros_like(x_0)\r\n        \r\n        if self.HAS_LATENT_GUIDE:\r\n            eps_substep_guide     = RK.get_guide_epsilon(x_0, x_[row], y0,     sigma, NS.s_[row], NS.sigma_down, None)  \r\n        if self.HAS_LATENT_GUIDE_INV:\r\n            eps_substep_guide_inv = RK.get_guide_epsilon(x_0, x_[row], y0_inv, sigma, NS.s_[row], NS.sigma_down, None)  \r\n\r\n\r\n\r\n        if self.guide_mode in {\"pseudoimplicit\", \"pseudoimplicit_cw\", \"pseudoimplicit_projection\", \"pseudoimplicit_projection_cw\"}:\r\n            maxmin_ratio = (NS.sub_sigma - RK.sigma_min) / NS.sub_sigma\r\n            \r\n            if   self.EO(\"guide_pseudoimplicit_power_substep_flip_maxmin_scaling\"):\r\n                maxmin_ratio *= (RK.rows-row) / RK.rows\r\n            elif self.EO(\"guide_pseudoimplicit_power_substep_maxmin_scaling\"):\r\n                maxmin_ratio *= row / RK.rows\r\n            \r\n            sub_sigma_2 = NS.sub_sigma - maxmin_ratio * (NS.sub_sigma * pseudoimplicit_row_weights[row] * pseudoimplicit_step_weights[full_iter] * self.lgw[step_sched])\r\n\r\n            eps_tmp_ = eps_.clone()\r\n\r\n            eps_ = self.process_channelwise(x_0,\r\n                                            eps_,\r\n            
                                data_,\r\n                                            row,\r\n                                            eps_substep_guide,\r\n                                            eps_substep_guide_inv,\r\n                                            y0,\r\n                                            y0_inv,\r\n                                            lgw_mask,\r\n                                            lgw_mask_inv,\r\n                                            use_projection = self.guide_mode in {\"pseudoimplicit_projection\", \"pseudoimplicit_projection_cw\"},\r\n                                            channelwise    = self.guide_mode in {\"pseudoimplicit_cw\",         \"pseudoimplicit_projection_cw\"},\r\n                                            )\r\n\r\n            x_row_tmp = x_[row] + RK.h_fn(sub_sigma_2, NS.sub_sigma) * eps_[row]\r\n            \r\n            eps_                     = eps_tmp_\r\n            x_row_pseudoimplicit     = x_row_tmp\r\n            sub_sigma_pseudoimplicit = sub_sigma_2\r\n\r\n\r\n        if RK.IMPLICIT and BONGMATH and step < sigmas.shape[0]-1 and not self.EO(\"disable_pseudobongmath\"):\r\n            x_[row] = NS.sigma_from_to(x_0, x_row_pseudoimplicit, sigma, sub_sigma_pseudoimplicit, NS.s_[row])\r\n            \r\n            x_0, x_, eps_ = RK.bong_iter(x_0,\r\n                                        x_,\r\n                                        eps_,\r\n                                        eps_prev_,\r\n                                        data_,\r\n                                        sigma,\r\n                                        NS.s_,\r\n                                        row,\r\n                                        RK.row_offset,\r\n                                        NS.h,\r\n                                        step,\r\n                                        step_sched,\r\n                                        ) \r\n            \r\n        return x_0, x_, eps_, x_row_pseudoimplicit, sub_sigma_pseudoimplicit\r\n\r\n\r\n\r\n    @torch.no_grad\r\n    def prepare_fully_pseudoimplicit_guides_substep(self,\r\n                                                    x_0,\r\n                                                    x_,\r\n                                                    eps_,\r\n                                                    eps_prev_,\r\n                                                    data_,\r\n                                                    denoised_prev,\r\n                                                    row,\r\n                                                    step,\r\n                                                    step_sched,\r\n                                                    sigmas,\r\n                                                    eta_substep,\r\n                                                    overshoot_substep,\r\n                                                    s_noise_substep,\r\n                                                    NS,\r\n                                                    RK,\r\n                                                    pseudoimplicit_row_weights,\r\n                                                    pseudoimplicit_step_weights,\r\n                                                    full_iter,\r\n                                                    BONGMATH,\r\n                                                    ):\r\n        \r\n        if \"fully_pseudoimplicit\" not in self.guide_mode or 
(self.lgw[step_sched] == 0 and self.lgw_inv[step_sched] == 0):\r\n            return x_0, x_, eps_ \r\n        \r\n        sigma = sigmas[step]\r\n        \r\n        y0, y0_inv, lgw_mask, lgw_mask_inv = self.get_cossim_adjusted_lgw_masks(denoised_prev, step_sched)\r\n        \r\n        if not (lgw_mask.any() != 0 or lgw_mask_inv.any() != 0):  # cossim score too similar! deactivate guide for this step\r\n            return x_0, x_, eps_\r\n        \r\n\r\n        # PREPARE FULLY PSEUDOIMPLICIT GUIDES\r\n        if self.guide_mode in {\"fully_pseudoimplicit\", \"fully_pseudoimplicit_cw\", \"fully_pseudoimplicit_projection\", \"fully_pseudoimplicit_projection_cw\"} and (self.lgw[step_sched] > 0 or self.lgw_inv[step_sched] > 0):\r\n            x_lying_   = x_.clone()\r\n            eps_lying_ = eps_.clone()\r\n            s_lying_   = []\r\n            \r\n            for r in range(RK.rows):\r\n                \r\n                NS.set_sde_substep(r, RK.multistep_stages, eta_substep, overshoot_substep, s_noise_substep)\r\n\r\n                maxmin_ratio      = (NS.sub_sigma - RK.sigma_min) / NS.sub_sigma\r\n                fully_sub_sigma_2 =  NS.sub_sigma - maxmin_ratio * (NS.sub_sigma * pseudoimplicit_row_weights[r] * pseudoimplicit_step_weights[full_iter] * self.lgw[step_sched])\r\n                \r\n                s_lying_.append(fully_sub_sigma_2)\r\n\r\n                if RK.IMPLICIT:\r\n                    x_ = RK.update_substep(x_0,\r\n                                            x_,\r\n                                            eps_,\r\n                                            eps_prev_,\r\n                                            r,\r\n                                            RK.row_offset,\r\n                                            NS.h_new,\r\n                                            NS.h_new_orig,\r\n                                            ) \r\n                    \r\n                    x_[r] = NS.rebound_overshoot_substep(x_0, x_[r])\r\n\r\n                    if r > 0:\r\n                        x_[r] = NS.swap_noise_substep(x_0, x_[r])\r\n                        if BONGMATH and step < sigmas.shape[0]-1 and not self.EO(\"disable_fully_pseudoimplicit_bongmath\"):\r\n                            x_0, x_, eps_ = RK.bong_iter(x_0,\r\n                                                        x_,\r\n                                                        eps_,\r\n                                                        eps_prev_,\r\n                                                        data_,\r\n                                                        sigma,\r\n                                                        NS.s_,\r\n                                                        r,\r\n                                                        RK.row_offset,\r\n                                                        NS.h,\r\n                                                        step,\r\n                                                        step_sched,\r\n                                                        )\r\n                            \r\n                if self.EO(\"fully_pseudoimplicit_denoised_prev\"):\r\n                    eps_[r] = RK.get_epsilon(x_0, x_[r], denoised_prev, sigma, NS.s_[r])\r\n                \r\n                eps_substep_guide     = torch.zeros_like(x_0)\r\n                eps_substep_guide_inv = torch.zeros_like(x_0)\r\n                \r\n                if self.HAS_LATENT_GUIDE:\r\n                    eps_substep_guide     
= RK.get_guide_epsilon(x_0, x_[r], y0,     sigma, NS.s_[r], NS.sigma_down, None)  \r\n                if self.HAS_LATENT_GUIDE_INV:\r\n                    eps_substep_guide_inv = RK.get_guide_epsilon(x_0, x_[r], y0_inv, sigma, NS.s_[r], NS.sigma_down, None)  \r\n                \r\n
                eps_ = self.process_channelwise(x_0,\r\n                                                eps_,\r\n                                                data_,\r\n                                                r,   # substep row from this loop, so each row blends its own guide epsilon\r\n                                                eps_substep_guide,\r\n                                                eps_substep_guide_inv,\r\n                                                y0,\r\n                                                y0_inv,\r\n                                                lgw_mask,\r\n                                                lgw_mask_inv,\r\n                                                use_projection = self.guide_mode in {\"fully_pseudoimplicit_projection\", \"fully_pseudoimplicit_projection_cw\"},\r\n                                                channelwise    = self.guide_mode in {\"fully_pseudoimplicit_cw\",         \"fully_pseudoimplicit_projection_cw\"},\r\n                                                )\r\n\r\n
                x_lying_[r]   = x_[r] + RK.h_fn(fully_sub_sigma_2, NS.sub_sigma) * eps_[r]\r\n                data_lying    = x_[r] + RK.h_fn(0,                 NS.s_[r])     * eps_[r] \r\n                \r\n                eps_lying_[r] = RK.get_epsilon(x_0, x_[r], data_lying, sigma, NS.s_[r])\r\n                \r\n
            if not self.EO(\"pseudoimplicit_disable_eps_lying\"):\r\n                eps_ = eps_lying_\r\n            \r\n
            if not self.EO(\"pseudoimplicit_disable_newton_iter\"):\r\n                x_, eps_ = RK.newton_iter(x_0,\r\n                                        x_,\r\n                                        eps_,\r\n                                        eps_prev_,\r\n                                        data_,\r\n                                        NS.s_,\r\n                                        0,\r\n                                        NS.h,\r\n                                        sigmas,\r\n                                        step,\r\n                                        \"lying\",\r\n                                        )\r\n            \r\n
            self.x_lying_ = x_lying_\r\n            self.s_lying_ = s_lying_\r\n\r\n        return x_0, x_, eps_ \r\n\r\n\r\n\r\n
    @torch.no_grad\r\n    def process_guides_data_substep(self,\r\n                                x_row         : Tensor,\r\n                                data_row      : Tensor,\r\n                                step          : int,\r\n                                sigma_row     : Tensor,\r\n                                frame_targets : Optional[Tensor] = None,\r\n                                ):\r\n        if not self.HAS_LATENT_GUIDE and not self.HAS_LATENT_GUIDE_INV:\r\n            return x_row\r\n\r\n
        y0, y0_inv, lgw_mask, lgw_mask_inv = self.get_cossim_adjusted_lgw_masks(data_row, step)\r\n        \r\n        if not (lgw_mask.any() != 0 or lgw_mask_inv.any() != 0):  # cossim score too similar! 
deactivate guide for this step\r\n            return x_row\r\n\r\n        if self.VIDEO and self.frame_weights_mgr is not None and frame_targets is None:\r\n            num_frames = data_row.shape[2]\r\n            frame_targets = self.frame_weights_mgr.get_frame_weights_by_name('frame_targets', num_frames, step)\r\n            if frame_targets is None:\r\n                frame_targets = torch.tensor(self.EO(\"frame_targets\", [1.0]))\r\n            frame_targets = torch.clamp(frame_targets, 0.0, 1.0).to(self.device)\r\n\r\n        if self.guide_mode in {\"data\", \"data_projection\", \"lure\", \"lure_projection\"}:\r\n            if frame_targets is None:\r\n                x_row = self.get_data_substep(x_row, data_row, y0, y0_inv, lgw_mask, lgw_mask_inv, step, sigma_row)\r\n            else:\r\n                t_dim = x_row.shape[-3]\r\n                for t in range(t_dim): #temporal dimension\r\n                    frame_target = float(frame_targets[t] if len(frame_targets) > t else frame_targets[-1])\r\n                    x_row[...,t:t+1,:,:] = self.get_data_substep(\r\n                                                                x_row       [...,t:t+1,:,:], \r\n                                                                data_row    [...,t:t+1,:,:],\r\n                                                                y0          [...,t:t+1,:,:], \r\n                                                                y0_inv      [...,t:t+1,:,:], \r\n                                                                lgw_mask    [...,t:t+1,:,:], \r\n                                                                lgw_mask_inv[...,t:t+1,:,:], \r\n                                                                step, \r\n                                                                sigma_row, \r\n                                                                frame_target)\r\n        \r\n        return x_row\r\n\r\n\r\n\r\n\r\n    @torch.no_grad\r\n    def get_data_substep(self,\r\n                        x_row         : Tensor,\r\n                        data_row      : Tensor,\r\n                        y0            : Tensor,\r\n                        y0_inv        : Tensor,\r\n                        lgw_mask      : Tensor,\r\n                        lgw_mask_inv  : Tensor,\r\n                        step          : int,\r\n                        sigma_row     : Tensor,\r\n                        frame_target  : float = 1.0,\r\n                        ):\r\n\r\n        if not self.HAS_LATENT_GUIDE and not self.HAS_LATENT_GUIDE_INV:\r\n            return x_row\r\n\r\n        if self.guide_mode in {\"data\", \"data_projection\", \"lure\", \"lure_projection\"}:\r\n            data_targets = self.EO(\"data_targets\", [1.0])\r\n            step_target = step if len(data_targets) > step else len(data_targets)-1\r\n            \r\n            cossim_target = frame_target * data_targets[step_target]\r\n            \r\n            if self.HAS_LATENT_GUIDE:\r\n                if self.guide_mode.endswith(\"projection\"):\r\n                    d_collinear_d_lerp = get_collinear(data_row, y0)  \r\n                    d_lerp_ortho_d     = get_orthogonal(y0, data_row)  \r\n                    y0                 = d_collinear_d_lerp + d_lerp_ortho_d\r\n                    \r\n                if   cossim_target == 1.0:\r\n                    d_slerped = y0\r\n                elif cossim_target == 0.0:\r\n                    d_slerped = data_row\r\n                else:\r\n                    
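# measure how similar data_row already is to y0, then pick the slerp weight that\r\n                    # moves it just far enough toward y0 to land on cossim_target\r\n                    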
y0_pearsim    = get_pearson_similarity(data_row, y0,     mask=self.mask)\r\n                    slerp_weight  = get_slerp_weight_for_cossim(y0_pearsim.item(), cossim_target)\r\n                    d_slerped     = slerp_tensor(slerp_weight, data_row, y0) # lgw_mask * slerp_weight same as using mask below\r\n                    \r\n                \"\"\"if self.guide_mode == \"data_projection\":\r\n                    d_collinear_d_lerp = get_collinear(data_row, d_slerped)  \r\n                    d_lerp_ortho_d     = get_orthogonal(d_slerped, data_row)  \r\n                    d_slerped          = d_collinear_d_lerp + d_lerp_ortho_d\"\"\"\r\n                    \r\n                if self.VE_MODEL:\r\n                    x_row = x_row + lgw_mask * (d_slerped - data_row) \r\n                else:\r\n                    x_row = x_row + lgw_mask * (self.sigma_max - sigma_row) * (d_slerped - data_row) \r\n\r\n                \r\n            if self.HAS_LATENT_GUIDE_INV:\r\n                if self.guide_mode.endswith(\"projection\"):\r\n                    d_collinear_d_lerp = get_collinear(data_row, y0_inv)  \r\n                    d_lerp_ortho_d     = get_orthogonal(y0_inv, data_row)  \r\n                    y0_inv             = d_collinear_d_lerp + d_lerp_ortho_d\r\n                \r\n                if   cossim_target == 1.0:\r\n                    d_slerped_inv = y0_inv\r\n                elif cossim_target == 0.0:\r\n                    d_slerped_inv = data_row\r\n                else:\r\n                    y0_pearsim    = get_pearson_similarity(data_row, y0_inv, mask=self.mask_inv)\r\n                    slerp_weight  = get_slerp_weight_for_cossim(y0_pearsim.item(), cossim_target)\r\n                    d_slerped_inv = slerp_tensor(slerp_weight, data_row, y0_inv)\r\n                    \r\n                \"\"\"if self.guide_mode == \"data_projection\":\r\n                    d_collinear_d_lerp = get_collinear(data_row, d_slerped_inv)  \r\n                    d_lerp_ortho_d     = get_orthogonal(d_slerped_inv, data_row)  \r\n                    d_slerped_inv      = d_collinear_d_lerp + d_lerp_ortho_d\"\"\"\r\n                    \r\n                if self.VE_MODEL:\r\n                    x_row = x_row + lgw_mask_inv * (d_slerped_inv - data_row) \r\n                else:\r\n                    x_row = x_row + lgw_mask_inv * (self.sigma_max - sigma_row) * (d_slerped_inv - data_row) \r\n\r\n                    \r\n        return x_row\r\n\r\n    @torch.no_grad\r\n    def swap_data(self,\r\n        x     : Tensor,\r\n        data  : Tensor,\r\n        y     : Tensor,\r\n        sigma : Tensor,\r\n        mask  : Optional[Tensor] = None,\r\n    ):\r\n        mask = 1.0 if mask is None else mask\r\n        if self.VE_MODEL:\r\n            return x + mask * (y - data)\r\n        else:\r\n            return x + mask * (self.sigma_max - sigma) * (y - data)\r\n\r\n    @torch.no_grad\r\n    def process_guides_eps_substep(self,\r\n                                x_0           : Tensor,\r\n                                x_row         : Tensor,\r\n                                data_row      : Tensor,\r\n                                eps_row       : Tensor,\r\n                                step          : int,\r\n                                sigma         : Tensor,\r\n                                sigma_down    : Tensor,\r\n                                sigma_row     : Tensor,\r\n                                frame_targets : Optional[Tensor] = None,\r\n                                
RK=None,\r\n                                ):\r\n        \r\n        if not self.HAS_LATENT_GUIDE and not self.HAS_LATENT_GUIDE_INV:\r\n            return eps_row\r\n\r\n
        y0, y0_inv, lgw_mask, lgw_mask_inv = self.get_cossim_adjusted_lgw_masks(data_row, step)\r\n        \r\n        if not (lgw_mask.any() != 0 or lgw_mask_inv.any() != 0):  # cossim score too similar! deactivate guide for this step\r\n            return eps_row\r\n\r\n
        if self.VIDEO and self.frame_weights_mgr is not None and data_row.ndim == 5 and frame_targets is None:\r\n            num_frames = data_row.shape[2]\r\n            frame_targets = self.frame_weights_mgr.get_frame_weights_by_name('frame_targets', num_frames, step)\r\n            if frame_targets is None:\r\n                frame_targets = torch.tensor(self.EO(\"frame_targets\", [1.0]))\r\n            frame_targets = torch.clamp(frame_targets, 0.0, 1.0).to(self.device)\r\n            \r\n
        eps_y0     = torch.zeros_like(x_0)\r\n        eps_y0_inv = torch.zeros_like(x_0)\r\n        \r\n        if self.HAS_LATENT_GUIDE:\r\n            eps_y0     = RK.get_guide_epsilon(x_0, x_row, y0,     sigma, sigma_row, sigma_down, None)  \r\n            \r\n        if self.HAS_LATENT_GUIDE_INV:\r\n            eps_y0_inv = RK.get_guide_epsilon(x_0, x_row, y0_inv, sigma, sigma_row, sigma_down, None)  \r\n\r\n
        if self.guide_mode in {\"epsilon\", \"epsilon_projection\"}:\r\n            if frame_targets is None:\r\n                eps_row = self.get_eps_substep(eps_row, eps_y0, eps_y0_inv, lgw_mask, lgw_mask_inv, step, sigma_row)\r\n            else:\r\n                t_dim = x_row.shape[-3]\r\n                for t in range(t_dim): # temporal dimension\r\n                    frame_target = float(frame_targets[t] if len(frame_targets) > t else frame_targets[-1])\r\n                    eps_row[...,t:t+1,:,:] = self.get_eps_substep(\r\n                                                                eps_row     [...,t:t+1,:,:],\r\n                                                                eps_y0      [...,t:t+1,:,:], \r\n                                                                eps_y0_inv  [...,t:t+1,:,:], \r\n                                                                lgw_mask    [...,t:t+1,:,:], \r\n                                                                lgw_mask_inv[...,t:t+1,:,:], \r\n                                                                step, \r\n                                                                sigma_row, \r\n                                                                frame_target)\r\n                    \r\n        return eps_row\r\n\r\n\r\n\r\n
    @torch.no_grad\r\n    def get_eps_substep(self,\r\n                        eps_row       : Tensor,\r\n                        eps_y0        : Tensor,\r\n                        eps_y0_inv    : Tensor,\r\n                        lgw_mask      : Tensor,\r\n                        lgw_mask_inv  : Tensor,\r\n                        step          : int,\r\n                        sigma_row     : Tensor,\r\n                        frame_target  : float = 1.0,\r\n                        ):\r\n        \r\n        if not self.HAS_LATENT_GUIDE and not self.HAS_LATENT_GUIDE_INV:\r\n            return eps_row\r\n\r\n
        if self.guide_mode in {\"epsilon\", \"epsilon_projection\"}:\r\n            eps_targets = self.EO(\"eps_targets\", [1.0])\r\n            step_target = step if len(eps_targets) > step else len(eps_targets)-1\r\n            \r\n            cossim_target = frame_target * eps_targets[step_target]\r\n\r\n            
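# cossim_target controls how closely the blended epsilon should track the guide:\r\n            # 1.0 snaps to the guide epsilon, 0.0 keeps the model's epsilon, and values\r\n            # in between slerp partway toward the guide\r\n            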
if self.HAS_LATENT_GUIDE:\r\n                if self.guide_mode == \"epsilon_projection\":\r\n                    d_collinear_d_lerp = get_collinear(eps_row, eps_y0)  \r\n                    d_lerp_ortho_d     = get_orthogonal(eps_y0, eps_row)  \r\n                    eps_y0             = d_collinear_d_lerp + d_lerp_ortho_d\r\n                    \r\n
                if   cossim_target == 1.0:\r\n                    d_slerped = eps_y0\r\n                elif cossim_target == 0.0:\r\n                    d_slerped = eps_row\r\n                else:\r\n                    y0_pearsim    = get_pearson_similarity(eps_row, eps_y0,     mask=self.mask)\r\n                    slerp_weight  = get_slerp_weight_for_cossim(y0_pearsim.item(), cossim_target)\r\n                    d_slerped     = slerp_tensor(slerp_weight, eps_row, eps_y0) # lgw_mask * slerp_weight same as using mask below\r\n                    \r\n
                \"\"\"if self.guide_mode == \"epsilon_projection\":\r\n                    d_collinear_d_lerp = get_collinear(eps_row, d_slerped)  \r\n                    d_lerp_ortho_d     = get_orthogonal(d_slerped, eps_row)  \r\n                    d_slerped          = d_collinear_d_lerp + d_lerp_ortho_d\"\"\"\r\n                    \r\n
                eps_row = eps_row + lgw_mask * (d_slerped - eps_row) \r\n\r\n                \r\n
            if self.HAS_LATENT_GUIDE_INV:\r\n                if self.guide_mode == \"epsilon_projection\":\r\n                    d_collinear_d_lerp = get_collinear(eps_row, eps_y0_inv)  \r\n                    d_lerp_ortho_d     = get_orthogonal(eps_y0_inv, eps_row)  \r\n                    eps_y0_inv         = d_collinear_d_lerp + d_lerp_ortho_d\r\n                \r\n
                if   cossim_target == 1.0:\r\n                    d_slerped_inv = eps_y0_inv\r\n                elif cossim_target == 0.0:\r\n                    d_slerped_inv = eps_row\r\n                else:\r\n                    y0_pearsim    = get_pearson_similarity(eps_row, eps_y0_inv, mask=self.mask_inv)\r\n                    slerp_weight  = get_slerp_weight_for_cossim(y0_pearsim.item(), cossim_target)\r\n                    d_slerped_inv = slerp_tensor(slerp_weight, eps_row, eps_y0_inv)\r\n                    \r\n
                \"\"\"if self.guide_mode == \"epsilon_projection\":\r\n                    d_collinear_d_lerp = get_collinear(eps_row, d_slerped_inv)  \r\n                    d_lerp_ortho_d     = get_orthogonal(d_slerped_inv, eps_row)  \r\n                    d_slerped_inv      = d_collinear_d_lerp + d_lerp_ortho_d\"\"\"\r\n                    \r\n
                eps_row = eps_row + lgw_mask_inv * (d_slerped_inv - eps_row) \r\n\r\n        return eps_row\r\n\r\n\r\n\r\n\r\n
    @torch.no_grad\r\n    def process_guides_substep(self,\r\n                                x_0           : Tensor,\r\n                                x_            : Tensor,\r\n                                eps_          : Tensor,\r\n                                data_         : Tensor,\r\n                                row           : int,\r\n                                step_sched    : int,\r\n                                sigma         : Tensor,\r\n                                sigma_next    : Tensor,\r\n                                sigma_down    : Tensor,\r\n                                s_            : Tensor,\r\n                                epsilon_scale : float,\r\n                                RK,\r\n                                ):\r\n        \r\n        if not 
self.HAS_LATENT_GUIDE and not self.HAS_LATENT_GUIDE_INV:\r\n            return eps_, x_\r\n\r\n        y0, y0_inv, lgw_mask, lgw_mask_inv = self.get_cossim_adjusted_lgw_masks(data_[row], step_sched)\r\n        \r\n        if not (lgw_mask.any() != 0 or lgw_mask_inv.any() != 0):  # cossim score too similar! deactivate guide for this step\r\n            return eps_, x_ \r\n\r\n        if self.EO([\"substep_eps_ch_mean_std\", \"substep_eps_ch_mean\", \"substep_eps_ch_std\", \"substep_eps_mean_std\", \"substep_eps_mean\", \"substep_eps_std\"]):\r\n            eps_orig = eps_.clone()\r\n        \r\n        if self.EO(\"dynamic_guides_mean_std\"):\r\n            y_shift, y_inv_shift = normalize_latent([y0, y0_inv], [data_, data_])\r\n            y0 = y_shift\r\n            if self.EO(\"dynamic_guides_inv\"):\r\n                y0_inv = y_inv_shift\r\n\r\n        if self.EO(\"dynamic_guides_mean\"):\r\n            y_shift, y_inv_shift = normalize_latent([y0, y0_inv], [data_, data_], std=False)\r\n            y0 = y_shift\r\n            if self.EO(\"dynamic_guides_inv\"):\r\n                y0_inv = y_inv_shift\r\n\r\n\r\n\r\n        if \"data_old\" == self.guide_mode:\r\n            y0_tmp = y0.clone()\r\n            if self.HAS_LATENT_GUIDE:\r\n                y0_tmp = (1-lgw_mask) * data_[row] + lgw_mask * y0\r\n                y0_tmp = (1-lgw_mask_inv) * y0_tmp + lgw_mask_inv * y0_inv\r\n            x_[row+1] = y0_tmp + eps_[row]\r\n            \r\n        if self.guide_mode == \"data_old_projection\":\r\n\r\n            d_lerp             = data_[row]   +   lgw_mask * (y0-data_[row])   +   lgw_mask_inv * (y0_inv-data_[row])\r\n            \r\n            d_collinear_d_lerp = get_collinear(data_[row], d_lerp)  \r\n            d_lerp_ortho_d     = get_orthogonal(d_lerp, data_[row])  \r\n            \r\n            data_[row]         = d_collinear_d_lerp + d_lerp_ortho_d\r\n            \r\n            x_[row+1]          = data_[row] + eps_[row] * sigma\r\n            \r\n\r\n\r\n            #elif (self.UNSAMPLE or self.guide_mode in {\"epsilon\", \"epsilon_cw\", \"epsilon_projection\", \"epsilon_projection_cw\"}) and (self.lgw[step] > 0 or self.lgw_inv[step] > 0):\r\n        elif self.guide_mode in {\"epsilon\", \"epsilon_cw\", \"epsilon_projection\", \"epsilon_projection_cw\"} and (self.lgw[step_sched] > 0 or self.lgw_inv[step_sched] > 0):\r\n            if sigma_down < sigma   or   s_[row] < RK.sigma_max:\r\n                                \r\n                eps_substep_guide     = torch.zeros_like(x_0)\r\n                eps_substep_guide_inv = torch.zeros_like(x_0)\r\n                \r\n                if self.HAS_LATENT_GUIDE:\r\n                    eps_substep_guide     = RK.get_guide_epsilon(x_0, x_[row], y0,     sigma, s_[row], sigma_down, epsilon_scale)  \r\n                    \r\n                if self.HAS_LATENT_GUIDE_INV:\r\n                    eps_substep_guide_inv = RK.get_guide_epsilon(x_0, x_[row], y0_inv, sigma, s_[row], sigma_down, epsilon_scale)  \r\n\r\n                tol_value = self.EO(\"tol\", -1.0)\r\n                if tol_value >= 0:\r\n                    for b, c in itertools.product(range(x_0.shape[0]), range(x_0.shape[1])):\r\n                        current_diff       = torch.norm(data_[row][b][c] - y0    [b][c]) \r\n                        current_diff_inv   = torch.norm(data_[row][b][c] - y0_inv[b][c]) \r\n                        \r\n                        lgw_scaled         = torch.nan_to_num(1-(tol_value/current_diff),     0)\r\n                        
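# taper the effective guide weight toward zero as the denoised estimate closes\r\n                        # to within tol of the guide\r\n                        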
lgw_scaled_inv     = torch.nan_to_num(1-(tol_value/current_diff_inv), 0)\r\n                        \r\n                        lgw_tmp            = min(self.lgw[step_sched]    , lgw_scaled)\r\n                        lgw_tmp_inv        = min(self.lgw_inv[step_sched], lgw_scaled_inv)\r\n\r\n                        lgw_mask_clamp     = torch.clamp(lgw_mask,     max=lgw_tmp)\r\n                        lgw_mask_clamp_inv = torch.clamp(lgw_mask_inv, max=lgw_tmp_inv)\r\n\r\n                        eps_[row][b][c]    = eps_[row][b][c] + lgw_mask_clamp[b][0] * (eps_substep_guide[b][c] - eps_[row][b][c]) + lgw_mask_clamp_inv[b][0] * (eps_substep_guide_inv[b][c] - eps_[row][b][c])\r\n                \r\n
                elif self.guide_mode in {\"epsilon\"}: \r\n                    #eps_[row] = slerp(lgw_mask.mean().item(), eps_[row], eps_substep_guide)\r\n                    if self.EO(\"slerp_epsilon_guide\"):\r\n                        if eps_substep_guide.sum() != 0:\r\n                            eps_[row] = slerp_tensor(lgw_mask, eps_[row], eps_substep_guide)\r\n                        if eps_substep_guide_inv.sum() != 0:\r\n                            eps_[row] = slerp_tensor(lgw_mask_inv, eps_[row], eps_substep_guide_inv)\r\n                    else:\r\n                        eps_[row] = eps_[row] + lgw_mask * (eps_substep_guide - eps_[row]) + lgw_mask_inv * (eps_substep_guide_inv - eps_[row])\r\n                    \r\n                    #eps_[row] = slerp_barycentric(eps_[row].norm(), eps_substep_guide.norm(), eps_substep_guide_inv.norm(), 1-lgw_mask-lgw_mask_inv, lgw_mask, lgw_mask_inv)\r\n                    \r\n
                elif self.guide_mode in {\"epsilon_projection\"}:\r\n                    if self.EO(\"slerp_epsilon_guide\"):\r\n                        eps_row_slerp = eps_[row]   # base case so the slerp chain is defined even if a guide epsilon is all zeros\r\n                        if eps_substep_guide.sum() != 0:\r\n                            eps_row_slerp = slerp_tensor(self.mask, eps_row_slerp, eps_substep_guide)\r\n                        if eps_substep_guide_inv.sum() != 0:\r\n                            eps_row_slerp = slerp_tensor((1-self.mask), eps_row_slerp, eps_substep_guide_inv)\r\n\r\n
                        eps_collinear_eps_slerp = get_collinear(eps_[row], eps_row_slerp)\r\n                        eps_slerp_ortho_eps     = get_orthogonal(eps_row_slerp, eps_[row])\r\n\r\n                        eps_sum                 = eps_collinear_eps_slerp + eps_slerp_ortho_eps\r\n\r\n                        eps_[row] = slerp_tensor(lgw_mask,     eps_[row], eps_sum)\r\n                        eps_[row] = slerp_tensor(lgw_mask_inv, eps_[row], eps_sum)\r\n                    else:\r\n
                        eps_row_lerp           = eps_[row]   +   self.mask * (eps_substep_guide-eps_[row])   +   (1-self.mask) * (eps_substep_guide_inv-eps_[row])\r\n\r\n                        eps_collinear_eps_lerp = get_collinear(eps_[row], eps_row_lerp)\r\n                        eps_lerp_ortho_eps     = get_orthogonal(eps_row_lerp, eps_[row])\r\n\r\n                        eps_sum                = eps_collinear_eps_lerp + eps_lerp_ortho_eps\r\n\r\n                        eps_[row]              = eps_[row] + lgw_mask * (eps_sum - eps_[row]) + lgw_mask_inv * (eps_sum - eps_[row])\r\n                    \r\n                    \r\n
                elif self.guide_mode in {\"epsilon_cw\", \"epsilon_projection_cw\"}:\r\n                    eps_ = 
self.process_channelwise(x_0,\r\n                                                    eps_,\r\n                                                    data_,\r\n                                                    row,\r\n                                                    eps_substep_guide,\r\n                                                    eps_substep_guide_inv,\r\n                                                    y0,\r\n                                                    y0_inv,\r\n                                                    lgw_mask,\r\n                                                    lgw_mask_inv,\r\n                                                    use_projection = self.guide_mode == \"epsilon_projection_cw\",\r\n                                                    channelwise    = True\r\n                                                    )\r\n\r\n        temporal_smoothing = self.EO(\"temporal_smoothing\", 0.0)\r\n        if temporal_smoothing > 0:\r\n            eps_[row] = apply_temporal_smoothing(eps_[row], temporal_smoothing)\r\n            \r\n        if self.EO(\"substep_eps_ch_mean_std\"):\r\n            eps_[row] = normalize_latent(eps_[row], eps_orig[row])\r\n        if self.EO(\"substep_eps_ch_mean\"):\r\n            eps_[row] = normalize_latent(eps_[row], eps_orig[row], std=False)\r\n        if self.EO(\"substep_eps_ch_std\"):\r\n            eps_[row] = normalize_latent(eps_[row], eps_orig[row], mean=False)\r\n        if self.EO(\"substep_eps_mean_std\"):\r\n            eps_[row] = normalize_latent(eps_[row], eps_orig[row], channelwise=False)\r\n        if self.EO(\"substep_eps_mean\"):\r\n            eps_[row] = normalize_latent(eps_[row], eps_orig[row], std=False, channelwise=False)\r\n        if self.EO(\"substep_eps_std\"):\r\n            eps_[row] = normalize_latent(eps_[row], eps_orig[row], mean=False, channelwise=False)\r\n        return eps_, x_\r\n    \r\n\r\n    def process_channelwise(self,\r\n                            x_0                   : Tensor,\r\n                            eps_                  : Tensor,\r\n                            data_                 : Tensor,\r\n                            row                   : int,\r\n                            eps_substep_guide     : Tensor,\r\n                            eps_substep_guide_inv : Tensor,\r\n                            y0                    : Tensor,\r\n                            y0_inv                : Tensor,\r\n                            lgw_mask              : Tensor,\r\n                            lgw_mask_inv          : Tensor,\r\n                            use_projection        : bool    = False,\r\n                            channelwise           : bool    = False\r\n                            ):\r\n        \r\n        avg, avg_inv = 0, 0\r\n        for b, c in itertools.product(range(x_0.shape[0]), range(x_0.shape[1])):\r\n            avg     += torch.norm(lgw_mask    [b][0] * data_[row][b][c]   -   lgw_mask    [b][0] * y0    [b][c])\r\n            avg_inv += torch.norm(lgw_mask_inv[b][0] * data_[row][b][c]   -   lgw_mask_inv[b][0] * y0_inv[b][c])\r\n            \r\n        avg     /= x_0.shape[1]\r\n        avg_inv /= x_0.shape[1]\r\n        \r\n        for b, c in itertools.product(range(x_0.shape[0]), range(x_0.shape[1])):\r\n            if channelwise:\r\n                ratio     = torch.nan_to_num(torch.norm(lgw_mask    [b][0] * data_[row][b][c] - lgw_mask    [b][0] * y0    [b][c])   /   avg,     0)\r\n                ratio_inv = 
torch.nan_to_num(torch.norm(lgw_mask_inv[b][0] * data_[row][b][c] - lgw_mask_inv[b][0] * y0_inv[b][c])   /   avg_inv, 0)\r\n            else:\r\n                ratio     = 1.\r\n                ratio_inv = 1.\r\n\r\n
            # either project the guide blend onto eps_[row] (projection modes) or blend toward the guide epsilons directly\r\n            if use_projection:\r\n                if self.EO(\"slerp_epsilon_guide\"):\r\n                    eps_row_lerp = eps_[row][b][c]   # base case so the slerp chain is defined even if a guide epsilon is all zeros\r\n                    if eps_substep_guide[b][c].sum() != 0:\r\n                        eps_row_lerp = slerp_tensor(self.mask[b][0], eps_row_lerp, eps_substep_guide[b][c])\r\n                    if eps_substep_guide_inv[b][c].sum() != 0:\r\n                        eps_row_lerp = slerp_tensor((1-self.mask[b][0]), eps_row_lerp, eps_substep_guide_inv[b][c])\r\n                else:\r\n                    eps_row_lerp           = eps_[row][b][c]   +   self.mask[b][0] * (eps_substep_guide[b][c] - eps_[row][b][c])   +   (1-self.mask[b][0]) * (eps_substep_guide_inv[b][c] - eps_[row][b][c]) # should this ever be self.mask_inv?\r\n\r\n
                eps_collinear_eps_lerp = get_collinear (eps_[row][b][c], eps_row_lerp)\r\n                eps_lerp_ortho_eps     = get_orthogonal(eps_row_lerp   , eps_[row][b][c])\r\n\r\n                eps_sum                = eps_collinear_eps_lerp + eps_lerp_ortho_eps\r\n\r\n
                if self.EO(\"slerp_epsilon_guide\"):\r\n                    if eps_substep_guide[b][c].sum() != 0:\r\n                        eps_[row][b][c] = slerp_tensor(ratio * lgw_mask[b][0], eps_[row][b][c], eps_sum)\r\n                    if eps_substep_guide_inv[b][c].sum() != 0:\r\n                        eps_[row][b][c] = slerp_tensor(ratio_inv * lgw_mask_inv[b][0], eps_[row][b][c], eps_sum)\r\n                else:\r\n                    eps_[row][b][c]        = eps_[row][b][c]   +   ratio * lgw_mask[b][0] * (eps_sum - eps_[row][b][c])   +   ratio_inv * lgw_mask_inv[b][0] * (eps_sum - eps_[row][b][c])\r\n            else:\r\n
                if self.EO(\"slerp_epsilon_guide\"):\r\n                    if eps_substep_guide[b][c].sum() != 0:\r\n                        eps_[row][b][c] = slerp_tensor(ratio * lgw_mask[b][0], eps_[row][b][c], eps_substep_guide[b][c])\r\n                    if eps_substep_guide_inv[b][c].sum() != 0:\r\n                        eps_[row][b][c] = slerp_tensor(ratio_inv * lgw_mask_inv[b][0], eps_[row][b][c], eps_substep_guide_inv[b][c])\r\n                else:\r\n                    eps_[row][b][c]        = eps_[row][b][c]   +   ratio * lgw_mask[b][0] * (eps_substep_guide[b][c] - eps_[row][b][c])   +   ratio_inv * lgw_mask_inv[b][0] * (eps_substep_guide_inv[b][c] - eps_[row][b][c])\r\n\r\n        return eps_\r\n\r\n    \r\n
    def normalize_inputs(self, x:Tensor, y0:Tensor, y0_inv:Tensor):\r\n        \"\"\"\r\n        Modifies and returns 'x' by matching its mean and/or std to y0 and/or 
y0_inv.\r\n        Controlled by extra_options.\r\n\r\n        Returns:\r\n            - x      (modified)\r\n            - y0     (may be modified to match mean and std from y0_inv)\r\n            - y0_inv (unchanged)\r\n        \"\"\"\r\n        if self.guide_mode == \"epsilon_guide_mean_std_from_bkg\":\r\n            y0 = normalize_latent(y0, y0_inv)\r\n\r\n        input_norm = self.EO(\"input_norm\", \"\")\r\n        input_std  = self.EO(\"input_std\", 1.0)\r\n                \r\n        if input_norm == \"input_ch_mean_set_std_to\":\r\n            x = normalize_latent(x, set_std=input_std)\r\n\r\n        if input_norm == \"input_ch_set_std_to\":\r\n            x = normalize_latent(x, set_std=input_std, mean=False)\r\n                \r\n        if input_norm == \"input_mean_set_std_to\":\r\n            x = normalize_latent(x, set_std=input_std,             channelwise=False)\r\n            \r\n        if input_norm == \"input_std_set_std_to\":\r\n            x = normalize_latent(x, set_std=input_std, mean=False, channelwise=False)\r\n        \r\n        return x, y0, y0_inv\r\n\r\n\r\n\r\ndef apply_frame_weights(mask, frame_weights, normalize=False):\r\n    original_mask_mean = mask.mean()\r\n    if frame_weights is not None:\r\n        for f in range(mask.shape[2]):\r\n            frame_weight = frame_weights[f]\r\n            mask[..., f:f+1, :, :] *= frame_weight\r\n        if normalize:\r\n            mask_mean = mask.mean()\r\n            mask *= (original_mask_mean / mask_mean)\r\n\r\n\r\n\r\ndef prepare_mask(x, mask, LGW_MASK_RESCALE_MIN) -> tuple[torch.Tensor, bool]:\r\n    if mask is None:\r\n        mask = torch.ones_like(x[:,0:1,...])\r\n        LGW_MASK_RESCALE_MIN = False\r\n        return mask, LGW_MASK_RESCALE_MIN\r\n    \r\n    target_height = x.shape[-2]\r\n    target_width  = x.shape[-1]\r\n\r\n    spatial_mask = None\r\n    if x.ndim == 5 and mask.shape[0] > 1 and mask.ndim < 4:\r\n        target_frames = x.shape[-3]\r\n        spatial_mask = mask.unsqueeze(0).unsqueeze(0)  # [B, H, W] -> [1, 1, B, H, W]\r\n        spatial_mask = F.interpolate(spatial_mask, \r\n                                    size=(target_frames, target_height, target_width), \r\n                                    mode='trilinear', \r\n                                    align_corners=False)  # [1, 1, F, H, W]\r\n        repeat_shape = [1]  # batch\r\n        for i in range(1, x.ndim - 3):\r\n            repeat_shape.append(x.shape[i])\r\n        repeat_shape.extend([1, 1, 1])  # frames, height, width\r\n    elif mask.ndim == 4: #temporal mask batch\r\n        mask = F.interpolate(mask, size=(target_height, target_width), mode='bilinear', align_corners=False)\r\n        mask = mask.repeat(x.shape[-4],1,1,1)\r\n        mask.unsqueeze_(0)\r\n\r\n    else:\r\n        spatial_mask = mask.unsqueeze(1)\r\n        spatial_mask = F.interpolate(spatial_mask, size=(target_height, target_width), mode='bilinear', align_corners=False)\r\n\r\n        while spatial_mask.ndim < x.ndim:\r\n            spatial_mask = spatial_mask.unsqueeze(2)\r\n        \r\n        repeat_shape = [1]  # batch\r\n        for i in range(1, x.ndim - 2):\r\n            repeat_shape.append(x.shape[i])\r\n        repeat_shape.extend([1, 1])  # height and width\r\n        repeat_shape[1] = 1                                   # only need one channel for masks\r\n\r\n    if spatial_mask is not None:\r\n        mask = spatial_mask.repeat(*repeat_shape).to(x.dtype)\r\n        \r\n        del spatial_mask\r\n    return mask, 
LGW_MASK_RESCALE_MIN\r\n    \r\ndef apply_temporal_smoothing(tensor, temporal_smoothing):\r\n    if temporal_smoothing <= 0 or tensor.ndim != 5:\r\n        return tensor\r\n\r\n    kernel_size = 5\r\n    padding = kernel_size // 2\r\n    temporal_kernel = torch.tensor(\r\n        [0.1, 0.2, 0.4, 0.2, 0.1],\r\n        device=tensor.device, dtype=tensor.dtype\r\n    ) * temporal_smoothing\r\n    temporal_kernel[kernel_size//2] += (1 - temporal_smoothing)\r\n    temporal_kernel = temporal_kernel / temporal_kernel.sum()\r\n\r\n    # reshape (b, c, f, h, w) -> (b*c*h*w, f) so conv1d can smooth along the frame axis\r\n    b, c, f, h, w = tensor.shape\r\n    data_flat = tensor.permute(0, 1, 3, 4, 2).reshape(-1, f)\r\n\r\n    # apply smoothing\r\n    data_smooth = F.conv1d(\r\n        data_flat.unsqueeze(1),\r\n        temporal_kernel.view(1, 1, -1),\r\n        padding=padding\r\n    ).squeeze(1)\r\n\r\n    return data_smooth.view(b, c, h, w, f).permute(0, 1, 4, 2, 3)\r\n\r\ndef get_guide_epsilon_substep(x_0, x_, y0, y0_inv, s_, row, row_offset, rk_type, b=None, c=None):\r\n    s_in = x_0.new_ones([x_0.shape[0]])\r\n    \r\n    if b is not None and c is not None:  \r\n        index = (b, c)\r\n    elif b is not None: \r\n        index = (b,)\r\n    else: \r\n        index = ()\r\n\r\n    if RK_Method_Beta.is_exponential(rk_type):\r\n        eps_row     =  y0    [index] -  x_0[index]\r\n        eps_row_inv =  y0_inv[index] -  x_0[index]\r\n    else:\r\n        eps_row     = (x_[row][index] - y0    [index]) / (s_[row] * s_in) # NOTE: previously indexed x_[row+row_offset], which was incorrect; x_[row+1] can also overrun (RK.rows+2) with gauss-legendre_2s at 1 implicit step, 1 implicit substep\r\n        eps_row_inv = (x_[row][index] - y0_inv[index]) / (s_[row] * s_in)\r\n    \r\n    return eps_row, eps_row_inv\r\n\r\ndef get_guide_epsilon(x_0, x_, y0, sigma, rk_type, b=None, c=None):\r\n    s_in = x_0.new_ones([x_0.shape[0]])\r\n    \r\n    if b is not None and c is not None:  \r\n        index = (b, c)\r\n    elif b is not None: \r\n        index = (b,)\r\n    else: \r\n        index = ()\r\n\r\n    if RK_Method_Beta.is_exponential(rk_type):\r\n        eps     = y0    [index] - x_0[index]\r\n    else:\r\n        eps     = (x_[index] - y0    [index]) / (sigma * s_in)\r\n    \r\n    return eps\r\n\r\n\r\n\r\n@torch.no_grad\r\ndef noise_cossim_guide_tiled(x_list, guide, cossim_mode=\"forward\", tile_size=2, step=0):\r\n\r\n    guide_tiled = rearrange(guide, \"b c (h t1) (w t2) -> b (t1 t2) c h w\", t1=tile_size, t2=tile_size)\r\n\r\n    x_tiled_list = [\r\n        rearrange(x, \"b c (h t1) (w t2) -> b (t1 t2) c h w\", t1=tile_size, t2=tile_size)\r\n        for x in x_list\r\n    ]\r\n    x_tiled_stack = torch.stack([x_tiled[0] for x_tiled in x_tiled_list])  # [n_x, n_tiles, c, h, w]\r\n\r\n    guide_flat = guide_tiled[0].view(guide_tiled.shape[1], -1).unsqueeze(0)  # [1, n_tiles, c*h*w]\r\n    x_flat = x_tiled_stack.view(x_tiled_stack.size(0), x_tiled_stack.size(1), -1)  # [n_x, n_tiles, c*h*w]\r\n\r\n    cossim_tmp_all = F.cosine_similarity(x_flat, guide_flat, dim=-1)  # [n_x, n_tiles]\r\n\r\n    if cossim_mode == \"forward\":\r\n        indices = cossim_tmp_all.argmax(dim=0) \r\n    elif cossim_mode == \"reverse\":\r\n        indices = cossim_tmp_all.argmin(dim=0) \r\n    elif cossim_mode == \"orthogonal\":\r\n        indices = torch.abs(cossim_tmp_all).argmin(dim=0) \r\n    elif cossim_mode == \"forward_reverse\":\r\n        if step % 2 == 0:\r\n            indices = cossim_tmp_all.argmax(dim=0)\r\n        else:\r\n            indices = 
cossim_tmp_all.argmin(dim=0)\r\n    elif cossim_mode == \"reverse_forward\":\r\n        if step % 2 == 1:\r\n            indices = cossim_tmp_all.argmax(dim=0)\r\n        else:\r\n            indices = cossim_tmp_all.argmin(dim=0)\r\n    elif cossim_mode == \"orthogonal_reverse\":\r\n        if step % 2 == 0:\r\n            indices = torch.abs(cossim_tmp_all).argmin(dim=0)\r\n        else:\r\n            indices = cossim_tmp_all.argmin(dim=0)\r\n    elif cossim_mode == \"reverse_orthogonal\":\r\n        if step % 2 == 1:\r\n            indices = torch.abs(cossim_tmp_all).argmin(dim=0)\r\n        else:\r\n            indices = cossim_tmp_all.argmin(dim=0)\r\n    else:\r\n        target_value = float(cossim_mode)\r\n        indices = torch.abs(cossim_tmp_all - target_value).argmin(dim=0)  \r\n\r\n    x_tiled_out = x_tiled_stack[indices, torch.arange(indices.size(0))]  # [n_tiles, c, h, w]\r\n\r\n    x_tiled_out = x_tiled_out.unsqueeze(0) \r\n    x_detiled = rearrange(x_tiled_out, \"b (t1 t2) c h w -> b c (h t1) (w t2)\", t1=tile_size, t2=tile_size)\r\n\r\n    return x_detiled\r\n\r\n\r\n@torch.no_grad\r\ndef noise_cossim_eps_tiled(x_list, eps, noise_list, cossim_mode=\"forward\", tile_size=2, step=0):\r\n\r\n    eps_tiled = rearrange(eps, \"b c (h t1) (w t2) -> b (t1 t2) c h w\", t1=tile_size, t2=tile_size)\r\n    x_tiled_list = [\r\n        rearrange(x, \"b c (h t1) (w t2) -> b (t1 t2) c h w\", t1=tile_size, t2=tile_size)\r\n        for x in x_list\r\n    ]\r\n    noise_tiled_list = [\r\n        rearrange(noise, \"b c (h t1) (w t2) -> b (t1 t2) c h w\", t1=tile_size, t2=tile_size)\r\n        for noise in noise_list\r\n    ]\r\n\r\n    noise_tiled_stack = torch.stack([noise_tiled[0] for noise_tiled in noise_tiled_list])  # [n_x, n_tiles, c, h, w]\r\n    eps_expanded = eps_tiled[0].view(eps_tiled.shape[1], -1).unsqueeze(0)  # [1, n_tiles, c*h*w]\r\n    noise_flat = noise_tiled_stack.view(noise_tiled_stack.size(0), noise_tiled_stack.size(1), -1)  # [n_x, n_tiles, c*h*w]\r\n    cossim_tmp_all = F.cosine_similarity(noise_flat, eps_expanded, dim=-1)  # [n_x, n_tiles]\r\n\r\n    if cossim_mode == \"forward\":\r\n        indices = cossim_tmp_all.argmax(dim=0)  \r\n    elif cossim_mode == \"reverse\":\r\n        indices = cossim_tmp_all.argmin(dim=0) \r\n    elif cossim_mode == \"orthogonal\":\r\n        indices = torch.abs(cossim_tmp_all).argmin(dim=0) \r\n    elif cossim_mode == \"orthogonal_pos\":\r\n        positive_mask = cossim_tmp_all > 0\r\n        positive_tmp = torch.where(positive_mask, cossim_tmp_all, torch.full_like(cossim_tmp_all, float('inf')))\r\n        indices = positive_tmp.argmin(dim=0)\r\n    elif cossim_mode == \"orthogonal_neg\":\r\n        negative_mask = cossim_tmp_all < 0\r\n        negative_tmp = torch.where(negative_mask, cossim_tmp_all, torch.full_like(cossim_tmp_all, float('-inf')))\r\n        indices = negative_tmp.argmax(dim=0)\r\n    elif cossim_mode == \"orthogonal_posneg\":\r\n        if step % 2 == 0:\r\n            positive_mask = cossim_tmp_all > 0\r\n            positive_tmp = torch.where(positive_mask, cossim_tmp_all, torch.full_like(cossim_tmp_all, float('inf')))\r\n            indices = positive_tmp.argmin(dim=0)\r\n        else:\r\n            negative_mask = cossim_tmp_all < 0\r\n            negative_tmp = torch.where(negative_mask, cossim_tmp_all, torch.full_like(cossim_tmp_all, float('-inf')))\r\n            indices = negative_tmp.argmax(dim=0)\r\n    elif cossim_mode == \"orthogonal_negpos\":\r\n        if step % 2 == 1:\r\n            positive_mask = 
cossim_tmp_all > 0\r\n            positive_tmp = torch.where(positive_mask, cossim_tmp_all, torch.full_like(cossim_tmp_all, float('inf')))\r\n            indices = positive_tmp.argmin(dim=0)\r\n        else:\r\n            negative_mask = cossim_tmp_all < 0\r\n            negative_tmp = torch.where(negative_mask, cossim_tmp_all, torch.full_like(cossim_tmp_all, float('-inf')))\r\n            indices = negative_tmp.argmax(dim=0)\r\n    elif cossim_mode == \"forward_reverse\":\r\n        if step % 2 == 0:\r\n            indices = cossim_tmp_all.argmax(dim=0)\r\n        else:\r\n            indices = cossim_tmp_all.argmin(dim=0)\r\n    elif cossim_mode == \"reverse_forward\":\r\n        if step % 2 == 1:\r\n            indices = cossim_tmp_all.argmax(dim=0)\r\n        else:\r\n            indices = cossim_tmp_all.argmin(dim=0)\r\n    elif cossim_mode == \"orthogonal_reverse\":\r\n        if step % 2 == 0:\r\n            indices = torch.abs(cossim_tmp_all).argmin(dim=0)\r\n        else:\r\n            indices = cossim_tmp_all.argmin(dim=0)\r\n    elif cossim_mode == \"reverse_orthogonal\":\r\n        if step % 2 == 1:\r\n            indices = torch.abs(cossim_tmp_all).argmin(dim=0)\r\n        else:\r\n            indices = cossim_tmp_all.argmin(dim=0)\r\n    else:\r\n        target_value = float(cossim_mode)\r\n        indices = torch.abs(cossim_tmp_all - target_value).argmin(dim=0)\r\n    #else:\r\n    #    raise ValueError(f\"Unknown cossim_mode: {cossim_mode}\")\r\n\r\n    x_tiled_stack = torch.stack([x_tiled[0] for x_tiled in x_tiled_list])  # [n_x, n_tiles, c, h, w]\r\n    x_tiled_out = x_tiled_stack[indices, torch.arange(indices.size(0))]  # [n_tiles, c, h, w]\r\n\r\n    x_tiled_out = x_tiled_out.unsqueeze(0)  # restore batch dim\r\n    x_detiled = rearrange(x_tiled_out, \"b (t1 t2) c h w -> b c (h t1) (w t2)\", t1=tile_size, t2=tile_size)\r\n    return x_detiled\r\n\r\n\r\n\r\n@torch.no_grad\r\ndef noise_cossim_guide_eps_tiled(x_0, x_list, y0, noise_list, cossim_mode=\"forward\", tile_size=2, step=0, sigma=None, rk_type=None):\r\n\r\n    x_tiled_stack = torch.stack([\r\n        rearrange(x, \"b c (h t1) (w t2) -> b (t1 t2) c h w\", t1=tile_size, t2=tile_size)[0]\r\n        for x in x_list\r\n    ])  # [n_x, n_tiles, c, h, w]\r\n    eps_guide_stack = torch.stack([\r\n        rearrange(x - y0, \"b c (h t1) (w t2) -> b (t1 t2) c h w\", t1=tile_size, t2=tile_size)[0]\r\n        for x in x_list\r\n    ])  # [n_x, n_tiles, c, h, w]\r\n    del x_list\r\n\r\n    noise_tiled_stack = torch.stack([\r\n        rearrange(noise, \"b c (h t1) (w t2) -> b (t1 t2) c h w\", t1=tile_size, t2=tile_size)[0]\r\n        for noise in noise_list\r\n    ])  # [n_x, n_tiles, c, h, w]\r\n    del noise_list\r\n\r\n    noise_flat = noise_tiled_stack.view(noise_tiled_stack.size(0), noise_tiled_stack.size(1), -1)  # [n_x, n_tiles, c*h*w]\r\n    eps_guide_flat = eps_guide_stack.view(eps_guide_stack.size(0), eps_guide_stack.size(1), -1)  # [n_x, n_tiles, c*h*w]\r\n\r\n    cossim_tmp_all = F.cosine_similarity(noise_flat, eps_guide_flat, dim=-1)  # [n_x, n_tiles]\r\n    del noise_tiled_stack, noise_flat, eps_guide_stack, eps_guide_flat\r\n\r\n    if cossim_mode == \"forward\":\r\n        indices = cossim_tmp_all.argmax(dim=0) \r\n    elif cossim_mode == \"reverse\":\r\n        indices = cossim_tmp_all.argmin(dim=0) \r\n    elif cossim_mode == \"orthogonal\":\r\n        indices = torch.abs(cossim_tmp_all).argmin(dim=0) \r\n    elif cossim_mode == \"orthogonal_pos\":\r\n        positive_mask = cossim_tmp_all > 0\r\n        
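# restrict candidates to tiles whose noise has positive cosine similarity with the guide epsilon: non-positive scores are replaced with +inf so the argmin below selects the smallest positive score (closest to orthogonal from the positive side)\r\n        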
positive_tmp = torch.where(positive_mask, cossim_tmp_all, torch.full_like(cossim_tmp_all, float('inf')))\r\n        indices = positive_tmp.argmin(dim=0)\r\n    elif cossim_mode == \"orthogonal_neg\":\r\n        negative_mask = cossim_tmp_all < 0\r\n        negative_tmp = torch.where(negative_mask, cossim_tmp_all, torch.full_like(cossim_tmp_all, float('-inf')))\r\n        indices = negative_tmp.argmax(dim=0)\r\n    elif cossim_mode == \"orthogonal_posneg\":\r\n        if step % 2 == 0:\r\n            positive_mask = cossim_tmp_all > 0\r\n            positive_tmp = torch.where(positive_mask, cossim_tmp_all, torch.full_like(cossim_tmp_all, float('inf')))\r\n            indices = positive_tmp.argmin(dim=0)\r\n        else:\r\n            negative_mask = cossim_tmp_all < 0\r\n            negative_tmp = torch.where(negative_mask, cossim_tmp_all, torch.full_like(cossim_tmp_all, float('-inf')))\r\n            indices = negative_tmp.argmax(dim=0)\r\n    elif cossim_mode == \"orthogonal_negpos\":\r\n        if step % 2 == 1:\r\n            positive_mask = cossim_tmp_all > 0\r\n            positive_tmp = torch.where(positive_mask, cossim_tmp_all, torch.full_like(cossim_tmp_all, float('inf')))\r\n            indices = positive_tmp.argmin(dim=0)\r\n        else:\r\n            negative_mask = cossim_tmp_all < 0\r\n            negative_tmp = torch.where(negative_mask, cossim_tmp_all, torch.full_like(cossim_tmp_all, float('-inf')))\r\n            indices = negative_tmp.argmax(dim=0)\r\n    elif cossim_mode == \"forward_reverse\":\r\n        if step % 2 == 0:\r\n            indices = cossim_tmp_all.argmax(dim=0)\r\n        else:\r\n            indices = cossim_tmp_all.argmin(dim=0)\r\n    elif cossim_mode == \"reverse_forward\":\r\n        if step % 2 == 1:\r\n            indices = cossim_tmp_all.argmax(dim=0)\r\n        else:\r\n            indices = cossim_tmp_all.argmin(dim=0)\r\n    elif cossim_mode == \"orthogonal_reverse\":\r\n        if step % 2 == 0:\r\n            indices = torch.abs(cossim_tmp_all).argmin(dim=0)\r\n        else:\r\n            indices = cossim_tmp_all.argmin(dim=0)\r\n    elif cossim_mode == \"reverse_orthogonal\":\r\n        if step % 2 == 1:\r\n            indices = torch.abs(cossim_tmp_all).argmin(dim=0)\r\n        else:\r\n            indices = cossim_tmp_all.argmin(dim=0)\r\n    else:\r\n        target_value = float(cossim_mode)\r\n        indices = torch.abs(cossim_tmp_all - target_value).argmin(dim=0)  \r\n\r\n    x_tiled_out = x_tiled_stack[indices, torch.arange(indices.size(0))]  # [n_tiles, c, h, w]\r\n    del x_tiled_stack\r\n\r\n    x_tiled_out = x_tiled_out.unsqueeze(0)  \r\n    x_detiled = rearrange(x_tiled_out, \"b (t1 t2) c h w -> b c (h t1) (w t2)\", t1=tile_size, t2=tile_size)\r\n\r\n    return x_detiled\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nclass NoiseStepHandlerOSDE:\r\n    def __init__(self, x, eps=None, data=None, x_init=None, guide=None, guide_bkg=None):\r\n        self.noise = None\r\n        self.x = x\r\n        self.eps = eps\r\n        self.data = data\r\n        self.x_init = x_init\r\n        self.guide = guide\r\n        self.guide_bkg = guide_bkg\r\n        \r\n        self.eps_list = None\r\n\r\n        self.noise_cossim_map = {\r\n            \"eps_orthogonal\":              [self.noise, self.eps],\r\n            \"eps_data_orthogonal\":         [self.noise, self.eps, self.data],\r\n\r\n            \"data_orthogonal\":             [self.noise, self.data],\r\n            \"xinit_orthogonal\":            [self.noise, self.x_init],\r\n            \r\n     
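       # slot 0 of each list is a placeholder for the live noise tensor; get_ortho_noise() overwrites it before use\r\n     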
       \"x_orthogonal\":                [self.noise, self.x],\r\n            \"x_data_orthogonal\":           [self.noise, self.x, self.data],\r\n            \"x_eps_orthogonal\":            [self.noise, self.x, self.eps],\r\n\r\n            \"x_eps_data_orthogonal\":       [self.noise, self.x, self.eps, self.data],\r\n            \"x_eps_data_xinit_orthogonal\": [self.noise, self.x, self.eps, self.data, self.x_init],\r\n            \r\n            \"x_eps_guide_orthogonal\":      [self.noise, self.x, self.eps, self.guide],\r\n            \"x_eps_guide_bkg_orthogonal\":  [self.noise, self.x, self.eps, self.guide_bkg],\r\n            \r\n            \"noise_orthogonal\":            [self.noise, self.x_init],\r\n            \r\n            \"guide_orthogonal\":            [self.noise, self.guide],\r\n            \"guide_bkg_orthogonal\":        [self.noise, self.guide_bkg],\r\n        }\r\n\r\n    def check_cossim_source(self, source):\r\n        return source in self.noise_cossim_map\r\n\r\n    def get_ortho_noise(self, noise, prev_noises=None, max_iter=100, max_score=1e-7, NOISE_COSSIM_SOURCE=\"eps_orthogonal\"):\r\n        \r\n        if NOISE_COSSIM_SOURCE not in self.noise_cossim_map:\r\n            raise ValueError(f\"Invalid NOISE_COSSIM_SOURCE: {NOISE_COSSIM_SOURCE}\")\r\n        \r\n        self.noise_cossim_map[NOISE_COSSIM_SOURCE][0] = noise\r\n\r\n        params = self.noise_cossim_map[NOISE_COSSIM_SOURCE]\r\n        \r\n        noise = get_orthogonal_noise_from_channelwise(*params, max_iter=max_iter, max_score=max_score)\r\n        \r\n        return noise\r\n\r\n\r\n\r\n\r\n\r\n# NOTE: NS AND SUBSTEP ADDED!\r\ndef handle_tiled_etc_noise_steps(\r\n                                x_0,\r\n                                x,\r\n                                x_prenoise,\r\n                                x_init,\r\n                                eps,\r\n                                denoised,\r\n                                y0,\r\n                                y0_inv,\r\n                                step,\r\n                                rk_type,\r\n                                RK,\r\n                                NS,\r\n                                SUBSTEP,\r\n                                sigma_up,\r\n                                sigma,\r\n                                sigma_next,\r\n                                alpha_ratio,\r\n                                s_noise,\r\n                                noise_mode,\r\n                                SDE_NOISE_EXTERNAL,\r\n                                sde_noise_t,\r\n                                NOISE_COSSIM_SOURCE,\r\n                                NOISE_COSSIM_MODE,\r\n                                noise_cossim_tile_size,\r\n                                noise_cossim_iterations,\r\n                                extra_options):\r\n    \r\n    EO = ExtraOptions(extra_options)\r\n    \r\n    x_tmp          = []\r\n    cossim_tmp     = []\r\n    noise_tmp_list = []\r\n    \r\n    if step > EO(\"noise_cossim_end_step\", MAX_STEPS):\r\n        NOISE_COSSIM_SOURCE       = EO(\"noise_cossim_takeover_source\"    , \"eps\")\r\n        NOISE_COSSIM_MODE         = EO(\"noise_cossim_takeover_mode\"      , \"forward\"              )\r\n        noise_cossim_tile_size    = EO(\"noise_cossim_takeover_tile\"      , noise_cossim_tile_size )\r\n        noise_cossim_iterations   = EO(\"noise_cossim_takeover_iterations\", noise_cossim_iterations)\r\n        \r\n    for i in 
range(noise_cossim_iterations):\r\n        #x_tmp.append(NS.swap_noise(x_0, x, sigma, sigma, sigma_next, ))\r\n        x_tmp.append(NS.add_noise_post(x, sigma_up, sigma, sigma_next, alpha_ratio, s_noise, noise_mode, SDE_NOISE_EXTERNAL, sde_noise_t)    )#y0, lgw, sigma_down are currently unused\r\n        noise_tmp = x_tmp[i] - x\r\n        if EO(\"noise_noise_zscore_norm\"):\r\n            noise_tmp = normalize_zscore(noise_tmp, channelwise=False, inplace=True)\r\n        if EO(\"noise_noise_zscore_norm_cw\"):\r\n            noise_tmp = normalize_zscore(noise_tmp, channelwise=True,  inplace=True)\r\n        if EO(\"noise_eps_zscore_norm\"):\r\n            eps       = normalize_zscore(eps,       channelwise=False, inplace=True)\r\n        if EO(\"noise_eps_zscore_norm_cw\"):\r\n            eps       = normalize_zscore(eps,       channelwise=True,  inplace=True)\r\n            \r\n        if   NOISE_COSSIM_SOURCE in (\"eps_tiled\", \"guide_epsilon_tiled\", \"guide_bkg_epsilon_tiled\", \"iig_tiled\"):\r\n            noise_tmp_list.append(noise_tmp)\r\n        if   NOISE_COSSIM_SOURCE == \"eps\":\r\n            cossim_tmp.append(get_cosine_similarity(eps, noise_tmp))\r\n        if   NOISE_COSSIM_SOURCE == \"eps_ch\":\r\n            cossim_total = torch.zeros_like(eps[0][0][0][0])\r\n            for ch in range(eps.shape[1]):\r\n                cossim_total += get_cosine_similarity(eps[0][ch], noise_tmp[0][ch])\r\n            cossim_tmp.append(cossim_total)\r\n        elif NOISE_COSSIM_SOURCE == \"data\":\r\n            cossim_tmp.append(get_cosine_similarity(denoised, noise_tmp))\r\n        elif NOISE_COSSIM_SOURCE == \"latent\":\r\n            cossim_tmp.append(get_cosine_similarity(x_prenoise, noise_tmp))\r\n        elif NOISE_COSSIM_SOURCE == \"x_prenoise\":\r\n            cossim_tmp.append(get_cosine_similarity(x_prenoise, x_tmp[i]))\r\n        elif NOISE_COSSIM_SOURCE == \"x\":\r\n            cossim_tmp.append(get_cosine_similarity(x, x_tmp[i]))\r\n        elif NOISE_COSSIM_SOURCE == \"x_data\":\r\n            cossim_tmp.append(get_cosine_similarity(denoised, x_tmp[i]))\r\n        elif NOISE_COSSIM_SOURCE == \"x_init_vs_noise\":\r\n            cossim_tmp.append(get_cosine_similarity(x_init, noise_tmp))\r\n        elif NOISE_COSSIM_SOURCE == \"mom\":\r\n            cossim_tmp.append(get_cosine_similarity(denoised, x + sigma_next*noise_tmp))\r\n        elif NOISE_COSSIM_SOURCE == \"guide\":\r\n            cossim_tmp.append(get_cosine_similarity(y0, x_tmp[i]))\r\n        elif NOISE_COSSIM_SOURCE == \"guide_bkg\":\r\n            cossim_tmp.append(get_cosine_similarity(y0_inv, x_tmp[i]))\r\n            \r\n    if step < EO(\"noise_cossim_start_step\", 0):\r\n        x = x_tmp[0]\r\n\r\n    elif (NOISE_COSSIM_SOURCE == \"eps_tiled\"):\r\n        x = noise_cossim_eps_tiled(x_tmp, eps, noise_tmp_list, cossim_mode=NOISE_COSSIM_MODE, tile_size=noise_cossim_tile_size, step=step)\r\n    elif (NOISE_COSSIM_SOURCE == \"guide_epsilon_tiled\"):\r\n        x = noise_cossim_guide_eps_tiled(x_0, x_tmp, y0, noise_tmp_list, cossim_mode=NOISE_COSSIM_MODE, tile_size=noise_cossim_tile_size, step=step, sigma=sigma, rk_type=rk_type)\r\n    elif (NOISE_COSSIM_SOURCE == \"guide_bkg_epsilon_tiled\"):\r\n        x = noise_cossim_guide_eps_tiled(x_0, x_tmp, y0_inv, noise_tmp_list, cossim_mode=NOISE_COSSIM_MODE, tile_size=noise_cossim_tile_size, step=step, sigma=sigma, rk_type=rk_type)\r\n    elif (NOISE_COSSIM_SOURCE == \"guide_tiled\"):\r\n        x = noise_cossim_guide_tiled(x_tmp, y0, cossim_mode=NOISE_COSSIM_MODE, 
tile_size=noise_cossim_tile_size, step=step)\r\n    elif (NOISE_COSSIM_SOURCE == \"guide_bkg_tiled\"):\r\n        x = noise_cossim_guide_tiled(x_tmp, y0_inv, cossim_mode=NOISE_COSSIM_MODE, tile_size=noise_cossim_tile_size, step=step)\r\n    else:\r\n        for i in range(len(x_tmp)):\r\n            if   (NOISE_COSSIM_MODE == \"forward\") and (cossim_tmp[i] == max(cossim_tmp)):\r\n                x = x_tmp[i]\r\n                break\r\n            elif (NOISE_COSSIM_MODE == \"reverse\") and (cossim_tmp[i] == min(cossim_tmp)):\r\n                x = x_tmp[i]\r\n                break\r\n            elif (NOISE_COSSIM_MODE == \"orthogonal\") and (abs(cossim_tmp[i]) == min(abs(val) for val in cossim_tmp)):\r\n                x = x_tmp[i]\r\n                break\r\n            elif (NOISE_COSSIM_MODE != \"forward\") and (NOISE_COSSIM_MODE != \"reverse\") and (NOISE_COSSIM_MODE != \"orthogonal\"):\r\n                x = x_tmp[0]\r\n                break\r\n    return x\r\n\r\n\r\n\r\n\r\n\r\ndef get_masked_epsilon_projection(x_0, x_, eps_, y0, y0_inv, s_, row, row_offset, rk_type, LG, step):\r\n    \r\n    eps_row, eps_row_inv = get_guide_epsilon_substep(x_0, x_, y0, y0_inv, s_, row, row_offset, rk_type)\r\n    eps_row_lerp = eps_[row]   +   LG.mask * (eps_row-eps_[row])   +   (1-LG.mask) * (eps_row_inv-eps_[row])\r\n    eps_collinear_eps_lerp = get_collinear(eps_[row], eps_row_lerp)\r\n    eps_lerp_ortho_eps     = get_orthogonal(eps_row_lerp, eps_[row])\r\n    eps_sum = eps_collinear_eps_lerp + eps_lerp_ortho_eps\r\n    lgw_mask, lgw_mask_inv = LG.get_masks_for_step(step)\r\n    eps_substep_guide = eps_[row] + lgw_mask * (eps_sum - eps_[row]) + lgw_mask_inv * (eps_sum - eps_[row])\r\n    return eps_substep_guide\r\n\r\n\r\n\r\n"
  },
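  {
    "path": "examples/temporal_smoothing_sketch.py",
    "content": "\"\"\"Illustrative sketch; hypothetical file, not part of the original package.\r\n\r\nShows the temporal-smoothing approach used by apply_temporal_smoothing() in the\r\nguide code: a 5-tap kernel scaled by `strength` is blended with an identity tap,\r\nrenormalized, and applied along the frame axis of a (B, C, F, H, W) video latent\r\nwith conv1d. The path of this file and the toy sizes below are assumptions made\r\nfor illustration only.\r\n\"\"\"\r\nimport torch\r\nimport torch.nn.functional as F\r\n\r\n\r\ndef smooth_frames(latent: torch.Tensor, strength: float) -> torch.Tensor:\r\n    # identity when smoothing is disabled or the latent has no frame axis\r\n    if strength <= 0 or latent.ndim != 5:\r\n        return latent\r\n\r\n    # 5-tap kernel blended with an identity tap by `strength`, then renormalized\r\n    kernel = torch.tensor([0.1, 0.2, 0.4, 0.2, 0.1], device=latent.device, dtype=latent.dtype) * strength\r\n    kernel[2] += 1.0 - strength\r\n    kernel = kernel / kernel.sum()\r\n\r\n    # treat every (batch, channel, pixel) position as an independent 1-D signal over frames\r\n    b, c, f, h, w = latent.shape\r\n    flat = latent.permute(0, 1, 3, 4, 2).reshape(-1, f)\r\n\r\n    smoothed = F.conv1d(flat.unsqueeze(1), kernel.view(1, 1, -1), padding=2).squeeze(1)\r\n    return smoothed.view(b, c, h, w, f).permute(0, 1, 4, 2, 3)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n    x = torch.randn(1, 4, 9, 16, 16)  # toy (B, C, F, H, W) latent\r\n    y = smooth_frames(x, 0.5)\r\n    assert y.shape == x.shape\r\n    # smoothing shrinks frame-to-frame differences\r\n    print(x.diff(dim=2).abs().mean().item(), y.diff(dim=2).abs().mean().item())\r\n"
  },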
  {
    "path": "beta/rk_method_beta.py",
    "content": "import torch\r\nfrom torch import Tensor\r\nfrom typing import Optional, Callable, Tuple, List, Dict, Any, Union\r\n\r\nimport comfy.model_patcher\r\nimport comfy.supported_models\r\n\r\nimport itertools \r\n\r\nfrom .phi_functions        import Phi\r\nfrom .rk_coefficients_beta import get_implicit_sampler_name_list, get_rk_methods_beta\r\nfrom ..helper              import ExtraOptions\r\nfrom ..latents             import get_orthogonal, get_collinear, get_cosine_similarity, tile_latent, untile_latent\r\n\r\nfrom ..res4lyf             import RESplain\r\n\r\nMAX_STEPS = 10000\r\n\r\n\r\ndef get_data_from_step   (x:Tensor, x_next:Tensor, sigma:Tensor, sigma_next:Tensor) -> Tensor:\r\n    h = sigma_next - sigma\r\n    return (sigma_next * x - sigma * x_next) / h\r\n\r\ndef get_epsilon_from_step(x:Tensor, x_next:Tensor, sigma:Tensor, sigma_next:Tensor) -> Tensor:\r\n    h = sigma_next - sigma\r\n    return (x - x_next) / h\r\n\r\n\r\n\r\nclass RK_Method_Beta:\r\n    def __init__(self,\r\n                model,\r\n                rk_type               : str,\r\n                VE_MODEL              : bool,\r\n                noise_anchor          : float,\r\n                noise_boost_normalize : bool        = True,\r\n                model_device          : str         = 'cuda',\r\n                work_device           : str         = 'cpu',\r\n                dtype                 : torch.dtype = torch.float64,\r\n                extra_options         : str         = \"\"\r\n                ):\r\n        \r\n        self.work_device                 = work_device\r\n        self.model_device                = model_device\r\n        self.dtype                       : torch.dtype = dtype\r\n\r\n        self.model                       = model\r\n\r\n        if hasattr(model, \"model\"):\r\n            model_sampling = model.model.model_sampling\r\n        elif hasattr(model, \"inner_model\"):\r\n            model_sampling = model.inner_model.inner_model.model_sampling\r\n        \r\n        self.sigma_min                   : Tensor                   = model_sampling.sigma_min.to(dtype=dtype, device=work_device)\r\n        self.sigma_max                   : Tensor                   = model_sampling.sigma_max.to(dtype=dtype, device=work_device)\r\n\r\n        self.rk_type                     : str                      = rk_type\r\n\r\n        self.IMPLICIT                    : str                      = rk_type in get_implicit_sampler_name_list(nameOnly=True)\r\n        self.EXPONENTIAL                 : bool                     = RK_Method_Beta.is_exponential(rk_type)\r\n        self.VE_MODEL                    : bool                     = VE_MODEL\r\n\r\n        self.SYNC_SUBSTEP_MEAN_CW        : bool                     = noise_boost_normalize\r\n\r\n        self.A                           : Optional[Tensor]         = None\r\n        self.B                           : Optional[Tensor]         = None\r\n        self.U                           : Optional[Tensor]         = None\r\n        self.V                           : Optional[Tensor]         = None\r\n\r\n        self.rows                        : int                      = 0\r\n        self.cols                        : int                      = 0\r\n\r\n        self.denoised                    : Optional[Tensor]         = None\r\n        self.uncond                      : Optional[Tensor]         = None\r\n\r\n        self.y0                          : Optional[Tensor]         = None\r\n        self.y0_inv               
       : Optional[Tensor]         = None\r\n\r\n        self.multistep_stages            : int                      = 0\r\n        self.row_offset                  : Optional[int]            = None\r\n\r\n        self.cfg_cw                      : float                    = 1.0\r\n        self.extra_args                  : Optional[Dict[str, Any]] = None\r\n\r\n        self.extra_options               : str                      = extra_options\r\n        self.EO                          : ExtraOptions             = ExtraOptions(extra_options)\r\n\r\n        self.reorder_tableau_indices     : list[int]                = self.EO(\"reorder_tableau_indices\", [-1])\r\n\r\n        self.LINEAR_ANCHOR_X_0           : float                    = noise_anchor\r\n        \r\n        self.tile_sizes                  : Optional[List[Tuple[int,int]]] = None\r\n        self.tile_cnt                    : int                      = 0\r\n        self.latent_compression_ratio    : int                      = 8\r\n\r\n    @staticmethod\r\n    def is_exponential(rk_type:str) -> bool:\r\n        if rk_type.startswith(( \"res\", \r\n                                \"dpmpp\", \r\n                                \"ddim\", \r\n                                \"pec\", \r\n                                \"etdrk\", \r\n                                \"lawson\", \r\n                                \"abnorsett\",\r\n                                )): \r\n            return True\r\n        else:\r\n            return False\r\n\r\n    @staticmethod\r\n    def create(model,\r\n            rk_type       : str,\r\n            VE_MODEL      : bool,\r\n            noise_anchor  : float       = 1.0,\r\n            noise_boost_normalize  : bool = True,\r\n            model_device  : str         = 'cuda',\r\n            work_device   : str         = 'cpu',\r\n            dtype         : torch.dtype = torch.float64,\r\n            extra_options : str         = \"\"\r\n            ) -> \"Union[RK_Method_Exponential, RK_Method_Linear]\":\r\n        \r\n        if RK_Method_Beta.is_exponential(rk_type):\r\n            return RK_Method_Exponential(model, rk_type, VE_MODEL, noise_anchor, noise_boost_normalize, model_device, work_device, dtype, extra_options)\r\n        else:\r\n            return RK_Method_Linear     (model, rk_type, VE_MODEL, noise_anchor, noise_boost_normalize, model_device, work_device, dtype, extra_options)\r\n                \r\n    def __call__(self):\r\n        raise NotImplementedError(\"This method got clownsharked!\")\r\n    \r\n    def model_epsilon(self, x:Tensor, sigma:Tensor, **extra_args) -> Tuple[Tensor, Tensor]:\r\n        s_in     = x.new_ones([x.shape[0]])\r\n        denoised = self.model(x, sigma * s_in, **extra_args)\r\n        denoised = self.calc_cfg_channelwise(denoised)\r\n        eps      = (x - denoised) / (sigma * s_in).view(x.shape[0], 1, 1, 1)       # model returns x0; works only with the model sampling patch\r\n        return eps, denoised\r\n    \r\n    def model_denoised(self, x:Tensor, sigma:Tensor, **extra_args) -> Tensor:\r\n        s_in     = x.new_ones([x.shape[0]])\r\n        control_tiles = None\r\n        y0_style_pos = self.extra_args['model_options']['transformer_options'].get(\"y0_style_pos\")\r\n        y0_style_neg = self.extra_args['model_options']['transformer_options'].get(\"y0_style_neg\")\r\n        y0_style_pos_tiles, y0_style_neg_tiles = None, None\r\n        \r\n        if self.EO(\"tile_model_calls\"):\r\n            tile_h = 
self.EO(\"tile_h\", 128)\r\n            tile_w = self.EO(\"tile_w\", 128)\r\n            \r\n            denoised_tiles = []\r\n            \r\n            tiles, orig_shape, grid, strides = tile_latent(x, tile_size=(tile_h,tile_w))\r\n            \r\n            for i in range(tiles.shape[0]):\r\n                tile = tiles[i].unsqueeze(0)\r\n                \r\n                denoised_tile = self.model(tile, sigma * s_in, **extra_args)\r\n                \r\n                denoised_tiles.append(denoised_tile)\r\n                \r\n            denoised_tiles = torch.cat(denoised_tiles, dim=0)\r\n            \r\n            denoised = untile_latent(denoised_tiles, orig_shape, grid, strides)\r\n            \r\n        elif self.tile_sizes is not None:\r\n            tile_h_full = self.tile_sizes[self.tile_cnt % len(self.tile_sizes)][0]\r\n            tile_w_full = self.tile_sizes[self.tile_cnt % len(self.tile_sizes)][1] \r\n            \r\n            if tile_h_full == -1:\r\n                tile_h      = x.shape[-2]\r\n                tile_h_full = tile_h * self.latent_compression_ratio\r\n            else:\r\n                tile_h = tile_h_full // self.latent_compression_ratio\r\n                \r\n            if tile_w_full == -1:\r\n                tile_w      = x.shape[-1]\r\n                tile_w_full = tile_w * self.latent_compression_ratio\r\n            else:\r\n                tile_w = tile_w_full // self.latent_compression_ratio\r\n            \r\n            #tile_h = tile_h_full // self.latent_compression_ratio\r\n            #tile_w = tile_w_full // self.latent_compression_ratio\r\n            \r\n            self.tile_cnt += 1\r\n            \r\n            #if len(self.tile_sizes) == 1 and self.tile_cnt % 2 == 1:\r\n            #    tile_h, tile_w = tile_w, tile_h\r\n            #    tile_h_full, tile_w_full = tile_w_full, tile_h_full\r\n            \r\n            if (self.tile_cnt // len(self.tile_sizes)) % 2 == 1 and self.EO(\"tiles_autorotate\"):\r\n                tile_h, tile_w = tile_w, tile_h\r\n                tile_h_full, tile_w_full = tile_w_full, tile_h_full\r\n            \r\n            xt_negative = self.model.inner_model.conds.get('xt_negative', self.model.inner_model.conds.get('negative'))\r\n            negative_control = xt_negative[0].get('control')\r\n            \r\n            if negative_control is not None and hasattr(negative_control, 'cond_hint_original'):\r\n                negative_cond_hint_init = negative_control.cond_hint.clone() if negative_control.cond_hint is not None else None\r\n            \r\n            xt_positive = self.model.inner_model.conds.get('xt_positive', self.model.inner_model.conds.get('positive'))\r\n            positive_control = xt_positive[0].get('control')\r\n            \r\n            if positive_control is not None and hasattr(positive_control, 'cond_hint_original'):\r\n                positive_cond_hint_init = positive_control.cond_hint.clone() if positive_control.cond_hint is not None else None\r\n                if positive_control.cond_hint_original.shape[-1] != x.shape[-2] * self.latent_compression_ratio or positive_control.cond_hint_original.shape[-2] != x.shape[-1] * self.latent_compression_ratio:\r\n                    positive_control_pretile = comfy.utils.bislerp(positive_control.cond_hint_original.clone().to(torch.float16).to('cuda'), x.shape[-1] * self.latent_compression_ratio, x.shape[-2] * self.latent_compression_ratio)\r\n                    positive_control.cond_hint_original = 
positive_control_pretile.to(positive_control.cond_hint_original)\r\n                positive_control_pretile = positive_control.cond_hint_original.clone().to(torch.float16).to('cuda')\r\n                control_tiles, control_orig_shape, control_grid, control_strides = tile_latent(positive_control_pretile, tile_size=(tile_h_full,tile_w_full))\r\n                control_tiles = control_tiles\r\n            \r\n            denoised_tiles = []\r\n            \r\n            tiles, orig_shape, grid, strides = tile_latent(x, tile_size=(tile_h,tile_w))\r\n            \r\n            if y0_style_pos is not None:\r\n                y0_style_pos_tiles, _, _, _ = tile_latent(y0_style_pos, tile_size=(tile_h,tile_w))\r\n            if y0_style_neg is not None:\r\n                y0_style_neg_tiles, _, _, _ = tile_latent(y0_style_neg, tile_size=(tile_h,tile_w))\r\n            \r\n            for i in range(tiles.shape[0]):\r\n                tile = tiles[i].unsqueeze(0)\r\n                self.extra_args['model_options']['transformer_options']['x_tmp'] = tile\r\n                if control_tiles is not None:\r\n                    positive_control.cond_hint = control_tiles[i].unsqueeze(0).to(positive_control.cond_hint)\r\n                    if negative_control is not None:\r\n                        negative_control.cond_hint = control_tiles[i].unsqueeze(0).to(positive_control.cond_hint)\r\n                \r\n                if y0_style_pos is not None:\r\n                    self.extra_args['model_options']['transformer_options']['y0_style_pos'] = y0_style_pos_tiles[i].unsqueeze(0)\r\n                if y0_style_neg is not None:\r\n                    self.extra_args['model_options']['transformer_options']['y0_style_neg'] = y0_style_neg_tiles[i].unsqueeze(0)\r\n                \r\n                denoised_tile = self.model(tile, sigma * s_in, **extra_args)\r\n                \r\n                denoised_tiles.append(denoised_tile)\r\n                \r\n            denoised_tiles = torch.cat(denoised_tiles, dim=0)\r\n            \r\n            denoised = untile_latent(denoised_tiles, orig_shape, grid, strides)\r\n            \r\n        else:\r\n            denoised = self.model(x, sigma * s_in, **extra_args)\r\n        \r\n        if control_tiles is not None:\r\n            positive_control.cond_hint = positive_cond_hint_init\r\n            if negative_control is not None:\r\n                negative_control.cond_hint = negative_cond_hint_init\r\n                \r\n        if y0_style_pos is not None:\r\n            self.extra_args['model_options']['transformer_options']['y0_style_pos'] = y0_style_pos\r\n        if y0_style_neg is not None:\r\n            self.extra_args['model_options']['transformer_options']['y0_style_neg'] = y0_style_neg\r\n        \r\n        denoised = self.calc_cfg_channelwise(denoised)\r\n        return denoised\r\n\r\n    def update_transformer_options(self,\r\n                transformer_options : Optional[dict] = None,\r\n                ):\r\n\r\n        self.extra_args.setdefault(\"model_options\", {}).setdefault(\"transformer_options\", {}).update(transformer_options)\r\n        return\r\n\r\n    def set_coeff(self,\r\n                rk_type    : str,\r\n                h          : Tensor,\r\n                c1         : float  = 0.0,\r\n                c2         : float  = 0.5,\r\n                c3         : float  = 1.0,\r\n                step       : int    = 0,\r\n                sigmas     : Optional[Tensor] = None,\r\n                sigma_down : 
Optional[Tensor] = None,\r\n                ) -> None:\r\n\r\n        self.rk_type     = rk_type\r\n        self.IMPLICIT    = rk_type in get_implicit_sampler_name_list(nameOnly=True)\r\n        self.EXPONENTIAL = RK_Method_Beta.is_exponential(rk_type) \r\n\r\n        sigma            = sigmas[step]\r\n        sigma_next       = sigmas[step+1]\r\n        \r\n        h_prev = []\r\n        a, b, u, v, ci, multistep_stages, hybrid_stages, FSAL = get_rk_methods_beta(rk_type,\r\n                                                                                    h,\r\n                                                                                    c1,\r\n                                                                                    c2,\r\n                                                                                    c3,\r\n                                                                                    h_prev,\r\n                                                                                    step,\r\n                                                                                    sigmas,\r\n                                                                                    sigma,\r\n                                                                                    sigma_next,\r\n                                                                                    sigma_down,\r\n                                                                                    self.extra_options,\r\n                                                                                    )\r\n        \r\n        self.multistep_stages = multistep_stages\r\n        self.hybrid_stages    = hybrid_stages\r\n        \r\n        self.A = torch.tensor(a,  dtype=h.dtype, device=h.device)\r\n        self.B = torch.tensor(b,  dtype=h.dtype, device=h.device)\r\n        self.C = torch.tensor(ci, dtype=h.dtype, device=h.device)\r\n\r\n        self.U = torch.tensor(u,  dtype=h.dtype, device=h.device) if u is not None else None\r\n        self.V = torch.tensor(v,  dtype=h.dtype, device=h.device) if v is not None else None\r\n        \r\n        self.rows = self.A.shape[0]\r\n        self.cols = self.A.shape[1]\r\n        \r\n        self.row_offset = 1 if not self.IMPLICIT and self.A[0].sum() == 0 else 0  \r\n        \r\n        if self.IMPLICIT and self.reorder_tableau_indices[0] != -1:\r\n            self.reorder_tableau(self.reorder_tableau_indices)\r\n\r\n\r\n\r\n    def reorder_tableau(self, indices:list[int]) -> None:\r\n        #if indices[0]:\r\n        self.A    = self.A   [indices]\r\n        self.B[0] = self.B[0][indices]\r\n        self.C    = self.C   [indices]\r\n        self.C = torch.cat((self.C, self.C[-1:])) \r\n        return\r\n\r\n\r\n\r\n    def update_substep(self,\r\n                        x_0        : Tensor,\r\n                        x_         : Tensor,\r\n                        eps_       : Tensor,\r\n                        eps_prev_  : Tensor,\r\n                        row        : int,\r\n                        row_offset : int,\r\n                        h_new      : Tensor,\r\n                        h_new_orig : Tensor,\r\n                        lying_eps_row_factor : float = 1.0,\r\n                        sigma      : Optional[Tensor] = None,\r\n                        ) -> Tensor:\r\n        \r\n        if row < self.rows - row_offset   and   self.multistep_stages == 0:\r\n            row_tmp_offset = row + row_offset\r\n\r\n        else:\r\n            row_tmp_offset 
= row + 1\r\n                \r\n        #zr_base   = self.zum(row+row_offset+self.multistep_stages, eps_, eps_prev_)  # TODO: why unused?\r\n        \r\n        if self.SYNC_SUBSTEP_MEAN_CW and lying_eps_row_factor != 1.0:\r\n            zr_orig = self.zum(row+row_offset+self.multistep_stages, eps_, eps_prev_)\r\n            x_orig_row = x_0 + h_new * zr_orig\r\n        \r\n        #eps_row      = eps_     [row].clone()\r\n        #eps_prev_row = eps_prev_[row].clone()\r\n        \r\n        eps_     [row] *= lying_eps_row_factor\r\n        eps_prev_[row] *= lying_eps_row_factor\r\n        \r\n        if self.EO(\"exp2lin_override\"):\r\n            zr = self.zum2(row+row_offset+self.multistep_stages, eps_, eps_prev_, h_new, sigma)\r\n            x_[row_tmp_offset] = x_0 + zr\r\n        else:\r\n            zr = self.zum(row+row_offset+self.multistep_stages, eps_, eps_prev_)\r\n\r\n            x_[row_tmp_offset] = x_0 + h_new * zr\r\n        \r\n        if self.SYNC_SUBSTEP_MEAN_CW and lying_eps_row_factor != 1.0:\r\n            x_[row_tmp_offset] = x_[row_tmp_offset] - x_[row_tmp_offset].mean(dim=(-2,-1), keepdim=True) + x_orig_row.mean(dim=(-2,-1), keepdim=True)\r\n        \r\n        #eps_     [row] = eps_row\r\n        #eps_prev_[row] = eps_prev_row\r\n        \r\n        if (self.SYNC_SUBSTEP_MEAN_CW and h_new != h_new_orig) or self.EO(\"sync_mean_noise\"):\r\n            if not self.EO(\"disable_sync_mean_noise\"):\r\n                x_row_down = x_0 + h_new_orig * zr\r\n                x_[row_tmp_offset] = x_[row_tmp_offset] - x_[row_tmp_offset].mean(dim=(-2,-1), keepdim=True) + x_row_down.mean(dim=(-2,-1), keepdim=True)\r\n        \r\n        return x_\r\n\r\n\r\n\r\n    def zum2(self, row:int, k:Tensor, k_prev:Tensor=None, h_new:Tensor=None, sigma:Tensor=None) -> Tensor:\r\n        if row < self.rows:\r\n            return self.a_k_einsum2(row, k, h_new, sigma)\r\n        else:\r\n            row = row - self.rows\r\n            return self.b_k_einsum2(row, k, h_new, sigma)\r\n\r\n    def a_k_einsum2(self, row:int, k:Tensor, h:Tensor, sigma:Tensor) -> Tensor:\r\n        return torch.einsum('i,j,k,i... -> ...', self.A[row], h.unsqueeze(0), -sigma.unsqueeze(0), k[:self.cols])\r\n    \r\n    def b_k_einsum2(self, row:int, k:Tensor, h:Tensor, sigma:Tensor) -> Tensor:\r\n        return torch.einsum('i,j,k,i... -> ...', self.B[row], h.unsqueeze(0), -sigma.unsqueeze(0), k[:self.cols])\r\n\r\n    \r\n    def a_k_einsum(self, row:int, k     :Tensor) -> Tensor:\r\n        return torch.einsum('i, i... -> ...', self.A[row], k[:self.cols])\r\n    \r\n    def b_k_einsum(self, row:int, k     :Tensor) -> Tensor:\r\n        return torch.einsum('i, i... -> ...', self.B[row], k[:self.cols])\r\n    \r\n    def u_k_einsum(self, row:int, k_prev:Tensor) -> Tensor:\r\n        return torch.einsum('i, i... -> ...', self.U[row], k_prev[:self.cols]) if (self.U is not None and k_prev is not None) else 0\r\n    \r\n    def v_k_einsum(self, row:int, k_prev:Tensor) -> Tensor:\r\n        return torch.einsum('i, i... 
-> ...', self.V[row], k_prev[:self.cols]) if (self.V is not None and k_prev is not None) else 0\r\n    \r\n    \r\n    \r\n    def zum(self, row:int, k:Tensor, k_prev:Tensor=None,) -> Tensor:\r\n        if row < self.rows:\r\n            return self.a_k_einsum(row, k) + self.u_k_einsum(row, k_prev)\r\n        else:\r\n            row = row - self.rows\r\n            return self.b_k_einsum(row, k) + self.v_k_einsum(row, k_prev)\r\n        \r\n    def zum_tableau(self,  k:Tensor, k_prev:Tensor=None,) -> Tensor:\r\n        a_k_sum = torch.einsum('ij, j... -> i...', self.A, k[:self.cols])\r\n        u_k_sum = torch.einsum('ij, j... -> i...', self.U, k_prev[:self.cols]) if (self.U is not None and k_prev is not None) else 0\r\n        return a_k_sum + u_k_sum\r\n        \r\n    def get_x(self, data:Tensor, noise:Tensor, sigma:Tensor):\r\n        if self.VE_MODEL:\r\n            return data + sigma * noise\r\n        else:\r\n            return (self.sigma_max - sigma) * data + sigma * noise\r\n\r\n    def init_cfg_channelwise(self, x:Tensor, cfg_cw:float=1.0, **extra_args) -> Dict[str, Any]:\r\n        self.uncond = [torch.full_like(x, 0.0)]\r\n        self.cfg_cw = cfg_cw\r\n        if cfg_cw != 1.0:\r\n            def post_cfg_function(args):\r\n                self.uncond[0] = args[\"uncond_denoised\"]\r\n                return args[\"denoised\"]\r\n            model_options = extra_args.get(\"model_options\", {}).copy()\r\n            extra_args[\"model_options\"] = comfy.model_patcher.set_model_options_post_cfg_function(model_options, post_cfg_function, disable_cfg1_optimization=True)\r\n        return extra_args\r\n            \r\n            \r\n    def calc_cfg_channelwise(self, denoised:Tensor) -> Tensor:\r\n        if self.cfg_cw != 1.0:            \r\n            avg = 0\r\n            for b, c in itertools.product(range(denoised.shape[0]), range(denoised.shape[1])):\r\n                avg     += torch.norm(denoised[b][c] - self.uncond[0][b][c])\r\n            avg  /= denoised.shape[1]\r\n            \r\n            for b, c in itertools.product(range(denoised.shape[0]), range(denoised.shape[1])):\r\n                ratio     = torch.nan_to_num(torch.norm(denoised[b][c] - self.uncond[0][b][c])   /   avg,     0)\r\n                denoised_new = self.uncond[0] + ratio * self.cfg_cw * (denoised - self.uncond[0])\r\n            return denoised_new\r\n        else:\r\n            return denoised\r\n        \r\n\r\n    @staticmethod\r\n    def calculate_res_2m_step(\r\n                            x_0        : Tensor,\r\n                            denoised_  : Tensor,\r\n                            sigma_down : Tensor,\r\n                            sigmas     : Tensor,\r\n                            step       : int,\r\n                            ) -> Tuple[Tensor, Tensor]:\r\n        \r\n        if denoised_[2].sum() == 0:\r\n            return None, None\r\n        \r\n        sigma      = sigmas[step]\r\n        sigma_prev = sigmas[step-1]\r\n        \r\n        h_prev = -torch.log(sigma/sigma_prev)\r\n        h      = -torch.log(sigma_down/sigma)\r\n\r\n        c1 = 0\r\n        c2 = (-h_prev / h).item()\r\n\r\n        ci = [c1,c2]\r\n        φ = Phi(h, ci, analytic_solution=True)\r\n\r\n        b2 = φ(2)/c2\r\n        b1 = φ(1) - b2\r\n        \r\n        eps_2 = denoised_[1] - x_0\r\n        eps_1 = denoised_[0] - x_0\r\n\r\n        h_a_k_sum = h * (b1 * eps_1 + b2 * eps_2)\r\n        \r\n        x = torch.exp(-h) * x_0 + h_a_k_sum\r\n        \r\n        denoised = x_0 + (sigma / 
(sigma - sigma_down)) * h_a_k_sum\r\n\r\n        return x, denoised\r\n\r\n\r\n    @staticmethod\r\n    def calculate_res_3m_step(\r\n                            x_0        : Tensor,\r\n                            denoised_  : Tensor,\r\n                            sigma_down : Tensor,\r\n                            sigmas     : Tensor,\r\n                            step       : int,\r\n                            ) -> Tuple[Tensor, Tensor]:\r\n        \r\n        if denoised_[3].sum() == 0:\r\n            return None, None\r\n        \r\n        sigma       = sigmas[step]\r\n        sigma_prev  = sigmas[step-1]\r\n        sigma_prev2 = sigmas[step-2]\r\n\r\n        h       = -torch.log(sigma_down/sigma)\r\n        h_prev  = -torch.log(sigma/sigma_prev)\r\n        h_prev2 = -torch.log(sigma/sigma_prev2)\r\n\r\n        c1 = 0\r\n        c2 = (-h_prev  / h).item()\r\n        c3 = (-h_prev2 / h).item()\r\n\r\n        ci = [c1,c2,c3]\r\n        φ = Phi(h, ci, analytic_solution=True)\r\n        \r\n        gamma = (3*(c3**3) - 2*c3) / (c2*(2 - 3*c2))\r\n\r\n        b3 = (1 / (gamma * c2 + c3)) * φ(2, -h)      \r\n        b2 = gamma * b3 \r\n        b1 = φ(1, -h) - b2 - b3    \r\n        \r\n        eps_3 = denoised_[2] - x_0\r\n        eps_2 = denoised_[1] - x_0\r\n        eps_1 = denoised_[0] - x_0\r\n\r\n        h_a_k_sum = h * (b1 * eps_1 + b2 * eps_2 + b3 * eps_3)\r\n        \r\n        x = torch.exp(-h) * x_0 + h_a_k_sum\r\n        \r\n        denoised = x_0 + (sigma / (sigma - sigma_down)) * h_a_k_sum\r\n\r\n        return x, denoised\r\n\r\n    def swap_rk_type_at_step_or_threshold(self,\r\n                                            x_0               : Tensor,\r\n                                            data_prev_        : Tensor,\r\n                                            NS,\r\n                                            sigmas            : Tensor,\r\n                                            step              : Tensor,\r\n                                            rk_swap_step      : int,\r\n                                            rk_swap_threshold : float,\r\n                                            rk_swap_type      : str,\r\n                                            rk_swap_print     : bool,\r\n                                            ) -> str:\r\n        if rk_swap_type == \"\":\r\n            if self.EXPONENTIAL:\r\n                rk_swap_type = \"res_3m\" \r\n            else:\r\n                rk_swap_type = \"deis_3m\"\r\n            \r\n        if step > rk_swap_step and self.rk_type != rk_swap_type:\r\n            RESplain(\"Switching rk_type to:\", rk_swap_type)\r\n            self.rk_type = rk_swap_type\r\n            \r\n            if RK_Method_Beta.is_exponential(rk_swap_type):\r\n                self.__class__ = RK_Method_Exponential\r\n            else:\r\n                self.__class__ = RK_Method_Linear\r\n                \r\n            if rk_swap_type in get_implicit_sampler_name_list(nameOnly=True):\r\n                self.IMPLICIT   = True\r\n                self.row_offset = 0\r\n                NS.row_offset   = 0\r\n            else:\r\n                self.IMPLICIT   = False\r\n                self.row_offset = 1\r\n                NS.row_offset   = 1\r\n            NS.h_fn     = self.h_fn\r\n            NS.t_fn     = self.t_fn\r\n            NS.sigma_fn = self.sigma_fn\r\n            \r\n            \r\n            \r\n        if step > 2 and sigmas[step+1] > 0 and self.rk_type != rk_swap_type and rk_swap_threshold > 0:\r\n         
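   # compare reconstructed 2nd- and 3rd-order multistep solutions as a cheap local error estimate: once they agree to within rk_swap_threshold, switch to the (typically cheaper) rk_swap_type sampler\r\n         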
   x_res_2m, denoised_res_2m = self.calculate_res_2m_step(x_0, data_prev_, NS.sigma_down, sigmas, step)\r\n            x_res_3m, denoised_res_3m = self.calculate_res_3m_step(x_0, data_prev_, NS.sigma_down, sigmas, step)\r\n            if denoised_res_2m is not None:\r\n                if rk_swap_print:\r\n                    RESplain(\"res_3m - res_2m:\", torch.norm(denoised_res_3m - denoised_res_2m).item())\r\n                if rk_swap_threshold > torch.norm(denoised_res_2m - denoised_res_3m):\r\n                    RESplain(\"Switching rk_type to:\", rk_swap_type, \"at step:\", step)\r\n                    self.rk_type = rk_swap_type\r\n            \r\n                    if RK_Method_Beta.is_exponential(rk_swap_type):\r\n                        self.__class__ = RK_Method_Exponential\r\n                    else:\r\n                        self.__class__ = RK_Method_Linear\r\n                \r\n                    if rk_swap_type in get_implicit_sampler_name_list(nameOnly=True):\r\n                        self.IMPLICIT   = True\r\n                        self.row_offset = 0\r\n                        NS.row_offset   = 0\r\n                    else:\r\n                        self.IMPLICIT   = False\r\n                        self.row_offset = 1\r\n                        NS.row_offset   = 1\r\n                    NS.h_fn     = self.h_fn\r\n                    NS.t_fn     = self.t_fn\r\n                    NS.sigma_fn = self.sigma_fn\r\n            \r\n        return self.rk_type\r\n\r\n\r\n    def bong_iter(self,\r\n                    x_0       : Tensor,\r\n                    x_        : Tensor,\r\n                    eps_      : Tensor,\r\n                    eps_prev_ : Tensor,\r\n                    data_     : Tensor,\r\n                    sigma     : Tensor,\r\n                    s_        : Tensor,\r\n                    row       : int,\r\n                    row_offset: int,\r\n                    h         : Tensor,\r\n                    step      : int,\r\n                    step_sched: int,\r\n                    BONGMATH_Y : bool = False,\r\n                    y0_bongflow : Optional[Tensor] = None,\r\n                    noise_sync: Optional[Tensor] = None,\r\n                    eps_x_    : Optional[Tensor] = None,\r\n                    eps_y_    : Optional[Tensor] = None,\r\n                    #eps_x2y_  : Optional[Tensor] = None,\r\n                    data_x_   : Optional[Tensor] = None,\r\n                    data_y_   : Optional[Tensor] = None,\r\n                    #yt_       : Optional[Tensor] = None,\r\n                    #yt_0      : Optional[Tensor] = None,\r\n                    LG = None,\r\n                    ) -> Tuple[Tensor, Tensor, Tensor]:\r\n        \r\n        if x_0.ndim == 4:\r\n            norm_dim = (-2,-1)\r\n        elif x_0.ndim == 5:\r\n            norm_dim = (-4,-2,-1)\r\n        \r\n        if BONGMATH_Y:\r\n            lgw_mask_,      lgw_mask_inv_      = LG.get_masks_for_step(step_sched)\r\n            lgw_mask_sync_, lgw_mask_sync_inv_ = LG.get_masks_for_step(step_sched, lgw_type=\"sync\")\r\n\r\n            weight_mask = lgw_mask_+lgw_mask_inv_\r\n            if LG.SYNC_SEPARATE:\r\n                sync_mask = lgw_mask_sync_+lgw_mask_sync_inv_\r\n            else:\r\n                sync_mask = 1.\r\n            \r\n        \r\n        if self.EO(\"bong_start_step\", 0) > step or step > self.EO(\"bong_stop_step\", 10000) or (self.unsample_bongmath == False and s_[-1] > s_[0]):\r\n            return x_0, x_, eps_\r\n        \r\n  
      bong_iter_max_row = self.rows - row_offset\r\n        if self.EO(\"bong_iter_max_row_full\"):\r\n            bong_iter_max_row = self.rows\r\n            \r\n        if self.EO(\"bong_iter_lock_x_0_ch_means\"):\r\n            x_0_ch_means = x_0.mean(dim=norm_dim, keepdim=True)\r\n            \r\n        if self.EO(\"bong_iter_lock_x_row_ch_means\"):\r\n            x_row_means = []\r\n            for rr in range(row+row_offset):\r\n                x_row_mean = x_[rr].mean(dim=norm_dim, keepdim=True)\r\n                x_row_means.append(x_row_mean)\r\n        \r\n        if row < bong_iter_max_row   and   self.multistep_stages == 0:\r\n            bong_strength = self.EO(\"bong_strength\", 1.0)\r\n            \r\n            if bong_strength != 1.0:\r\n                x_0_tmp  = x_0 .clone()\r\n                x_tmp_   = x_  .clone()\r\n                eps_tmp_ = eps_.clone()\r\n\r\n            for i in range(100):     #bongmath for eps_prev_ not implemented?\r\n                x_0 = x_[row+row_offset] - h * self.zum(row+row_offset, eps_, eps_prev_)\r\n                \r\n                if self.EO(\"bong_iter_lock_x_0_ch_means\"):\r\n                    x_0 = x_0 - x_0.mean(dim=norm_dim, keepdim=True) + x_0_ch_means\r\n                \r\n                for rr in range(row+row_offset):\r\n                    x_[rr] = x_0 + h * self.zum(rr, eps_, eps_prev_)\r\n                \r\n                if self.EO(\"bong_iter_lock_x_row_ch_means\"):\r\n                    for rr in range(row+row_offset):\r\n                        x_[rr] = x_[rr] - x_[rr].mean(dim=norm_dim, keepdim=True) + x_row_means[rr]\r\n                \r\n                for rr in range(row+row_offset):\r\n                    if self.EO(\"zonkytar\"):\r\n                        #eps_[rr] = self.get_unsample_epsilon(x_[rr], x_0, data_[rr], sigma, s_[rr])\r\n                        eps_[rr] = self.get_epsilon(x_[rr], x_0, data_[rr], sigma, s_[rr])\r\n                    else:\r\n                        if BONGMATH_Y and not self.EO(\"disable_bongmath_y\"):\r\n                            if self.EXPONENTIAL:\r\n                                eps_x_ = data_x_ - x_0\r\n                                eps_x2y_ = data_y_ - x_0\r\n                                if self.VE_MODEL:\r\n                                    eps_ = sync_mask * eps_x_   +   (1-sync_mask) * eps_x2y_   +   weight_mask * (-eps_y_+sigma*(-noise_sync))\r\n                                    if self.EO(\"sync_x2y\"):\r\n                                        eps_ = sync_mask * eps_x_   +   (1-sync_mask) * eps_x2y_   +   weight_mask * (-eps_x2y_+sigma*(-noise_sync))\r\n                                else:\r\n                                    eps_ = sync_mask * eps_x_   +   (1-sync_mask) * eps_x2y_   +   weight_mask * (-eps_y_+sigma*(y0_bongflow-noise_sync))\r\n                                    if self.EO(\"sync_x2y\"):\r\n                                        eps_ = sync_mask * eps_x_   +   (1-sync_mask) * eps_x2y_   +   weight_mask * (-eps_x2y_+sigma*(y0_bongflow-noise_sync))\r\n                            else:\r\n                                eps_x_  [:s_.shape[0]] = (x_[:s_.shape[0]] - data_x_[:s_.shape[0]]) / s_.view(-1,1,1,1,1)   # or should it be vs x_0???\r\n                                eps_x2y_ = torch.zeros_like(eps_x_)\r\n                                eps_x2y_[:s_.shape[0]] = (x_[:s_.shape[0]] - data_y_[:s_.shape[0]]) / s_.view(-1,1,1,1,1)   # or should it be vs x_0???\r\n\r\n                                if self.VE_MODEL:\r\n    
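                                # VE model: the sync eps omits the flow target term (y0_bongflow) used in the non-VE branch below\r\n
    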
                                eps_ = sync_mask * eps_x_   +   (1-sync_mask) * eps_x2y_   +   weight_mask * (noise_sync-eps_y_)\r\n                                    if self.EO(\"sync_x2y\"):\r\n                                        eps_ = sync_mask * eps_x_   +   (1-sync_mask) * eps_x2y_   +   weight_mask * (noise_sync-eps_x2y_)\r\n                                else: \r\n                                    eps_ = sync_mask * eps_x_   +   (1-sync_mask) * eps_x2y_   +   weight_mask * (noise_sync-eps_y_-y0_bongflow)\r\n                                    if self.EO(\"sync_x2y\"):\r\n                                        eps_ = sync_mask * eps_x_   +   (1-sync_mask) * eps_x2y_   +   weight_mask * (noise_sync-eps_x2y_-y0_bongflow)\r\n\r\n                        else:\r\n                            eps_[rr] = self.get_epsilon(x_0, x_[rr], data_[rr], sigma, s_[rr])\r\n                    \r\n            if bong_strength != 1.0:\r\n                x_0  = x_0_tmp  + bong_strength * (x_0  - x_0_tmp)\r\n                x_   = x_tmp_   + bong_strength * (x_   - x_tmp_)\r\n                eps_ = eps_tmp_ + bong_strength * (eps_ - eps_tmp_)\r\n        \r\n        return x_0, x_, eps_ #,   yt_0, yt_\r\n\r\n\r\n    def newton_iter(self,\r\n                    x_0        : Tensor,\r\n                    x_         : Tensor,\r\n                    eps_       : Tensor,\r\n                    eps_prev_  : Tensor,\r\n                    data_      : Tensor,\r\n                    s_         : Tensor,\r\n                    row        : int,\r\n                    h          : Tensor,\r\n                    sigmas     : Tensor,\r\n                    step       : int,\r\n                    newton_name: str,\r\n                    SYNC_GUIDE_ACTIVE: bool,\r\n                    ) -> Tuple[Tensor, Tensor]:\r\n        if SYNC_GUIDE_ACTIVE:\r\n            return x_, eps_\r\n        newton_iter_name = \"newton_iter_\" + newton_name\r\n        \r\n        default_anchor_x_all = False\r\n        if newton_name == \"lying\":\r\n            default_anchor_x_all = True\r\n        \r\n        newton_iter                 = self.EO(newton_iter_name,                      100)\r\n        newton_iter_skip_last_steps = self.EO(newton_iter_name + \"_skip_last_steps\",   0)\r\n        newton_iter_mixing_rate     = self.EO(newton_iter_name + \"_mixing_rate\",     1.0)\r\n        \r\n        newton_iter_anchor          = self.EO(newton_iter_name + \"_anchor\",            0)\r\n        newton_iter_anchor_x_all    = self.EO(newton_iter_name + \"_anchor_x_all\",    default_anchor_x_all)\r\n        newton_iter_type            = self.EO(newton_iter_name + \"_type\",           \"from_epsilon\")\r\n        newton_iter_sequence        = self.EO(newton_iter_name + \"_sequence\",       \"double\")\r\n        \r\n        row_b_offset = 0\r\n        if self.EO(newton_iter_name + \"_include_row_b\"):\r\n            row_b_offset = 1\r\n        \r\n        if step >= len(sigmas)-1-newton_iter_skip_last_steps   or   sigmas[step+1] == 0   or   not self.IMPLICIT:\r\n            return x_, eps_\r\n        \r\n        sigma = sigmas[step]\r\n        \r\n        start, stop = 0, self.rows+row_b_offset\r\n        if newton_name   == \"pre\":\r\n            start = row\r\n        elif newton_name == \"post\":\r\n            start = row + 1\r\n            \r\n        if newton_iter_anchor >= 0:\r\n            eps_anchor = eps_[newton_iter_anchor].clone()\r\n            \r\n        if newton_iter_anchor_x_all:\r\n            x_orig_ = 
x_.clone()\r\n            \r\n        for n_iter in range(newton_iter):\r\n            for r in range(start, stop):\r\n                if newton_iter_anchor >= 0:\r\n                    eps_[newton_iter_anchor] = eps_anchor.clone()\r\n                if newton_iter_anchor_x_all:\r\n                    x_ = x_orig_.clone()\r\n                x_tmp, eps_tmp = x_[r].clone(), eps_[r].clone()\r\n                \r\n                seq_start, seq_stop = r, r+1\r\n                \r\n                if newton_iter_sequence == \"double\":\r\n                    seq_start, seq_stop = start, stop\r\n                    \r\n                for r_ in range(seq_start, seq_stop):\r\n                    x_[r_] = x_0 + h * self.zum(r_, eps_, eps_prev_)\r\n\r\n                for r_ in range(seq_start, seq_stop):\r\n                    if newton_iter_type == \"from_data\":\r\n                        data_[r_] = get_data_from_step(x_0, x_[r_], sigma, s_[r_])  \r\n                        eps_ [r_] = self.get_epsilon(x_0, x_[r_], data_[r_], sigma, s_[r_])\r\n                    elif newton_iter_type == \"from_step\":\r\n                        eps_ [r_] = get_epsilon_from_step(x_0, x_[r_], sigma, s_[r_])\r\n                    elif newton_iter_type == \"from_alt\":\r\n                        eps_ [r_] = x_0/sigma - x_[r_]/s_[r_]\r\n                    elif newton_iter_type == \"from_epsilon\":\r\n                        eps_ [r_] = self.get_epsilon(x_0, x_[r_], data_[r_], sigma, s_[r_])\r\n                    \r\n                    if self.EO(newton_iter_name + \"_opt\"):\r\n                        opt_timing, opt_type, opt_subtype = self.EO(newton_iter_name+\"_opt\", [str])\r\n                        \r\n                        opt_start, opt_stop = 0, self.rows+row_b_offset\r\n                        if    opt_timing == \"early\":\r\n                            opt_stop  = row + 1\r\n                        elif  opt_timing == \"late\":\r\n                            opt_start = row + 1\r\n\r\n                        for r2 in range(opt_start, opt_stop): \r\n                            if r_ != r2:\r\n                                if   opt_subtype == \"a\":\r\n                                    eps_a = eps_[r2]\r\n                                    eps_b = eps_[r_]\r\n                                elif opt_subtype == \"b\":\r\n                                    eps_a = eps_[r_]\r\n                                    eps_b = eps_[r2]\r\n                                \r\n                                if   opt_type == \"ortho\":\r\n                                    eps_ [r_] = get_orthogonal(eps_a, eps_b)\r\n                                elif opt_type == \"collin\":\r\n                                    eps_ [r_] = get_collinear (eps_a, eps_b)\r\n                                elif opt_type == \"proj\":\r\n                                    eps_ [r_] = get_collinear (eps_a, eps_b) + get_orthogonal(eps_b, eps_a)\r\n                                    \r\n                    x_  [r_] =   x_tmp + newton_iter_mixing_rate * (x_  [r_] -   x_tmp)\r\n                    eps_[r_] = eps_tmp + newton_iter_mixing_rate * (eps_[r_] - eps_tmp)\r\n                    \r\n                if newton_iter_sequence == \"double\":\r\n                    break\r\n        \r\n        return x_, eps_\r\n\r\n\r\n\r\n\r\nclass RK_Method_Exponential(RK_Method_Beta):\r\n    def __init__(self,\r\n                model,\r\n                rk_type       : str,\r\n                VE_MODEL      : bool,\r\n                
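# noise_anchor presumably drives LINEAR_ANCHOR_X_0, the blend between anchored and unmoored eps in __call__\r\n
                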
noise_anchor  : float,\r\n                noise_boost_normalize  : bool,\r\n\r\n                model_device  : str         = 'cuda',\r\n                work_device   : str         = 'cpu',\r\n                dtype         : torch.dtype = torch.float64,\r\n                extra_options : str         = \"\",\r\n                ):\r\n        \r\n        super().__init__(model,\r\n                        rk_type,\r\n                        VE_MODEL,\r\n                        noise_anchor,\r\n                        noise_boost_normalize,\r\n                        model_device  = model_device,\r\n                        work_device   = work_device,\r\n                        dtype         = dtype,\r\n                        extra_options = extra_options,\r\n                        ) \r\n        \r\n    @staticmethod\r\n    def alpha_fn(neg_h:Tensor) -> Tensor:\r\n        return torch.exp(neg_h)\r\n\r\n    @staticmethod\r\n    def sigma_fn(t:Tensor) -> Tensor:\r\n        #return 1/(torch.exp(-t)+1)\r\n        return t.neg().exp()\r\n\r\n    @staticmethod\r\n    def t_fn(sigma:Tensor) -> Tensor:\r\n        #return -torch.log((1.-sigma)/sigma)\r\n        return sigma.log().neg()\r\n    \r\n    @staticmethod\r\n    def h_fn(sigma_down:Tensor, sigma:Tensor) -> Tensor:\r\n        #return (-torch.log((1.-sigma_down)/sigma_down)) - (-torch.log((1.-sigma)/sigma))\r\n        return -torch.log(sigma_down/sigma)\r\n\r\n    def __call__(self,\r\n                x         : Tensor,\r\n                sub_sigma : Tensor,\r\n                x_0       : Optional[Tensor] = None,\r\n                sigma     : Optional[Tensor] = None,\r\n                transformer_options : Optional[dict] = None,\r\n                ) -> Tuple[Tensor, Tensor]:\r\n        \r\n        x_0   = x         if x_0   is None else x_0\r\n        sigma = sub_sigma if sigma is None else sigma\r\n        \r\n        if transformer_options is not None:\r\n            self.extra_args.setdefault(\"model_options\", {}).setdefault(\"transformer_options\", {}).update(transformer_options)\r\n\r\n        denoised = self.model_denoised(x.to(self.model_device), sub_sigma.to(self.model_device), **self.extra_args).to(sigma.device)\r\n        \r\n        eps_anchored = (x_0 - denoised) / sigma\r\n        eps_unmoored = (x   - denoised) / sub_sigma\r\n        \r\n        eps      = eps_unmoored + self.LINEAR_ANCHOR_X_0 * (eps_anchored - eps_unmoored)\r\n        \r\n        denoised = x_0 - sigma * eps\r\n        \r\n        epsilon  = denoised - x_0\r\n        \r\n        #epsilon = denoised - x\r\n        \r\n        if self.EO(\"exp2lin_override\"):\r\n            epsilon = (x_0 - denoised) / sigma\r\n        \r\n        return epsilon, denoised\r\n    \r\n    def get_eps(self, *args):\r\n        if   len(args) == 3:\r\n            x, denoised, sigma = args\r\n            return denoised - x\r\n        elif len(args) == 5:\r\n            x_0, x, denoised, sigma, sub_sigma = args\r\n            eps_anchored = (x_0 - denoised) / sigma\r\n            eps_unmoored = (x   - denoised) / sub_sigma\r\n            eps      = eps_unmoored + self.LINEAR_ANCHOR_X_0 * (eps_anchored - eps_unmoored)\r\n            denoised = x_0 - sigma * eps\r\n            eps_out = denoised - x_0\r\n            if self.EO(\"exp2lin_override\"):\r\n                eps_out = (x_0 - denoised) / sigma\r\n            return eps_out\r\n            \r\n        else:\r\n            raise ValueError(f\"get_eps expected 3 or 5 arguments, got {len(args)}\")\r\n    \r\n    def 
get_epsilon(self,\r\n
                    x_0       : Tensor,\r\n
                    x         : Tensor,\r\n
                    denoised  : Tensor,\r\n
                    sigma     : Tensor,\r\n
                    sub_sigma : Tensor,\r\n
                    ) -> Tensor:\r\n
        \r\n
        eps_anchored = (x_0 - denoised) / sigma\r\n
        eps_unmoored = (x   - denoised) / sub_sigma\r\n
        \r\n
        eps      = eps_unmoored + self.LINEAR_ANCHOR_X_0 * (eps_anchored - eps_unmoored)\r\n
        \r\n
        denoised = x_0 - sigma * eps\r\n
        if self.EO("exp2lin_override"):\r\n
            return (x_0 - denoised) / sigma\r\n
        else:\r\n
            return denoised - x_0\r\n
    \r\n
    \r\n
    \r\n
    def get_epsilon_anchored(self, x_0:Tensor, denoised:Tensor, sigma:Tensor) -> Tensor:\r\n
        return denoised - x_0\r\n
    \r\n
    \r\n
    \r\n
    def get_guide_epsilon(self,\r\n
                            x_0           : Tensor,\r\n
                            x             : Tensor,\r\n
                            y             : Tensor,\r\n
                            sigma         : Tensor,\r\n
                            sigma_cur     : Tensor,\r\n
                            sigma_down    : Optional[Tensor] = None,\r\n
                            epsilon_scale : Optional[Tensor] = None,\r\n
                            ) -> Tensor:\r\n
\r\n
        sigma_cur = epsilon_scale if epsilon_scale is not None else sigma_cur\r\n
\r\n
        if sigma_down is not None and sigma_down > sigma:\r\n
            eps_unmoored = (sigma_cur/(self.sigma_max - sigma_cur)) * (x   - y)\r\n
        else:\r\n
            eps_unmoored = y - x \r\n
        \r\n
        if self.EO("manually_anchor_unsampler"):\r\n
            if sigma_down is not None and sigma_down > sigma:\r\n
                eps_anchored = (sigma    /(self.sigma_max - sigma)) * (x_0 - y)\r\n
            else:\r\n
                eps_anchored = y - x_0\r\n
            eps_guide = eps_unmoored + self.LINEAR_ANCHOR_X_0 * (eps_anchored - eps_unmoored)\r\n
        else:\r\n
            eps_guide = eps_unmoored\r\n
        \r\n
        return eps_guide\r\n
\r\n
\r\n
\r\n
class RK_Method_Linear(RK_Method_Beta):\r\n
    def __init__(self,\r\n
                model,\r\n
                rk_type       : str,\r\n
                VE_MODEL      : bool,\r\n
                noise_anchor  : float,\r\n
                noise_boost_normalize  : bool,\r\n
                model_device  : str         = 'cuda',\r\n
                work_device   : str         = 'cpu',\r\n
                dtype         : torch.dtype = torch.float64,\r\n
                extra_options : str         = "",\r\n
                ):\r\n
        \r\n
        super().__init__(model,\r\n
                        rk_type,\r\n
                        VE_MODEL,\r\n
                        noise_anchor,\r\n
                        noise_boost_normalize,\r\n
                        model_device  = model_device,\r\n
                        work_device   = work_device,\r\n
                        dtype         = dtype,\r\n
                        extra_options = extra_options,\r\n
                        ) \r\n
        \r\n
    @staticmethod\r\n
    def alpha_fn(neg_h:Tensor) -> Tensor:\r\n
        return torch.ones_like(neg_h)\r\n
\r\n
    @staticmethod\r\n
    def sigma_fn(t:Tensor) -> Tensor:\r\n
        return t\r\n
\r\n
    @staticmethod\r\n
    def t_fn(sigma:Tensor) -> Tensor:\r\n
        return sigma\r\n
    \r\n
    @staticmethod\r\n
    def h_fn(sigma_down:Tensor, sigma:Tensor) -> Tensor:\r\n
        return sigma_down - sigma\r\n
    \r\n
    def __call__(self,\r\n
                x         : 
Tensor,\r\n
                sub_sigma : Tensor,\r\n
                x_0       : Optional[Tensor] = None,\r\n
                sigma     : Optional[Tensor] = None,\r\n
                transformer_options : Optional[dict] = None,\r\n
                ) -> Tuple[Tensor, Tensor]:\r\n
        \r\n
        x_0   = x         if x_0   is None else x_0\r\n
        sigma = sub_sigma if sigma is None else sigma\r\n
        \r\n
        if transformer_options is not None:\r\n
            self.extra_args.setdefault("model_options", {}).setdefault("transformer_options", {}).update(transformer_options)\r\n
        \r\n
        denoised = self.model_denoised(x.to(self.model_device), sub_sigma.to(self.model_device), **self.extra_args).to(sigma.device)\r\n
\r\n
        epsilon_anchor   = (x_0 - denoised) / sigma\r\n
        epsilon_unmoored =   (x - denoised) / sub_sigma\r\n
        \r\n
        epsilon = epsilon_unmoored + self.LINEAR_ANCHOR_X_0 * (epsilon_anchor - epsilon_unmoored)\r\n
\r\n
        return epsilon, denoised\r\n
\r\n
    def get_eps(self, *args):\r\n
        if   len(args) == 3:\r\n
            x, denoised, sigma = args\r\n
            return (x - denoised) / sigma\r\n
        elif len(args) == 5:\r\n
            x_0, x, denoised, sigma, sub_sigma = args\r\n
            eps_anchor   = (x_0 - denoised) / sigma\r\n
            eps_unmoored =   (x - denoised) / sub_sigma\r\n
            return eps_unmoored + self.LINEAR_ANCHOR_X_0 * (eps_anchor - eps_unmoored)\r\n
        else:\r\n
            raise ValueError(f"get_eps expected 3 or 5 arguments, got {len(args)}")\r\n
\r\n
    def get_epsilon(self,\r\n
                    x_0       : Tensor,\r\n
                    x         : Tensor,\r\n
                    denoised  : Tensor,\r\n
                    sigma     : Tensor,\r\n
                    sub_sigma : Tensor,\r\n
                    ) -> Tensor:\r\n
        \r\n
        eps_anchor   = (x_0 - denoised) / sigma\r\n
        eps_unmoored =   (x - denoised) / sub_sigma\r\n
        \r\n
        return eps_unmoored + self.LINEAR_ANCHOR_X_0 * (eps_anchor - eps_unmoored)\r\n
    \r\n
    \r\n
    \r\n
    def get_epsilon_anchored(self, x_0:Tensor, denoised:Tensor, sigma:Tensor) -> Tensor:\r\n
        return (x_0 - denoised) / sigma\r\n
    \r\n
    \r\n
    \r\n
    def get_guide_epsilon(self, \r\n
                            x_0           : Tensor, \r\n
                            x             : Tensor, \r\n
                            y             : Tensor, \r\n
                            sigma         : Tensor, \r\n
                            sigma_cur     : Tensor, \r\n
                            sigma_down    : Optional[Tensor] = None, \r\n
                            epsilon_scale : Optional[Tensor] = None, \r\n
                            ) -> Tensor:\r\n
\r\n
        if sigma_down is not None and sigma_down > sigma:\r\n
            sigma_ratio = self.sigma_max - sigma_cur.clone()\r\n
        else:\r\n
            sigma_ratio = sigma_cur.clone()\r\n
        sigma_ratio = epsilon_scale if epsilon_scale is not None else sigma_ratio\r\n
\r\n
        if sigma_down is None:\r\n
            return (x - y) / sigma_ratio\r\n
        else:\r\n
            if sigma_down > sigma:\r\n
                return (y - x) / sigma_ratio\r\n
            else:\r\n
                return (x - y) / sigma_ratio\r\n
\r\n
\r\n
\r\n
"""\r\n
\r\n
\r\n
\r\n
\r\n
if EO("bong2m") and RK.multistep_stages > 0 and step < len(sigmas)-4:\r\n
    h_no_eta       = -torch.log(sigmas[step+1]/sigmas[step])\r\n
    h_prev1_no_eta = -torch.log(sigmas[step]  /sigmas[step-1])\r\n
    c2_prev = (-h_prev1_no_eta / 
h_no_eta).item()\r\n    eps_prev = denoised_data_prev - x_0\r\n    \r\n    φ = Phi(h_prev, [0.,c2_prev])\r\n    a2_1 = c2_prev * φ(1,2)\r\n    for i in range(100):\r\n        x_prev = x_0 - h_prev * (a2_1 * eps_prev)\r\n        eps_prev = denoised_data_prev - x_prev\r\n        \r\n    eps_[1] = eps_prev\r\n    \r\nif EO(\"bong3m\") and RK.multistep_stages > 0 and step < len(sigmas)-10:\r\n    h_no_eta       = -torch.log(sigmas[step+1]/sigmas[step])\r\n    h_prev1_no_eta = -torch.log(sigmas[step]  /sigmas[step-1])\r\n    h_prev2_no_eta = -torch.log(sigmas[step]  /sigmas[step-2])\r\n    c2_prev        = (-h_prev1_no_eta / h_no_eta).item()\r\n    c3_prev        = (-h_prev2_no_eta / h_no_eta).item()      \r\n    \r\n    eps_prev2 = denoised_data_prev2 - x_0\r\n    eps_prev  = denoised_data_prev  - x_0\r\n    \r\n    φ = Phi(h_prev1_no_eta, [0.,c2_prev, c3_prev])\r\n    a2_1 = c2_prev * φ(1,2)\r\n    for i in range(100):\r\n        x_prev = x_0 - h_prev1_no_eta * (a2_1 * eps_prev)\r\n        eps_prev = denoised_data_prev2 - x_prev\r\n        \r\n    eps_[1] = eps_prev\r\n    \r\n    φ = Phi(h_prev2_no_eta, [0.,c3_prev, c3_prev])\r\n    \r\n    def calculate_gamma(c2_prev, c3_prev):\r\n        return (3*(c3_prev**3) - 2*c3_prev) / (c2_prev*(2 - 3*c2_prev))\r\n    gamma = calculate_gamma(c2_prev, c3_prev)\r\n    \r\n    a2_1 = c2_prev * φ(1,2)\r\n    a3_2 = gamma * c2_prev * φ(2,2) + (c3_prev ** 2 / c2_prev) * φ(2, 3)\r\n    a3_1 = c3_prev * φ(1,3) - a3_2\r\n    \r\n    for i in range(100):\r\n        x_prev2 = x_0     - h_prev2_no_eta * (a3_1 * eps_prev + a3_2 * eps_prev2)\r\n        x_prev  = x_prev2 + h_prev2_no_eta * (a2_1 * eps_prev)\r\n        \r\n        eps_prev2 = denoised_data_prev - x_prev2\r\n        eps_prev  = denoised_data_prev2 - x_prev\r\n        \r\n    eps_[2] = eps_prev2\r\n\"\"\""
  },
  {
    "path": "beta/rk_noise_sampler_beta.py",
    "content": "import torch\r\n\r\nfrom torch  import Tensor\r\nfrom typing import Optional, Callable, Tuple, Dict, Any, Union, TYPE_CHECKING, TypeVar\r\n\r\nif TYPE_CHECKING:\r\n    from .rk_method_beta import RK_Method_Exponential, RK_Method_Linear\r\n\r\nimport comfy.model_patcher\r\nimport comfy.supported_models\r\n\r\nfrom .noise_classes import NOISE_GENERATOR_CLASSES, NOISE_GENERATOR_CLASSES_SIMPLE\r\nfrom .constants     import MAX_STEPS\r\n\r\nfrom ..helper       import ExtraOptions, has_nested_attr \r\nfrom ..latents      import normalize_zscore, get_orthogonal, get_collinear\r\nfrom ..res4lyf      import RESplain\r\n\r\n\r\n\r\n\r\nNOISE_MODE_NAMES = [\"none\",\r\n                    #\"hard_sq\",\r\n                    \"hard\",\r\n                    \"lorentzian\", \r\n                    \"soft\", \r\n                    \"soft-linear\",\r\n                    \"softer\",\r\n                    \"eps\",\r\n                    \"sinusoidal\",\r\n                    \"exp\", \r\n                    \"vpsde\",\r\n                    \"er4\",\r\n                    \"hard_var\", \r\n                    ]\r\n\r\n\r\n\r\ndef get_data_from_step(x, x_next, sigma, sigma_next): # assumes 100% linear trajectory\r\n    h = sigma_next - sigma\r\n    return (sigma_next * x - sigma * x_next) / h\r\n\r\ndef get_epsilon_from_step(x, x_next, sigma, sigma_next):\r\n    h = sigma_next - sigma\r\n    return (x - x_next) / h\r\n\r\n\r\n\r\nclass RK_NoiseSampler:\r\n    def __init__(self,\r\n                RK            : Union[\"RK_Method_Exponential\", \"RK_Method_Linear\"],\r\n                model,\r\n                step          : int=0,\r\n                device        : str='cuda',\r\n                dtype         : torch.dtype=torch.float64,\r\n                extra_options : str=\"\"\r\n                ):\r\n        \r\n        self.device                 = device\r\n        self.dtype                  = dtype\r\n        \r\n        self.model                  = model\r\n\r\n        if has_nested_attr(model, \"inner_model.inner_model.model_sampling\"):\r\n            model_sampling = model.inner_model.inner_model.model_sampling\r\n        elif has_nested_attr(model, \"model.model_sampling\"):\r\n            model_sampling = model.model.model_sampling\r\n            \r\n        self.sigma_max              = model_sampling.sigma_max.to(dtype=self.dtype, device=self.device)\r\n        self.sigma_min              = model_sampling.sigma_min.to(dtype=self.dtype, device=self.device)\r\n        \r\n                        \r\n        self.sigma_fn               = RK.sigma_fn\r\n        self.t_fn                   = RK.t_fn\r\n        self.h_fn                   = RK.h_fn\r\n\r\n        self.row_offset             = 1 if not RK.IMPLICIT else 0\r\n        \r\n        self.step                   = step\r\n        \r\n        self.noise_sampler          = None\r\n        self.noise_sampler2         = None\r\n        \r\n        self.noise_mode_sde         = None\r\n        self.noise_mode_sde_substep = None\r\n        \r\n        self.LOCK_H_SCALE           = True\r\n        \r\n        self.CONST                  = isinstance(model_sampling, comfy.model_sampling.CONST)\r\n        self.VARIANCE_PRESERVING    = isinstance(model_sampling, comfy.model_sampling.CONST)\r\n        \r\n        self.extra_options          = extra_options\r\n        self.EO                     = ExtraOptions(extra_options)\r\n        \r\n        self.DOWN_SUBSTEP           = self.EO(\"down_substep\")\r\n        
self.DOWN_STEP              = self.EO("down_step")\r\n
        \r\n
        self.init_noise             = None\r\n
\r\n
\r\n
\r\n
\r\n
    def init_noise_samplers(self,\r\n
                            x                      : Tensor,\r\n
                            noise_seed             : int,\r\n
                            noise_seed_substep     : int,\r\n
                            noise_sampler_type     : str,\r\n
                            noise_sampler_type2    : str,\r\n
                            noise_mode_sde         : str,\r\n
                            noise_mode_sde_substep : str,\r\n
                            overshoot_mode         : str,\r\n
                            overshoot_mode_substep : str,\r\n
                            noise_boost_step       : float,\r\n
                            noise_boost_substep    : float,\r\n
                            alpha                  : float,\r\n
                            alpha2                 : float,\r\n
                            k                      : float = 1.0,\r\n
                            k2                     : float = 1.0,\r\n
                            scale                  : float = 0.1,\r\n
                            scale2                 : float = 0.1,\r\n
                            last_rng                       = None,\r\n
                            last_rng_substep               = None,\r\n
                            ) -> None:\r\n
        \r\n
        self.noise_sampler_type     = noise_sampler_type\r\n
        self.noise_sampler_type2    = noise_sampler_type2\r\n
        self.noise_mode_sde         = noise_mode_sde\r\n
        self.noise_mode_sde_substep = noise_mode_sde_substep\r\n
        self.overshoot_mode         = overshoot_mode\r\n
        self.overshoot_mode_substep = overshoot_mode_substep\r\n
        self.noise_boost_step       = noise_boost_step\r\n
        self.noise_boost_substep    = noise_boost_substep\r\n
        self.s_in                   = x.new_ones([1], dtype=self.dtype, device=self.device)\r\n
        \r\n
        if noise_seed < 0 and last_rng is None:\r\n
            seed = torch.initial_seed()+1 \r\n
            RESplain("SDE noise seed: ", seed, " (set via torch.initial_seed()+1)", debug=True)\r\n
        elif noise_seed < 0 and last_rng is not None:\r\n
            seed = torch.initial_seed() \r\n
            RESplain("SDE noise seed: ", seed, " (set via torch.initial_seed())", debug=True)\r\n
        else:\r\n
            seed = noise_seed\r\n
            RESplain("SDE noise seed: ", seed, debug=True)\r\n
\r\n
            \r\n
        #seed2 = seed + MAX_STEPS #for substep noise generation. offset needed to ensure seeds are not reused\r\n
            \r\n
        
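# \"fractal\" noise exposes alpha/k/scale tuning knobs; every other sampler type comes from the simple generator table\r\n
        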
if noise_sampler_type == "fractal":\r\n
            self.noise_sampler        = NOISE_GENERATOR_CLASSES.get(noise_sampler_type )(x=x, seed=seed,               sigma_min=self.sigma_min, sigma_max=self.sigma_max)\r\n
            self.noise_sampler.alpha  = alpha\r\n
            self.noise_sampler.k      = k\r\n
            self.noise_sampler.scale  = scale\r\n
        else:\r\n
            self.noise_sampler        = NOISE_GENERATOR_CLASSES_SIMPLE.get(noise_sampler_type )(x=x, seed=seed,               sigma_min=self.sigma_min, sigma_max=self.sigma_max)\r\n
        if noise_sampler_type2 == "fractal":\r\n
            self.noise_sampler2       = NOISE_GENERATOR_CLASSES.get(noise_sampler_type2)(x=x, seed=noise_seed_substep, sigma_min=self.sigma_min, sigma_max=self.sigma_max)\r\n
            self.noise_sampler2.alpha = alpha2\r\n
            self.noise_sampler2.k     = k2\r\n
            self.noise_sampler2.scale = scale2\r\n
        else:\r\n
            self.noise_sampler2       = NOISE_GENERATOR_CLASSES_SIMPLE.get(noise_sampler_type2)(x=x, seed=noise_seed_substep, sigma_min=self.sigma_min, sigma_max=self.sigma_max)\r\n
            \r\n
        if last_rng is not None:\r\n
            self.noise_sampler .generator.set_state(last_rng)\r\n
            self.noise_sampler2.generator.set_state(last_rng_substep)\r\n
            \r\n
            \r\n
    def set_substep_list(self, RK:Union["RK_Method_Exponential", "RK_Method_Linear"]) -> None:\r\n
        \r\n
        self.multistep_stages = RK.multistep_stages\r\n
        self.rows = RK.rows\r\n
        self.C    = RK.C\r\n
        self.s_ = self.sigma_fn(self.t_fn(self.sigma) + self.h * self.C)\r\n
    \r\n
    \r\n
    def get_substep_list(self, RK:Union["RK_Method_Exponential", "RK_Method_Linear"], sigma, h) -> Tensor:\r\n
        s_ = RK.sigma_fn(RK.t_fn(sigma) + h * RK.C)\r\n
        return s_\r\n
    \r\n
    \r\n
    def get_sde_coeff(self, sigma_next:Tensor, sigma_down:Tensor=None, sigma_up:Tensor=None, eta:float=0.0, VP_OVERRIDE=None) -> Tuple[Tensor,Tensor,Tensor]:\r\n
        VARIANCE_PRESERVING = VP_OVERRIDE if VP_OVERRIDE is not None else self.VARIANCE_PRESERVING\r\n
\r\n
        if VARIANCE_PRESERVING:\r\n
            if sigma_down is not None:\r\n
                # solve sigma_next**2 = (alpha_ratio*sigma_down)**2 + sigma_up**2, with alpha_ratio chosen to preserve the signal scale (1-sigma)\r\n
                alpha_ratio = (1 - sigma_next) / (1 - sigma_down)\r\n
                sigma_up = (sigma_next ** 2 - sigma_down ** 2 * alpha_ratio ** 2) ** 0.5 \r\n
                \r\n
            elif sigma_up is not None:\r\n
                if sigma_up >= sigma_next:\r\n
                    RESplain("Maximum VPSDE noise level exceeded: falling back to hard noise mode.", debug=True)\r\n
                    if eta >= 1:\r\n
                        sigma_up = sigma_next * 0.9999 #avoid sqrt(neg_num) later \r\n
                    else:\r\n
                        sigma_up = sigma_next * eta \r\n
                    \r\n
                if VP_OVERRIDE is not None:\r\n
                    sigma_signal   =              1 - sigma_next\r\n
                else:\r\n
                    sigma_signal   = self.sigma_max - sigma_next\r\n
                sigma_residual = (sigma_next ** 2 - sigma_up ** 2) ** .5\r\n
                alpha_ratio    = sigma_signal + sigma_residual\r\n
                sigma_down     = sigma_residual / alpha_ratio     \r\n
        \r\n
        else:\r\n
            alpha_ratio = torch.ones_like(sigma_next)\r\n
            \r\n
            if sigma_down is not None:\r\n
                sigma_up   = (sigma_next ** 2 - sigma_down ** 2) ** .5   # not sure this is correct           
#TODO: CHECK THIS\r\n            elif sigma_up is not None:\r\n                sigma_down = (sigma_next ** 2 - sigma_up   ** 2) ** .5    \r\n        \r\n        return alpha_ratio, sigma_down, sigma_up\r\n\r\n\r\n\r\n    def set_sde_step(self, sigma:Tensor, sigma_next:Tensor, eta:float, overshoot:float, s_noise:float) -> None:\r\n        self.sigma_0    = sigma\r\n        self.sigma_next = sigma_next\r\n        \r\n        self.s_noise    = s_noise\r\n        self.eta        = eta\r\n        self.overshoot  = overshoot\r\n        \r\n        self.sigma_up_eta, self.sigma_eta, self.sigma_down_eta, self.alpha_ratio_eta \\\r\n            = self.get_sde_step(sigma, sigma_next, eta, self.noise_mode_sde, self.DOWN_STEP, SUBSTEP=False)\r\n            \r\n        self.sigma_up, self.sigma, self.sigma_down, self.alpha_ratio \\\r\n            = self.get_sde_step(sigma, sigma_next, overshoot, self.overshoot_mode, self.DOWN_STEP, SUBSTEP=False)\r\n        \r\n        self.h          = self.h_fn(self.sigma_down, self.sigma)\r\n        self.h_no_eta   = self.h_fn(self.sigma_next, self.sigma)\r\n        self.h          = self.h + self.noise_boost_step * (self.h_no_eta - self.h)\r\n        \r\n\r\n        \r\n        \r\n        \r\n    def set_sde_substep(self,\r\n                        row                 : int,\r\n                        multistep_stages    : int,\r\n                        eta_substep         : float,\r\n                        overshoot_substep   : float,\r\n                        s_noise_substep     : float,\r\n                        full_iter           : int = 0,\r\n                        diag_iter           : int = 0,\r\n                        implicit_steps_full : int = 0,\r\n                        implicit_steps_diag : int = 0\r\n                        ) -> None:    \r\n        \r\n        # start with stepsizes for no overshoot/noise addition/noise swapping\r\n        self.sub_sigma_up_eta    = self.sub_sigma_up                          = 0.0\r\n        self.sub_sigma_eta       = self.sub_sigma                             = self.s_[row]\r\n        self.sub_sigma_down_eta  = self.sub_sigma_down  = self.sub_sigma_next = self.s_[row+self.row_offset+multistep_stages]\r\n        self.sub_alpha_ratio_eta = self.sub_alpha_ratio                       = 1.0\r\n        \r\n        self.s_noise_substep     = s_noise_substep\r\n        self.eta_substep         = eta_substep\r\n        self.overshoot_substep   = overshoot_substep\r\n\r\n\r\n        if row < self.rows   and   self.s_[row+self.row_offset+multistep_stages] > 0:\r\n            if   diag_iter > 0 and diag_iter == implicit_steps_diag and self.EO(\"implicit_substep_skip_final_eta\"):\r\n                pass\r\n            elif diag_iter > 0 and                                      self.EO(\"implicit_substep_only_first_eta\"):\r\n                pass\r\n            elif full_iter > 0 and full_iter == implicit_steps_full and self.EO(\"implicit_step_skip_final_eta\"):\r\n                pass\r\n            elif full_iter > 0 and                                      self.EO(\"implicit_step_only_first_eta\"):\r\n                pass\r\n            elif (full_iter > 0 or diag_iter > 0)                   and self.noise_sampler_type2 == \"brownian\":\r\n                pass # brownian noise does not increment its seed when generated, deactivate on implicit repeats to avoid burn\r\n            elif full_iter > 0 and                                      self.EO(\"implicit_step_only_first_all_eta\"):\r\n                
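# collapse the step back onto the deterministic (no-eta) trajectory: zero added noise, unit rescale\r\n
                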
self.sigma_down_eta   = self.sigma_next\r\n                self.sigma_up_eta    *= 0\r\n                self.alpha_ratio_eta /= self.alpha_ratio_eta\r\n                \r\n                self.sigma_down       = self.sigma_next\r\n                self.sigma_up        *= 0\r\n                self.alpha_ratio     /= self.alpha_ratio\r\n                \r\n                self.h_new = self.h = self.h_no_eta\r\n            \r\n            elif (row < self.rows-self.row_offset-multistep_stages   or   diag_iter < implicit_steps_diag)   or   self.EO(\"substep_eta_use_final\"):\r\n                self.sub_sigma_up,     self.sub_sigma,     self.sub_sigma_down,     self.sub_alpha_ratio     = self.get_sde_substep(sigma               = self.s_[row],\r\n                                                                                                                                    sigma_next          = self.s_[row+self.row_offset+multistep_stages],\r\n                                                                                                                                    eta                 = overshoot_substep,\r\n                                                                                                                                    noise_mode_override = self.overshoot_mode_substep,\r\n                                                                                                                                    DOWN                = self.DOWN_SUBSTEP)\r\n                \r\n                self.sub_sigma_up_eta, self.sub_sigma_eta, self.sub_sigma_down_eta, self.sub_alpha_ratio_eta = self.get_sde_substep(sigma               = self.s_[row],\r\n                                                                                                                                    sigma_next          = self.s_[row+self.row_offset+multistep_stages],\r\n                                                                                                                                    eta                 = eta_substep,\r\n                                                                                                                                    noise_mode_override = self.noise_mode_sde_substep,\r\n                                                                                                                                    DOWN                = self.DOWN_SUBSTEP)\r\n\r\n        if self.h_fn(self.sub_sigma_next, self.sigma) != 0:\r\n            self.h_new      = self.h * self.h_fn(self.sub_sigma_down,     self.sigma) / self.h_fn(self.sub_sigma_next, self.sigma) \r\n            self.h_eta      = self.h * self.h_fn(self.sub_sigma_down_eta, self.sigma) / self.h_fn(self.sub_sigma_next, self.sigma) \r\n            self.h_new_orig = self.h_new.clone()\r\n            self.h_new      = self.h_new + self.noise_boost_substep * (self.h - self.h_eta)\r\n        else:\r\n            self.h_new = self.h_eta = self.h\r\n            self.h_new_orig = self.h_new.clone()\r\n        \r\n        \r\n        \r\n\r\n    def get_sde_substep(self,\r\n                        sigma               :Tensor,\r\n                        sigma_next          :Tensor,\r\n                        eta                 :float         = 0.0  ,\r\n                        noise_mode_override :Optional[str] = None ,\r\n                        DOWN                :bool          = False,\r\n                        ) -> Tuple[Tensor,Tensor,Tensor,Tensor]:\r\n        \r\n        return self.get_sde_step(sigma=sigma, 
sigma_next=sigma_next, eta=eta, noise_mode_override=noise_mode_override, DOWN=DOWN, SUBSTEP=True,)\r\n\r\n    def get_sde_step(self,\r\n                        sigma               :Tensor,\r\n                        sigma_next          :Tensor,\r\n                        eta                 :float         = 0.0  ,\r\n                        noise_mode_override :Optional[str] = None ,\r\n                        DOWN                :bool          = False,\r\n                        SUBSTEP             :bool          = False,\r\n                        VP_OVERRIDE                        = None,\r\n                        ) -> Tuple[Tensor,Tensor,Tensor,Tensor]:\r\n        \r\n        VARIANCE_PRESERVING = VP_OVERRIDE if VP_OVERRIDE is not None else self.VARIANCE_PRESERVING\r\n            \r\n        if noise_mode_override is not None:\r\n            noise_mode = noise_mode_override\r\n        elif SUBSTEP:\r\n            noise_mode = self.noise_mode_sde_substep\r\n        else:\r\n            noise_mode = self.noise_mode_sde\r\n        \r\n        if DOWN: #calculates noise level by first scaling sigma_down from sigma_next, instead of sigma_up from sigma_next\r\n            eta_fn = lambda eta_scale: 1-eta_scale\r\n            sud_fn = lambda sd: (sd, None)\r\n        else:\r\n            eta_fn = lambda eta_scale:   eta_scale\r\n            sud_fn = lambda su: (None, su)\r\n        \r\n        su, sd, sud = None, None, None\r\n        eta_ratio   = None\r\n        sigma_base  = sigma_next\r\n        \r\n        sigmax      = self.sigma_max if VP_OVERRIDE is None else 1\r\n        \r\n        match noise_mode:\r\n            case \"hard\":\r\n                eta_ratio = eta\r\n            case \"exp\": \r\n                h = -(sigma_next/sigma).log()\r\n                eta_ratio = (1 - (-2*eta*h).exp())**.5\r\n            case \"soft\":\r\n                eta_ratio = 1-(1 - eta) + eta * ((sigma_next) / sigma)\r\n            case \"softer\":\r\n                eta_ratio = 1-torch.sqrt(1 - (eta**2 * (sigma**2 - sigma_next**2)) / sigma**2)\r\n            case \"soft-linear\":\r\n                eta_ratio = 1-eta * (sigma_next - sigma)\r\n            case \"sinusoidal\":\r\n                eta_ratio = eta * torch.sin(torch.pi * (sigma_next / sigmax)) ** 2\r\n            case \"eps\":\r\n                eta_ratio = eta * torch.sqrt((sigma_next/sigma) ** 2 * (sigma ** 2 - sigma_next ** 2) ) \r\n                \r\n            case \"lorentzian\":\r\n                eta_ratio  = eta\r\n                alpha      = 1 / ((sigma_next.to(sigma.dtype))**2 + 1)\r\n                sigma_base = ((1 - alpha) ** 0.5).to(sigma.dtype)\r\n                \r\n            case \"hard_var\":\r\n                sigma_var = (-1 + torch.sqrt(1 + 4 * sigma)) / 2\r\n                if sigma_next > sigma_var:\r\n                    eta_ratio  = 0\r\n                    sigma_base = sigma_next\r\n                else:\r\n                    eta_ratio  = eta\r\n                    sigma_base = torch.sqrt((sigma - sigma_next).abs() + 1e-10)\r\n            \r\n            case \"hard_sq\":\r\n                sigma_hat = sigma * (1 + eta)\r\n                su        = (sigma_hat ** 2 - sigma ** 2) ** .5    #su\r\n                \r\n                if VARIANCE_PRESERVING:\r\n                    alpha_ratio, sd, su = self.get_sde_coeff(sigma_next, None, su, eta, VARIANCE_PRESERVING)\r\n                else:\r\n                    sd          = sigma_next\r\n                    sigma       = sigma_hat\r\n                    
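# VE hard_sq: treat the inflated sigma_hat as the current noise level and step straight to sigma_next, with no rescaling\r\n
                    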
alpha_ratio = torch.ones_like(sigma)\r\n
                    \r\n
            case "vpsde":\r\n
                # NOTE: get_vpsde_step_RF returns (sigma_up, sigma_down, alpha_ratio)\r\n
                su, sd, alpha_ratio = self.get_vpsde_step_RF(sigma, sigma_next, eta)\r\n
                \r\n
            case "er4":\r\n
                #def noise_scaler(sigma):\r\n
                #    return sigma * ((sigma ** 0.3).exp() + 10.0)\r\n
                noise_scaler = lambda sigma: sigma * ((sigma ** eta).exp() + 10.0)\r\n
                alpha_ratio = noise_scaler(sigma_next) / noise_scaler(sigma)\r\n
                sigma_up    = (sigma_next ** 2 - sigma ** 2 * alpha_ratio ** 2) ** 0.5\r\n
                eta_ratio = sigma_up / sigma_next\r\n
\r\n
                \r\n
        if eta_ratio is not None:\r\n
            sud = sigma_base * eta_fn(eta_ratio)\r\n
            alpha_ratio, sd, su = self.get_sde_coeff(sigma_next, *sud_fn(sud), eta, VARIANCE_PRESERVING)\r\n
        \r\n
        su          = torch.nan_to_num(su,          0.0)\r\n
        sd          = torch.nan_to_num(sd,    float(sigma_next))\r\n
        alpha_ratio = torch.nan_to_num(alpha_ratio, 1.0)\r\n
\r\n
        return su, sigma, sd, alpha_ratio\r\n
    \r\n
    def get_vpsde_step_RF(self, sigma:Tensor, sigma_next:Tensor, eta:float) -> Tuple[Tensor,Tensor,Tensor]:\r\n
        dt          = sigma - sigma_next\r\n
        sigma_up    = eta * sigma * dt**0.5\r\n
        alpha_ratio = 1 - dt * (eta**2/4) * (1 + sigma)\r\n
        sigma_down  = sigma_next - (eta/4)*sigma*(1-sigma)*(sigma - sigma_next)\r\n
        return sigma_up, sigma_down, alpha_ratio\r\n
    \r\n
    def linear_noise_init(self, y:Tensor, sigma_curr:Tensor, x_base:Optional[Tensor]=None, x_curr:Optional[Tensor]=None, mask:Optional[Tensor]=None) -> Tensor: \r\n
\r\n
        y_noised = (self.sigma_max - sigma_curr) * y + sigma_curr * self.init_noise\r\n
\r\n
        if x_curr is not None:\r\n
            x_curr = x_curr + sigma_curr * (self.init_noise - y)\r\n
            x_base = x_base + self.sigma * (self.init_noise - y)\r\n
            return y_noised, x_base, x_curr\r\n
\r\n
        if mask is not None:\r\n
            y_noised = mask * y_noised + (1-mask) * y\r\n
        \r\n
        return y_noised\r\n
\r\n
    def linear_noise_step(self, y:Tensor, sigma_curr:Optional[Tensor]=None, x_base:Optional[Tensor]=None, x_curr:Optional[Tensor]=None, brownian_sigma:Optional[Tensor]=None, brownian_sigma_next:Optional[Tensor]=None, mask:Optional[Tensor]=None) -> Tensor:\r\n
        if self.sigma_up_eta == 0   or   self.sigma_next == 0:\r\n
            return y, x_base, x_curr\r\n
        \r\n
        sigma_curr = self.sub_sigma if sigma_curr is None else sigma_curr\r\n
\r\n
        brownian_sigma      = sigma_curr              if brownian_sigma      is None else brownian_sigma\r\n
        brownian_sigma_next = self.sigma_next.clone() if brownian_sigma_next is None else brownian_sigma_next\r\n
        \r\n
        if brownian_sigma == brownian_sigma_next:\r\n
            brownian_sigma_next *= 0.999\r\n
            \r\n
        if brownian_sigma_next > brownian_sigma and not self.EO("disable_brownian_swap"): # should this really be done?\r\n
            brownian_sigma, brownian_sigma_next = brownian_sigma_next, brownian_sigma\r\n
        \r\n
        noise = self.noise_sampler(sigma=brownian_sigma, sigma_next=brownian_sigma_next)\r\n
        noise = normalize_zscore(noise, channelwise=True, inplace=True)\r\n
\r\n
        y_noised = (self.sigma_max - sigma_curr) * y + sigma_curr * noise\r\n
        \r\n
        if x_curr is not None:\r\n
            x_curr = x_curr + sigma_curr * (noise - y)\r\n
            x_base = 
x_base + self.sigma * (noise - y)\r\n            return y_noised, x_base, x_curr\r\n        \r\n        if mask is not None:\r\n            y_noised = mask * y_noised + (1-mask) * y\r\n        \r\n        return y_noised\r\n\r\n\r\n    def linear_noise_substep(self, y:Tensor, sigma_curr:Optional[Tensor]=None, x_base:Optional[Tensor]=None, x_curr:Optional[Tensor]=None, brownian_sigma:Optional[Tensor]=None, brownian_sigma_next:Optional[Tensor]=None, mask:Optional[Tensor]=None) -> Tensor:\r\n        if self.sub_sigma_up_eta == 0   or   self.sub_sigma_next == 0:\r\n            return y, x_base, x_curr\r\n        \r\n        sigma_curr = self.sub_sigma if sigma_curr is None else sigma_curr\r\n\r\n        brownian_sigma      = sigma_curr                  if brownian_sigma      is None else brownian_sigma\r\n        brownian_sigma_next = self.sub_sigma_next.clone() if brownian_sigma_next is None else brownian_sigma_next\r\n\r\n        if brownian_sigma == brownian_sigma_next:\r\n            brownian_sigma_next *= 0.999\r\n\r\n        if brownian_sigma_next > brownian_sigma and not self.EO(\"disable_brownian_swap\"): # should this really be done?\r\n            brownian_sigma, brownian_sigma_next = brownian_sigma_next, brownian_sigma\r\n        \r\n        noise = self.noise_sampler2(sigma=brownian_sigma, sigma_next=brownian_sigma_next)\r\n        noise = normalize_zscore(noise, channelwise=True, inplace=True)\r\n\r\n        y_noised = (self.sigma_max - sigma_curr) * y + sigma_curr * noise\r\n        \r\n        if x_curr is not None:\r\n            x_curr = x_curr + sigma_curr * (noise - y)\r\n            x_base = x_base + self.sigma * (noise - y)\r\n            return y_noised, x_base, x_curr\r\n        \r\n        if mask is not None:\r\n            y_noised = mask * y_noised + (1-mask) * y\r\n        \r\n        return y_noised\r\n\r\n\r\n    def swap_noise_step(self, x_0:Tensor, x_next:Tensor, brownian_sigma:Optional[Tensor]=None, brownian_sigma_next:Optional[Tensor]=None, mask:Optional[Tensor]=None) -> Tensor:\r\n        if self.sigma_up_eta == 0   or   self.sigma_next == 0:\r\n            return x_next\r\n\r\n        brownian_sigma      = self.sigma.clone()      if brownian_sigma      is None else brownian_sigma\r\n        brownian_sigma_next = self.sigma_next.clone() if brownian_sigma_next is None else brownian_sigma_next\r\n        \r\n        if brownian_sigma == brownian_sigma_next:\r\n            brownian_sigma_next *= 0.999\r\n            \r\n        eps_next      = (x_0 - x_next) / (self.sigma - self.sigma_next)\r\n        denoised_next = x_0 - self.sigma * eps_next\r\n        \r\n        if brownian_sigma_next > brownian_sigma and not self.EO(\"disable_brownian_swap\"): # should this really be done?\r\n            brownian_sigma, brownian_sigma_next = brownian_sigma_next, brownian_sigma\r\n        \r\n        noise = self.noise_sampler(sigma=brownian_sigma, sigma_next=brownian_sigma_next)\r\n        noise = normalize_zscore(noise, channelwise=True, inplace=True)\r\n\r\n        x_noised = self.alpha_ratio_eta * (denoised_next + self.sigma_down_eta * eps_next) + self.sigma_up_eta * noise * self.s_noise\r\n\r\n        if mask is not None:\r\n            x = mask * x_noised + (1-mask) * x_next\r\n        else:\r\n            x = x_noised\r\n        \r\n        return x\r\n\r\n\r\n    def swap_noise_substep(self, x_0:Tensor, x_next:Tensor, brownian_sigma:Optional[Tensor]=None, brownian_sigma_next:Optional[Tensor]=None, mask:Optional[Tensor]=None, guide:Optional[Tensor]=None) -> 
Tensor:\r\n
        if self.sub_sigma_up_eta == 0   or   self.sub_sigma_next == 0:\r\n
            return x_next\r\n
        \r\n
        brownian_sigma      = self.sub_sigma.clone()      if brownian_sigma      is None else brownian_sigma\r\n
        brownian_sigma_next = self.sub_sigma_next.clone() if brownian_sigma_next is None else brownian_sigma_next\r\n
\r\n
        if brownian_sigma == brownian_sigma_next:\r\n
            brownian_sigma_next *= 0.999\r\n
            \r\n
        eps_next      = (x_0 - x_next) / (self.sigma - self.sub_sigma_next)\r\n
        denoised_next = x_0 - self.sigma * eps_next\r\n
        \r\n
        if brownian_sigma_next > brownian_sigma and not self.EO("disable_brownian_swap"): # should this really be done?\r\n
            brownian_sigma, brownian_sigma_next = brownian_sigma_next, brownian_sigma\r\n
        \r\n
        noise = self.noise_sampler2(sigma=brownian_sigma, sigma_next=brownian_sigma_next)\r\n
        noise = normalize_zscore(noise, channelwise=True, inplace=True)\r\n
\r\n
        x_noised = self.sub_alpha_ratio_eta * (denoised_next + self.sub_sigma_down_eta * eps_next) + self.sub_sigma_up_eta * noise * self.s_noise_substep\r\n
\r\n
        if mask is not None:\r\n
            x = mask * x_noised + (1-mask) * x_next\r\n
        else:\r\n
            x = x_noised\r\n
        \r\n
        return x\r\n
\r\n
\r\n
\r\n
\r\n
    def swap_noise_inv_substep(self, x_0:Tensor, x_next:Tensor, eta_substep:float, row:int, row_offset_multistep_stages:int, brownian_sigma:Optional[Tensor]=None, brownian_sigma_next:Optional[Tensor]=None, mask:Optional[Tensor]=None, guide:Optional[Tensor]=None) -> Tensor:\r\n
        if self.sub_sigma_up_eta == 0   or   self.sub_sigma_next == 0:\r\n
            return x_next\r\n
        \r\n
        brownian_sigma      = self.sub_sigma.clone()      if brownian_sigma      is None else brownian_sigma\r\n
        brownian_sigma_next = self.sub_sigma_next.clone() if brownian_sigma_next is None else brownian_sigma_next\r\n
\r\n
        if brownian_sigma == brownian_sigma_next:\r\n
            brownian_sigma_next *= 0.999\r\n
            \r\n
        eps_next      = (x_0 - x_next) / ((1-self.sigma) - (1-self.sub_sigma_next))\r\n
        denoised_next = x_0 - (1-self.sigma) * eps_next\r\n
        \r\n
        if brownian_sigma_next > brownian_sigma and not self.EO("disable_brownian_swap"): # should this really be done?\r\n
            brownian_sigma, brownian_sigma_next = brownian_sigma_next, brownian_sigma\r\n
        \r\n
        noise = self.noise_sampler2(sigma=brownian_sigma, sigma_next=brownian_sigma_next)\r\n
        noise = normalize_zscore(noise, channelwise=True, inplace=True)\r\n
        \r\n
        sub_sigma_up, sub_sigma, sub_sigma_down, sub_alpha_ratio = self.get_sde_substep(sigma               = 1-self.s_[row],\r\n
                                                                                        sigma_next          = 1-self.s_[row_offset_multistep_stages],\r\n
                                                                                        eta                 = eta_substep,\r\n
                                                                                        noise_mode_override = self.noise_mode_sde_substep,\r\n
                                                                                        DOWN                = self.DOWN_SUBSTEP)\r\n
\r\n
        x_noised = sub_alpha_ratio * 
(denoised_next + sub_sigma_down * eps_next) + sub_sigma_up * noise * self.s_noise_substep\r\n\r\n        if mask is not None:\r\n            x = mask * x_noised + (1-mask) * x_next\r\n        else:\r\n            x = x_noised\r\n        \r\n        return x\r\n\r\n\r\n    def swap_noise(self,\r\n                    x_0                 :Tensor,\r\n                    x_next              :Tensor,\r\n                    sigma_0             :Tensor,\r\n                    sigma               :Tensor,\r\n                    sigma_next          :Tensor,\r\n                    sigma_down          :Tensor,\r\n                    sigma_up            :Tensor,\r\n                    alpha_ratio         :Tensor,\r\n                    s_noise             :float,\r\n                    SUBSTEP             :bool             = False,\r\n                    brownian_sigma      :Optional[Tensor] = None,\r\n                    brownian_sigma_next :Optional[Tensor] = None,\r\n                    ) -> Tensor:\r\n        \r\n        if sigma_up == 0:\r\n            return x_next\r\n        \r\n        if brownian_sigma is None:\r\n            brownian_sigma = sigma.clone()\r\n        if brownian_sigma_next is None:\r\n            brownian_sigma_next = sigma_next.clone()\r\n        if sigma_next == 0:\r\n            return x_next\r\n        if brownian_sigma == brownian_sigma_next:\r\n            brownian_sigma_next *= 0.999\r\n        eps_next      = (x_0 - x_next) / (sigma_0 - sigma_next)\r\n        denoised_next = x_0 - sigma_0 * eps_next\r\n        \r\n        if brownian_sigma_next > brownian_sigma:\r\n            s_tmp               = brownian_sigma\r\n            brownian_sigma      = brownian_sigma_next\r\n            brownian_sigma_next = s_tmp\r\n        \r\n        if not SUBSTEP:\r\n            noise = self.noise_sampler(sigma=brownian_sigma, sigma_next=brownian_sigma_next)\r\n        else:\r\n            noise = self.noise_sampler2(sigma=brownian_sigma, sigma_next=brownian_sigma_next)\r\n            \r\n        noise = normalize_zscore(noise, channelwise=True, inplace=True)\r\n\r\n        x = alpha_ratio * (denoised_next + sigma_down * eps_next) + sigma_up * noise * s_noise\r\n        return x\r\n\r\n    # not used. 
WARNING: some parameters have a different order than swap_noise!\r\n
    def add_noise_pre(self,\r\n
                        x_0                :Tensor,\r\n
                        x                  :Tensor,\r\n
                        sigma_up           :Tensor,\r\n
                        sigma_0            :Tensor,\r\n
                        sigma              :Tensor,\r\n
                        sigma_next         :Tensor,\r\n
                        real_sigma_down    :Tensor,\r\n
                        alpha_ratio        :Tensor,\r\n
                        s_noise            :float,\r\n
                        noise_mode         :str,\r\n
                        SDE_NOISE_EXTERNAL :bool             = False,\r\n
                        sde_noise_t        :Optional[Tensor] = None,\r\n
                        SUBSTEP            :bool             = False,\r\n
                        ) -> Tensor:\r\n
        \r\n
        if not self.CONST and noise_mode == "hard_sq": \r\n
            if self.LOCK_H_SCALE:\r\n
                x = self.swap_noise(x_0             = x_0,\r\n
                                    x_next          = x,\r\n
                                    sigma           = sigma,\r\n
                                    sigma_0         = sigma_0,\r\n
                                    sigma_next      = sigma_next,\r\n
                                    sigma_down      = real_sigma_down,\r\n
                                    sigma_up        = sigma_up,\r\n
                                    alpha_ratio     = alpha_ratio,\r\n
                                    s_noise         = s_noise,\r\n
                                    SUBSTEP         = SUBSTEP,\r\n
                                    )\r\n
            else:\r\n
                x = self.add_noise( x                  = x,\r\n
                                    sigma_up           = sigma_up,\r\n
                                    sigma              = sigma,\r\n
                                    sigma_next         = sigma_next,\r\n
                                    alpha_ratio        = alpha_ratio,\r\n
                                    s_noise            = s_noise,\r\n
                                    SDE_NOISE_EXTERNAL = SDE_NOISE_EXTERNAL,\r\n
                                    sde_noise_t        = sde_noise_t,\r\n
                                    SUBSTEP            = SUBSTEP,\r\n
                                    )\r\n
                \r\n
        return x\r\n
        \r\n
    # only used for handle_tiled_etc_noise_steps() in rk_guide_func_beta.py\r\n
    def add_noise_post(self,\r\n
                        x_0                :Tensor,\r\n
                        x                  :Tensor,\r\n
                        sigma_up           :Tensor,\r\n
                        sigma_0            :Tensor,\r\n
                        sigma              :Tensor,\r\n
                        sigma_next         :Tensor,\r\n
                        real_sigma_down    :Tensor,\r\n
                        alpha_ratio        :Tensor,\r\n
                        s_noise            :float,\r\n
                        noise_mode         :str,\r\n
                        SDE_NOISE_EXTERNAL :bool             = False,\r\n
                        sde_noise_t        :Optional[Tensor] = None,\r\n
                        SUBSTEP            :bool             = False,\r\n
                        ) -> Tensor:\r\n
        \r\n
        if self.CONST   or   (not self.CONST and noise_mode != "hard_sq"):\r\n
            if self.LOCK_H_SCALE:\r\n
                
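# LOCK_H_SCALE: swap noise by rebuilding x from its (denoised, eps) decomposition rather than stacking noise on top\r\n
                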
         = x_0,\r\n                                    x               = x,\r\n                                    sigma           = sigma,\r\n                                    sigma_0         = sigma_0,\r\n                                    sigma_next      = sigma_next,\r\n                                    real_sigma_down = real_sigma_down,\r\n                                    sigma_up        = sigma_up,\r\n                                    alpha_ratio     = alpha_ratio,\r\n                                    s_noise         = s_noise,\r\n                                    SUBSTEP         = SUBSTEP,\r\n                                    )\r\n            else:\r\n                x = self.add_noise( x                  = x,\r\n                                    sigma_up           = sigma_up,\r\n                                    sigma              = sigma,\r\n                                    sigma_next         = sigma_next,\r\n                                    alpha_ratio        = alpha_ratio,\r\n                                    s_noise            = s_noise,\r\n                                    SDE_NOISE_EXTERNAL = SDE_NOISE_EXTERNAL,\r\n                                    sde_noise_t        = sde_noise_t,\r\n                                    SUBSTEP            = SUBSTEP,\r\n                                    )\r\n        return x\r\n\r\n    def add_noise(self,\r\n                    x                 :Tensor,\r\n                    sigma_up          :Tensor,\r\n                    sigma             :Tensor,\r\n                    sigma_next        :Tensor,\r\n                    alpha_ratio       :Tensor,\r\n                    s_noise           :float,\r\n                    SDE_NOISE_EXTERNAL :bool             = False,\r\n                    sde_noise_t        :Optional[Tensor] = None,\r\n                    SUBSTEP            :bool             = False,\r\n                    ) -> Tensor:\r\n\r\n        if sigma_next > 0.0 and sigma_up > 0.0:\r\n            if sigma_next > sigma:\r\n                sigma, sigma_next = sigma_next, sigma\r\n            \r\n            if sigma == sigma_next:\r\n                sigma_next = sigma * 0.9999\r\n            if not SUBSTEP:\r\n                noise = self.noise_sampler (sigma=sigma, sigma_next=sigma_next)\r\n            else:\r\n                noise = self.noise_sampler2(sigma=sigma, sigma_next=sigma_next)\r\n\r\n            #noise_ortho = get_orthogonal(noise, x)\r\n            #noise_ortho = noise_ortho / noise_ortho.std()\r\n            noise = normalize_zscore(noise, channelwise=True, inplace=True)\r\n\r\n            if SDE_NOISE_EXTERNAL:\r\n                noise = (1-s_noise) * noise + s_noise * sde_noise_t\r\n            \r\n            x_next = alpha_ratio * x + noise * sigma_up * s_noise\r\n            \r\n            return x_next\r\n        \r\n        else:\r\n            return x\r\n    \r\n    def sigma_from_to(self,                           \r\n                        x_0        : Tensor,\r\n                        x_down     : Tensor,\r\n                        sigma      : Tensor,\r\n                        sigma_down : Tensor,\r\n                        sigma_next : Tensor) -> Tensor:   #sigma, sigma_from, sigma_to\r\n        \r\n        eps      = (x_0 - x_down) / (sigma - sigma_down)\r\n        denoised =  x_0 - sigma * eps\r\n        x_next   = denoised + sigma_next * eps              # VESDE vs VPSDE equiv.?\r\n        return x_next\r\n\r\n    def rebound_overshoot_step(self, x_0:Tensor, 
x:Tensor) -> Tensor:\r\n        eps      = (x_0 - x) / (self.sigma - self.sigma_down)\r\n        denoised =  x_0 - self.sigma * eps\r\n        x        = denoised + self.sigma_next * eps\r\n        return x\r\n    \r\n    def rebound_overshoot_substep(self, x_0:Tensor, x:Tensor) -> Tensor:\r\n        if self.sigma - self.sub_sigma_down > 0:\r\n            sub_eps      = (x_0 - x) / (self.sigma - self.sub_sigma_down)\r\n            sub_denoised =  x_0 - self.sigma * sub_eps\r\n            x            = sub_denoised + self.sub_sigma_next * sub_eps\r\n        return x\r\n\r\n    def prepare_sigmas(self,\r\n                        sigmas             : Tensor,\r\n                        sigmas_override    : Tensor,\r\n                        d_noise            : float,\r\n                        d_noise_start_step : int,\r\n                        sampler_mode       : str) -> Tuple[Tensor,bool]:\r\n        #SIGMA_MIN = torch.full_like(self.sigma_min, 0.00227896) if self.sigma_min < 0.00227896 else self.sigma_min        # prevent black image with unsampling flux, which has a sigma_min of 0.0002\r\n        SIGMA_MIN = self.sigma_min #torch.full_like(self.sigma_min, max(0.01, self.sigma_min.item()))\r\n        if sigmas_override is not None:\r\n            sigmas = sigmas_override.clone().to(sigmas.device).to(sigmas.dtype)\r\n            \r\n        if d_noise_start_step == 0:\r\n            sigmas = sigmas.clone() * d_noise\r\n        \r\n        UNSAMPLE_FROM_ZERO = False\r\n        if sigmas[0] == 0.0:      #remove padding used to prevent comfy from adding noise to the latent (for unsampling, etc.)\r\n            UNSAMPLE = True\r\n            if sigmas[-1] == 0.0:\r\n                UNSAMPLE_FROM_ZERO = True\r\n            #sigmas   = sigmas[1:-1]   # was cleaving off 1.0 at the end when restart looping\r\n            sigmas   = sigmas[1:]\r\n            if sigmas[-1] == 0.0:\r\n                sigmas = sigmas[:-1]\r\n        else:\r\n            UNSAMPLE = False\r\n        \r\n        if hasattr(self.model, \"sigmas\"):\r\n            self.model.sigmas = sigmas\r\n            \r\n        if sampler_mode == \"standard\":\r\n            UNSAMPLE = False\r\n        \r\n        consecutive_duplicate_mask = torch.cat((torch.tensor([True], device=sigmas.device), torch.diff(sigmas) != 0))\r\n        sigmas = sigmas[consecutive_duplicate_mask]\r\n                \r\n        if sigmas[-1] == 0:\r\n            if sigmas[-2] < SIGMA_MIN:\r\n                sigmas[-2] = SIGMA_MIN\r\n            elif (sigmas[-2] - SIGMA_MIN).abs() > 1e-4:\r\n                sigmas = torch.cat((sigmas[:-1], SIGMA_MIN.unsqueeze(0), sigmas[-1:]))\r\n                \r\n        elif UNSAMPLE_FROM_ZERO and not torch.isclose(sigmas[0], SIGMA_MIN):\r\n            sigmas = torch.cat([SIGMA_MIN.unsqueeze(0), sigmas])\r\n        \r\n        self.sigmas       = sigmas\r\n        self.UNSAMPLE     = UNSAMPLE\r\n        self.d_noise      = d_noise\r\n        self.sampler_mode = sampler_mode\r\n        \r\n        return sigmas, UNSAMPLE\r\n    \r\n    \r\n    \r\n\r\n\r\n\r\ndef extract_latent_swap_noise(self, x:Tensor, x_noise_swapped:Tensor, sigma:Tensor, old_noise:Tensor) -> Tensor:\r\n    return (x - x_noise_swapped) / sigma + old_noise\r\n\r\ndef update_latent_swap_noise(self, x:Tensor, sigma:Tensor, old_noise:Tensor, new_noise:Tensor) -> Tensor:\r\n    return x + sigma * (new_noise - old_noise)\r\n\r\n\r\n\r\n\r\n"
  },
  {
    "path": "beta/rk_sampler_beta.py",
    "content": "import torch\r\nfrom torch import Tensor\r\nimport torch.nn.functional as F\r\nfrom tqdm.auto import trange\r\nimport gc\r\nfrom typing import Optional, Callable, Tuple, List, Dict, Any, Union\r\nimport math\r\nimport copy\r\n\r\nfrom comfy.model_sampling import EPS\r\nimport comfy\r\n\r\nfrom ..res4lyf              import RESplain\r\nfrom ..helper               import ExtraOptions, FrameWeightsManager\r\nfrom ..latents              import lagrange_interpolation, get_collinear, get_orthogonal, get_cosine_similarity, get_pearson_similarity, get_slerp_weight_for_cossim, get_slerp_ratio, slerp_tensor, get_edge_mask, normalize_zscore, compute_slerp_ratio_for_target, find_slerp_ratio_grid\r\nfrom ..style_transfer       import apply_scattersort_spatial, apply_adain_spatial\r\n\r\nfrom .rk_method_beta        import RK_Method_Beta\r\nfrom .rk_noise_sampler_beta import RK_NoiseSampler\r\nfrom .rk_guide_func_beta    import LatentGuide\r\nfrom .phi_functions         import Phi\r\nfrom .constants             import MAX_STEPS, GUIDE_MODE_NAMES_PSEUDOIMPLICIT\r\n\r\ndef init_implicit_sampling(\r\n        RK             : RK_Method_Beta,\r\n        x_0            : Tensor,\r\n        x_             : Tensor,\r\n        eps_           : Tensor,\r\n        eps_prev_      : Tensor,\r\n        data_          : Tensor,\r\n        eps            : Tensor,\r\n        denoised       : Tensor,\r\n        denoised_prev2 : Tensor,\r\n        step           : int,\r\n        sigmas         : Tensor,\r\n        h              : Tensor,\r\n        s_             : Tensor,\r\n        EO             : ExtraOptions,\r\n        SYNC_GUIDE_ACTIVE,\r\n        ):\r\n    \r\n    sigma = sigmas[step]\r\n    if EO(\"implicit_skip_model_call_at_start\") and denoised.sum() + eps.sum() != 0:\r\n        if denoised_prev2.sum() == 0:\r\n            eps_ [0] = eps.clone()\r\n            data_[0] = denoised.clone()\r\n            eps_ [0] = RK.get_epsilon_anchored(x_0, denoised, sigma)\r\n        else:\r\n            sratio = sigma - s_[0]\r\n            data_[0] = denoised + sratio * (denoised - denoised_prev2)\r\n            \r\n    elif EO(\"implicit_full_skip_model_call_at_start\") and denoised.sum() + eps.sum() != 0:\r\n        if denoised_prev2.sum() == 0:\r\n            eps_ [0] = eps.clone()\r\n            data_[0] = denoised.clone()\r\n            eps_ [0] = RK.get_epsilon_anchored(x_0, denoised, sigma)\r\n        else:\r\n            for r in range(RK.rows):\r\n                sratio = sigma - s_[r]\r\n                data_[r] = denoised + sratio * (denoised - denoised_prev2)\r\n                eps_ [r] = RK.get_epsilon_anchored(x_0, data_[r], s_[r])\r\n\r\n    elif EO(\"implicit_lagrange_skip_model_call_at_start\") and denoised.sum() + eps.sum() != 0:\r\n        if denoised_prev2.sum() == 0:\r\n            eps_ [0] = eps.clone()\r\n            data_[0] = denoised.clone()\r\n            eps_ [0] = RK.get_epsilon_anchored(x_0, denoised, sigma)   \r\n        else:\r\n            sigma_prev    = sigmas[step-1]\r\n            h_prev        = sigma - sigma_prev\r\n            w             = h / h_prev\r\n            substeps_prev = len(RK.C[:-1])\r\n            \r\n            for r in range(RK.rows):\r\n                sratio = sigma - s_[r]\r\n                data_[r] = lagrange_interpolation([0,1], [denoised_prev2, denoised], 1 + w*RK.C[r]).squeeze(0) + denoised_prev2 - denoised\r\n                eps_ [r] = RK.get_epsilon_anchored(x_0, data_[r], s_[r])      \r\n                \r\n            if 
EO(\"implicit_lagrange_skip_model_call_at_start_0_only\"):\r\n                for r in range(RK.rows):\r\n                    eps_ [r] = eps_ [0].clone() * s_[0] / s_[r]\r\n                    data_[r] = denoised.clone()\r\n\r\n\r\n    elif EO(\"implicit_lagrange_init\") and denoised.sum() + eps.sum() != 0:\r\n        sigma_prev    = sigmas[step-1]\r\n        h_prev        = sigma - sigma_prev\r\n        w             = h / h_prev\r\n        substeps_prev = len(RK.C[:-1])\r\n\r\n        z_prev_ = eps_.clone()\r\n        for r in range (substeps_prev):\r\n            z_prev_[r] = h * RK.zum(r, eps_) # u,v not implemented for lagrange guess for implicit\r\n        zi_1  = lagrange_interpolation(RK.C[:-1], z_prev_[:substeps_prev], RK.C[0]).squeeze(0) # + x_prev - x_0\"\"\"\r\n        x_[0] = x_0 + zi_1\r\n        \r\n    else:\r\n        \r\n        eps_[0], data_[0] = RK(x_[0], sigma, x_0, sigma)\r\n\r\n    if not EO((\"implicit_lagrange_init\", \"radaucycle\", \"implicit_full_skip_model_call_at_start\", \"implicit_lagrange_skip_model_call_at_start\")):\r\n        for r in range(RK.rows):\r\n            eps_ [r] = eps_ [0].clone() * sigma / s_[r]\r\n            data_[r] = data_[0].clone()\r\n\r\n    x_, eps_ = RK.newton_iter(x_0, x_, eps_, eps_prev_, data_, s_, 0, h, sigmas, step, \"init\", SYNC_GUIDE_ACTIVE)\r\n    return x_, eps_, data_\r\n\r\n\r\n@torch.no_grad()\r\ndef sample_rk_beta(\r\n        model,\r\n        x                             : Tensor,\r\n        sigmas                        : Tensor,\r\n        sigmas_override               : Optional[Tensor]   = None,\r\n        \r\n        extra_args                    : Optional[Tensor]   = None,\r\n        callback                      : Optional[Callable] = None,\r\n        disable                       : bool               = None,\r\n        \r\n        sampler_mode                  : str                = \"standard\",\r\n\r\n        rk_type                       : str                = \"res_2m\",\r\n        implicit_sampler_name         : str                = \"use_explicit\",\r\n\r\n        c1                            : float              =  0.0,\r\n        c2                            : float              =  0.5,\r\n        c3                            : float              =  1.0,\r\n\r\n        noise_sampler_type            : str                = \"gaussian\",\r\n        noise_sampler_type_substep    : str                = \"gaussian\",\r\n        noise_mode_sde                : str                = \"hard\",\r\n        noise_mode_sde_substep        : str                = \"hard\",\r\n\r\n        eta                           : float              =  0.5,\r\n        eta_substep                   : float              =  0.5,\r\n\r\n\r\n\r\n\r\n        noise_scaling_weight          : float              = 0.0,\r\n        noise_scaling_type            : str                = \"sampler\",\r\n        noise_scaling_mode            : str                = \"linear\",\r\n        noise_scaling_eta             : float              = 0.0,\r\n        noise_scaling_cycles          : int                = 1,\r\n        \r\n        noise_scaling_weights         : Optional[Tensor]   = None,\r\n        noise_scaling_etas            : Optional[Tensor]   = None,\r\n        \r\n        noise_boost_step              : float              = 0.0,\r\n        noise_boost_substep           : float              = 0.0,\r\n        noise_boost_normalize         : bool               = True,\r\n        noise_anchor                  : float              = 
1.0,\r\n        \r\n        s_noise                       : float              = 1.0,\r\n        s_noise_substep               : float              = 1.0,\r\n        d_noise                       : float              = 1.0,\r\n        d_noise_start_step            : int                = 0,\r\n        d_noise_inv                   : float              = 1.0,\r\n        d_noise_inv_start_step        : int                = 0,\r\n        \r\n        \r\n        \r\n        alpha                         : float              = -1.0,\r\n        alpha_substep                 : float              = -1.0,\r\n        k                             : float              =  1.0,\r\n        k_substep                     : float              =  1.0,\r\n        \r\n        momentum                      : float              =  0.0,\r\n\r\n\r\n        overshoot_mode                : str                = \"hard\",\r\n        overshoot_mode_substep        : str                = \"hard\",\r\n        overshoot                     : float              =  0.0,\r\n        overshoot_substep             : float              =  0.0,\r\n\r\n        implicit_type                 : str                = \"predictor-corrector\",\r\n        implicit_type_substeps        : str                = \"predictor-corrector\",\r\n        \r\n        implicit_steps_diag           : int                =  0,\r\n        implicit_steps_full           : int                =  0,\r\n\r\n        etas                          : Optional[Tensor]   = None,\r\n        etas_substep                  : Optional[Tensor]   = None,\r\n        s_noises                      : Optional[Tensor]   = None,\r\n        s_noises_substep              : Optional[Tensor]   = None,\r\n        \r\n        momentums                     : Optional[Tensor]   = None,\r\n\r\n        regional_conditioning_weights : Optional[Tensor]   = None,\r\n        regional_conditioning_floors  : Optional[Tensor]   = None,\r\n        narcissism_start_step         : int                = 0,\r\n        narcissism_end_step           : int                = 5,\r\n                \r\n        LGW_MASK_RESCALE_MIN          : bool                          = True,\r\n        guides                        : Optional[Tuple[Any, ...]]     = None,\r\n        epsilon_scales                : Optional[Tensor]              = None,\r\n        frame_weights_mgr             : Optional[FrameWeightsManager] = None,\r\n\r\n        sde_noise                     : list    [Tensor]   = [],\r\n\r\n        noise_seed                    : int                = -1,\r\n        noise_initial                 : Optional[Tensor]   = None,\r\n        image_initial                 : Optional[Tensor]   = None,\r\n\r\n        cfgpp                         : float              = 0.0,\r\n        cfg_cw                        : float              = 1.0,\r\n\r\n        BONGMATH                      : bool               = True,\r\n        unsample_bongmath             = None,\r\n\r\n        state_info                    : Optional[dict[str, Any]] = None,\r\n        state_info_out                : Optional[dict[str, Any]] = None,\r\n        \r\n        rk_swap_type                  : str                = \"\",\r\n        rk_swap_step                  : int                = MAX_STEPS,\r\n        rk_swap_threshold             : float              = 0.0,\r\n        rk_swap_print                 : bool               = False,\r\n        \r\n        steps_to_run                  : int                = -1,\r\n        start_at_step          
       : int                = -1,\r\n        tile_sizes                    : Optional[List[Tuple[int,int]]] = None,\r\n        \r\n        flow_sync_eps                 : float              = 0.0,\r\n        \r\n        sde_mask                      : Optional[Tensor]   = None,\r\n        \r\n        batch_num                     : int                = 0,\r\n        \r\n        extra_options                 : str                = \"\",\r\n        \r\n        AttnMask   = None,\r\n        RegContext = None,\r\n        RegParam   = None,\r\n        \r\n        AttnMask_neg   = None,\r\n        RegContext_neg = None,\r\n        RegParam_neg   = None,\r\n        ):\r\n    \r\n    if sampler_mode == \"NULL\":\r\n        return x\r\n    \r\n    EO             = ExtraOptions(extra_options)\r\n    default_dtype  = EO(\"default_dtype\", torch.float64)\r\n    \r\n    extra_args     = {} if extra_args     is None else extra_args\r\n    model_device   = model.inner_model.inner_model.device #x.device\r\n    work_device    = 'cpu' if EO(\"work_device_cpu\") else model_device\r\n\r\n    state_info     = {} if state_info     is None else state_info\r\n    state_info_out = {} if state_info_out is None else state_info_out\r\n    \r\n    VE_MODEL = isinstance(model.inner_model.inner_model.model_sampling, EPS)\r\n    \r\n    RENOISE = False\r\n    if 'raw_x' in state_info and sampler_mode in {\"resample\", \"unsample\"}:\r\n        if x.shape == state_info['raw_x'].shape:\r\n            x = state_info['raw_x'].to(work_device) #clone()\r\n        else:\r\n            denoised = comfy.utils.bislerp(state_info['denoised'], x.shape[-1], x.shape[-2])\r\n            x = denoised.to(x)\r\n            RENOISE = True\r\n        RESplain(\"Continuing from raw latent from previous sampler.\", debug=False)\r\n    \r\n    \r\n    \r\n    start_step = 0\r\n    if 'end_step' in state_info and (sampler_mode == \"resample\" or sampler_mode == \"unsample\"):\r\n\r\n        if state_info['completed'] != True and state_info['end_step'] != 0 and state_info['end_step'] != -1 and state_info['end_step'] < len(state_info['sigmas'])-1 :   #incomplete run in previous sampler node\r\n            \r\n            if state_info['sampler_mode'] in {\"standard\",\"resample\"} and sampler_mode == \"unsample\" and sigmas[2] < sigmas[1]:\r\n                sigmas = torch.flip(state_info['sigmas'], dims=[0])\r\n                start_step = (len(sigmas)-1) - (state_info['end_step']) #-1) #removed -1 at the end here. 
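mirrored index = (len(sigmas)-1) - end_step; is this 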
correct?\r\n                \r\n            if state_info['sampler_mode'] == \"unsample\"              and sampler_mode == \"resample\" and sigmas[2] > sigmas[1]:\r\n                sigmas = torch.flip(state_info['sigmas'], dims=[0])\r\n                start_step = (len(sigmas)-1) - state_info['end_step'] #-1)\r\n        elif state_info['sampler_mode'] == \"unsample\" and sampler_mode == \"resample\":\r\n            start_step = 0\r\n        \r\n        if state_info['sampler_mode'] in {\"standard\", \"resample\"}    and sampler_mode == \"resample\":\r\n            start_step = state_info['end_step'] if state_info['end_step'] != -1 else 0\r\n            if start_step > 0:\r\n                sigmas = state_info['sigmas'].clone()\r\n\r\n            \r\n            \r\n    if sde_mask is not None:\r\n        from .rk_guide_func_beta import prepare_mask\r\n        sde_mask, _ = prepare_mask(x, sde_mask, LGW_MASK_RESCALE_MIN)\r\n        sde_mask = sde_mask.to(x.device).to(x.dtype)\r\n    \r\n\r\n\r\n    x      = x     .to(dtype=default_dtype, device=work_device)\r\n    sigmas = sigmas.to(dtype=default_dtype, device=work_device)\r\n    \r\n    c1                          = EO(\"c1\"                         , c1)\r\n    c2                          = EO(\"c2\"                         , c2)\r\n    c3                          = EO(\"c3\"                         , c3)\r\n    \r\n    cfg_cw                      = EO(\"cfg_cw\"                     , cfg_cw)\r\n    \r\n    noise_seed                  = EO(\"noise_seed\"                 , noise_seed)\r\n    noise_seed_substep          = EO(\"noise_seed_substep\"         , noise_seed + MAX_STEPS)\r\n    \r\n    pseudoimplicit_row_weights  = EO(\"pseudoimplicit_row_weights\" , [1. for _ in range(100)])\r\n    pseudoimplicit_step_weights = EO(\"pseudoimplicit_step_weights\", [1. 
for _ in range(max(implicit_steps_diag, implicit_steps_full)+1)])\r\n\r\n    noise_scaling_cycles = EO(\"noise_scaling_cycles\", noise_scaling_cycles)\r\n    noise_boost_step     = EO(\"noise_boost_step\",     noise_boost_step)\r\n    noise_boost_substep  = EO(\"noise_boost_substep\",  noise_boost_substep)\r\n    \r\n    # SETUP SAMPLER\r\n    if implicit_sampler_name not in (\"use_explicit\", \"none\"):\r\n        rk_type = implicit_sampler_name\r\n    RESplain(\"rk_type:\", rk_type)\r\n    if implicit_sampler_name == \"none\":\r\n        implicit_steps_diag = implicit_steps_full = 0\r\n        \r\n    RK            = RK_Method_Beta.create(model, rk_type, VE_MODEL, noise_anchor, noise_boost_normalize, model_device=model_device, work_device=work_device, dtype=default_dtype, extra_options=extra_options)\r\n    RK.extra_args = RK.init_cfg_channelwise(x, cfg_cw, **extra_args)\r\n    RK.tile_sizes = tile_sizes\r\n    RK.extra_args['model_options']['transformer_options']['regional_conditioning_weight'] = 0.0\r\n    RK.extra_args['model_options']['transformer_options']['regional_conditioning_floor']  = 0.0\r\n    \r\n    RK.unsample_bongmath = BONGMATH if unsample_bongmath is None else unsample_bongmath # allow turning off bongmath for unsampling with cycles\r\n\r\n\r\n    # SETUP SIGMAS\r\n    sigmas_orig = sigmas.clone()\r\n    NS               = RK_NoiseSampler(RK, model, device=work_device, dtype=default_dtype, extra_options=extra_options)\r\n    sigmas, UNSAMPLE = NS.prepare_sigmas(sigmas, sigmas_override, d_noise, d_noise_start_step, sampler_mode)\r\n    if UNSAMPLE and sigmas_orig[0] == 0.0 and sigmas_orig[0] != sigmas[0] and sigmas[1] < sigmas[2]:\r\n        sigmas = torch.cat([torch.full_like(sigmas[0], 0.0).unsqueeze(0), sigmas])\r\n        if start_step == 0:\r\n            start_step  = 1\r\n        else:\r\n            start_step -= 1\r\n    \r\n    if sampler_mode in {\"resample\", \"unsample\"}:\r\n        state_info_sigma_next = state_info.get('sigma_next', -1)\r\n        state_info_start_step = (sigmas == state_info_sigma_next).nonzero().flatten()\r\n        if state_info_start_step.shape[0] > 0:\r\n            start_step = state_info_start_step.item()\r\n            \r\n    start_step = start_at_step if start_at_step >= 0 else start_step\r\n\r\n    \r\n    SDE_NOISE_EXTERNAL = False\r\n    if sde_noise is not None:\r\n        if len(sde_noise) > 0 and sigmas[1] > sigmas[2]:\r\n            SDE_NOISE_EXTERNAL = True\r\n            sigma_up_total = torch.zeros_like(sigmas[0])\r\n            for i in range(len(sde_noise)-1):\r\n                sigma_up_total += sigmas[i+1]\r\n            etas = torch.full_like(sigmas, eta / sigma_up_total)\r\n    \r\n    if 'last_rng' in state_info and sampler_mode in {\"resample\", \"unsample\"}:\r\n        last_rng         = state_info['last_rng'].clone()\r\n        last_rng_substep = state_info['last_rng_substep'].clone()\r\n    else:\r\n        last_rng         = None\r\n        last_rng_substep = None\r\n    \r\n    NS.init_noise_samplers(x, noise_seed, noise_seed_substep, noise_sampler_type, noise_sampler_type_substep, noise_mode_sde, noise_mode_sde_substep, \\\r\n                            overshoot_mode, overshoot_mode_substep, noise_boost_step, noise_boost_substep, alpha, alpha_substep, k, k_substep, \\\r\n                            last_rng=last_rng, last_rng_substep=last_rng_substep,)\r\n\r\n    data_               = None\r\n    eps_                = None\r\n    eps                 = torch.zeros_like(x, dtype=default_dtype, device=work_device)\r\n    denoised            = 
torch.zeros_like(x, dtype=default_dtype, device=work_device)\r\n    denoised_prev       = torch.zeros_like(x, dtype=default_dtype, device=work_device)\r\n    denoised_prev2      = torch.zeros_like(x, dtype=default_dtype, device=work_device)\r\n    x_                  = None\r\n    eps_prev_           = None\r\n    denoised_data_prev  = None\r\n    denoised_data_prev2 = None\r\n    h_prev              = None\r\n    eps_y2x_            = None\r\n    eps_x2y_            = None\r\n    eps_y_              = None\r\n    eps_prev_y_         = None\r\n    data_y_             = None\r\n    yt_                 = None\r\n    yt_0                = None\r\n    eps_yt_             = None\r\n    eps_x_              = None \r\n    data_y_             = None\r\n    data_x_             = None\r\n    z_                  = None # for tracking residual noise for model scattersort/synchronized diffusion\r\n    \r\n    y0_bongflow         = state_info.get('y0_bongflow')\r\n    y0_bongflow_orig    = state_info.get('y0_bongflow_orig')\r\n    noise_bongflow      = state_info.get('noise_bongflow')\r\n    y0_standard_guide   = state_info.get('y0_standard_guide')\r\n    y0_inv_standard_guide = state_info.get('y0_inv_standard_guide')\r\n    \r\n    data_prev_y_        = state_info.get('data_prev_y_')\r\n    data_prev_x_        = state_info.get('data_prev_x_')\r\n    data_prev_x2y_      = state_info.get('data_prev_x2y_')\r\n\r\n    # BEGIN SAMPLING LOOP    \r\n    num_steps = len(sigmas[start_step:])-2 if sigmas[-1] == 0 else len(sigmas[start_step:])-1\r\n    \r\n    if steps_to_run >= 0:\r\n        current_steps =              min(num_steps, steps_to_run)\r\n        num_steps     = start_step + min(num_steps, steps_to_run)\r\n    else:\r\n        current_steps =              num_steps\r\n        num_steps     = start_step + num_steps\r\n    #current_steps = current_steps + 1 if sigmas[-1] == 0 and steps_to_run < 0 and UNSAMPLE else current_steps\r\n    \r\n    INIT_SAMPLE_LOOP = True\r\n    step = start_step\r\n    sigma, sigma_next, data_prev_ = None, None, None\r\n    \r\n    if (num_steps-1) == len(sigmas)-2 and sigmas[-1] == 0 and sigmas[-2] == NS.sigma_min:\r\n        progress_bar = trange(current_steps+1, disable=disable)\r\n    else:\r\n        progress_bar = trange(current_steps, disable=disable)\r\n    \r\n\r\n    # SETUP GUIDES\r\n    LG = LatentGuide(model, sigmas, UNSAMPLE, VE_MODEL, LGW_MASK_RESCALE_MIN, extra_options, device=work_device, dtype=default_dtype, frame_weights_mgr=frame_weights_mgr)\r\n\r\n    guide_inversion_y0     = state_info.get('guide_inversion_y0')\r\n    guide_inversion_y0_inv = state_info.get('guide_inversion_y0_inv')\r\n\r\n    x = LG.init_guides(x, RK.IMPLICIT, guides, NS.noise_sampler, batch_num, sigmas[step], guide_inversion_y0, guide_inversion_y0_inv)\r\n    LG.y0     = y0_standard_guide     if y0_standard_guide     is not None else LG.y0\r\n    LG.y0_inv = y0_inv_standard_guide if y0_inv_standard_guide is not None else LG.y0_inv\r\n    if (LG.mask != 1.0).any()   and  ((LG.y0 == 0).all() or (LG.y0_inv == 0).all()) : #  and   not LG.guide_mode.startswith(\"flow\"):  # (LG.y0.sum() == 0 or LG.y0_inv.sum() == 0):\r\n        SKIP_PSEUDO = True\r\n        RESplain(\"skipping pseudo...\")\r\n        if   LG.y0    .sum() == 0:\r\n            SKIP_PSEUDO_Y = \"y0\"\r\n        elif LG.y0_inv.sum() == 0:\r\n            SKIP_PSEUDO_Y = \"y0_inv\"\r\n    else:\r\n        SKIP_PSEUDO = False\r\n    if guides is not None and guides.get('guide_mode', '') != \"inversion\" or sampler_mode != 
\"unsample\":  #do not set denoised_prev to noise guide with inversion!\r\n        if   LG.y0.sum()     != 0 and LG.y0_inv.sum() != 0:\r\n            denoised_prev = LG.mask * LG.y0 + (1-LG.mask) * LG.y0_inv         \r\n        elif LG.y0.sum()     != 0:\r\n            denoised_prev = LG.y0\r\n        elif LG.y0_inv.sum() != 0:\r\n            denoised_prev = LG.y0_inv\r\n    data_cached = None\r\n        \r\n    if EO(\"pseudo_mix_strength\"):\r\n        orig_y0     = LG.y0.clone()\r\n        orig_y0_inv = LG.y0_inv.clone()\r\n    \r\n    #gc.collect()\r\n    BASE_STARTED = False\r\n    INV_STARTED  = False\r\n    FLOW_STARTED = False\r\n    FLOW_STOPPED = False\r\n    noise_xt, noise_yt = None, None\r\n    FLOW_RESUMED = False\r\n    if state_info.get('FLOW_STARTED', False) and not state_info.get('FLOW_STOPPED', False):\r\n        FLOW_RESUMED = True\r\n        y0 = state_info['y0'].to(work_device) \r\n        data_cached = state_info['data_cached'].to(work_device) \r\n        data_x_prev_ = state_info['data_x_prev_'].to(work_device) \r\n\r\n    if noise_initial is not None:\r\n        x_init = noise_initial.to(x)\r\n        RK.update_transformer_options({'x_init': x_init._copy() if hasattr(x_init, 'is_nested') and x_init.is_nested else x_init.clone()})\r\n\r\n    #progress_bar = trange(len(sigmas)-1-start_step, disable=disable)\r\n    \r\n    #if EO(\"eps_adain\") or EO(\"x_init_to_model\"):\r\n    \r\n    if AttnMask is not None:\r\n        RK.update_transformer_options({'AttnMask'  : AttnMask})\r\n        RK.update_transformer_options({'RegContext': RegContext})\r\n\r\n    if AttnMask_neg is not None:\r\n        RK.update_transformer_options({'AttnMask_neg'  : AttnMask_neg})\r\n        RK.update_transformer_options({'RegContext_neg': RegContext_neg})\r\n        \r\n    if EO(\"y0_to_transformer_options\"):\r\n        RK.update_transformer_options({'y0':  LG.y0.clone()})\r\n    \r\n    if EO(\"y0_inv_to_transformer_options\"):\r\n        RK.update_transformer_options({'y0_inv':  LG.y0_inv.clone()})\r\n        for block in model.inner_model.inner_model.diffusion_model.double_stream_blocks:\r\n            for attr in [\"txt_q_cache\", \"txt_k_cache\", \"txt_v_cache\", \"img_q_cache\", \"img_k_cache\", \"img_v_cache\"]:\r\n                if hasattr(block.block.attn1, attr):\r\n                    delattr(block.block.attn1, attr)\r\n\r\n        for block in model.inner_model.inner_model.diffusion_model.single_stream_blocks:\r\n            block.block.attn1.EO = EO \r\n            for attr in [\"txt_q_cache\", \"txt_k_cache\", \"txt_v_cache\", \"img_q_cache\", \"img_k_cache\", \"img_v_cache\"]:\r\n                if hasattr(block.block.attn1, attr):\r\n                    delattr(block.block.attn1, attr)\r\n\r\n    RK.update_transformer_options({'ExtraOptions': copy.deepcopy(EO)})\r\n    if EO(\"update_cross_attn\"):\r\n        update_cross_attn = {\r\n            'src_llama_start': EO('src_llama_start', 0),\r\n            'src_llama_end':   EO('src_llama_end', 0),\r\n            'src_t5_start':    EO('src_t5_start', 0),\r\n            'src_t5_end':      EO('src_t5_end', 0),\r\n\r\n            'tgt_llama_start': EO('tgt_llama_start', 0),\r\n            'tgt_llama_end':   EO('tgt_llama_end', 0),\r\n            'tgt_t5_start':    EO('tgt_t5_start', 0),\r\n            'tgt_t5_end':      EO('tgt_t5_end', 0),\r\n            'skip_cross_attn': EO('skip_cross_attn', False),\r\n            \r\n            'update_q':        EO('update_q', False),\r\n            'update_k':        EO('update_k', 
True),\r\n            'update_v':        EO('update_v', True),\r\n            \r\n            \r\n            'lamb':  EO('lamb', 0.01),\r\n            'erase': EO('erase', 10.0),\r\n        }\r\n        RK.update_transformer_options({'update_cross_attn':  update_cross_attn})\r\n    else:\r\n        RK.update_transformer_options({'update_cross_attn':  None})\r\n\r\n    if LG.HAS_LATENT_GUIDE_ADAIN:\r\n        RK.update_transformer_options({'blocks_adain_cache': []})\r\n    if LG.HAS_LATENT_GUIDE_ATTNINJ:\r\n        RK.update_transformer_options({'blocks_attninj_cache': []})\r\n    if LG.HAS_LATENT_GUIDE_STYLE_POS:\r\n        if LG.HAS_LATENT_GUIDE and y0_standard_guide is None:\r\n            y0_cache = LG.y0.clone().cpu()\r\n            RK.update_transformer_options({'y0_standard_guide': LG.y0})\r\n        \r\n    sigmas_scheduled = sigmas.clone() # store for return in state_info_out\r\n    \r\n    if EO(\"sigma_restarts\"):\r\n        sigma_restarts = 1 + EO(\"sigma_restarts\", 0)\r\n        sigmas = sigmas[step:num_steps+1].repeat(sigma_restarts)\r\n        step = 0\r\n        num_steps = 2 * sigma_restarts - 1\r\n        \r\n    if RENOISE:      # TODO: adapt for noise inversion somehow\r\n        if VE_MODEL:\r\n            x = x + sigmas[step] * NS.noise_sampler(sigma=sigmas[step], sigma_next=sigmas[step+1])\r\n        else:\r\n            x = (1 - sigmas[step]) * x + sigmas[step] * NS.noise_sampler(sigma=sigmas[step], sigma_next=sigmas[step+1])\r\n    LG.ADAIN_NOISE_MODE = \"\"\r\n    StyleMMDiT = None\r\n    if guides is not None:\r\n        RK.update_transformer_options({\"freqsep_lowpass_method\": guides.get(\"freqsep_lowpass_method\")})\r\n        RK.update_transformer_options({\"freqsep_sigma\":          guides.get(\"freqsep_sigma\")})\r\n        RK.update_transformer_options({\"freqsep_kernel_size\":    guides.get(\"freqsep_kernel_size\")})\r\n        RK.update_transformer_options({\"freqsep_inner_kernel_size\":    guides.get(\"freqsep_inner_kernel_size\")})\r\n        RK.update_transformer_options({\"freqsep_stride\":    guides.get(\"freqsep_stride\")})\r\n\r\n        \r\n        RK.update_transformer_options({\"freqsep_lowpass_weight\": guides.get(\"freqsep_lowpass_weight\")})\r\n        RK.update_transformer_options({\"freqsep_highpass_weight\":guides.get(\"freqsep_highpass_weight\")})\r\n        RK.update_transformer_options({\"freqsep_mask\":           guides.get(\"freqsep_mask\")})\r\n\r\n        StyleMMDiT = guides.get('StyleMMDiT')\r\n        if StyleMMDiT is not None:\r\n            StyleMMDiT.init_guides(model)\r\n            LG.ADAIN_NOISE_MODE = StyleMMDiT.noise_mode\r\n            \r\n            if EO(\"mycoshock\"):\r\n                StyleMMDiT.Retrojector = model.inner_model.inner_model.diffusion_model.Retrojector\r\n                image_initial_shock = StyleMMDiT.apply_data_shock(image_initial.to(x))\r\n                if VE_MODEL:\r\n                    x = image_initial_shock.to(x) + sigmas[0] * noise_initial.to(x)\r\n                else:\r\n                    x = (1 - sigmas[0]) * image_initial_shock.to(x) + sigmas[0] * noise_initial.to(x)\r\n\r\n    RK.update_transformer_options({\"model_sampling\": model.inner_model.inner_model.model_sampling})\r\n    # BEGIN SAMPLING LOOP\r\n    \r\n    while step < num_steps:\r\n        sigma, sigma_next = sigmas[step], sigmas[step+1]\r\n        if sigma_next > sigma:\r\n            step_sched = torch.where(torch.flip(sigmas, dims=[0]) == sigma)[0][0].item()\r\n        else:\r\n            step_sched = step\r\n      
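\r\n        # NOTE: step_sched indexes the per-step schedules (guide weights, etas, s_noise, etc.);\r\n        # when unsampling (ascending sigmas) it is remapped above onto the forward schedule\r\n        # by locating sigma in the flipped sigma list.\r\n  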
  \r\n        SYNC_GUIDE_ACTIVE = LG.guide_mode.startswith(\"sync\") and (LG.lgw[step_sched] != 0 or LG.lgw_inv[step_sched] != 0 or LG.lgw_sync[step_sched] != 0 or LG.lgw_sync_inv[step_sched] != 0)\r\n        \r\n        if StyleMMDiT is not None:\r\n            RK.update_transformer_options({'StyleMMDiT': StyleMMDiT})\r\n        else:\r\n            if LG.HAS_LATENT_GUIDE_ADAIN:\r\n                if LG.lgw_adain[step_sched] == 0.0:\r\n                    RK.update_transformer_options({'y0_adain': None})\r\n                    RK.update_transformer_options({'blocks_adain': {}})\r\n                    RK.update_transformer_options({'sort_and_scatter': {}})\r\n                else:\r\n                    RK.update_transformer_options({'y0_adain': LG.y0_adain.clone()})\r\n                    if 'blocks_adain_mmdit' in guides:\r\n                        blocks_adain = {\r\n                            \"double_weights\": [val * LG.lgw_adain[step_sched] for val in guides['blocks_adain_mmdit']['double_weights']],\r\n                            \"single_weights\": [val * LG.lgw_adain[step_sched] for val in guides['blocks_adain_mmdit']['single_weights']],\r\n                            \"double_blocks\" : guides['blocks_adain_mmdit']['double_blocks'],\r\n                            \"single_blocks\" : guides['blocks_adain_mmdit']['single_blocks'],\r\n                        }\r\n                    RK.update_transformer_options({'blocks_adain': blocks_adain})\r\n                    RK.update_transformer_options({'sort_and_scatter': guides['sort_and_scatter']})\r\n                    RK.update_transformer_options({'noise_mode_adain': guides['sort_and_scatter']['noise_mode']})\r\n                        \r\n            \r\n            if LG.HAS_LATENT_GUIDE_ATTNINJ:\r\n                if LG.lgw_attninj[step_sched] == 0.0:\r\n                    RK.update_transformer_options({'y0_attninj': None})\r\n                    RK.update_transformer_options({'blocks_attninj'    : {}})\r\n                    RK.update_transformer_options({'blocks_attninj_qkv': {}})\r\n                else:\r\n                    RK.update_transformer_options({'y0_attninj': LG.y0_attninj.clone()})\r\n                    if 'blocks_attninj_mmdit' in guides:\r\n                        blocks_attninj = {\r\n                            \"double_weights\": [val * LG.lgw_attninj[step_sched] for val in guides['blocks_attninj_mmdit']['double_weights']],\r\n                            \"single_weights\": [val * LG.lgw_attninj[step_sched] for val in guides['blocks_attninj_mmdit']['single_weights']],\r\n                            \"double_blocks\" : guides['blocks_attninj_mmdit']['double_blocks'],\r\n                            \"single_blocks\" : guides['blocks_attninj_mmdit']['single_blocks'],\r\n                        }\r\n                    RK.update_transformer_options({'blocks_attninj'    : blocks_attninj})\r\n                    RK.update_transformer_options({'blocks_attninj_qkv': guides['blocks_attninj_qkv']})\r\n        \r\n        if LG.HAS_LATENT_GUIDE_STYLE_POS:\r\n            if LG.lgw_style_pos[step_sched] == 0.0:\r\n                RK.update_transformer_options({'y0_style_pos':        None})\r\n                RK.update_transformer_options({'y0_style_pos_weight': 0.0})\r\n                RK.update_transformer_options({'y0_style_pos_synweight': 0.0})\r\n                RK.update_transformer_options({'y0_style_pos_mask': None})\r\n            else:\r\n                RK.update_transformer_options({'y0_style_pos':        
LG.y0_style_pos.clone()})\r\n                RK.update_transformer_options({'y0_style_pos_weight': LG.lgw_style_pos[step_sched]})\r\n                RK.update_transformer_options({'y0_style_pos_synweight': guides['synweight_style_pos']})\r\n                RK.update_transformer_options({'y0_style_pos_mask': LG.mask_style_pos})\r\n                RK.update_transformer_options({'y0_style_pos_mask_edge': guides.get('mask_edge_style_pos')})\r\n                RK.update_transformer_options({'y0_style_method': guides['style_method']})\r\n                RK.update_transformer_options({'y0_style_tile_height': guides.get('style_tile_height')})\r\n                RK.update_transformer_options({'y0_style_tile_width': guides.get('style_tile_width')})\r\n                RK.update_transformer_options({'y0_style_tile_padding': guides.get('style_tile_padding')})\r\n                \r\n                if EO(\"style_edge_width\"):\r\n                    RK.update_transformer  # TODO: incomplete statement; this bare attribute access is a no-op\r\n                \r\n                #if LG.HAS_LATENT_GUIDE:\r\n                #    y0_cache = LG.y0.clone().cpu()\r\n                #    RK.update_transformer_options({'y0_standard_guide': LG.y0})\r\n                    \r\n                if LG.HAS_LATENT_GUIDE_INV and y0_inv_standard_guide is None:\r\n                    y0_inv_cache = LG.y0_inv.clone().cpu()\r\n                    RK.update_transformer_options({'y0_inv_standard_guide': LG.y0_inv})\r\n                    \r\n\r\n        if LG.HAS_LATENT_GUIDE_STYLE_NEG:\r\n            if LG.lgw_style_neg[step_sched] == 0.0:\r\n                RK.update_transformer_options({'y0_style_neg':        None})\r\n                RK.update_transformer_options({'y0_style_neg_weight': 0.0})\r\n                RK.update_transformer_options({'y0_style_neg_synweight': 0.0})\r\n                RK.update_transformer_options({'y0_style_neg_mask': None})\r\n            else:\r\n                RK.update_transformer_options({'y0_style_neg':        LG.y0_style_neg.clone()})\r\n                RK.update_transformer_options({'y0_style_neg_weight': LG.lgw_style_neg[step_sched]})\r\n                RK.update_transformer_options({'y0_style_neg_synweight': guides['synweight_style_neg']})\r\n                RK.update_transformer_options({'y0_style_neg_mask': LG.mask_style_neg})\r\n                RK.update_transformer_options({'y0_style_neg_mask_edge': guides.get('mask_edge_style_neg')})\r\n                RK.update_transformer_options({'y0_style_method': guides['style_method']})\r\n                RK.update_transformer_options({'y0_style_tile_height': guides.get('style_tile_height')})\r\n                RK.update_transformer_options({'y0_style_tile_width': guides.get('style_tile_width')})\r\n                RK.update_transformer_options({'y0_style_tile_padding': guides.get('style_tile_padding')})\r\n\r\n        if AttnMask_neg is not None:\r\n            RK.update_transformer_options({'regional_conditioning_weight_neg': RegParam_neg.weights[step_sched]})\r\n            RK.update_transformer_options({'regional_conditioning_floor_neg':  RegParam_neg.floors[step_sched]})\r\n\r\n        if AttnMask is not None:\r\n            RK.update_transformer_options({'regional_conditioning_weight': RegParam.weights[step_sched]})\r\n            RK.update_transformer_options({'regional_conditioning_floor':  RegParam.floors[step_sched]})\r\n\r\n        elif regional_conditioning_weights is not None:\r\n            RK.extra_args['model_options']['transformer_options']['regional_conditioning_weight'] = 
regional_conditioning_weights[step_sched]\r\n            RK.extra_args['model_options']['transformer_options']['regional_conditioning_floor']  = regional_conditioning_floors [step_sched]\r\n        \r\n        epsilon_scale        = float(epsilon_scales [step_sched]) if epsilon_scales        is not None else None\r\n        eta                  = etas                 [step_sched].to(x)  if etas                  is not None else eta\r\n        eta_substep          = etas_substep         [step_sched].to(x)  if etas_substep          is not None else eta_substep\r\n        s_noise              = s_noises             [step_sched].to(x)  if s_noises              is not None else s_noise\r\n        s_noise_substep      = s_noises_substep     [step_sched].to(x)  if s_noises_substep      is not None else s_noise_substep\r\n        noise_scaling_eta    = noise_scaling_etas   [step_sched].to(x)  if noise_scaling_etas    is not None else noise_scaling_eta\r\n        noise_scaling_weight = noise_scaling_weights[step_sched].to(x)  if noise_scaling_weights is not None else noise_scaling_weight\r\n        \r\n        NS.set_sde_step(sigma, sigma_next, eta, overshoot, s_noise)\r\n        RK.set_coeff(rk_type, NS.h, c1, c2, c3, step, sigmas, NS.sigma_down)\r\n        NS.set_substep_list(RK)\r\n\r\n        if (noise_scaling_eta > 0 or noise_scaling_weight != 0) and noise_scaling_type != \"model_d\":\r\n            if noise_scaling_type == \"model_alpha\":\r\n                VP_OVERRIDE=True\r\n            else:\r\n                VP_OVERRIDE=None\r\n            if noise_scaling_type in {\"sampler\", \"model\", \"model_alpha\"}:\r\n                if noise_scaling_type == \"model_alpha\":\r\n                    sigma_divisor = NS.sigma_max\r\n                else:\r\n                    sigma_divisor = 1.0\r\n                \r\n                if RK.multistep_stages > 0:                                              # hardcoded s_[1] for multistep samplers, which are never multistage\r\n                    lying_su, lying_sigma, lying_sd, lying_alpha_ratio = NS.get_sde_step(NS.s_[1]/sigma_divisor, NS.s_[0]/sigma_divisor, noise_scaling_eta, noise_scaling_mode, VP_OVERRIDE=VP_OVERRIDE)\r\n                    \r\n                else:\r\n                    lying_su, lying_sigma, lying_sd, lying_alpha_ratio = NS.get_sde_step(sigma/sigma_divisor, NS.sigma_down/sigma_divisor, noise_scaling_eta, noise_scaling_mode, VP_OVERRIDE=VP_OVERRIDE)\r\n                    for _ in range(noise_scaling_cycles-1):\r\n                        lying_su, lying_sigma, lying_sd, lying_alpha_ratio = NS.get_sde_step(sigma/sigma_divisor, lying_sd/sigma_divisor, noise_scaling_eta, noise_scaling_mode, VP_OVERRIDE=VP_OVERRIDE)\r\n                lying_s_ = NS.get_substep_list(RK, sigma, RK.h_fn(lying_sd, lying_sigma))\r\n                lying_s_ = NS.s_ + noise_scaling_weight * (lying_s_ - NS.s_)\r\n            else:\r\n                lying_s_ = NS.s_.clone()\r\n        \r\n\r\n        rk_swap_stages = 3 if rk_swap_type != \"\" else 0\r\n        data_prev_len = len(data_prev_)-1 if data_prev_ is not None else 3\r\n        recycled_stages = max(rk_swap_stages, RK.multistep_stages, RK.hybrid_stages, data_prev_len)\r\n        \r\n        if INIT_SAMPLE_LOOP:\r\n            INIT_SAMPLE_LOOP = False\r\n            x_, data_, eps_, eps_prev_ = (torch.zeros(RK.rows+2, *x.shape, dtype=default_dtype, device=work_device) for _ in range(4))\r\n            if LG.ADAIN_NOISE_MODE == \"smart\":\r\n                z_ = torch.zeros(RK.rows+2, *x.shape, 
dtype=default_dtype, device=work_device)\r\n                z_[0] = noise_initial.clone()\r\n                RK.update_transformer_options({'z_' : z_})\r\n            \r\n            if sampler_mode in {\"unsample\", \"resample\"}:\r\n                data_prev_ = state_info.get('data_prev_')\r\n                if data_prev_ is not None:\r\n                    if x.shape == state_info['raw_x'].shape:\r\n                        data_prev_ = state_info['data_prev_'].clone().to(dtype=default_dtype, device=work_device)\r\n                    else:\r\n                        data_prev_ = torch.stack([comfy.utils.bislerp(data_prev_item, x.shape[-1], x.shape[-2]) for data_prev_item in state_info['data_prev_']])\r\n                        data_prev_ = data_prev_.to(x)\r\n                else:\r\n                    data_prev_ =  torch.zeros(4, *x.shape, dtype=default_dtype, device=work_device) # multistep max is 4m... so 4 needed\r\n            else:\r\n                data_prev_ =  torch.zeros(4, *x.shape, dtype=default_dtype, device=work_device) # multistep max is 4m... so 4 needed\r\n            \r\n            recycled_stages = len(data_prev_)-1\r\n        \r\n        if RK.rows+2 > x_.shape[0]:\r\n            row_gap = RK.rows+2 - x_.shape[0]\r\n            x_gap_, data_gap_, eps_gap_, eps_prev_gap_ = (torch.zeros(row_gap, *x.shape, dtype=default_dtype, device=work_device) for _ in range(4))\r\n            x_        = torch.cat((x_       ,x_gap_)       , dim=0)\r\n            data_     = torch.cat((data_    ,data_gap_)    , dim=0)\r\n            eps_      = torch.cat((eps_     ,eps_gap_)     , dim=0)\r\n            eps_prev_ = torch.cat((eps_prev_,eps_prev_gap_), dim=0)\r\n            \r\n            if LG.ADAIN_NOISE_MODE == \"smart\":\r\n                z_gap_ = torch.zeros(row_gap, *x.shape, dtype=default_dtype, device=work_device)\r\n                z_    = torch.cat((z_       ,z_gap_)       , dim=0)\r\n                RK.update_transformer_options({'z_' : z_})\r\n\r\n        sde_noise_t = None\r\n        if SDE_NOISE_EXTERNAL:\r\n            if step >= len(sde_noise):\r\n                SDE_NOISE_EXTERNAL=False\r\n            else:\r\n                sde_noise_t = sde_noise[step]\r\n        \r\n        x_[0] = x.clone()\r\n        # PRENOISE METHOD HERE!\r\n        x_0   = x_[0].clone()\r\n        if EO(\"guide_step_cutoff\") or EO(\"guide_step_min\"):\r\n            x_0_orig = x_0.clone()\r\n        \r\n        # RECYCLE STAGES FOR MULTISTEP\r\n        if RK.multistep_stages > 0 or RK.hybrid_stages > 0:\r\n            if SYNC_GUIDE_ACTIVE:\r\n                lgw_mask_, lgw_mask_inv_ = LG.get_masks_for_step(step)\r\n                lgw_mask_sync_, lgw_mask_sync_inv_ = LG.get_masks_for_step(step, lgw_type=\"sync\")\r\n\r\n                weight_mask = lgw_mask_+lgw_mask_inv_\r\n                if LG.SYNC_SEPARATE:\r\n                    sync_mask = lgw_mask_sync_+lgw_mask_sync_inv_\r\n                else:\r\n                    sync_mask = 1.\r\n                            \r\n                if VE_MODEL:\r\n                    yt_0 = y0_bongflow + sigma * noise_bongflow\r\n                else:\r\n                    yt_0 = (1-sigma) * y0_bongflow  + sigma * noise_bongflow\r\n                for ms in range(min(len(data_prev_), len(eps_))):\r\n                    eps_x   = RK.get_epsilon_anchored(x_0,  data_prev_x_[ms], sigma)\r\n                    eps_y   = RK.get_epsilon_anchored(yt_0, data_prev_y_[ms], sigma)\r\n                    eps_x2y = RK.get_epsilon_anchored(yt_0, 
data_prev_y_[ms], sigma)\r\n\r\n                    if RK.EXPONENTIAL:\r\n                        if VE_MODEL:\r\n                            eps_[ms] = sync_mask * eps_x  +  (1-sync_mask) * eps_x2y  +  weight_mask * (-eps_y + sigma*(-noise_bongflow))\r\n                            if EO(\"sync_x2y\"):\r\n                                eps_[ms] = sync_mask * eps_x  +  (1-sync_mask) * eps_x2y  +  weight_mask * (-eps_x2y + sigma*(-noise_bongflow))\r\n                        else:\r\n                            eps_[ms] = sync_mask * eps_x  +  (1-sync_mask) * eps_x2y  +  weight_mask * (-eps_y + sigma*(y0_bongflow-noise_bongflow))\r\n                            if EO(\"sync_x2y\"):\r\n                                eps_[ms] = sync_mask * eps_x  +  (1-sync_mask) * eps_x2y  +  weight_mask * (-eps_x2y + sigma*(y0_bongflow-noise_bongflow))\r\n                    else:\r\n                        if VE_MODEL:\r\n                            eps_[ms] = sync_mask * eps_x  +  (1-sync_mask) * eps_x2y  +  weight_mask * (-eps_y + (noise_bongflow))\r\n                            if EO(\"sync_x2y\"):\r\n                                eps_[ms] = sync_mask * eps_x  +  (1-sync_mask) * eps_x2y  +  weight_mask * (-eps_x2y + (noise_bongflow))\r\n                        else:\r\n                            eps_[ms] = sync_mask * eps_x  +  (1-sync_mask) * eps_x2y  +  weight_mask * (-eps_y + (noise_bongflow-y0_bongflow))\r\n                            if EO(\"sync_x2y\"):\r\n                                eps_[ms] = sync_mask * eps_x  +  (1-sync_mask) * eps_x2y  +  weight_mask * (-eps_x2y + (noise_bongflow-y0_bongflow))\r\n\r\n                    #if RK.EXPONENTIAL:\r\n                    #    if VE_MODEL:\r\n                    #        eps_[ms] = sync_mask * weight_mask_inv * (eps_x - weight_mask * eps_y) +  weight_mask * sigma*(-noise_bongflow)\r\n                    #    else:\r\n                    #        #eps_[ms] = (lgw_mask_sync_+lgw_mask_sync_inv_) * (1-(lgw_mask_+lgw_mask_inv_)) * (eps_x - (lgw_mask_+lgw_mask_inv_) * eps_y) +  (lgw_mask_+lgw_mask_inv_) * sigma*(y0_bongflow-noise_bongflow)\r\n                    #        eps_[ms] = sync_mask * weight_mask_inv * (eps_x - weight_mask * eps_y) +  weight_mask * sigma*(y0_bongflow-noise_bongflow)\r\n                    #else:\r\n                    #    if VE_MODEL:\r\n                    #        eps_[ms] = sync_mask * weight_mask_inv * (eps_x - weight_mask * eps_y) +  weight_mask *       (noise_bongflow)\r\n                    #    else:\r\n                    #        #eps_[ms] = (lgw_mask_sync_+lgw_mask_sync_inv_) * (1-(lgw_mask_+lgw_mask_inv_)) * (eps_x - (lgw_mask_+lgw_mask_inv_) * eps_y) +  (lgw_mask_+lgw_mask_inv_) *       (noise_bongflow-y0_bongflow)\r\n                    #        eps_[ms] = sync_mask * weight_mask_inv * (eps_x - weight_mask * eps_y) +  weight_mask *       (noise_bongflow-y0_bongflow)\r\n                eps_prev_ = eps_.clone()\r\n            \r\n            else:\r\n                for ms in range(min(len(data_prev_), len(eps_))):\r\n                    eps_[ms] = RK.get_epsilon_anchored(x_0, data_prev_[ms], sigma)\r\n                eps_prev_ = eps_.clone()\r\n\r\n\r\n\r\n        # INITIALIZE IMPLICIT SAMPLING\r\n        if RK.IMPLICIT:\r\n            x_, eps_, data_ = init_implicit_sampling(RK, x_0, x_, eps_, eps_prev_, data_, eps, denoised, denoised_prev2, step, sigmas, NS.h, NS.s_, EO, SYNC_GUIDE_ACTIVE)\r\n\r\n        implicit_steps_total = (implicit_steps_full + 1) * (implicit_steps_diag + 1)\r\n\r\n        # BEGIN FULLY 
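IMPLICIT LOOP\r\n        # (each pass re-evaluates the entire tableau; total passes = implicit_steps_full+1)\r\n        # The pseudoimplicit guide preparation below repeats once per pass of this FULLY 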
IMPLICIT LOOP\r\n        cossim_counter = 0\r\n        adaptive_lgw = LG.lgw.clone()\r\n        full_iter = 0\r\n        while full_iter < implicit_steps_full+1:\r\n\r\n            if RK.IMPLICIT:\r\n                x_, eps_ = RK.newton_iter(x_0, x_, eps_, eps_prev_, data_, NS.s_, 0, NS.h, sigmas, step, \"init\", SYNC_GUIDE_ACTIVE)\r\n\r\n            # PREPARE FULLY PSEUDOIMPLICIT GUIDES\r\n            if step > 0 or not SKIP_PSEUDO:\r\n                if full_iter > 0 and EO(\"fully_implicit_reupdate_x\"):\r\n                    x_[0] = NS.sigma_from_to(x_0, x, sigma, sigma_next, NS.s_[0])\r\n                    x_0   = NS.sigma_from_to(x_0, x, sigma, sigma_next, sigma)\r\n                \r\n                if EO(\"fully_pseudo_init\") and full_iter == 0:\r\n                    guide_mode_tmp = LG.guide_mode\r\n                    LG.guide_mode = \"fully_\" + LG.guide_mode\r\n                x_0, x_, eps_ = LG.prepare_fully_pseudoimplicit_guides_substep(x_0, x_, eps_, eps_prev_, data_, denoised_prev, 0, step, step_sched, sigmas, eta_substep, overshoot_substep, s_noise_substep, \\\r\n                                                                                NS, RK, pseudoimplicit_row_weights, pseudoimplicit_step_weights, full_iter, BONGMATH)\r\n                if EO(\"fully_pseudo_init\") and full_iter == 0:\r\n                    LG.guide_mode = guide_mode_tmp\r\n\r\n            # TABLEAU LOOP\r\n            for row in range(RK.rows - RK.multistep_stages - RK.row_offset + 1):\r\n                diag_iter = 0\r\n                while diag_iter < implicit_steps_diag+1:\r\n                    \r\n\r\n                    if noise_sampler_type_substep == \"brownian\" and (full_iter > 0 or diag_iter > 0):\r\n                        eta_substep = 0.\r\n                    \r\n                    NS.set_sde_substep(row, RK.multistep_stages, eta_substep, overshoot_substep, s_noise_substep, full_iter, diag_iter, implicit_steps_full, implicit_steps_diag)\r\n\r\n                    # PRENOISE METHOD HERE!\r\n                    \r\n                    # A-TABLEAU\r\n                    if row < RK.rows:\r\n\r\n                        # PREPARE PSEUDOIMPLICIT GUIDES\r\n                        if step > 0 or not SKIP_PSEUDO:\r\n                            x_0, x_, eps_, x_row_pseudoimplicit, sub_sigma_pseudoimplicit = LG.process_pseudoimplicit_guides_substep(x_0, x_, eps_, eps_prev_, data_, denoised_prev, row, step, step_sched, sigmas, NS, RK, \\\r\n                                                                                                                        pseudoimplicit_row_weights, pseudoimplicit_step_weights, full_iter, BONGMATH)\r\n                        \r\n                        # PREPARE MODEL CALL\r\n                        if LG.guide_mode in GUIDE_MODE_NAMES_PSEUDOIMPLICIT and (step > 0 or not SKIP_PSEUDO) and (LG.lgw[step_sched] > 0 or LG.lgw_inv[step_sched] > 0) and x_row_pseudoimplicit is not None:\r\n\r\n                            x_tmp =     x_row_pseudoimplicit \r\n                            s_tmp = sub_sigma_pseudoimplicit \r\n\r\n                        # Fully implicit iteration (explicit only)                   # or... Fully implicit iteration (implicit only... 
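via RK.row_offset == 0 with EO \"fully_implicit_update_x\"; 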
not standard) \r\n                        elif (full_iter > 0 and RK.row_offset == 1 and row == 0)   or   (full_iter > 0 and RK.row_offset == 0 and row == 0 and EO(\"fully_implicit_update_x\")):\r\n                            if EO(\"fully_explicit_pogostick_eta\"): \r\n                                super_alpha_ratio, super_sigma_down, super_sigma_up = NS.get_sde_coeff(sigma, sigma_next, None, eta)\r\n                                x = super_alpha_ratio * x + super_sigma_up * NS.noise_sampler(sigma=sigma_next, sigma_next=sigma)\r\n                                \r\n                                x_tmp = x\r\n                                s_tmp = sigma\r\n                            elif EO(\"enable_fully_explicit_lagrange_rebound1\"):\r\n                                substeps_prev = len(RK.C[:-1]) \r\n                                x_tmp = lagrange_interpolation(RK.C[1:-1], x_[1:substeps_prev], RK.C[0]).squeeze(0)\r\n                                # NOTE: s_tmp is not set in this branch; it carries over from the previous substep\r\n                                \r\n                            elif EO(\"enable_fully_explicit_lagrange_rebound2\"):\r\n                                substeps_prev = len(RK.C[:-1]) \r\n                                x_tmp = lagrange_interpolation(RK.C[1:], x_[1:substeps_prev+1], RK.C[0]).squeeze(0)\r\n                                # NOTE: s_tmp is likewise left unset here\r\n\r\n                            elif EO(\"enable_fully_explicit_rebound1\"):  # 17630, faded dots, just crap\r\n                                eps_tmp, denoised_tmp = RK(x, sigma_next, x, sigma_next)\r\n                                eps_tmp = (x - denoised_tmp) / sigma_next\r\n                                x_[0] = denoised_tmp + sigma * eps_tmp\r\n                                \r\n                                x_0 =   x_[0]\r\n                                x_tmp = x_[0]\r\n                                s_tmp = sigma\r\n                                \r\n                            elif implicit_type == \"rebound\":          # TODO: ADAPT REBOUND IMPLICIT TO WORK WITH FLOW GUIDE MODE\r\n                                eps_tmp, denoised_tmp = RK(x, sigma_next, x_0, sigma)\r\n                                eps_tmp = (x - denoised_tmp) / sigma_next\r\n                                x = denoised_tmp + sigma * eps_tmp\r\n                                \r\n                                x_tmp = x\r\n                                s_tmp = sigma\r\n                                \r\n                            elif implicit_type == \"retro-eta\" and (NS.sub_sigma_up > 0 or NS.sub_sigma_up_eta > 0): \r\n                                x_tmp = NS.sigma_from_to(x_0, x, sigma, sigma_next, sigma)\r\n                                s_tmp = sigma\r\n                                \r\n                            elif implicit_type == \"bongmath\" and (NS.sub_sigma_up > 0 or NS.sub_sigma_up_eta > 0): \r\n                                if BONGMATH:\r\n                                    x_tmp =    x_[row]\r\n                                    s_tmp = NS.s_[row]\r\n                                else:\r\n                                    x_tmp = NS.sigma_from_to(x_0, x, sigma, sigma_next, sigma)\r\n                                    s_tmp = sigma\r\n                                \r\n                            else:\r\n                                x_tmp = x\r\n                                s_tmp = sigma_next\r\n\r\n\r\n\r\n                        # All others\r\n                        else:\r\n                            # three potential toggle options: force rebound/model call, force PC style, force pogostick style\r\n                            if 
diag_iter > 0: # Diagonally implicit iteration (explicit or implicit)\r\n                                if EO(\"diag_explicit_pogostick_eta\"): \r\n                                    super_alpha_ratio, super_sigma_down, super_sigma_up = NS.get_sde_coeff(NS.s_[row], NS.s_[row+RK.row_offset+RK.multistep_stages], None, eta)\r\n                                    x_[row+RK.row_offset] = super_alpha_ratio * x_[row+RK.row_offset] + super_sigma_up * NS.noise_sampler(sigma=NS.s_[row+RK.row_offset+RK.multistep_stages], sigma_next=NS.s_[row])\r\n                                    \r\n                                    x_tmp = x_[row+RK.row_offset]\r\n                                    s_tmp = sigma\r\n                                \r\n                                elif implicit_type_substeps == \"rebound\":\r\n                                    eps_[row], data_[row] = RK(x_[row+RK.row_offset], NS.s_[row+RK.row_offset+RK.multistep_stages], x_0, sigma)\r\n                                    \r\n                                    x_ = RK.update_substep(x_0, x_, eps_, eps_prev_, row, RK.row_offset, NS.h_new, NS.h_new_orig)\r\n                                    x_[row+RK.row_offset] = NS.rebound_overshoot_substep(x_0, x_[row+RK.row_offset])\r\n\r\n                                    x_[row+RK.row_offset] = NS.sigma_from_to(x_0,    x_[row+RK.row_offset],    sigma,    NS.s_[row+RK.row_offset+RK.multistep_stages],    NS.s_[row])\r\n                                    x_tmp = x_[row+RK.row_offset]\r\n                                    s_tmp = NS.s_[row]\r\n                                    \r\n                                elif implicit_type_substeps == \"retro-eta\" and (NS.sub_sigma_up > 0 or NS.sub_sigma_up_eta > 0):\r\n                                    x_tmp = NS.sigma_from_to(x_0,   x_[row+RK.row_offset],    sigma,    NS.s_[row+RK.row_offset+RK.multistep_stages],    NS.s_[row])\r\n                                    s_tmp = NS.s_[row]\r\n                                    \r\n                                elif implicit_type_substeps == \"bongmath\" and (NS.sub_sigma_up > 0 or NS.sub_sigma_up_eta > 0) and not EO(\"disable_diag_explicit_bongmath_rebound\"): \r\n                                    if BONGMATH:\r\n                                        x_tmp =    x_[row]\r\n                                        s_tmp = NS.s_[row]\r\n                                    else:\r\n                                        x_tmp = NS.sigma_from_to(x_0, x_[row+RK.row_offset], sigma, NS.s_[row+RK.row_offset+RK.multistep_stages], NS.s_[row])\r\n                                        s_tmp = NS.s_[row]\r\n                                    \r\n                                else:\r\n                                    x_tmp =    x_[row+RK.row_offset]\r\n                                    s_tmp = NS.s_[row+RK.row_offset+RK.multistep_stages]\r\n                            else:\r\n                                x_tmp = x_[row]\r\n                                s_tmp = NS.sub_sigma \r\n\r\n\r\n\r\n                            if RK.IMPLICIT: \r\n                                if not EO(\"disable_implicit_guide_preproc\"):\r\n                                    eps_, x_      = LG.process_guides_substep(x_0, x_, eps_,      data_, row, step_sched, sigma, sigma_next, NS.sigma_down, NS.s_, epsilon_scale, RK)\r\n                                    eps_prev_, x_ = LG.process_guides_substep(x_0, x_, eps_prev_, data_, row, step_sched, sigma, sigma_next, NS.sigma_down, NS.s_, epsilon_scale, RK)\r\n   
                             if row == 0 and (EO(\"implicit_lagrange_init\")  or   EO(\"radaucycle\")):\r\n                                    pass\r\n                                else:\r\n                                    x_[row+RK.row_offset] = x_0 + NS.h_new * RK.zum(row+RK.row_offset, eps_, eps_prev_)\r\n                                    x_[row+RK.row_offset] = NS.rebound_overshoot_substep(x_0, x_[row+RK.row_offset])\r\n                                    if row > 0:\r\n                                        if not LG.guide_mode.startswith(\"flow\") or (LG.lgw[step_sched] == 0 and LG.lgw[step+1] == 0   and   LG.lgw_inv[step_sched] == 0 and LG.lgw_inv[step+1] == 0):\r\n                                            x_row_tmp = NS.swap_noise_substep(x_0, x_[row+RK.row_offset], mask=sde_mask, guide=LG.y0)\r\n                                            \r\n                                            if LG.ADAIN_NOISE_MODE == \"smart\": #_smartnoise_implicit\"):\r\n                                                data_next = denoised + NS.h_new * RK.zum(row+RK.row_offset+RK.multistep_stages, data_, data_prev_) \r\n                                                if VE_MODEL:\r\n                                                    z_[row+RK.row_offset] = (x_row_tmp - data_next) / s_tmp\r\n                                                else:\r\n                                                    z_[row+RK.row_offset] = (x_row_tmp - (NS.sigma_max-s_tmp)*data_next) / s_tmp\r\n                                                RK.update_transformer_options({'z_' : z_})\r\n                                            \r\n                                            if SYNC_GUIDE_ACTIVE:\r\n                                                noise_bongflow_new = (x_row_tmp - x_[row+RK.row_offset]) / s_tmp + noise_bongflow\r\n                                                yt_[row+RK.row_offset] += s_tmp * (noise_bongflow_new - noise_bongflow)\r\n                                                x_0 += sigma * (noise_bongflow_new - noise_bongflow)\r\n                                                if not EO(\"disable_i_bong\"):\r\n                                                    for i_bong in range(len(NS.s_)):\r\n                                                        x_[i_bong] += NS.s_[i_bong] * (noise_bongflow_new - noise_bongflow)\r\n                                                noise_bongflow = noise_bongflow_new\r\n                                            \r\n                                            x_[row+RK.row_offset] = x_row_tmp\r\n                                        \r\n                                        if SYNC_GUIDE_ACTIVE:\r\n                                            if VE_MODEL:\r\n                                                yt_[:NS.s_.shape[0], 0] = y0_bongflow + NS.s_.view(-1, *[1]*(x.ndim-1)) * (noise_bongflow)\r\n                                                yt_0   = y0_bongflow + sigma * (noise_bongflow)\r\n                                            else:\r\n                                                yt_[:NS.s_.shape[0], 0] = y0_bongflow + NS.s_.view(-1, *[1]*(x.ndim-1)) * (noise_bongflow - y0_bongflow)\r\n                                                yt_0   = y0_bongflow + sigma * (noise_bongflow - y0_bongflow)\r\n                                            \r\n                                            if RK.EXPONENTIAL:\r\n                                                eps_y_ = data_y_ - yt_0 # yt_        # watch out for fuckery with size of 
tableau being smaller later in a chained sampler\r\n                                            else:\r\n                                                if BONGMATH:\r\n                                                    eps_y_[:NS.s_.shape[0]] = (yt_[:NS.s_.shape[0]] - data_y_[:NS.s_.shape[0]]) / NS.s_.view(-1,*[1]*(x_.ndim-1)) \r\n                                                else:\r\n                                                    eps_y_[:NS.s_.shape[0]] = (yt_0.repeat(NS.s_.shape[0], *[1]*(x_.ndim-1)) - data_y_[:NS.s_.shape[0]]) / sigma    # calc exact to c0 node\r\n                                            if not BONGMATH:\r\n                                                if RK.EXPONENTIAL:\r\n                                                    eps_x_ = data_x_ - x_0 \r\n                                                else:\r\n                                                    eps_x_ = (x_0 - data_x_) / sigma\r\n                                            \r\n                                            weight_mask = lgw_mask_+lgw_mask_inv_\r\n                                            if LG.SYNC_SEPARATE:\r\n                                                sync_mask = lgw_mask_sync_+lgw_mask_sync_inv_\r\n                                            else:\r\n                                                sync_mask = 1.\r\n                                            \r\n                                            for ms in range(len(eps_)):\r\n                                                if RK.EXPONENTIAL:\r\n                                                    if VE_MODEL:         # ZERO IS THIS                      # ONE IS THIS\r\n                                                        eps_[ms] = sync_mask * eps_x_[ms]  +  (1-sync_mask) * eps_x2y_[ms]  +  weight_mask * (-eps_y_[ms] + sigma*(-noise_bongflow))\r\n                                                        if EO(\"sync_x2y\"):\r\n                                                            eps_[ms] = sync_mask * eps_x_[ms]  +  (1-sync_mask) * eps_x2y_[ms]  +  weight_mask * (-eps_x2y_[ms] + sigma*(-noise_bongflow))\r\n                                                    else:\r\n                                                        eps_[ms] = sync_mask * eps_x_[ms]  +  (1-sync_mask) * eps_x2y_[ms]  +  weight_mask * (-eps_y_[ms] + sigma*(y0_bongflow-noise_bongflow))\r\n                                                        if EO(\"sync_x2y\"):\r\n                                                            eps_[ms] = sync_mask * eps_x_[ms]  +  (1-sync_mask) * eps_x2y_[ms]  +  weight_mask * (-eps_x2y_[ms] + sigma*(y0_bongflow-noise_bongflow))\r\n                                                else:\r\n                                                    if VE_MODEL:\r\n                                                        eps_[ms] = sync_mask * eps_x_[ms]  +  (1-sync_mask) * eps_x2y_[ms]  +  weight_mask * (-eps_y_[ms] + (noise_bongflow))\r\n                                                        if EO(\"sync_x2y\"):\r\n                                                            eps_[ms] = sync_mask * eps_x_[ms]  +  (1-sync_mask) * eps_x2y_[ms]  +  weight_mask * (-eps_x2y_[ms] + (noise_bongflow))\r\n                                                    else:\r\n                                                        eps_[ms] = sync_mask * eps_x_[ms]  +  (1-sync_mask) * eps_x2y_[ms]  +  weight_mask * (-eps_y_[ms] + (noise_bongflow-y0_bongflow))\r\n                                                        if 
EO(\"sync_x2y\"):\r\n                                                            eps_[ms] = sync_mask * eps_x_[ms]  +  (1-sync_mask) * eps_x2y_[ms]  +  weight_mask * (-eps_x2y_[ms] + (noise_bongflow-y0_bongflow))\r\n\r\n                                            \r\n                                        if BONGMATH and step < sigmas.shape[0]-1 and sigma > 0.03 and not EO(\"disable_implicit_prebong\"):\r\n                                            BONGMATH_Y = SYNC_GUIDE_ACTIVE\r\n                                            \r\n                                            x_0, x_, eps_ = RK.bong_iter(x_0, x_, eps_, eps_prev_, data_, sigma, NS.s_, row, RK.row_offset, NS.h, step, step_sched,\r\n                                                                        BONGMATH_Y, y0_bongflow, noise_bongflow, eps_x_, eps_y_, data_x_, data_y_, LG)     # TRY WITH h_new ??\r\n                                            #                            BONGMATH_Y, y0_bongflow, noise_bongflow, eps_x_, eps_y_, eps_x2y_, data_x_, LG)     # TRY WITH h_new ??\r\n                                            \r\n                                            #if EO(\"eps_adain_smartnoise_bongmath\"):\r\n                                            if LG.ADAIN_NOISE_MODE == \"smart\":\r\n                                                if VE_MODEL:\r\n                                                    z_[:NS.s_.shape[0], ...] = (x_ - data_)[:NS.s_.shape[0], ...] / NS.s_.view(-1,*[1]*(x_.ndim-1))\r\n                                                else:\r\n                                                    z_[:NS.s_.shape[0], ...] = (x_[:NS.s_.shape[0], ...] - (NS.sigma_max - NS.s_.view(-1,*[1]*(x_.ndim-1)))*data_[:NS.s_.shape[0], ...])[:NS.s_.shape[0], ...] / NS.s_.view(-1,*[1]*(x_.ndim-1))\r\n                                                RK.update_transformer_options({'z_' : z_})\r\n                                            \r\n                                    x_tmp = x_[row+RK.row_offset]\r\n\r\n                        lying_eps_row_factor = 1.0\r\n                        # MODEL CALL MODEL CALL MODEL CALL MODEL CALL MODEL CALL MODEL CALL MODEL CALL MODEL CALL MODEL CALL MODEL CALL MODEL CALL MODEL CALL MODEL CALL MODEL CALL MODEL CALL MODEL CALL MODEL CALL MODEL CALL\r\n                        if RK.IMPLICIT   and   row == 0   and   (EO(\"implicit_lazy_recycle_first_model_call_at_start\")   or   EO(\"radaucycle\")  or RK.C[0] == 0.0):\r\n                            pass\r\n                        else: \r\n                            if s_tmp == 0:\r\n                                break\r\n                            x_, eps_ = RK.newton_iter(x_0, x_, eps_, eps_prev_, data_, NS.s_, row, NS.h, sigmas, step, \"pre\", SYNC_GUIDE_ACTIVE) # will this do anything? 
not x_tmp\r\n\r\n                            # DETAIL BOOST\r\n                            if noise_scaling_type == \"model_alpha\" and noise_scaling_weight != 0 and noise_scaling_eta > 0:\r\n                                s_tmp = s_tmp + noise_scaling_weight * (s_tmp * lying_alpha_ratio   -   s_tmp)\r\n                            if noise_scaling_type == \"model\"       and noise_scaling_weight != 0 and noise_scaling_eta > 0:\r\n                                s_tmp = lying_s_[row]\r\n                                if RK.multistep_stages > 0:\r\n                                    s_tmp = lying_sd\r\n\r\n                            # SYNC GUIDE ---------------------------\r\n                            if LG.guide_mode.startswith(\"sync\") and (LG.lgw[step_sched] == 0 and LG.lgw_inv[step_sched] == 0 and LG.lgw_sync[step_sched] == 0 and LG.lgw_sync_inv[step_sched] == 0):\r\n                                data_cached = None\r\n                            elif SYNC_GUIDE_ACTIVE:\r\n                                lgw_mask_,         lgw_mask_inv_         = LG.get_masks_for_step(step_sched)\r\n                                lgw_mask_sync_,    lgw_mask_sync_inv_    = LG.get_masks_for_step(step_sched, lgw_type=\"sync\")\r\n                                lgw_mask_drift_x_, lgw_mask_drift_x_inv_ = LG.get_masks_for_step(step_sched, lgw_type=\"drift_x\")\r\n                                lgw_mask_drift_y_, lgw_mask_drift_y_inv_ = LG.get_masks_for_step(step_sched, lgw_type=\"drift_y\")\r\n                                lgw_mask_lure_x_,  lgw_mask_lure_x_inv_  = LG.get_masks_for_step(step_sched, lgw_type=\"lure_x\")\r\n                                lgw_mask_lure_y_,  lgw_mask_lure_y_inv_  = LG.get_masks_for_step(step_sched, lgw_type=\"lure_y\")\r\n                                \r\n                                weight_mask  = lgw_mask_         + lgw_mask_inv_\r\n                                sync_mask    = lgw_mask_sync_    + lgw_mask_sync_inv_\r\n\r\n\r\n                                drift_x_mask = lgw_mask_drift_x_ + lgw_mask_drift_x_inv_\r\n                                drift_y_mask = lgw_mask_drift_y_ + lgw_mask_drift_y_inv_\r\n                                lure_x_mask  = lgw_mask_lure_x_  + lgw_mask_lure_x_inv_\r\n                                lure_y_mask  = lgw_mask_lure_y_  + lgw_mask_lure_y_inv_\r\n                                \r\n                                if eps_x_ is None:\r\n                                    eps_x_       = torch.zeros(RK.rows+2, *x.shape, dtype=default_dtype, device=work_device)\r\n                                    data_x_      = torch.zeros(RK.rows+2, *x.shape, dtype=default_dtype, device=work_device)\r\n                                    eps_y2x_     = torch.zeros(RK.rows+2, *x.shape, dtype=default_dtype, device=work_device)\r\n                                    eps_x2y_     = torch.zeros(RK.rows+2, *x.shape, dtype=default_dtype, device=work_device)\r\n                                    eps_yt_      = torch.zeros(RK.rows+2, *x.shape, dtype=default_dtype, device=work_device)\r\n                                    eps_y_       = torch.zeros(RK.rows+2, *x.shape, dtype=default_dtype, device=work_device)\r\n                                    eps_prev_y_  = torch.zeros(RK.rows+2, *x.shape, dtype=default_dtype, device=work_device)\r\n                                    data_y_      = torch.zeros(RK.rows+2, *x.shape, dtype=default_dtype, device=work_device)\r\n                                    yt_          = torch.zeros(RK.rows+2, 
*x.shape, dtype=default_dtype, device=work_device)\r\n                                    \r\n                                    RUN_X_0_COPY = False\r\n                                    if noise_bongflow is None:\r\n                                        RUN_X_0_COPY = True\r\n                                        data_prev_x_ = torch.zeros(4, *x.shape, dtype=default_dtype, device=work_device)\r\n                                        data_prev_y_ = torch.zeros(4, *x.shape, dtype=default_dtype, device=work_device)\r\n                                        \r\n                                        noise_bongflow = normalize_zscore(NS.noise_sampler(sigma=sigma, sigma_next=NS.sigma_min), channelwise=True, inplace=True)\r\n\r\n                                        _, _ = RK(noise_bongflow, s_tmp/s_tmp, noise_bongflow, sigma/sigma, transformer_options={'latent_type': 'xt'})\r\n\r\n                                        if RK.extra_args['model_options']['transformer_options'].get('y0_standard_guide') is not None:\r\n                                            if hasattr(model.inner_model.inner_model.diffusion_model, \"y0_standard_guide\"):\r\n                                                LG.y0 = y0_standard_guide = model.inner_model.inner_model.diffusion_model.y0_standard_guide.clone()\r\n                                                del model.inner_model.inner_model.diffusion_model.y0_standard_guide\r\n                                                RK.extra_args['model_options']['transformer_options']['y0_standard_guide'] = None\r\n                                        \r\n                                        if RK.extra_args['model_options']['transformer_options'].get('y0_inv_standard_guide') is not None:\r\n                                            if hasattr(model.inner_model.inner_model.diffusion_model, \"y0_inv_standard_guide\"):\r\n                                                LG.y0_inv = y0_inv_standard_guide = model.inner_model.inner_model.diffusion_model.y0_inv_standard_guide.clone() # RK.extra_args['model_options']['transformer_options'].get('y0_standard_guide')\r\n                                                del model.inner_model.inner_model.diffusion_model.y0_inv_standard_guide\r\n                                                RK.extra_args['model_options']['transformer_options']['y0_inv_standard_guide'] = None\r\n\r\n                                        y0_bongflow = LG.HAS_LATENT_GUIDE * LG.mask * LG.y0   +   LG.HAS_LATENT_GUIDE_INV * LG.mask_inv * LG.y0_inv  #LG.y0.clone()\r\n                                    \r\n                                    if VE_MODEL:\r\n                                        yt_0 = y0_bongflow + sigma * noise_bongflow\r\n                                        yt   = y0_bongflow + s_tmp * noise_bongflow\r\n                                    else:\r\n                                        yt_0 = (1-sigma) * y0_bongflow  + sigma * noise_bongflow\r\n                                        yt   = (1-s_tmp) * y0_bongflow  + s_tmp * noise_bongflow\r\n                                        \r\n                                    yt_[row] = yt\r\n                                    \r\n                                    if RUN_X_0_COPY:\r\n                                        x_0 = yt_0.clone()\r\n                                        x_tmp = x_[row] = yt.clone()\r\n                                else:\r\n                                    y0_bongflow_orig = y0_bongflow.clone() if y0_bongflow_orig is None else 
y0_bongflow_orig\r\n                                    y0_bongflow = y0_bongflow + LG.drift_x_data  * drift_x_mask * (data_x           - y0_bongflow) \\\r\n                                                              + LG.drift_x_sync  * drift_x_mask * (data_barf        - y0_bongflow) \\\r\n                                                              + LG.drift_y_data  * drift_y_mask * (data_y           - y0_bongflow) \\\r\n                                                              + LG.drift_y_sync  * drift_y_mask * (data_barf_y      - y0_bongflow) \\\r\n                                                              + LG.drift_y_guide * drift_y_mask * (y0_bongflow_orig - y0_bongflow)\r\n                                    \r\n                                    if torch.norm(y0_bongflow_orig - y0_bongflow) != 0 and EO(\"enable_y0_bongflow_update\"):\r\n                                        RK.update_transformer_options({'y0_style_pos': y0_bongflow.clone()})\r\n                                    \r\n                                    if not EO(\"skip_yt\"):\r\n                                        yt_0 = RK.get_x(y0_bongflow, noise_bongflow, sigma)\r\n                                        yt   = RK.get_x(y0_bongflow, noise_bongflow, s_tmp)\r\n                                        \r\n                                        yt_[row] = yt\r\n\r\n                                if ((LG.lgw[step_sched].item() in {1,0} and LG.lgw_inv[step_sched].item() in {1,0} and LG.lgw[step_sched] == 1-LG.lgw_sync[step_sched] and LG.lgw_inv[step_sched] == 1-LG.lgw_sync_inv[step_sched]) or EO(\"sync_speed_mode\")) and not EO(\"disable_sync_speed_mode\"):\r\n                                    data_y = y0_bongflow.clone()\r\n                                    eps_y  = RK.get_eps(yt_0, yt_[row], data_y, sigma, s_tmp)\r\n\r\n                                else:\r\n                                    eps_y, data_y = RK(yt_[row], s_tmp, yt_0,  sigma, transformer_options={'latent_type': 'yt'})\r\n                                    \r\n                                eps_x, data_x = RK(x_tmp, s_tmp, x_0, sigma, transformer_options={'latent_type': 'xt', 'row': row, \"x_tmp\": x_tmp})\r\n                                #if hasattr(model.inner_model.inner_model.diffusion_model, \"eps_out\"):\r\n                                    \r\n                                \r\n                                for sync_lure_iter in range(LG.sync_lure_iter):\r\n                                    if LG.sync_lure_sequence == \"x -> y\":\r\n\r\n                                        if lure_x_mask.abs().sum() > 0:\r\n                                            x_tmp = LG.swap_data(x_tmp, data_x, data_y, s_tmp, lure_x_mask)\r\n                                            eps_x_lure, data_x_lure = RK(x_tmp, s_tmp, x_0, sigma, transformer_options={'latent_type': 'xt'})\r\n                                            eps_x  = eps_x  + lure_x_mask * (eps_x_lure  - eps_x)\r\n                                            data_x = data_x + lure_x_mask * (data_x_lure - data_x)\r\n                                        \r\n                                        if lure_y_mask.abs().sum() > 0:                      \r\n                                            y_tmp = yt_[row].clone()\r\n                                            y_tmp = LG.swap_data(y_tmp, data_y, data_x, s_tmp, lure_y_mask) \r\n                                            eps_y_lure, data_y_lure = RK(y_tmp, s_tmp, yt_0, sigma, 
transformer_options={'latent_type': 'yt'})\r\n                                            eps_y  = eps_y  + lure_y_mask * (eps_y_lure  - eps_y)\r\n                                            data_y = data_y + lure_y_mask * (data_y_lure - data_y)\r\n                                        \r\n                                    elif LG.sync_lure_sequence == \"y -> x\":\r\n                                            \r\n                                        if lure_y_mask.abs().sum() > 0:                      \r\n                                            y_tmp = yt_[row].clone()\r\n                                            y_tmp = LG.swap_data(y_tmp, data_y, data_x, s_tmp, lure_y_mask) \r\n                                            eps_y_lure, data_y_lure = RK(y_tmp, s_tmp, yt_0, sigma, transformer_options={'latent_type': 'yt'})\r\n                                            eps_y  = eps_y  + lure_y_mask * (eps_y_lure  - eps_y)\r\n                                            data_y = data_y + lure_y_mask * (data_y_lure - data_y)\r\n                                        \r\n                                        if lure_x_mask.abs().sum() > 0:\r\n                                            x_tmp = LG.swap_data(x_tmp, data_x, data_y, s_tmp, lure_x_mask)\r\n                                            eps_x_lure, data_x_lure = RK(x_tmp, s_tmp, x_0, sigma, transformer_options={'latent_type': 'xt'})\r\n                                            eps_x  = eps_x  + lure_x_mask * (eps_x_lure  - eps_x)\r\n                                            data_x = data_x + lure_x_mask * (data_x_lure - data_x)\r\n\r\n                                    elif LG.sync_lure_sequence == \"xy -> xy\":\r\n                                        data_x_orig, data_y_orig = data_x.clone(), data_y.clone()\r\n                                        \r\n                                        if lure_x_mask.abs().sum() > 0:\r\n                                            x_tmp = LG.swap_data(x_tmp, data_x_orig, data_y_orig, s_tmp, lure_x_mask)\r\n                                            eps_x_lure, data_x_lure = RK(x_tmp, s_tmp, x_0, sigma, transformer_options={'latent_type': 'xt'})\r\n                                            eps_x  = eps_x  + lure_x_mask * (eps_x_lure  - eps_x)\r\n                                            data_x = data_x + lure_x_mask * (data_x_lure - data_x)\r\n                                        \r\n                                        if lure_y_mask.abs().sum() > 0:                      \r\n                                            y_tmp = yt_[row].clone()\r\n                                            y_tmp = LG.swap_data(y_tmp, data_y_orig, data_x_orig, s_tmp, lure_y_mask) \r\n                                            eps_y_lure, data_y_lure = RK(y_tmp, s_tmp, yt_0, sigma, transformer_options={'latent_type': 'yt'})\r\n                                            eps_y  = eps_y  + lure_y_mask * (eps_y_lure  - eps_y)\r\n                                            data_y = data_y + lure_y_mask * (data_y_lure - data_y)\r\n                                \r\n                                if EO(\"sync_proj_y\"):\r\n                                    d_collinear_d_lerp = get_collinear(eps_x, eps_y)  \r\n                                    d_lerp_ortho_d     = get_orthogonal(eps_y, eps_x)  \r\n                                    eps_y             = d_collinear_d_lerp + d_lerp_ortho_d\r\n                                    \r\n                                if 
EO(\"sync_proj_y2\"):\r\n                                    d_collinear_d_lerp = get_collinear(eps_y, eps_x)  \r\n                                    d_lerp_ortho_d     = get_orthogonal(eps_x, eps_y)  \r\n                                    eps_y             = d_collinear_d_lerp + d_lerp_ortho_d\r\n                                    \r\n                                if EO(\"sync_proj_x\"):\r\n                                    d_collinear_d_lerp = get_collinear(eps_y, eps_x)  \r\n                                    d_lerp_ortho_d     = get_orthogonal(eps_x, eps_y)  \r\n                                    eps_x             = d_collinear_d_lerp + d_lerp_ortho_d\r\n\r\n                                if EO(\"sync_proj_x2\"):\r\n                                    d_collinear_d_lerp = get_collinear(eps_x, eps_y)  \r\n                                    d_lerp_ortho_d     = get_orthogonal(eps_y, eps_x)  \r\n                                    eps_x             = d_collinear_d_lerp + d_lerp_ortho_d\r\n\r\n                                eps_x2y = RK.get_eps(x_0, x_[row], data_y, sigma, s_tmp)\r\n                                eps_x2y_[row] = eps_x2y\r\n                                \r\n                                eps_y2x = RK.get_eps(x_0, x_[row], data_y, sigma, s_tmp)\r\n                                eps_y2x_[row] = eps_y2x\r\n                                \r\n                                if RK.EXPONENTIAL:\r\n                                    if VE_MODEL:         # ZERO IS THIS                      # ONE IS THIS \r\n                                        eps_[row]  = sync_mask * eps_x   +   (1-sync_mask) * eps_x2y   +   weight_mask * (-eps_y + sigma*(-noise_bongflow)) \r\n                                        if EO(\"sync_x2y\"):\r\n                                            eps_[row]  = sync_mask * eps_x   +   (1-sync_mask) * eps_x2y   +   weight_mask * (-eps_x2y + sigma*(-noise_bongflow)) \r\n                                    else:\r\n                                        eps_[row]  = sync_mask * eps_x   +   (1-sync_mask) * eps_x2y   +   weight_mask * (-eps_y + sigma*(y0_bongflow-noise_bongflow))   #+   lure_x_mask * sigma*(data_y - data_x) \r\n                                        if EO(\"sync_x2y\"):\r\n                                            eps_[row]  = sync_mask * eps_x   -   (1-sync_mask) * eps_x2y   +   weight_mask * (-eps_x2y + sigma*(y0_bongflow-noise_bongflow)) \r\n                                        eps_yt_[row]  = sync_mask * eps_y   +   (1-sync_mask) * eps_y2x   +   weight_mask * (-eps_x + sigma*(y0_bongflow-noise_bongflow))         # differentiate guide as well toward the x pred?\r\n                                else:\r\n                                    if VE_MODEL:\r\n                                        eps_[row]  = sync_mask * eps_x   +   (1-sync_mask) * eps_x2y   +   weight_mask * (noise_bongflow - eps_y)\r\n                                        if EO(\"sync_x2y\"):\r\n                                            eps_[row]  = sync_mask * eps_x   +   (1-sync_mask) * eps_x2y   +   weight_mask * (noise_bongflow - eps_x2y)\r\n                                    else:\r\n                                        eps_[row]  = sync_mask * eps_x   +   (1-sync_mask) * eps_x2y   +   weight_mask * (noise_bongflow - eps_y - y0_bongflow)\r\n                                        if EO(\"sync_x2y\"):\r\n                                            eps_[row]  = sync_mask * eps_x   +   (1-sync_mask) * eps_x2y   +   weight_mask * (noise_bongflow 
- eps_x2y - y0_bongflow)\r\n                                        eps_yt_[row]  = sync_mask * eps_y   +   (1-sync_mask) * eps_y2x   +   weight_mask * (noise_bongflow - eps_x - y0_bongflow)         # differentiate guide as well toward the x pred?\r\n\r\n                                if VE_MODEL:\r\n                                    data_[row] = x_0   +   sync_mask * NS.h * eps_x   +   (1-sync_mask) * NS.h * eps_x2y   -   weight_mask * (sigma*(eps_y + noise_bongflow))   # -   lure_x_mask * (sigma*(eps_y + eps_x)) \r\n                                    data_barf_y = yt_0   +   sync_mask * NS.h * eps_y   +   (1-sync_mask) * NS.h * eps_y2x   -   weight_mask * (sigma*(eps_x + noise_bongflow))\r\n                                    if EO(\"sync_x2y\"):\r\n                                        data_[row] = x_0   +   sync_mask * NS.h * eps_x   +   (1-sync_mask) * NS.h * eps_x2y   -   weight_mask * (sigma*(eps_x2y + noise_bongflow)) \r\n\r\n                                else:\r\n\r\n                                    data_[row] = x_0   +   sync_mask * NS.h * eps_x   +   (1-sync_mask) * NS.h * eps_x2y   -   weight_mask * (NS.h * eps_y + sigma*(noise_bongflow-y0_bongflow)) \r\n                                    data_barf_y = yt_0   +   sync_mask * NS.h * eps_y   +   (1-sync_mask) * NS.h * eps_y2x   -   weight_mask * (NS.h * eps_x + sigma*(noise_bongflow-y0_bongflow)) \r\n                                    if EO(\"sync_x2y\"):\r\n                                        data_[row] = x_0   +   sync_mask * NS.h * eps_x   +   (1-sync_mask) * NS.h * eps_x2y   -   weight_mask * (NS.h * eps_x2y + sigma*(noise_bongflow-y0_bongflow)) \r\n\r\n                                if EO(\"data_is_y0_with_lure_x_mask\"):\r\n                                    data_[row] = data_[row] + lure_x_mask * (y0_bongflow - data_[row])\r\n\r\n                                if EO(\"eps_is_y0_with_lure_x_mask\"):\r\n                                    if RK.EXPONENTIAL:\r\n                                        eps_[row] = eps_[row] + lure_x_mask * ((y0_bongflow - x_0) - eps_[row])\r\n                                    else:\r\n                                        eps_[row] = eps_[row] + lure_x_mask * (((x_0 - y0_bongflow) / sigma) - eps_[row])\r\n                                data_barf   = data_[row]\r\n                                data_cached = data_x\r\n                                \r\n                                eps_x_ [row] = eps_x\r\n                                data_x_[row] = data_x\r\n                                \r\n                                eps_y_ [row] = eps_y\r\n                                data_y_[row] = data_y\r\n\r\n                                if EO(\"sync_use_fake_eps_y\"):\r\n                                    if RK.EXPONENTIAL:\r\n                                        if VE_MODEL:\r\n                                            eps_y_ [row] = sigma * ( - noise_bongflow)\r\n                                        else:\r\n                                            eps_y_ [row] = sigma * (y0_bongflow - noise_bongflow)\r\n                                    else:\r\n                                        if VE_MODEL:\r\n                                            eps_y_ [row] = noise_bongflow\r\n                                        else:\r\n                                            eps_y_ [row] = noise_bongflow - y0_bongflow\r\n                                if EO(\"sync_use_fake_data_y\"):\r\n                                    data_y_[row] = 
y0_bongflow\r\n                                    \r\n\r\n                            \r\n                            \r\n                            elif LG.guide_mode.startswith(\"flow\") and (LG.lgw[step_sched] > 0 or LG.lgw_inv[step_sched] > 0) and not FLOW_STOPPED and not EO(\"flow_sync\") :\r\n                                lgw_mask_, lgw_mask_inv_ = LG.get_masks_for_step(step)\r\n                                if not FLOW_STARTED and not FLOW_RESUMED:\r\n                                    FLOW_STARTED = True\r\n                                    data_x_prev_ = torch.zeros_like(data_prev_)\r\n\r\n                                    y0 = LG.HAS_LATENT_GUIDE * LG.mask * LG.y0   +   LG.HAS_LATENT_GUIDE_INV * LG.mask_inv * LG.y0_inv \r\n                                    \r\n                                    yx0 = y0.clone()\r\n                                    \r\n                                    if EO(\"flow_slerp\"):\r\n                                        y0_inv                 = LG.HAS_LATENT_GUIDE * LG.mask * LG.y0_inv   +   LG.HAS_LATENT_GUIDE_INV * LG.mask_inv * LG.y0 \r\n                                        y0 = LG.y0.clone()\r\n                                        y0_inv = LG.y0_inv.clone()\r\n                                        flow_slerp_guide_ratio = EO(\"flow_slerp_guide_ratio\", 0.5)\r\n                                        y_slerp                = slerp_tensor(flow_slerp_guide_ratio, y0, y0_inv)\r\n                                        yx0                    = y_slerp.clone()\r\n                                    \r\n                                    x_[row], x_0 =  yx0.clone(), yx0.clone()\r\n                                    if EO(\"guide_step_cutoff\") or EO(\"guide_step_min\"):\r\n                                        x_0_orig = yx0.clone()\r\n                                    \r\n                                    if EO(\"flow_yx0_init_y0_inv\"):\r\n                                        yx0 = LG.HAS_LATENT_GUIDE * LG.mask * LG.y0_inv   +   LG.HAS_LATENT_GUIDE_INV * LG.mask_inv * LG.y0\r\n                                    \r\n                                    if step > 0:\r\n                                        if EO(\"flow_manual_masks\"):\r\n                                            y0  = (1 - (LG.HAS_LATENT_GUIDE * LG.lgw[step_sched] * LG.mask + LG.HAS_LATENT_GUIDE_INV * LG.lgw_inv[step_sched] * LG.mask_inv)) * denoised   +   LG.HAS_LATENT_GUIDE * LG.lgw[step_sched] * LG.mask * LG.y0   +   LG.HAS_LATENT_GUIDE_INV * LG.lgw_inv[step_sched] * LG.mask_inv * LG.y0_inv\r\n                                        else:\r\n                                            y0  = (1 - (lgw_mask_ + lgw_mask_inv_)) * denoised   +   lgw_mask_ * LG.y0   +   lgw_mask_inv_ * LG.y0_inv\r\n                                        yx0 = y0.clone()\r\n                                        \r\n                                        if EO(\"flow_slerp\"):\r\n                                            if EO(\"flow_manual_masks\"):\r\n                                                y0_inv                 = (1 - (LG.HAS_LATENT_GUIDE * LG.lgw[step_sched] * LG.mask + LG.HAS_LATENT_GUIDE_INV * LG.lgw_inv[step_sched] * LG.mask_inv)) * denoised   +   LG.HAS_LATENT_GUIDE * LG.lgw[step_sched] * LG.mask * LG.y0_inv   +   LG.HAS_LATENT_GUIDE_INV * LG.lgw_inv[step_sched] * LG.mask_inv * LG.y0\r\n                                            else:\r\n                                                y0_inv  = (1 - (lgw_mask_ + lgw_mask_inv_)) * denoised   +  
 lgw_mask_ * LG.y0_inv   +   lgw_mask_inv_ * LG.y0\r\n                                            flow_slerp_guide_ratio = EO(\"flow_slerp_guide_ratio\", 0.5)\r\n                                            y_slerp                = slerp_tensor(flow_slerp_guide_ratio, y0, y0_inv)\r\n                                            yx0                    = y_slerp.clone()\r\n                                \r\n                                else:\r\n                                    yx0_prev = data_cached\r\n                                    if EO(\"flow_manual_masks\"):\r\n                                        yx0 = (1 - (LG.HAS_LATENT_GUIDE * LG.lgw[step_sched] * LG.mask + LG.HAS_LATENT_GUIDE_INV * LG.lgw_inv[step_sched] * LG.mask_inv)) * yx0_prev   +   LG.HAS_LATENT_GUIDE * LG.lgw[step_sched] * LG.mask * x_tmp   +   LG.HAS_LATENT_GUIDE_INV * LG.lgw_inv[step_sched] * LG.mask_inv * x_tmp\r\n                                    else:\r\n                                        yx0 = (1 - (lgw_mask_ + lgw_mask_inv_)) * yx0_prev   +   (lgw_mask_ + lgw_mask_inv_) * x_tmp\r\n\r\n                                    if not EO(\"flow_static_guides\"):\r\n                                        if EO(\"flow_manual_masks\"):\r\n                                            y0 = (1 - (LG.HAS_LATENT_GUIDE * LG.lgw[step_sched] * LG.mask + LG.HAS_LATENT_GUIDE_INV * LG.lgw_inv[step_sched] * LG.mask_inv)) * yx0_prev   +   LG.HAS_LATENT_GUIDE * LG.lgw[step_sched] * LG.mask * LG.y0   +   LG.HAS_LATENT_GUIDE_INV * LG.lgw_inv[step_sched] * LG.mask_inv * LG.y0_inv\r\n                                        else:\r\n                                            y0 = (1 - (lgw_mask_ + lgw_mask_inv_)) * yx0_prev   +   lgw_mask_ * LG.y0   +   lgw_mask_inv_ * LG.y0_inv\r\n                                        \r\n                                        if EO(\"flow_slerp\"):\r\n                                            if EO(\"flow_manual_masks\"):\r\n                                                y0_inv = (1 - (LG.HAS_LATENT_GUIDE * LG.lgw[step_sched] * LG.mask + LG.HAS_LATENT_GUIDE_INV * LG.lgw_inv[step_sched] * LG.mask_inv)) * yx0_prev   +   LG.HAS_LATENT_GUIDE * LG.lgw[step_sched] * LG.mask * LG.y0_inv   +   LG.HAS_LATENT_GUIDE_INV * LG.lgw_inv[step_sched] * LG.mask_inv * LG.y0\r\n                                            else:\r\n                                                y0_inv = (1 - (lgw_mask_ + lgw_mask_inv_)) * yx0_prev   +   lgw_mask_ * LG.y0_inv   +   lgw_mask_inv_ * LG.y0\r\n\r\n                                y0_orig = y0.clone()\r\n                                if EO(\"flow_proj_xy\"):\r\n                                    d_collinear_d_lerp = get_collinear(yx0, y0_orig)  \r\n                                    d_lerp_ortho_d     = get_orthogonal(y0_orig, yx0)  \r\n                                    y0                 = d_collinear_d_lerp + d_lerp_ortho_d\r\n                                \r\n                                if EO(\"flow_proj_yx\"):\r\n                                    d_collinear_d_lerp = get_collinear(y0_orig, yx0)  \r\n                                    d_lerp_ortho_d     = get_orthogonal(yx0, y0_orig)  \r\n                                    yx0                = d_collinear_d_lerp + d_lerp_ortho_d\r\n                                \r\n                                y0_inv_orig = None\r\n                                if EO(\"flow_proj_xy_inv\"):\r\n                                    y0_inv_orig = y0_inv.clone()\r\n                                    
d_collinear_d_lerp = get_collinear(yx0, y0_inv)  \r\n                                    d_lerp_ortho_d     = get_orthogonal(y0_inv, yx0)  \r\n                                    y0_inv             = d_collinear_d_lerp + d_lerp_ortho_d\r\n                                    \r\n                                if EO(\"flow_proj_yx_inv\"):\r\n                                    y0_inv_orig = y0_inv if y0_inv_orig is None else y0_inv_orig\r\n                                    d_collinear_d_lerp = get_collinear(y0_inv_orig, yx0)  \r\n                                    d_lerp_ortho_d     = get_orthogonal(yx0, y0_inv_orig)  \r\n                                    yx0                = d_collinear_d_lerp + d_lerp_ortho_d\r\n                                del y0_orig\r\n\r\n                                flow_cossim_iter = EO(\"flow_cossim_iter\", 1)\r\n\r\n                                if step == 0:\r\n                                    noise_yt = noise_fn(y0, sigma, sigma_next, NS.noise_sampler, flow_cossim_iter) # normalize_zscore(NS.noise_sampler(sigma=sigma, sigma_next=sigma_next), channelwise=True, inplace=True)\r\n                                if not EO(\"flow_disable_renoise_y0\"):\r\n                                    if noise_yt is None:\r\n                                        noise_yt = noise_fn(x_0, sigma, sigma_next, NS.noise_sampler, flow_cossim_iter)\r\n                                    else:\r\n                                        noise_yt = (1-eta) * noise_yt + eta * noise_fn(x_0, sigma, sigma_next, NS.noise_sampler, flow_cossim_iter)\r\n\r\n                                if VE_MODEL:\r\n                                    yt        = y0 + s_tmp * noise_yt\r\n                                else:\r\n                                    yt        = (NS.sigma_max-s_tmp) * y0 + (s_tmp/NS.sigma_max) * noise_yt\r\n                                if not EO(\"flow_disable_doublenoise_y0\"):\r\n                                    if noise_yt is None:\r\n                                        noise_yt = noise_fn(x_0, sigma, sigma_next, NS.noise_sampler, flow_cossim_iter)\r\n                                    else:\r\n                                        noise_yt = (1-eta) * noise_yt + eta * noise_fn(x_0, sigma, sigma_next, NS.noise_sampler, flow_cossim_iter)\r\n\r\n                                if VE_MODEL:\r\n                                    y0_noised = y0 + sigma * noise_yt\r\n                                else:\r\n                                    y0_noised = (NS.sigma_max-sigma) * y0 + sigma * noise_yt\r\n                                \r\n                                if EO(\"flow_slerp\"):\r\n                                    noise = noise_fn(y0_inv, sigma, sigma_next, NS.noise_sampler, flow_cossim_iter) \r\n                                    yt_inv        = (NS.sigma_max-s_tmp) * y0_inv + (s_tmp/NS.sigma_max) * noise\r\n                                    if not EO(\"flow_disable_doublenoise_y0_inv\"):\r\n                                        noise = noise_fn(y0_inv, sigma, sigma_next, NS.noise_sampler, flow_cossim_iter) \r\n                                    y0_noised_inv = (NS.sigma_max-sigma) * y0_inv + sigma * noise\r\n                                \r\n                                if step == 0:\r\n                                    noise_xt = noise_fn(yx0, sigma, sigma_next, NS.noise_sampler, flow_cossim_iter) \r\n                                if EO(\"flow_slerp\"):\r\n                                    xt         = yx0 + 
(s_tmp/NS.sigma_max) * (noise - y_slerp)\r\n                                    if not EO(\"flow_disable_doublenoise_x_0\"):\r\n                                        noise = noise_fn(x_0, sigma, sigma_next, NS.noise_sampler, flow_cossim_iter) \r\n                                    x_0_noised = x_0 + sigma * (noise - y_slerp)\r\n                                else:\r\n                                    if not EO(\"flow_disable_renoise_x_0\"):\r\n                                        if noise_xt is None:\r\n                                            noise_xt = noise_fn(x_0, sigma, sigma_next, NS.noise_sampler, flow_cossim_iter)\r\n                                        else:\r\n                                            noise_xt = (1-eta_substep) * noise_xt + eta_substep * noise_fn(x_0, sigma, sigma_next, NS.noise_sampler, flow_cossim_iter)\r\n\r\n                                    if VE_MODEL:\r\n                                        xt         = yx0 + (s_tmp) * yx0 + (s_tmp) * (noise_xt - y0)\r\n                                    else:\r\n                                        xt         = yx0 + (s_tmp/NS.sigma_max) * (noise_xt - y0)\r\n                                    if not EO(\"flow_disable_doublenoise_x_0\"):\r\n                                        if noise_xt is None:\r\n                                            noise_xt = noise_fn(x_0, sigma, sigma_next, NS.noise_sampler, flow_cossim_iter)\r\n                                        else:\r\n                                            noise_xt = (1-eta_substep) * noise_xt + eta_substep * noise_fn(x_0, sigma, sigma_next, NS.noise_sampler, flow_cossim_iter)\r\n                                    if VE_MODEL:\r\n                                        x_0_noised = x_0 + (sigma) * x_0 + (sigma) * (noise_xt - y0)\r\n                                    else:\r\n                                        x_0_noised = x_0 + (sigma/NS.sigma_max) * (noise_xt - y0)    # just lerp noise add, (1-sigma)*y0 + sigma*noise assuming x_0 == y0, which is true initially...\r\n                                \r\n                                eps_y, data_y = RK(yt, s_tmp, y0_noised,  sigma, transformer_options={'latent_type': 'yt'})\r\n                                eps_x, data_x = RK(xt, s_tmp, x_0_noised, sigma, transformer_options={'latent_type': 'xt'})\r\n                    \r\n                                if EO(\"flow_slerp\"):\r\n                                    eps_y_inv, data_y_inv = RK(yt_inv, s_tmp, y0_noised_inv, sigma, transformer_options={'latent_type': 'yt_inv'})\r\n                                \r\n                                if LG.lgw[step+1] == 0 and LG.lgw_inv[step+1] == 0:    # break out of differentiating x0 and return to differentiating eps/velocity field\r\n                                    if EO(\"flow_shit_out_yx0\"):\r\n                                        eps_ [row]       = eps_x - eps_y\r\n                                        data_[row]       = yx0\r\n                                        if row == 0:\r\n                                            x_[row] = x_0 = xt \r\n                                        else:\r\n                                            x_[row] = xt\r\n                                    if not EO(\"flow_shit_out_new\"):\r\n                                        eps_ [row]       = eps_x\r\n                                        data_[row]       = data_x\r\n                                        if row == 0:\r\n                                            
x_[row] = x_0 = xt \r\n                                        else:\r\n                                            x_[row] = xt\r\n                                    \r\n                                    else:\r\n                                        eps_ [row]        = (1 - (lgw_mask_ + lgw_mask_inv_)) * eps_x   +   (lgw_mask_ + lgw_mask_inv_) * eps_y\r\n                                        data_[row]        = (1 - (lgw_mask_ + lgw_mask_inv_)) * data_x   +   (lgw_mask_ + lgw_mask_inv_) * data_y\r\n                                        if row == 0:\r\n                                            x_[row] = x_0 = (1 - (lgw_mask_ + lgw_mask_inv_)) * xt   +   (lgw_mask_ + lgw_mask_inv_) * yt \r\n                                        else:\r\n                                            x_[row] = (1 - (lgw_mask_ + lgw_mask_inv_)) * xt   +   (lgw_mask_ + lgw_mask_inv_) * yt\r\n                                \r\n                                    FLOW_STOPPED = True\r\n                                else:\r\n                                    if not EO(\"flow_slerp\"):\r\n                                        if RK.EXPONENTIAL:\r\n                                            eps_y_alt = data_y - x_0\r\n                                            eps_x_alt = data_x - x_0\r\n                                        else:\r\n                                            eps_y_alt = (x_0 - data_y) / sigma\r\n                                            eps_x_alt = (x_0 - data_x) / sigma\r\n                                            \r\n                                        if EO(\"flow_y_zero\"):\r\n                                            eps_y_alt *= LG.mask\r\n                                        \r\n                                        eps_[row]  = eps_yx = (eps_y_alt - eps_x_alt)\r\n                                        eps_y_lin           = (x_0 - data_y) / sigma\r\n                                        if EO(\"flow_y_zero\"):\r\n                                            eps_y_lin *= LG.mask\r\n                                        eps_x_lin           = (x_0 - data_x) / sigma\r\n                                        eps_yx_lin          = (eps_y_lin - eps_x_lin)\r\n                                        \r\n                                        data_[row] = (1 - (lgw_mask_ + lgw_mask_inv_)) * data_x   +   (lgw_mask_ + lgw_mask_inv_) * data_y\r\n                                        \r\n                                        if EO(\"flow_reverse_data_masks\"):\r\n                                            data_[row] = (1 - (lgw_mask_ + lgw_mask_inv_)) * data_y   +   (lgw_mask_ + lgw_mask_inv_) * data_x\r\n\r\n                                        if flow_sync_eps != 0.0:\r\n                                            if RK.EXPONENTIAL:\r\n                                                eps_[row] = (1-flow_sync_eps) * eps_[row] + flow_sync_eps * (data_[row] - x_0)\r\n                                            else:\r\n                                                eps_[row] = (1-flow_sync_eps) * eps_[row] + flow_sync_eps * (x_0 - data_[row]) / sigma\r\n                                        \r\n                                        if EO(\"flow_sync_eps_mask\"): \r\n                                            flow_sync_eps = EO(\"flow_sync_eps_mask\", 1.0)\r\n                                            if RK.EXPONENTIAL:\r\n                                                eps_[row] = (lgw_mask_ + lgw_mask_inv_) * (1-flow_sync_eps) * eps_[row] + (1 - 
(lgw_mask_ + lgw_mask_inv_)) * flow_sync_eps * (data_[row] - x_0) \r\n                                            else:\r\n                                                eps_[row] = (lgw_mask_ + lgw_mask_inv_) * (1-flow_sync_eps) * eps_[row] + (1 - (lgw_mask_ + lgw_mask_inv_)) * flow_sync_eps * (x_0 - data_[row]) / sigma\r\n\r\n                                        if EO(\"flow_sync_eps_revmask\"): \r\n                                            flow_sync_eps = EO(\"flow_sync_eps_revmask\", 1.0)\r\n                                            if RK.EXPONENTIAL:\r\n                                                eps_[row] = (1 - (lgw_mask_ + lgw_mask_inv_)) * (1-flow_sync_eps) * eps_[row] + (lgw_mask_ + lgw_mask_inv_) * flow_sync_eps * (data_[row] - x_0) \r\n                                            else:\r\n                                                eps_[row] = (1 - (lgw_mask_ + lgw_mask_inv_)) * (1-flow_sync_eps) * eps_[row] + (lgw_mask_ + lgw_mask_inv_) * flow_sync_eps * (x_0 - data_[row]) / sigma\r\n\r\n                                        if EO(\"flow_sync_eps_maskonly\"):\r\n                                            flow_sync_eps = EO(\"flow_sync_eps_maskonly\", 1.0)\r\n                                            if RK.EXPONENTIAL:\r\n                                                eps_[row] = (lgw_mask_ + lgw_mask_inv_) * eps_[row] + (1 - (lgw_mask_ + lgw_mask_inv_)) * (data_[row] - x_0) \r\n                                            else:\r\n                                                eps_[row] = (lgw_mask_ + lgw_mask_inv_) * eps_[row] + (1 - (lgw_mask_ + lgw_mask_inv_)) * (x_0 - data_[row]) / sigma\r\n\r\n                                        if EO(\"flow_sync_eps_revmaskonly\"): \r\n                                            flow_sync_eps = EO(\"flow_sync_eps_revmaskonly\", 1.0)\r\n                                            if RK.EXPONENTIAL:\r\n                                                eps_[row] = (1 - (lgw_mask_ + lgw_mask_inv_)) * eps_[row] + (lgw_mask_ + lgw_mask_inv_) * (data_[row] - x_0) \r\n                                            else:\r\n                                                eps_[row] = (1 - (lgw_mask_ + lgw_mask_inv_)) * eps_[row] + (lgw_mask_ + lgw_mask_inv_) * (x_0 - data_[row]) / sigma\r\n\r\n                                    if EO(\"flow_slerp\"):\r\n                                        if RK.EXPONENTIAL:\r\n                                            eps_y_alt     = data_y     - x_0\r\n                                            eps_y_alt_inv = data_y_inv - x_0\r\n                                            eps_x_alt     = data_x     - x_0\r\n                                        else:\r\n                                            eps_y_alt     = (x_0 - data_y)     / sigma\r\n                                            eps_y_alt_inv = (x_0 - data_y_inv) / sigma\r\n                                            eps_x_alt     = (x_0 - data_x)     / sigma\r\n                                        \r\n                                        flow_slerp_ratio2 = EO(\"flow_slerp_ratio2\", 0.5)\r\n\r\n                                        eps_yx     = (eps_y_alt - eps_x_alt)\r\n                                        eps_y_lin  = (x_0 - data_y) / sigma\r\n                                        eps_x_lin  = (x_0 - data_x) / sigma\r\n                                        eps_yx_lin = (eps_y_lin - eps_x_lin)\r\n                                        \r\n                                        eps_yx_inv     = (eps_y_alt_inv 
- eps_x_alt)\r\n                                        eps_y_lin_inv  = (x_0 - data_y_inv) / sigma\r\n                                        eps_x_lin      = (x_0 - data_x)     / sigma\r\n                                        eps_yx_lin_inv = (eps_y_lin_inv - eps_x_lin)\r\n                                        \r\n                                        data_row     = x_0 - sigma * eps_yx_lin\r\n                                        data_row_inv = x_0 - sigma * eps_yx_lin_inv\r\n                                        \r\n                                        if EO(\"flow_slerp_similarity_ratio\"):\r\n                                            flow_slerp_similarity_ratio = EO(\"flow_slerp_similarity_ratio\", 1.0)\r\n                                            flow_slerp_ratio2           = find_slerp_ratio_grid(data_row, data_row_inv, LG.y0.clone(), LG.y0_inv.clone(), flow_slerp_similarity_ratio)\r\n                                        \r\n                                        eps_ [row] = slerp_tensor(flow_slerp_ratio2, eps_yx,   eps_yx_inv)\r\n                                        data_[row] = slerp_tensor(flow_slerp_ratio2, data_row, data_row_inv)\r\n                                        \r\n                                        if EO(\"flow_slerp_autoalter\"):\r\n                                            data_row_slerp = slerp_tensor(0.5, data_row, data_row_inv)\r\n                                            y0_pearsim     = get_pearson_similarity(data_row_slerp, y0)\r\n                                            y0_pearsim_inv = get_pearson_similarity(data_row_slerp, y0_inv)\r\n                                            \r\n                                            if y0_pearsim > y0_pearsim_inv:\r\n                                                data_[row] = data_row_inv \r\n                                                eps_ [row] = (eps_y_alt_inv - eps_x_alt)\r\n                                            else:\r\n                                                data_[row] = data_row\r\n                                                eps_ [row] = (eps_y_alt     - eps_x_alt)\r\n                                            \r\n                                        if EO(\"flow_slerp_recalc_eps_row\"):\r\n                                            if RK.EXPONENTIAL:\r\n                                                eps_[row]  = data_[row] - x_0\r\n                                            else:\r\n                                                eps_[row]  = (x_0 - data_[row]) / sigma\r\n                                        \r\n                                        if EO(\"flow_slerp_recalc_data_row\"):\r\n                                            if RK.EXPONENTIAL:\r\n                                                data_[row] = x_0 + eps_[row]\r\n                                            else:\r\n                                                data_[row] = x_0 - sigma * eps_[row]\r\n\r\n                                    data_cached = data_x \r\n\r\n                            if step < EO(\"direct_pre_pseudo_guide\", 0) and step > 0:\r\n                                for i_pseudo in range(EO(\"direct_pre_pseudo_guide_iter\", 1)):\r\n                                    x_tmp += LG.lgw[step_sched] * LG.mask * (NS.sigma_max - s_tmp) * (LG.y0 - denoised)     +     LG.lgw_inv[step_sched] * LG.mask_inv * (NS.sigma_max - s_tmp) * (LG.y0_inv - denoised)\r\n                                    eps_[row], data_[row] = RK(x_tmp, s_tmp, x_0, sigma)\r\n    
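                        # MODEL CALL dispatch (sketch): sync guides reuse the eps_/data_ computed above; active\r\n                            # flow modes (nonzero guide weights, not yet stopped) also skip the call; \"lure\" modes\r\n                            # call the model on the guide latent ('yt') here and re-call on 'xt' further below.\r\n                            # Identities used throughout:   exponential RK: eps = data - x_0   |   linear RK: eps = (x_0 - data) / sigma\r\n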
                        \r\n                            # MODEL CALL -----------------------------------------------------------------------------------------------------------------------------------\r\n                            \r\n                            if    SYNC_GUIDE_ACTIVE:\r\n                                pass\r\n                            elif LG.guide_mode.startswith(\"flow\") and not FLOW_STOPPED and (LG.lgw[step_sched] != 0 or LG.lgw_inv[step_sched] != 0):\r\n                                pass\r\n                            elif LG.guide_mode.startswith(\"lure\") and (LG.lgw[step_sched] > 0 or LG.lgw_inv[step_sched] > 0):\r\n                                eps_[row], data_[row] = RK(x_tmp, s_tmp, x_0, sigma, transformer_options={'latent_type': 'yt'})\r\n                                \r\n                            else:\r\n                                if EO(\"protoshock\") and StyleMMDiT is not None and StyleMMDiT.data_shock_start_step <= step_sched < StyleMMDiT.data_shock_end_step:\r\n                                    eps_[row], data_[row] = RK(x_tmp, s_tmp, x_0, sigma, transformer_options={'row': row, 'x_tmp': x_tmp, 'sigma_next': sigma_next})\r\n                                    data_wct = StyleMMDiT.apply_data_shock(data_[row])\r\n                                    if VE_MODEL:\r\n                                        x_tmp = x_tmp + (data_wct - data_[row])\r\n                                    else:\r\n                                        x_tmp = x_tmp + (NS.sigma_max-NS.s_[row]) * (data_wct - data_[row])\r\n                                    #x_[row+RK.row_offset] = x_tmp\r\n                                    x_[row] = x_tmp\r\n                                    if row == 0:\r\n                                        x_0 = x_tmp\r\n                                        \r\n                                if EO(\"preshock\") and StyleMMDiT is not None:\r\n                                    eps_[row], data_[row] = RK(x_tmp, s_tmp, x_0, sigma, transformer_options={'row': row, 'x_tmp': x_tmp, 'sigma_next': sigma_next})\r\n                                    data_wct = StyleMMDiT.apply_data_shock(data_[row])   # shock target for this call\r\n                                    if VE_MODEL:\r\n                                        x_tmp = x_tmp + (data_wct - data_[row])\r\n                                    else:\r\n                                        x_tmp = x_tmp + (NS.sigma_max-NS.s_[row]) * (data_wct - data_[row])\r\n                                    x_[row] = x_tmp\r\n                                    if row == 0:\r\n                                        x_0 = x_tmp\r\n                                \r\n                                eps_[row], data_[row] = RK(x_tmp, s_tmp, x_0, sigma, transformer_options={'row': row, 'x_tmp': x_tmp, 'sigma_next': sigma_next})\r\n                                \r\n                                #if EO(\"yoloshock\") and StyleMMDiT is not None and StyleMMDiT.data_shock_start_step <= step_sched < StyleMMDiT.data_shock_end_step:\r\n                                if not EO(\"disable_yoloshock\") and StyleMMDiT is not None and StyleMMDiT.data_shock_start_step <= step_sched < StyleMMDiT.data_shock_end_step:\r\n                                    data_wct = StyleMMDiT.apply_data_shock(data_[row])\r\n                                    if VE_MODEL:\r\n                                        x_tmp = x_tmp + (data_wct - data_[row])\r\n                                    else:\r\n                                        x_tmp = x_tmp + (NS.sigma_max-NS.s_[row]) * (data_wct - data_[row])\r\n                                    #x_[row+RK.row_offset] = x_tmp\r\n                                    x_[row] = x_tmp\r\n                                    if row == 0:\r\n                                        x_0 = x_tmp\r\n                                    data_[row] = data_wct\r\n                                    if RK.EXPONENTIAL:\r\n                                        eps_[row] = data_[row] - x_0\r\n                                    else:\r\n                                        eps_[row] = (x_0 - data_[row]) / sigma\r\n                                    \r\n                                \r\n                                if hasattr(model.inner_model.inner_model.diffusion_model, \"eps_out\"):  # fp64 model out override, for testing only\r\n                                    eps_out = model.inner_model.inner_model.diffusion_model.eps_out\r\n                                    del model.inner_model.inner_model.diffusion_model.eps_out\r\n                                    if eps_out.shape[0] == 2:\r\n                                        data_cond   = x_0 - sigma * eps_out[1]\r\n                                        data_uncond = x_0 - sigma * eps_out[0]\r\n                                        data_row    = data_uncond + model.inner_model.cfg * (data_cond - data_uncond)\r\n                                        eps_row     = (x_0 - data_row) / sigma   # CFG-combined eps\r\n                                    else:\r\n                                        data_row = x_0 - sigma * eps_out\r\n                                        eps_row  = eps_out\r\n                                    if RK.EXPONENTIAL:\r\n                                        eps_row = data_row - x_0\r\n                                    if torch.norm(eps_row - eps_[row]) < 0.01 and torch.norm(data_row - data_[row]) < 0.01:  # if some other cfg/post-cfg func was used, detect and ignore this\r\n                                        eps_[row] = eps_row\r\n                                        data_[row] = data_row  \r\n                                \r\n\r\n                            if RK.extra_args['model_options']['transformer_options'].get('y0_standard_guide') is not None:\r\n                                if hasattr(model.inner_model.inner_model.diffusion_model, \"y0_standard_guide\"):\r\n                                    LG.y0 = model.inner_model.inner_model.diffusion_model.y0_standard_guide.clone()\r\n                                    del model.inner_model.inner_model.diffusion_model.y0_standard_guide\r\n                                    RK.extra_args['model_options']['transformer_options']['y0_standard_guide'] = None\r\n                                \r\n                            if RK.extra_args['model_options']['transformer_options'].get('y0_inv_standard_guide') is not None:\r\n                                if hasattr(model.inner_model.inner_model.diffusion_model, \"y0_inv_standard_guide\"):\r\n                                    LG.y0_inv = model.inner_model.inner_model.diffusion_model.y0_inv_standard_guide.clone()\r\n                                    del model.inner_model.inner_model.diffusion_model.y0_inv_standard_guide\r\n                                    
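# one-shot handoff: the diffusion model may refresh the standard guide latents during\r\n                                    # its forward pass; they are consumed into LG above and the slot is cleared below.\r\n                                    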
RK.extra_args['model_options']['transformer_options']['y0_inv_standard_guide'] = None\r\n\r\n                            if LG.guide_mode.startswith(\"lure\") and (LG.lgw[step_sched] > 0 or LG.lgw_inv[step_sched] > 0):\r\n                                x_tmp = LG.process_guides_data_substep(x_tmp, data_[row], step_sched, s_tmp)\r\n                                eps_[row], data_[row] = RK(x_tmp, s_tmp, x_0, sigma, transformer_options={'latent_type': 'xt'})\r\n\r\n                            if momentum != 0.0:\r\n                                data_[row] = data_[row] - momentum * (data_prev_[0] - data_[row])  # note the sign: positive momentum pushes data_[row] away from data_prev_[0]\r\n                                eps_[row]  = RK.get_epsilon(x_0, x_tmp, data_[row], sigma, s_tmp)    # recompute eps so it stays consistent with the momentum-adjusted data prediction\r\n\r\n                            if row < RK.rows and noise_scaling_weight != 0 and noise_scaling_type in {\"sampler\", \"sampler_substep\"}:\r\n                                if noise_scaling_type == \"sampler_substep\":\r\n                                    sub_lying_su, sub_lying_sigma, sub_lying_sd, sub_lying_alpha_ratio = NS.get_sde_substep(NS.s_[row], NS.s_[row+RK.row_offset+RK.multistep_stages], noise_scaling_eta, noise_scaling_mode)\r\n                                    for _ in range(noise_scaling_cycles-1):\r\n                                        sub_lying_su, sub_lying_sigma, sub_lying_sd, sub_lying_alpha_ratio = NS.get_sde_substep(NS.s_[row], sub_lying_sd, noise_scaling_eta, noise_scaling_mode)\r\n                                    lying_s_[row+1] = sub_lying_sd\r\n                                substep_noise_scaling_ratio = NS.s_[row+1]/lying_s_[row+1]\r\n                                if RK.multistep_stages > 0:\r\n                                    substep_noise_scaling_ratio = sigma_next/lying_sd                   # TODO: may fail with resample\r\n                                \r\n                                lying_eps_row_factor = (1 - noise_scaling_weight*(substep_noise_scaling_ratio-1))\r\n\r\n                        # GUIDE \r\n                        if not EO(\"disable_guides_eps_substep\"):\r\n                            eps_, x_      = LG.process_guides_substep(x_0, x_, eps_,      data_, row, step_sched, NS.sigma, NS.sigma_next, NS.sigma_down, NS.s_, epsilon_scale, RK)\r\n                        if not EO(\"disable_guides_eps_prev_substep\"):\r\n                            eps_prev_, x_ = LG.process_guides_substep(x_0, x_, eps_prev_, data_, row, step_sched, NS.sigma, NS.sigma_next, NS.sigma_down, NS.s_, epsilon_scale, RK)\r\n                        \r\n                        if LG.y0_mean is not None and LG.y0_mean.sum() != 0.0:\r\n\r\n                            if EO(\"guide_mean_scattersort\"):\r\n                                data_row_mean = apply_scattersort_spatial(data_[row], LG.y0_mean)\r\n                                eps_row_mean  = RK.get_eps(x_0, data_row_mean, s_tmp)\r\n                            else:\r\n                                eps_row_mean = eps_[row] - eps_[row].mean(dim=(-2,-1), keepdim=True) + (LG.y0_mean - x_0).mean(dim=(-2,-1), keepdim=True)\r\n                            \r\n                            if LG.mask_mean is not None:\r\n                                eps_row_mean = LG.mask_mean * eps_row_mean + (1-LG.mask_mean) * eps_[row]\r\n                            \r\n                            eps_[row] = eps_[row] + LG.lgw_mean[step_sched] * (eps_row_mean - eps_[row])\r\n                            \r\n                        if (full_iter == 0 and 
diag_iter == 0)   or   EO(\"newton_iter_post_use_on_implicit_steps\"):\r\n                            x_, eps_ = RK.newton_iter(x_0, x_, eps_, eps_prev_, data_, NS.s_, row, NS.h, sigmas, step, \"post\", SYNC_GUIDE_ACTIVE)\r\n\r\n                    # UPDATE   #for row in range(RK.rows - RK.multistep_stages - RK.row_offset + 1):\r\n                    if EO(\"exp2lin_override\") and RK.EXPONENTIAL:\r\n                        x_ = RK.update_substep(x_0, x_, eps_, eps_prev_, row, RK.row_offset, NS.h_new, NS.h_new_orig, lying_eps_row_factor=lying_eps_row_factor, sigma=sigma)   #modifies eps_[row] if lying_eps_row_factor != 1.0\r\n                        #x_ = RK.update_substep(x_0, x_, eps_, eps_prev_, row, RK.row_offset, -sigma*NS.h_new, -sigma*NS.h_new_orig, lying_eps_row_factor=lying_eps_row_factor)   #modifies eps_[row] if lying_eps_row_factor != 1.0\r\n                    else:\r\n                        x_ = RK.update_substep(x_0, x_, eps_, eps_prev_, row, RK.row_offset, NS.h_new, NS.h_new_orig, lying_eps_row_factor=lying_eps_row_factor)   #modifies eps_[row] if lying_eps_row_factor != 1.0\r\n                    \r\n                    x_[row+RK.row_offset] = NS.rebound_overshoot_substep(x_0, x_[row+RK.row_offset])\r\n                    \r\n                    if SYNC_GUIDE_ACTIVE: #yt_ is not None:\r\n                        #yt_ = RK.update_substep(yt_0, yt_, eps_y_, eps_prev_y_, row, RK.row_offset, NS.h_new, NS.h_new_orig, lying_eps_row_factor=lying_eps_row_factor)   #modifies eps_[row] if lying_eps_row_factor != 1.0\r\n                        yt_ = RK.update_substep(yt_0, yt_, eps_yt_, eps_prev_y_, row, RK.row_offset, NS.h_new, NS.h_new_orig, lying_eps_row_factor=lying_eps_row_factor, sigma=sigma)   #modifies eps_[row] if lying_eps_row_factor != 1.0\r\n                        yt_[row+RK.row_offset] = NS.rebound_overshoot_substep(yt_0, yt_[row+RK.row_offset])\r\n                    \r\n                    if not RK.IMPLICIT and NS.noise_mode_sde_substep != \"hard_sq\":\r\n\r\n                        x_means_per_substep = x_[row+RK.row_offset].mean(dim=(-2,-1), keepdim=True)\r\n\r\n                        if not LG.guide_mode.startswith(\"flow\") or (LG.lgw[step_sched] == 0 and LG.lgw[step+1] == 0   and   LG.lgw_inv[step_sched] == 0 and LG.lgw_inv[step+1] == 0):\r\n                            #if LG.guide_mode.startswith(\"sync\") and (LG.lgw[step_sched] != 0.0 or LG.lgw_inv[step_sched] != 0.0):\r\n                            #    x_row_tmp = x_[row+RK.row_offset].clone()\r\n                                \r\n                            #x_[row+RK.row_offset] = NS.swap_noise_substep(x_0, x_[row+RK.row_offset], mask=sde_mask, guide=LG.y0)\r\n                            x_row_tmp = NS.swap_noise_substep(x_0, x_[row+RK.row_offset], mask=sde_mask, guide=LG.y0)\r\n                            \r\n                            #if EO(\"eps_adain_smartnoise_substep\"):\r\n                            if LG.ADAIN_NOISE_MODE == \"smart\":\r\n                                #eps_row_next = (x_0 - x_[row+RK.row_offset]) / (sigma - NS.s_[row+RK.row_offset])\r\n                                #denoised_row_next = x_0 - sigma * eps_row_next\r\n                                #\r\n                                #eps_swapped = (x_row_tmp - denoised_row_next) / NS.s_[row+RK.row_offset]\r\n                                #\r\n                                #noise_row_next = eps_swapped + denoised_row_next\r\n                                #z_[row+RK.row_offset] = noise_row_next\r\n                      
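          # \"smart\" AdaIN noise (active code below): z_ is rebuilt from the swapped x_row_tmp and\r\n                                # the predicted data_next, then handed to the model via update_transformer_options.\r\n                      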
          #RK.update_transformer_options({'z_' : z_})\r\n                                data_next = denoised + NS.h_new * RK.zum(row+RK.row_offset+RK.multistep_stages, data_, data_prev_) \r\n                                if VE_MODEL:\r\n                                    z_[row+RK.row_offset] = (x_row_tmp - data_next) / NS.s_[row+RK.row_offset]\r\n                                else:\r\n                                    z_[row+RK.row_offset] = (x_row_tmp - (NS.sigma_max-NS.s_[row+RK.row_offset])*data_next) / NS.s_[row+RK.row_offset]\r\n                                RK.update_transformer_options({'z_' : z_})\r\n                            \r\n                            elif LG.ADAIN_NOISE_MODE == \"update\": #EO(\"eps_adain\"):\r\n                                x_init_new = (x_row_tmp - x_[row+RK.row_offset]) / s_tmp + x_init\r\n                                x_0 += sigma * (x_init_new - x_init)\r\n                                x_init = x_init_new\r\n                                RK.update_transformer_options({'x_init' : x_init.clone()})\r\n                            \r\n                            if SYNC_GUIDE_ACTIVE:\r\n                                noise_bongflow_new = (x_row_tmp - x_[row+RK.row_offset]) / s_tmp + noise_bongflow\r\n                                yt_[row+RK.row_offset] += s_tmp * (noise_bongflow_new - noise_bongflow)\r\n                                x_0 += sigma * (noise_bongflow_new - noise_bongflow)\r\n                                noise_bongflow = noise_bongflow_new\r\n                            \r\n                            x_[row+RK.row_offset] = x_row_tmp\r\n                            \r\n                        elif LG.guide_mode.startswith(\"flow\"):\r\n                            pass\r\n\r\n                    if not LG.guide_mode.startswith(\"lure\"):\r\n                        x_[row+RK.row_offset] = LG.process_guides_data_substep(x_[row+RK.row_offset], data_[row], step_sched, NS.s_[row])\r\n                    \r\n                    if ((not EO(\"protoshock\") and not EO(\"yoloshock\"))  or EO(\"fuckitshock\")) and StyleMMDiT is not None and StyleMMDiT.data_shock_start_step <= step_sched < StyleMMDiT.data_shock_end_step:\r\n                        data_wct = StyleMMDiT.apply_data_shock(data_[row])\r\n                        if VE_MODEL:\r\n                            x_[row+RK.row_offset] = x_[row+RK.row_offset] + (data_wct - data_[row])\r\n                        else:\r\n                            x_[row+RK.row_offset] = x_[row+RK.row_offset] + (NS.sigma_max-NS.s_[row]) * (data_wct - data_[row])\r\n                    \r\n\r\n                    if SYNC_GUIDE_ACTIVE: # SYNC GUIDE: rebuild yt_ from y0_bongflow/noise_bongflow ---------------------------------------------------------------------\r\n                        if VE_MODEL:\r\n                            yt_[:NS.s_.shape[0], 0] = y0_bongflow + NS.s_.view(-1, *[1]*(x.ndim-1)) * (noise_bongflow)\r\n                            yt_0   = y0_bongflow + sigma * (noise_bongflow)\r\n                        else:\r\n                            yt_[:NS.s_.shape[0], 0] = y0_bongflow + NS.s_.view(-1, *[1]*(x.ndim-1)) * (noise_bongflow - y0_bongflow)\r\n                            yt_0   = y0_bongflow + sigma * (noise_bongflow - y0_bongflow)\r\n                        if RK.EXPONENTIAL:\r\n                            eps_y_ = data_y_ - yt_0 # (alt: yt_)   caution: the tableau may be smaller later in a chained sampler; keep slice sizes consistent\r\n                        else:\r\n   
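                         # linear (non-exponential) case: with BONGMATH, eps_y_ is re-derived per substep at its\r\n                            # own sigma from NS.s_; otherwise it is referenced to the step's c0 node at sigma.\r\n   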
                         if BONGMATH:\r\n                                eps_y_[:NS.s_.shape[0]] = (yt_[:NS.s_.shape[0]] - data_y_[:NS.s_.shape[0]]) / NS.s_.view(-1,*[1]*(x_.ndim-1)) \r\n                            else:\r\n                                eps_y_[:NS.s_.shape[0]] = (yt_0.repeat(NS.s_.shape[0], *[1]*(x_.ndim-1)) - data_y_[:NS.s_.shape[0]]) / sigma    # calc exact to c0 node\r\n                        if not BONGMATH and (eta != 0 or eta_substep != 0):\r\n                            if RK.EXPONENTIAL:\r\n                                eps_x_ = data_x_ - x_0 \r\n                            else:\r\n                                eps_x_ = (x_0 - data_x_) / sigma\r\n\r\n                            weight_mask = lgw_mask_+lgw_mask_inv_\r\n                            if LG.SYNC_SEPARATE:\r\n                                sync_mask = lgw_mask_sync_+lgw_mask_sync_inv_\r\n                            else:\r\n                                sync_mask = 1.\r\n                            \r\n                            for ms in range(len(eps_)):\r\n                                if RK.EXPONENTIAL:\r\n                                    if VE_MODEL:\r\n                                        eps_[ms] = sync_mask * eps_x_[ms]  +  (1-sync_mask) * eps_x2y_[ms]  +  weight_mask * (-eps_y_[ms] + sigma*(-noise_bongflow))\r\n                                        if EO(\"sync_x2y\"):\r\n                                            eps_[ms] = sync_mask * eps_x_[ms]  +  (1-sync_mask) * eps_x2y_[ms]  +  weight_mask * (-eps_x2y_[ms] + sigma*(-noise_bongflow))\r\n                                    else:\r\n                                        eps_[ms] = sync_mask * eps_x_[ms]  +  (1-sync_mask) * eps_x2y_[ms]  +  weight_mask * (-eps_y_[ms] + sigma*(y0_bongflow-noise_bongflow))\r\n                                        if EO(\"sync_x2y\"):\r\n                                            eps_[ms] = sync_mask * eps_x_[ms]  +  (1-sync_mask) * eps_x2y_[ms]  +  weight_mask * (-eps_x2y_[ms] + sigma*(y0_bongflow-noise_bongflow))\r\n                                else:\r\n                                    if VE_MODEL:\r\n                                        eps_[ms] = sync_mask * eps_x_[ms]  +  (1-sync_mask) * eps_x2y_[ms]  +  weight_mask * (-eps_y_[ms] + (noise_bongflow))\r\n                                        if EO(\"sync_x2y\"):\r\n                                            eps_[ms] = sync_mask * eps_x_[ms]  +  (1-sync_mask) * eps_x2y_[ms]  +  weight_mask * (-eps_x2y_[ms] + (noise_bongflow))\r\n                                    else:\r\n                                        eps_[ms] = sync_mask * eps_x_[ms]  +  (1-sync_mask) * eps_x2y_[ms]  +  weight_mask * (-eps_y_[ms] + (noise_bongflow-y0_bongflow))\r\n                                        if EO(\"sync_x2y\"):\r\n                                            eps_[ms] = sync_mask * eps_x_[ms]  +  (1-sync_mask) * eps_x2y_[ms]  +  weight_mask * (-eps_x2y_[ms] + (noise_bongflow-y0_bongflow))\r\n\r\n                    if BONGMATH and NS.s_[row] > RK.sigma_min and NS.h < RK.sigma_max/2   and   (diag_iter == implicit_steps_diag or EO(\"enable_diag_explicit_bongmath_all\"))   and not EO(\"disable_terminal_bongmath\"):\r\n                        if step == 0 and UNSAMPLE:\r\n                            pass\r\n                        elif full_iter == implicit_steps_full or not EO(\"disable_fully_explicit_bongmath_except_final\"):\r\n                            if sigma > 0.03:\r\n                                BONGMATH_Y = 
SYNC_GUIDE_ACTIVE\r\n                                x_0, x_, eps_ = RK.bong_iter(x_0, x_, eps_, eps_prev_, data_, sigma, NS.s_, row, RK.row_offset, NS.h, step, step_sched,\r\n                                                            BONGMATH_Y, y0_bongflow, noise_bongflow, eps_x_, eps_y_, data_x_, data_y_, LG)\r\n                                #                            BONGMATH_Y, y0_bongflow, noise_bongflow, eps_x_, eps_y_, eps_x2y_, data_x_, LG)\r\n                                #if EO(\"eps_adain_smartnoise_bongmath\"):\r\n                                if LG.ADAIN_NOISE_MODE == \"smart\":\r\n                                    if VE_MODEL:\r\n                                        z_[:NS.s_.shape[0], ...] = (x_ - data_)[:NS.s_.shape[0], ...] / NS.s_.view(-1,*[1]*(x_.ndim-1))\r\n                                    else:\r\n                                        z_[:NS.s_.shape[0], ...] = (x_[:NS.s_.shape[0], ...] - (NS.sigma_max - NS.s_.view(-1,*[1]*(x_.ndim-1)))*data_[:NS.s_.shape[0], ...])[:NS.s_.shape[0], ...] / NS.s_.view(-1,*[1]*(x_.ndim-1))\r\n                                    RK.update_transformer_options({'z_' : z_})\r\n                    diag_iter += 1\r\n\r\n                    #progress_bar.update( round(1 / implicit_steps_total, 2) )\r\n                    \r\n                    #step_update = round(1 / implicit_steps_total, 2)\r\n                    #progress_bar.update(float(f\"{step_update:.2f}\")) \r\n\r\n            x_next = x_[RK.rows - RK.multistep_stages - RK.row_offset + 1]\r\n            x_next = NS.rebound_overshoot_step(x_0, x_next)\r\n            \r\n            if SYNC_GUIDE_ACTIVE:           # YT_NEXT UPDATE STEP --------------------------------------\r\n                yt_next = yt_[RK.rows - RK.multistep_stages - RK.row_offset + 1]\r\n                yt_next = NS.rebound_overshoot_step(yt_0, yt_next)\r\n            \r\n            eps = (x_0 - x_next) / (sigma - sigma_next)\r\n            denoised = x_0 - sigma * eps\r\n            \r\n            if EO(\"postshock\") and step < EO(\"postshock\", 10):\r\n                eps_row, data_row = RK(x_next, sigma_next, x_next, sigma_next, transformer_options={'row': row, 'x_tmp': x_next, 'sigma_next': sigma_next})\r\n                if VE_MODEL:\r\n                    x_next = x_next + (data_row - denoised)\r\n                else:\r\n                    x_next = x_next + (NS.sigma_max-sigma_next) * (data_row - denoised)\r\n                eps = (x_0 - x_next) / (sigma - sigma_next)\r\n                denoised = x_0 - sigma * eps\r\n\r\n            if EO(\"data_sampler\") and step > EO(\"data_sampler_start_step\", 0) and step < EO(\"data_sampler_end_step\", 5):\r\n                data_sampler_weight = EO(\"data_sampler_weight\", 1.0)\r\n                denoised_step = RK.zum(row+RK.row_offset+RK.multistep_stages, data_, data_prev_) \r\n                x_next = LG.swap_data(x_next, denoised, denoised_step, data_sampler_weight * sigma_next)\r\n                eps = (x_0 - x_next) / (sigma - sigma_next)\r\n                denoised = x_0 - sigma * eps\r\n            \r\n            x_0_prev = x_0.clone()\r\n\r\n            x_means_per_step = x_next.mean(dim=(-2,-1), keepdim=True)\r\n\r\n            if eta == 0.0:\r\n                x = x_next\r\n                if SYNC_GUIDE_ACTIVE:\r\n                    yt_0 = yt_[0] = yt_next\r\n                #elif LG.guide_mode.startswith(\"sync\") and (LG.lgw[step_sched] != 0.0 or LG.lgw_inv[step_sched] != 0.0):\r\n                #    noise_sync_new = 
NS.noise_sampler(sigma=sigma, sigma_next=sigma_next)\r\n                #    x = x_next + sigma * eta * (noise_sync_new - noise_bongflow)\r\n                #    noise_bongflow += eta * (noise_sync_new - noise_bongflow)\r\n            elif not LG.guide_mode.startswith(\"flow\") or (LG.lgw[step_sched] == 0 and LG.lgw[step+1] == 0   and   LG.lgw_inv[step_sched] == 0 and LG.lgw_inv[step+1] == 0):\r\n                x = NS.swap_noise_step(x_0, x_next, mask=sde_mask)\r\n                \r\n                #if EO(\"eps_adain_smartnoise\"):\r\n                if LG.ADAIN_NOISE_MODE == \"smart\":\r\n                    #noise_next = eps + denoised\r\n                    #eps_swapped = (x - denoised) / sigma_next\r\n                    #\r\n                    #noise_next = eps_swapped + denoised\r\n                    #z_[0] = noise_next\r\n                    #RK.update_transformer_options({'z_' : z_})\r\n                    if full_iter+1 < implicit_steps_full+1: # are we to loop for full iter after this?\r\n                        if VE_MODEL:\r\n                            #z_[row+RK.row_offset] = (x - denoised) / sigma_next\r\n                            z_[0] = (x_0 - denoised) / sigma\r\n                        else:\r\n                            #z_[row+RK.row_offset] = (x - (NS.sigma_max-sigma_next) * denoised) / sigma_next\r\n                            z_[0] = (x_0 - (NS.sigma_max-sigma) * denoised) / sigma\r\n                    else: #we're advancing to next step, x is x_next\r\n                        if VE_MODEL:\r\n                            #z_[row+RK.row_offset] = (x - denoised) / sigma_next\r\n                            z_[0] = (x - denoised) / sigma_next\r\n                        else:\r\n                            #z_[row+RK.row_offset] = (x - (NS.sigma_max-sigma_next) * denoised) / sigma_next\r\n                            z_[0] = (x - (NS.sigma_max-sigma_next) * denoised) / sigma_next\r\n                    RK.update_transformer_options({'z_' : z_})\r\n\r\n                elif LG.ADAIN_NOISE_MODE == \"update\": #EO(\"eps_adain\"):\r\n                    x_init_new = (x - x_next) / sigma_next + x_init\r\n                    x_0 += sigma * (x_init_new - x_init)\r\n                    x_init = x_init_new\r\n                    RK.update_transformer_options({'x_init' : x_init.clone()})\r\n                \r\n                if SYNC_GUIDE_ACTIVE:\r\n                    noise_bongflow_new = (x - x_next) / sigma_next + noise_bongflow\r\n                    yt_next += sigma_next * (noise_bongflow_new - noise_bongflow)\r\n                    x_0 += sigma * (noise_bongflow_new - noise_bongflow)\r\n                    if not EO(\"disable_i_bong\"):\r\n                        for i_bong in range(len(NS.s_)):\r\n                            x_[i_bong] += NS.s_[i_bong] * (noise_bongflow_new - noise_bongflow)\r\n                    #x_[0] += sigma * (noise_bongflow_new - noise_bongflow)\r\n                    yt_0 = yt_[0] = yt_next\r\n                    noise_bongflow = noise_bongflow_new\r\n            else:\r\n                x = x_next\r\n            \r\n            if EO(\"keep_step_means\"):\r\n                x = x - x.mean(dim=(-2,-1), keepdim=True) + x_means_per_step\r\n\r\n            \r\n            callback_step = len(sigmas)-1 - step if sampler_mode == \"unsample\" else step\r\n            preview_callback(x, eps, denoised, x_, eps_, data_, callback_step, sigma, sigma_next, callback, EO, preview_override=data_cached, FLOW_STOPPED=FLOW_STOPPED)\r\n            \r\n          
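  # end-of-step bookkeeping: carry the step size, prior x_0, and denoised history forward\r\n            # for multistep and momentum reuse on the next step.\r\n          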
  h_prev = NS.h\r\n            x_prev = x_0\r\n            \r\n            denoised_prev2 = denoised_prev\r\n            denoised_prev  = denoised\r\n            \r\n            full_iter += 1\r\n            \r\n            if LG.lgw[step_sched] > 0 and step >= EO(\"guide_cutoff_start_step\", 0) and cossim_counter < EO(\"guide_cutoff_max_iter\", 10) and (EO(\"guide_cutoff\") or EO(\"guide_min\")):\r\n                guide_cutoff = EO(\"guide_cutoff\", 1.0)\r\n                denoised_norm = data_[0] - data_[0].mean(dim=(-2,-1), keepdim=True)\r\n                y0_norm       = LG.y0    - LG.y0   .mean(dim=(-2,-1), keepdim=True)\r\n                y0_cossim     = get_cosine_similarity(denoised_norm, y0_norm)\r\n                if y0_cossim > guide_cutoff and LG.lgw[step_sched] > EO(\"guide_cutoff_floor\", 0.0):\r\n                    if not EO(\"guide_cutoff_fast\"):\r\n                        LG.lgw[step_sched] *= EO(\"guide_cutoff_factor\", 0.9)\r\n                    else:\r\n                        LG.lgw *= EO(\"guide_cutoff_factor\", 0.9)\r\n                    full_iter -= 1\r\n                if y0_cossim < EO(\"guide_min\", 0.0) and LG.lgw[step_sched] < EO(\"guide_min_ceiling\", 1.0):\r\n                    if not EO(\"guide_cutoff_fast\"):\r\n                        LG.lgw[step_sched] *= EO(\"guide_min_factor\", 1.1)\r\n                    else:\r\n                        LG.lgw *= EO(\"guide_min_factor\", 1.1)\r\n                    full_iter -= 1\r\n        \r\n        #if EO(\"smartnoise\"): #TODO: determine if this was useful\r\n        #    z_[0] = z_next\r\n        \r\n        if FLOW_STARTED and FLOW_STOPPED:\r\n            data_prev_ = data_x_prev_\r\n        if FLOW_STARTED and not FLOW_STOPPED:\r\n            data_x_prev_[0] = data_cached       # data_cached is data_x from flow mode. 
this allows multistep to resume seamlessly.\r\n                for ms in range(recycled_stages):\r\n                    data_x_prev_[recycled_stages - ms] = data_x_prev_[recycled_stages - ms - 1]\r\n\r\n        #if LG.guide_mode.startswith(\"sync\") and (LG.lgw[step_sched] != 0.0 or LG.lgw_inv[step_sched] != 0.0):\r\n        #    data_prev_[0] = x_0 - sigma * eps_[0]\r\n        #else:\r\n        data_prev_[0] = data_[0]                # with flow mode, this will be the differentiated guide/\"denoised\"\r\n        for ms in range(recycled_stages):\r\n            data_prev_[recycled_stages - ms] = data_prev_[recycled_stages - ms - 1]   # TODO: verify that this does not run on every substep...\r\n\r\n        if SYNC_GUIDE_ACTIVE:\r\n            data_prev_x_[0] = data_x      \r\n            for ms in range(recycled_stages):\r\n                data_prev_x_[recycled_stages - ms] = data_prev_x_[recycled_stages - ms - 1] \r\n\r\n            data_prev_y_[0] = data_y     \r\n            for ms in range(recycled_stages):\r\n                data_prev_y_[recycled_stages - ms] = data_prev_y_[recycled_stages - ms - 1] \r\n        \r\n        rk_type = RK.swap_rk_type_at_step_or_threshold(x_0, data_prev_, NS, sigmas, step, rk_swap_step, rk_swap_threshold, rk_swap_type, rk_swap_print)\r\n        if step > rk_swap_step:\r\n            implicit_steps_full = 0\r\n            implicit_steps_diag = 0\r\n\r\n        if EO(\"bong2m\") or EO(\"bong3m\"):\r\n            denoised_data_prev2 = denoised_data_prev\r\n            denoised_data_prev = data_[0]\r\n        \r\n        if SKIP_PSEUDO and not LG.guide_mode.startswith(\"flow\"):\r\n            if SKIP_PSEUDO_Y == \"y0\":\r\n                LG.y0 = denoised\r\n                LG.HAS_LATENT_GUIDE = True\r\n            else:\r\n                LG.y0_inv = denoised\r\n                LG.HAS_LATENT_GUIDE_INV = True\r\n                \r\n        if EO(\"pseudo_mix_strength\"):\r\n            pseudo_mix_strength = EO(\"pseudo_mix_strength\", 0.0)\r\n            LG.y0     = orig_y0     + pseudo_mix_strength * (denoised - orig_y0)\r\n            LG.y0_inv = orig_y0_inv + pseudo_mix_strength * (denoised - orig_y0_inv)\r\n            \r\n        progress_bar.update(1)\r\n        step += 1\r\n        \r\n        if EO(\"skip_step\", -1) == step:\r\n            step += 1\r\n\r\n        if d_noise_start_step     == step:\r\n            sigmas = sigmas.clone() * d_noise\r\n            if sigmas.max() > NS.sigma_max:\r\n                sigmas = sigmas / NS.sigma_max\r\n        if d_noise_inv_start_step == step:\r\n            sigmas = sigmas.clone() / d_noise_inv\r\n            if sigmas.max() > NS.sigma_max:\r\n                sigmas = sigmas / NS.sigma_max\r\n        \r\n        if LG.lgw[step_sched] > 0 and step >= EO(\"guide_step_cutoff_start_step\", 0) and cossim_counter < EO(\"guide_step_cutoff_max_iter\", 10) and (EO(\"guide_step_cutoff\") or EO(\"guide_step_min\")):\r\n            guide_cutoff = EO(\"guide_step_cutoff\", 1.0)\r\n            eps_trash, data_trash = RK(x, sigma_next, x_0, sigma)\r\n            denoised_norm = data_trash - data_trash.mean(dim=(-2,-1), keepdim=True)\r\n            y0_norm       = LG.y0    - LG.y0   .mean(dim=(-2,-1), keepdim=True)\r\n            y0_cossim     = get_cosine_similarity(denoised_norm, y0_norm)\r\n            if y0_cossim > guide_cutoff and 
LG.lgw[step_sched] > EO(\"guide_step_cutoff_floor\", 0.0):\r\n                if not EO(\"guide_step_cutoff_fast\"):\r\n                    LG.lgw[step_sched] *= EO(\"guide_step_cutoff_factor\", 0.9)\r\n                else:\r\n                    LG.lgw *= EO(\"guide_step_cutoff_factor\", 0.9)\r\n                step -= 1\r\n                x_0 = x = x_[0] = x_0_orig.clone()\r\n            if y0_cossim < EO(\"guide_step_min\", 0.0) and LG.lgw[step_sched] < EO(\"guide_step_min_ceiling\", 1.0):\r\n                if not EO(\"guide_step_cutoff_fast\"):\r\n                    LG.lgw[step_sched] *= EO(\"guide_step_min_factor\", 1.1)\r\n                else:\r\n                    LG.lgw *= EO(\"guide_step_min_factor\", 1.1)\r\n                step -= 1\r\n                x_0 = x = x_[0] = x_0_orig.clone()\r\n        # END SAMPLING LOOP ---------------------------------------------------------------------------------------------------\r\n\r\n    #progress_bar.close()\r\n    RK.update_transformer_options({'update_cross_attn':  None})\r\n    if step == len(sigmas)-2 and sigmas[-1] == 0 and sigmas[-2] == NS.sigma_min and not INIT_SAMPLE_LOOP:\r\n        if EO(\"skip_final_model_call\"):\r\n            sigma_min = NS.sigma_min.view((1,) * x.ndim).to(x)\r\n            denoised  = model.inner_model.inner_model.model_sampling.calculate_denoised(sigma_min, eps, x)\r\n            x = denoised\r\n        else:\r\n            eps, denoised = RK(x, NS.sigma_min, x, NS.sigma_min)\r\n            x = denoised\r\n            #progress_bar.update(1)\r\n\r\n    eps      = eps     .to(model_device)\r\n    denoised = denoised.to(model_device)\r\n    x        = x       .to(model_device)\r\n    \r\n    progress_bar.close()\r\n\r\n    if not (UNSAMPLE and sigmas[1] > sigmas[0]) and not EO(\"preview_last_step_always\") and sigma is not None   and   not (FLOW_STARTED and not FLOW_STOPPED):\r\n        callback_step = len(sigmas)-1 - step if sampler_mode == \"unsample\" else step\r\n        preview_callback(x, eps, denoised, x_, eps_, data_, callback_step, sigma, sigma_next, callback, EO, preview_override=data_cached, FLOW_STOPPED=FLOW_STOPPED)\r\n\r\n    if INIT_SAMPLE_LOOP:\r\n        state_info_out = state_info\r\n    else:\r\n        if guides is not None and guides.get('guide_mode', \"\") == 'inversion':\r\n            guide_inversion_y0     = state_info.get('guide_inversion_y0')\r\n            guide_inversion_y0_inv = state_info.get('guide_inversion_y0_inv')\r\n            \r\n            if sampler_mode == \"unsample\" and guide_inversion_y0 is None:\r\n                guide_inversion_y0     = LG.y0.clone()\r\n            if sampler_mode == \"unsample\" and guide_inversion_y0_inv is None:\r\n                guide_inversion_y0_inv = LG.y0_inv.clone()\r\n                \r\n            if sampler_mode in {\"standard\", \"resample\"} and guide_inversion_y0 is None:\r\n                guide_inversion_y0 = NS.noise_sampler(sigma=NS.sigma_max, sigma_next=NS.sigma_min).to(x)\r\n                guide_inversion_y0 = normalize_zscore(guide_inversion_y0, channelwise=True, inplace=True)\r\n            if sampler_mode in {\"standard\", \"resample\"} and guide_inversion_y0_inv is None:\r\n                guide_inversion_y0_inv = NS.noise_sampler(sigma=NS.sigma_max, sigma_next=NS.sigma_min).to(x)\r\n                guide_inversion_y0_inv = normalize_zscore(guide_inversion_y0_inv, channelwise=True, inplace=True)\r\n                \r\n            state_info_out['guide_inversion_y0']     = guide_inversion_y0\r\n            
state_info_out['guide_inversion_y0_inv'] = guide_inversion_y0_inv\r\n\r\n        state_info_out['raw_x']             = x.to('cpu')\r\n        state_info_out['denoised']          = denoised.to('cpu')\r\n        state_info_out['data_prev_']        = data_prev_.to('cpu')\r\n        state_info_out['end_step']          = step\r\n        state_info_out['sigma_next']        = sigma_next.clone()\r\n        state_info_out['sigmas']            = sigmas_scheduled.clone()\r\n        state_info_out['sampler_mode']      = sampler_mode\r\n        state_info_out['last_rng']          = NS.noise_sampler .generator.get_state().clone()\r\n        state_info_out['last_rng_substep']  = NS.noise_sampler2.generator.get_state().clone()\r\n        state_info_out['completed']         = step == len(sigmas)-2 and sigmas[-1] == 0 and sigmas[-2] == NS.sigma_min\r\n        state_info_out['FLOW_STARTED']      = FLOW_STARTED\r\n        state_info_out['FLOW_STOPPED']      = FLOW_STOPPED\r\n        state_info_out['noise_bongflow']    = noise_bongflow\r\n        state_info_out['y0_bongflow']       = y0_bongflow\r\n        state_info_out['y0_bongflow_orig']  = y0_bongflow_orig\r\n        state_info_out['y0_standard_guide']       = y0_standard_guide\r\n        state_info_out['y0_inv_standard_guide']  = y0_inv_standard_guide\r\n        state_info_out['data_prev_y_']      = data_prev_y_\r\n        state_info_out['data_prev_x_']      = data_prev_x_\r\n\r\n        if noise_initial is not None:\r\n            state_info_out['noise_initial'] = noise_initial.to('cpu')\r\n        if image_initial is not None:\r\n            state_info_out['image_initial'] = image_initial.to('cpu')\r\n\r\n        if FLOW_STARTED and not FLOW_STOPPED:\r\n            state_info_out['y0']           = y0.to('cpu') \r\n            #state_info_out['y0_inv']       = y0_inv.to('cpu')       # TODO: implement this?\r\n            state_info_out['data_cached']  = data_cached.to('cpu')\r\n            state_info_out['data_x_prev_'] = data_x_prev_.to('cpu')\r\n\r\n    return x\r\n\r\ndef noise_fn(x, sigma, sigma_next, noise_sampler, cossim_iter=1):\r\n    \r\n    noise  = normalize_zscore(noise_sampler(sigma=sigma, sigma_next=sigma_next), channelwise=True, inplace=True)\r\n    cossim = get_pearson_similarity(x, noise)\r\n    \r\n    for i in range(cossim_iter):\r\n        noise_new  = normalize_zscore(noise_sampler(sigma=sigma, sigma_next=sigma_next), channelwise=True, inplace=True)\r\n        cossim_new = get_pearson_similarity(x, noise_new)\r\n        \r\n        if cossim_new > cossim:\r\n            noise  = noise_new\r\n            cossim = cossim_new\r\n    \r\n    return noise\r\n\r\n\r\ndef preview_callback(\r\n                    x          : Tensor,\r\n                    eps        : Tensor,\r\n                    denoised   : Tensor,\r\n                    x_         : Tensor,\r\n                    eps_       : Tensor,\r\n                    data_      : Tensor,\r\n                    step       : int,\r\n                    sigma      : Tensor,\r\n                    sigma_next : Tensor,\r\n                    callback   : Callable,\r\n                    EO         : ExtraOptions,\r\n                    preview_override : Optional[Tensor] = None,\r\n                    FLOW_STOPPED : bool = False):\r\n\r\n    if EO(\"eps_substep_preview\"):\r\n        row_callback = EO(\"eps_substep_preview\", 0)\r\n        denoised_callback = eps_[row_callback]\r\n        \r\n    elif EO(\"denoised_substep_preview\"):\r\n        row_callback = 
EO(\"denoised_substep_preview\", 0)\r\n        denoised_callback = data_[row_callback]\r\n        \r\n    elif EO(\"x_substep_preview\"):\r\n        row_callback = EO(\"x_substep_preview\", 0)\r\n        denoised_callback = x_[row_callback]\r\n        \r\n    elif EO(\"eps_preview\"):\r\n        denoised_callback = eps\r\n        \r\n    elif EO(\"denoised_preview\"):\r\n        denoised_callback = denoised\r\n        \r\n    elif EO(\"x_preview\"):\r\n        denoised_callback = x\r\n        \r\n    elif preview_override is not None and FLOW_STOPPED == False:\r\n        denoised_callback = preview_override\r\n        \r\n    else:\r\n        denoised_callback = data_[0]\r\n        \r\n    callback({'x': x, 'i': step, 'sigma': sigma, 'sigma_next': sigma_next, 'denoised': denoised_callback.to(torch.float32)}) if callback is not None else None\r\n    \r\n    return\r\n\r\n"
  },
  {
    "path": "beta/samplers.py",
    "content": "import torch\nimport torch.nn.functional as F\nfrom torch import Tensor\n\nfrom typing import Optional, Callable, Tuple, Dict, Any, Union\nimport copy\nimport gc\n\nimport comfy.samplers\nimport comfy.sample\nimport comfy.sampler_helpers\nimport comfy.model_sampling\nimport comfy.latent_formats\nimport comfy.sd\nimport comfy.supported_models\nimport comfy.utils\nimport comfy.nested_tensor\nfrom comfy.samplers import CFGGuider, sampling_function\n\nimport latent_preview\n\nfrom ..helper               import initialize_or_scale, get_res4lyf_scheduler_list, OptionsManager, ExtraOptions\nfrom ..res4lyf              import RESplain\nfrom ..latents              import normalize_zscore, get_orthogonal\nfrom ..sigmas               import get_sigmas\n#import ..models              # import ReFluxPatcher\n\nfrom .constants             import MAX_STEPS, IMPLICIT_TYPE_NAMES\nfrom .noise_classes         import NOISE_GENERATOR_CLASSES_SIMPLE, NOISE_GENERATOR_NAMES_SIMPLE, NOISE_GENERATOR_NAMES\nfrom .rk_noise_sampler_beta import NOISE_MODE_NAMES\nfrom .rk_coefficients_beta  import get_default_sampler_name, get_sampler_name_list, process_sampler_name\n\n\ndef copy_cond(conditioning):\n    new_conditioning = []\n    if type(conditioning[0][0]) == list:\n        for i in range(len(conditioning)):\n            new_conditioning_i = []\n            for embedding, cond in conditioning[i]:\n                cond_copy = {}\n                for k, v in cond.items():\n                    if isinstance(v, torch.Tensor):\n                        cond_copy[k] = v.clone()\n                    else:\n                        cond_copy[k] = v  # ensure we're not copying huge shit like controlnets\n                new_conditioning_i.append([embedding.clone(), cond_copy])\n            new_conditioning.append(new_conditioning_i)\n    else:\n        for embedding, cond in conditioning:\n            cond_copy = {}\n            for k, v in cond.items():\n                if isinstance(v, torch.Tensor):\n                    cond_copy[k] = v.clone()\n                else:\n                    cond_copy[k] = v  # ensure we're not copying huge shit like controlnets\n            new_conditioning.append([embedding.clone(), cond_copy])\n            \n    return new_conditioning\n\n\ndef generate_init_noise(x, seed, noise_type_init, noise_stdev, noise_mean, noise_normalize,\n                        sigma_max, sigma_min, alpha_init=None, k_init=None, EO=None):\n    if noise_type_init == \"none\" or noise_stdev == 0.0:\n        return torch.zeros_like(x)\n\n    noise_sampler_init = NOISE_GENERATOR_CLASSES_SIMPLE.get(noise_type_init)(\n        x=x, seed=seed, sigma_max=sigma_max, sigma_min=sigma_min\n    )\n\n    if noise_type_init == \"fractal\":\n        noise_sampler_init.alpha = alpha_init\n        noise_sampler_init.k = k_init\n        noise_sampler_init.scale = 0.1\n\n    noise = noise_sampler_init(sigma=sigma_max * noise_stdev, sigma_next=sigma_min)\n\n    if noise_normalize and noise.std() > 0:\n        channelwise = EO(\"init_noise_normalize_channelwise\", \"true\") if EO else \"true\"\n        channelwise = True if channelwise == \"true\" else False\n        noise = normalize_zscore(noise, channelwise=channelwise, inplace=True)\n\n    noise *= noise_stdev\n    noise = (noise - noise.mean()) + noise_mean\n    return noise\n\n\nclass SharkGuider(CFGGuider):\n    def __init__(self, model_patcher):\n        super().__init__(model_patcher)\n        self.cfgs = {}\n\n    def set_conds(self, **kwargs):\n        
self.inner_set_conds(kwargs)\n\n    def set_cfgs(self, **kwargs):\n        self.cfgs = {**kwargs}\n        self.cfg  = self.cfgs.get('xt', self.cfg)\n\n    def predict_noise(self, x, timestep, model_options={}, seed=None):\n        latent_type = model_options['transformer_options'].get('latent_type', 'xt')\n        positive = self.conds.get(f'{latent_type}_positive', self.conds.get('xt_positive'))\n        negative = self.conds.get(f'{latent_type}_negative', self.conds.get('xt_negative'))\n        positive = self.conds.get('xt_positive') if positive is None else positive\n        negative = self.conds.get('xt_negative') if negative is None else negative\n        cfg      = self.cfgs.get(latent_type, self.cfg)\n        \n        model_options['transformer_options']['yt_positive'] = self.conds.get('yt_positive')\n        model_options['transformer_options']['yt_negative'] = self.conds.get('yt_negative')\n        \n        return sampling_function(self.inner_model, x, timestep, negative, positive, cfg, model_options=model_options, seed=seed)\n\n\n\nclass SharkSampler:\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                \"noise_type_init\": (NOISE_GENERATOR_NAMES_SIMPLE, {\"default\": \"gaussian\"}),\n                \"noise_stdev\":     (\"FLOAT\",                      {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":0.01, \"round\": False, }),\n                \"noise_seed\":      (\"INT\",                        {\"default\": 0,   \"min\": -1,       \"max\": 0xffffffffffffffff}),\n                \"sampler_mode\":    (['unsample', 'standard', 'resample'], {\"default\": \"standard\"}),\n                \"scheduler\":       (get_res4lyf_scheduler_list(), {\"default\": \"beta57\"},),\n                \"steps\":           (\"INT\",                        {\"default\": 30,  \"min\": 1,        \"max\": 10000.0}),\n                \"denoise\":         (\"FLOAT\",                      {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":0.01}),\n                \"denoise_alt\":     (\"FLOAT\",                      {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":0.01}),\n                \"cfg\":             (\"FLOAT\",                      {\"default\": 5.5, \"min\": -10000.0, \"max\": 10000.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Negative values use channelwise CFG.\" }),\n                },\n            \"optional\": {\n                \"model\":           (\"MODEL\",),\n                \"positive\":        (\"CONDITIONING\", ),\n                \"negative\":        (\"CONDITIONING\", ),\n                \"sampler\":         (\"SAMPLER\", ),\n                \"sigmas\":          (\"SIGMAS\", ),\n                \"latent_image\":    (\"LATENT\", ),     \n                \"extra_options\":   (\"STRING\",                     {\"default\": \"\", \"multiline\": True}),   \n                \"options\":         (\"OPTIONS\", ),   \n                }\n            }\n\n    RETURN_TYPES = (\"LATENT\", \n                    \"LATENT\",  \n                    \"LATENT\",)\n    \n    RETURN_NAMES = (\"output\", \n                    \"denoised\",\n                    \"sde_noise\",) \n    \n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/samplers\"\n    EXPERIMENTAL = True\n    \n    def main(self, \n            model                                       = None,\n            cfg                : float                  =  5.5, \n            scheduler          : str                    = 
\"beta57\", \n            steps              : int                    = 30, \n            steps_to_run       : int                    = -1,\n            sampler_mode       : str                    = \"standard\",\n            denoise            : float                  =  1.0, \n            denoise_alt        : float                  =  1.0,\n            noise_type_init    : str                    = \"gaussian\",\n            latent_image       : Optional[dict[Tensor]] = None,\n            \n            positive                                    = None,\n            negative                                    = None,\n            sampler                                     = None,\n            sigmas             : Optional[Tensor]       = None,\n            noise_stdev        : float                  =  1.0,\n            noise_mean         : float                  =  0.0,\n            noise_normalize    : bool                   = True,\n            \n            d_noise            : float                  =  1.0,\n            alpha_init         : float                  = -1.0,\n            k_init             : float                  =  1.0,\n            cfgpp              : float                  =  0.0,\n            noise_seed         : int                    = -1,\n            options                                     = None,\n            sde_noise                                   = None,\n            sde_noise_steps    : int                    =  1,\n            \n            rebounds           : int                    =  0,\n            unsample_cfg       : float                  = 1.0,\n            unsample_eta       : float                  = 0.5,\n            unsampler_name     : str                    = \"none\",\n            unsample_steps_to_run : int                 = -1,\n            eta_decay_scale   : float                  = 1.0,\n            \n            #ultracascade_stage : str = \"stage_UP\",\n            ultracascade_latent_image : Optional[dict[str,Any]] = None,\n            ultracascade_guide_weights: Optional[Tuple] = None,\n            \n            ultracascade_latent_width : int = 0,\n            ultracascade_latent_height: int = 0,\n\n            extra_options      : str = \"\", \n            **kwargs,\n            ): \n        \n            \n            disable_pbar = not comfy.utils.PROGRESS_BAR_ENABLED\n\n\n\n            # INIT EXTENDABLE OPTIONS INPUTS\n            \n            options_mgr     = OptionsManager(options, **kwargs)\n                        \n            extra_options  += \"\\n\" + options_mgr.get('extra_options', \"\")\n            EO              = ExtraOptions(extra_options)\n            default_dtype   = EO(\"default_dtype\", torch.float64)\n            default_device  = EO(\"work_device\", \"cuda\" if torch.cuda.is_available() else \"cpu\")\n            \n            noise_stdev     = options_mgr.get('noise_init_stdev', noise_stdev)\n            noise_mean      = options_mgr.get('noise_init_mean',  noise_mean)\n            noise_type_init = options_mgr.get('noise_type_init',  noise_type_init)\n            d_noise         = options_mgr.get('d_noise',          d_noise)\n            alpha_init      = options_mgr.get('alpha_init',       alpha_init)\n            k_init          = options_mgr.get('k_init',           k_init)\n            sde_noise       = options_mgr.get('sde_noise',        sde_noise)\n            sde_noise_steps = options_mgr.get('sde_noise_steps',  sde_noise_steps)\n            rebounds        = options_mgr.get('rebounds', 
        rebounds)\n            unsample_cfg    = options_mgr.get('unsample_cfg',     unsample_cfg)\n            unsample_eta    = options_mgr.get('unsample_eta',     unsample_eta)\n            unsampler_name  = options_mgr.get('unsampler_name',   unsampler_name)\n            unsample_steps_to_run = options_mgr.get('unsample_steps_to_run',   unsample_steps_to_run)\n            \n            eta_decay_scale = options_mgr.get('eta_decay_scale',  eta_decay_scale)\n            start_at_step   = options_mgr.get('start_at_step',    -1)\n            tile_sizes      = options_mgr.get('tile_sizes',       None)\n            flow_sync_eps   = options_mgr.get('flow_sync_eps',    0.0)\n            \n            unsampler_name, _ = process_sampler_name(unsampler_name)\n\n            \n            #ultracascade_stage        = options_mgr.get('ultracascade_stage',         ultracascade_stage)\n            ultracascade_latent_image  = options_mgr.get('ultracascade_latent_image',  ultracascade_latent_image)\n            ultracascade_latent_width  = options_mgr.get('ultracascade_latent_width',  ultracascade_latent_width)\n            ultracascade_latent_height = options_mgr.get('ultracascade_latent_height', ultracascade_latent_height)\n            \n            \n            is_chained = False\n            if latent_image is not None:\n                if 'positive' in latent_image and positive is None:\n                    positive = copy_cond(latent_image['positive'])\n                    if positive is not None and 'control' in positive[0][1]:\n                        for i in range(len(positive)):\n                            positive[i][1]['control']      = latent_image['positive'][i][1]['control']\n                            if hasattr(latent_image['positive'][i][1]['control'], 'base'):\n                                positive[i][1]['control'].base = latent_image['positive'][i][1]['control'].base\n                    is_chained = True\n                if 'negative' in latent_image and negative is None:\n                    negative = copy_cond(latent_image['negative'])\n                    if negative is not None and 'control' in negative[0][1]:\n                        for i in range(len(negative)):\n                            negative[i][1]['control']      = latent_image['negative'][i][1]['control']\n                            if hasattr(latent_image['negative'][i][1]['control'], 'base'):\n                                negative[i][1]['control'].base = latent_image['negative'][i][1]['control'].base\n                    is_chained = True\n                if 'sampler' in latent_image and sampler is None:\n                    sampler = copy_cond(latent_image['sampler'])  #.clone()\n                    is_chained = True\n            \n            # set sampler options only after any chained sampler has been picked up from latent_image\n            if 'BONGMATH' in sampler.extra_options:\n                sampler.extra_options['start_at_step'] = start_at_step\n                sampler.extra_options['tile_sizes']    = tile_sizes\n                \n                sampler.extra_options['unsample_bongmath'] = options_mgr.get('unsample_bongmath', sampler.extra_options['BONGMATH'])   # allow turning off bongmath for unsampling with cycles\n                sampler.extra_options['flow_sync_eps'] = flow_sync_eps\n                \n            if 'steps_to_run' in sampler.extra_options:\n                sampler.extra_options['steps_to_run'] = steps_to_run\n\n            guider_input = options_mgr.get('guider', None)\n            if guider_input is not None and not is_chained:\n                guider = 
guider_input\n                work_model = guider.model_patcher\n                RESplain(\"Shark: Using model from ClownOptions_GuiderInput: \", guider.model_patcher.model.diffusion_model.__class__.__name__)\n                RESplain(\"SharkWarning: \\\"flow\\\" guide mode does not work with ClownOptions_GuiderInput\")\n                if hasattr(guider, 'cfg') and guider.cfg is not None:\n                    cfg = guider.cfg\n                    RESplain(\"Shark: Using cfg from ClownOptions_GuiderInput: \", cfg)\n                if hasattr(guider, 'original_conds') and guider.original_conds is not None:\n                    if 'positive' in guider.original_conds:\n                        first_ = guider.original_conds['positive'][0]['cross_attn']\n                        second_ = {k: v for k, v in guider.original_conds['positive'][0].items() if k != 'cross_attn'}\n                        positive = [[first_, second_],]\n                        RESplain(\"Shark: Using positive cond from ClownOptions_GuiderInput\")\n                    if 'negative' in guider.original_conds:\n                        first_ = guider.original_conds['negative'][0]['cross_attn']\n                        second_ = {k: v for k, v in guider.original_conds['negative'][0].items() if k != 'cross_attn'}\n                        negative = [[first_, second_],]\n                        RESplain(\"Shark: Using negative cond from ClownOptions_GuiderInput\")\n            else:\n                guider = None\n                work_model   = model#.clone()\n            \n            if latent_image is not None:\n                latent_image['samples'] = comfy.sample.fix_empty_latent_channels(work_model, latent_image['samples'])\n                \n            if positive is None or negative is None:\n                from ..conditioning import EmptyConditioningGenerator\n                EmptyCondGen       = EmptyConditioningGenerator(work_model)\n                positive, negative = EmptyCondGen.zero_none_conditionings_([positive, negative])\n\n            if cfg < 0:\n                sampler.extra_options['cfg_cw'] = -cfg\n                cfg = 1.0\n            else:\n                sampler.extra_options.pop(\"cfg_cw\", None) \n\n            \n            is_nested_input = latent_image is not None and 'samples' in latent_image and isinstance(latent_image['samples'], comfy.nested_tensor.NestedTensor)\n            if not EO(\"disable_dummy_sampler_init\") and not is_nested_input:\n                sampler_null = comfy.samplers.ksampler(\"rk_beta\",\n                    {\n                        \"sampler_mode\": \"NULL\",\n                    })\n                if latent_image is not None and 'samples' in latent_image:\n                    latent_vram_factor = EO(\"latent_vram_factor\", 3)\n                    x_null = torch.zeros_like(latent_image['samples']).repeat_interleave(latent_vram_factor, dim=-1)\n                elif ultracascade_latent_height * ultracascade_latent_width > 0:\n                    x_null = comfy.sample.fix_empty_latent_channels(model, torch.zeros((1,16,ultracascade_latent_height,ultracascade_latent_width)))\n                else:\n                    print(\"Fallback: spawning dummy 1,16,256,256 latent.\")\n                    x_null = comfy.sample.fix_empty_latent_channels(model, torch.zeros((1,16,256,256)))\n                _ = comfy.sample.sample_custom(work_model, x_null, cfg, sampler_null, torch.linspace(1, 0, 10).to(x_null.dtype).to(x_null.device), negative, negative, x_null, noise_mask=None, 
callback=None, disable_pbar=disable_pbar, seed=noise_seed)\n\n            sigma_min = work_model.get_model_object('model_sampling').sigma_min\n            sigma_max = work_model.get_model_object('model_sampling').sigma_max\n            \n            if sampler is None:\n                raise ValueError(\"sampler is required\")\n            else:\n                sampler = copy.deepcopy(sampler)\n        \n        \n        \n            # INIT SIGMAS\n            if sigmas is not None:\n                sigmas = sigmas.clone().to(dtype=default_dtype, device=default_device) # does this type carry into clown after passing through comfy?\n                sigmas *= denoise   # ... otherwise we have to interpolate and that might not be ideal for tiny custom schedules...\n            else: \n                sigmas = get_sigmas(work_model, scheduler, steps, abs(denoise)).to(dtype=default_dtype, device=default_device)\n            sigmas *= denoise_alt\n\n            # USE NULL FLOATS AS \"FLAGS\" TO PREVENT COMFY NOISE ADDITION\n            if sampler_mode.startswith(\"unsample\"): \n                null   = torch.tensor([0.0], device=sigmas.device, dtype=sigmas.dtype)\n                sigmas = torch.flip(sigmas, dims=[0])\n                sigmas = torch.cat([sigmas, null])\n                \n            elif sampler_mode.startswith(\"resample\"):\n                null   = torch.tensor([0.0], device=sigmas.device, dtype=sigmas.dtype)\n                sigmas = torch.cat([null, sigmas])\n                sigmas = torch.cat([sigmas, null])\n\n\n\n            latent_x = {}\n            # INIT STATE INFO FOR CONTINUING GENERATION ACROSS MULTIPLE SAMPLER NODES\n            if latent_image is not None:\n                samples = latent_image['samples']\n                latent_x['samples'] = samples._copy() if isinstance(samples, comfy.nested_tensor.NestedTensor) else samples.clone()\n                if 'noise_mask' in latent_image:\n                    noise_mask = latent_image['noise_mask']\n                    latent_x['noise_mask'] = noise_mask._copy() if isinstance(noise_mask, comfy.nested_tensor.NestedTensor) else noise_mask.clone()\n                state_info = copy.deepcopy(latent_image['state_info']) if 'state_info' in latent_image else {}\n            else:\n                state_info = {}\n            state_info_out = {}\n            \n            \n            \n            # SETUP CONDITIONING EMBEDS\n            \n            pos_cond = copy_cond(positive)\n            neg_cond = copy_cond(negative)\n            \n            \n            \n            # SETUP FOR ULTRACASCADE IF DETECTED\n            if work_model.model.model_config.unet_config.get('stable_cascade_stage') == 'up':\n                \n                ultracascade_guide_weight = EO(\"ultracascade_guide_weight\", 0.0)\n                ultracascade_guide_type   = EO(\"ultracascade_guide_type\", \"residual\")\n                \n                x_lr = None\n                if ultracascade_latent_height * ultracascade_latent_width > 0:\n                    x_lr        = latent_image['samples'].clone() if latent_image is not None else None\n                    x_lr_bs     = 1                               if x_lr         is     None else x_lr.shape[-4]\n                    x_lr_dtype  = default_dtype                   if x_lr         is     None else x_lr.dtype\n                    x_lr_device = 'cuda'                          if x_lr         is     None else x_lr.device\n                    \n                    
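# Upscale settings for promoting the stage-C latent to the stage-UP resolution; the defaults (bicubic, align_corners False) can be overridden via extra_options.\n                    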
ultracascade_stage_up_upscale_align_corners = EO(\"ultracascade_stage_up_upscale_align_corners\", False)\n                    ultracascade_stage_up_upscale_mode          = EO(\"ultracascade_stage_up_upscale_mode\",         \"bicubic\")\n                    latent_x['samples'] = torch.zeros([x_lr_bs, 16, ultracascade_latent_height, ultracascade_latent_width], dtype=x_lr_dtype, device=x_lr_device)\n                \n                    data_prev_ = state_info.get('data_prev_')\n                    if EO(\"ultracascade_stage_up_preserve_data_prev\") and data_prev_ is not None:\n                        data_prev_ = data_prev_.squeeze(1) \n\n                        if data_prev_.dim() == 4: \n                            data_prev_ = F.interpolate(\n                                data_prev_,\n                                size=latent_x['samples'].shape[-2:],\n                                mode=ultracascade_stage_up_upscale_mode,\n                                align_corners=ultracascade_stage_up_upscale_align_corners\n                                )\n                        else:\n                            print(\"data_prev_ upscale failed.\")\n                        state_info['data_prev_'] = data_prev_.unsqueeze(1)\n                    \n                    else:\n                        state_info['data_prev_'] = data_prev_ #None   # = None was leading to errors even with sampler_mode=standard due to below with = state_info['data_prev_'][batch_num]\n                \n                if x_lr is not None:\n                    if x_lr.shape[-2:] != latent_image['samples'].shape[-2:]:\n                        x_height, x_width = latent_image['samples'].shape[-2:]\n                        ultracascade_stage_up_upscale_align_corners = EO(\"ultracascade_stage_up_upscale_align_corners\", False)\n                        ultracascade_stage_up_upscale_mode          = EO(\"ultracascade_stage_up_upscale_mode\",         \"bicubic\")\n\n                        x_lr = F.interpolate(x_lr, size=(x_height, x_width), mode=ultracascade_stage_up_upscale_mode, align_corners=ultracascade_stage_up_upscale_align_corners)\n                        \n                ultracascade_guide_weights = initialize_or_scale(ultracascade_guide_weights, ultracascade_guide_weight, MAX_STEPS)\n\n                patch = work_model.model_options.get(\"transformer_options\", {}).get(\"patches_replace\", {}).get(\"ultracascade\", {}).get(\"main\")\n                if patch is not None:\n                    patch.update(x_lr=x_lr, guide_weights=ultracascade_guide_weights, guide_type=ultracascade_guide_type)\n                else:\n                    work_model.model.diffusion_model.set_sigmas_schedule(sigmas_schedule = sigmas)\n                    work_model.model.diffusion_model.set_sigmas_prev    (sigmas_prev     = sigmas[:1])\n                    work_model.model.diffusion_model.set_guide_weights  (guide_weights   = ultracascade_guide_weights)\n                    work_model.model.diffusion_model.set_guide_type     (guide_type      = ultracascade_guide_type)\n                    work_model.model.diffusion_model.set_x_lr           (x_lr            = x_lr)\n                \n            elif work_model.model.model_config.unet_config.get('stable_cascade_stage') == 'b':\n                #if sampler_mode != \"resample\":\n                #    state_info['data_prev_'] = None    #commented out as it was throwing an error below with = state_info['data_prev_'][batch_num]\n                \n                c_pos, c_neg = [], []\n      
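          # Stage B path: rebuild the conds pairwise, attaching the stage-C latent as 'stable_cascade_prior' and zeroing the negative text/pooled embeds.\n      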
          for t in pos_cond:\n                    d_pos = t[1].copy()\n                    d_neg = t[1].copy()\n                    \n                    x_lr = None\n                    if ultracascade_latent_height * ultracascade_latent_width > 0:\n                        x_lr = latent_image['samples'].clone()\n                        latent_x['samples'] = torch.zeros([x_lr.shape[-4], 4, ultracascade_latent_height // 4, ultracascade_latent_width // 4], dtype=x_lr.dtype, device=x_lr.device)\n                    \n                    d_pos['stable_cascade_prior'] = x_lr\n\n                    pooled_output = d_neg.get(\"pooled_output\", None)\n                    if pooled_output is not None:\n                        d_neg[\"pooled_output\"] = torch.zeros_like(pooled_output)\n                    \n                    c_pos.append(                 [t[0],  d_pos])            \n                    c_neg.append([torch.zeros_like(t[0]), d_neg])\n                pos_cond = c_pos\n                neg_cond = c_neg\n                \n            elif ultracascade_latent_height * ultracascade_latent_width > 0:\n                latent_x['samples'] = torch.zeros([1, 16, ultracascade_latent_height, ultracascade_latent_width], dtype=default_dtype, device=sigmas.device)\n            \n            \n            \n            # NOISE, ORTHOGONALIZE, OR ZERO EMBEDS\n            \n            if pos_cond is None or neg_cond is None:\n                from ..conditioning import EmptyConditioningGenerator\n                EmptyCondGen       = EmptyConditioningGenerator(work_model)\n                pos_cond, neg_cond = EmptyCondGen.zero_none_conditionings_([pos_cond, neg_cond])\n\n\n\n            if EO((\"cond_noise\", \"uncond_noise\")):\n                if noise_seed == -1:\n                    cond_seed = torch.initial_seed() + 1\n                else:\n                    cond_seed = noise_seed\n                \n                t5_seed              = EO(\"t5_seed\"             , cond_seed)\n                clip_seed            = EO(\"clip_seed\"           , cond_seed+1)\n                t5_noise_type        = EO(\"t5_noise_type\"       , \"gaussian\")\n                clip_noise_type      = EO(\"clip_noise_type\"     , \"gaussian\")\n                t5_noise_sigma_max   = EO(\"t5_noise_sigma_max\"  , 1.0)   # assumption: these sigma options expect floats, so default to a 1.0 ... 0.0 range\n                t5_noise_sigma_min   = EO(\"t5_noise_sigma_min\"  , 0.0)\n                clip_noise_sigma_max = EO(\"clip_noise_sigma_max\", 1.0)\n                clip_noise_sigma_min = EO(\"clip_noise_sigma_min\", 0.0)\n                \n                noise_sampler_t5     = NOISE_GENERATOR_CLASSES_SIMPLE.get(  t5_noise_type)(x=pos_cond[0][0],                  seed=  t5_seed, sigma_max=  t5_noise_sigma_max, sigma_min=  t5_noise_sigma_min, )\n                noise_sampler_clip   = NOISE_GENERATOR_CLASSES_SIMPLE.get(clip_noise_type)(x=pos_cond[0][1]['pooled_output'], seed=clip_seed, sigma_max=clip_noise_sigma_max, sigma_min=clip_noise_sigma_min, )\n                \n                t5_noise_scale   = EO(\"t5_noise_scale\",   1.0)\n                clip_noise_scale = EO(\"clip_noise_scale\", 1.0)\n                \n                if EO(\"cond_noise\"):\n                    t5_noise   = noise_sampler_t5  (sigma=  t5_noise_sigma_max, sigma_next=  t5_noise_sigma_min)\n                    clip_noise = noise_sampler_clip(sigma=clip_noise_sigma_max, sigma_next=clip_noise_sigma_min)\n                    \n                    pos_cond[0][0]                  = pos_cond[0][0]     
             + t5_noise_scale   * (t5_noise   - pos_cond[0][0])\n                    pos_cond[0][1]['pooled_output'] = pos_cond[0][1]['pooled_output'] + clip_noise_scale * (clip_noise - pos_cond[0][1]['pooled_output'])\n                    \n                if EO(\"uncond_noise\"):\n                    t5_noise   = noise_sampler_t5  (sigma=  t5_noise_sigma_max, sigma_next=  t5_noise_sigma_min)\n                    clip_noise = noise_sampler_clip(sigma=clip_noise_sigma_max, sigma_next=clip_noise_sigma_min)\n                    \n                    neg_cond[0][0]                  = neg_cond[0][0]                  + t5_noise_scale   * (t5_noise   - neg_cond[0][0])\n                    neg_cond[0][1]['pooled_output'] = neg_cond[0][1]['pooled_output'] + clip_noise_scale * (clip_noise - neg_cond[0][1]['pooled_output'])\n\n            if EO(\"uncond_ortho\"):\n                neg_cond[0][0]                  = get_orthogonal(neg_cond[0][0],                  pos_cond[0][0])\n                neg_cond[0][1]['pooled_output'] = get_orthogonal(neg_cond[0][1]['pooled_output'], pos_cond[0][1]['pooled_output'])\n            \n\n            if \"noise_seed\" in sampler.extra_options:\n                if sampler.extra_options['noise_seed'] == -1 and noise_seed != -1:\n                    sampler.extra_options['noise_seed'] = noise_seed + 1\n                    RESplain(\"Shark: setting clown noise seed to: \", sampler.extra_options['noise_seed'], debug=True)\n\n            if \"sampler_mode\" in sampler.extra_options:\n                sampler.extra_options['sampler_mode'] = sampler_mode\n\n            if \"extra_options\" in sampler.extra_options:\n                extra_options += \"\\n\"\n                extra_options += sampler.extra_options['extra_options']\n                sampler.extra_options['extra_options'] = extra_options\n\n            samples = latent_x['samples']\n            latent_image_batch = {\"samples\": samples._copy() if isinstance(samples, comfy.nested_tensor.NestedTensor) else samples.clone()}\n            if 'noise_mask' in latent_x and latent_x['noise_mask'] is not None:\n                noise_mask = latent_x['noise_mask']\n                latent_image_batch['noise_mask'] = noise_mask._copy() if isinstance(noise_mask, comfy.nested_tensor.NestedTensor) else noise_mask.clone()\n\n            if EO(\"no_batch_loop\"):\n                x = latent_image_batch['samples'].to(default_dtype)\n\n                if isinstance(x, comfy.nested_tensor.NestedTensor):\n                    noise = comfy.nested_tensor.NestedTensor([\n                        generate_init_noise(\n                            x=t.clone(), seed=noise_seed + idx,\n                            noise_type_init=noise_type_init, noise_stdev=noise_stdev,\n                            noise_mean=noise_mean, noise_normalize=noise_normalize,\n                            sigma_max=sigma_max, sigma_min=sigma_min,\n                            alpha_init=alpha_init, k_init=k_init, EO=EO\n                        )\n                        for idx, t in enumerate(x.unbind())\n                    ])\n                else:\n                    noise = generate_init_noise(\n                        x=x.clone(), seed=noise_seed,\n                        noise_type_init=noise_type_init, noise_stdev=noise_stdev,\n                        noise_mean=noise_mean, noise_normalize=noise_normalize,\n                        sigma_max=sigma_max, sigma_min=sigma_min,\n                        alpha_init=alpha_init, k_init=k_init, EO=EO\n                    
)\n\n                if guider is None:\n                    guider = SharkGuider(work_model)\n                    flow_cond = options_mgr.get('flow_cond', {})\n                    if flow_cond and 'yt_positive' in flow_cond:\n                        if 'yt_inv_positive' not in flow_cond:\n                            guider.set_conds(yt_positive=flow_cond.get('yt_positive'),\n                                             yt_negative=flow_cond.get('yt_negative'))\n                            guider.set_cfgs(yt=flow_cond.get('yt_cfg'), xt=cfg)\n                        else:\n                            guider.set_conds(yt_positive=flow_cond.get('yt_positive'),\n                                             yt_negative=flow_cond.get('yt_negative'),\n                                             yt_inv_positive=flow_cond.get('yt_inv_positive'),\n                                             yt_inv_negative=flow_cond.get('yt_inv_negative'))\n                            guider.set_cfgs(yt=flow_cond.get('yt_cfg'),\n                                           yt_inv=flow_cond.get('yt_inv_cfg'), xt=cfg)\n                    else:\n                        guider.set_cfgs(xt=cfg)\n                    guider.set_conds(xt_positive=pos_cond, xt_negative=neg_cond)\n                elif isinstance(guider, SharkGuider):\n                    guider.set_cfgs(xt=cfg)\n                    guider.set_conds(xt_positive=pos_cond, xt_negative=neg_cond)\n                else:\n                    try:\n                        guider.set_cfg(cfg)\n                        guider.set_conds(pos_cond, neg_cond)\n                    except Exception:\n                        RESplain(\"SharkWarning: guider.set_cfg/set_conds failed; assuming cfg and conds are already set correctly.\")\n\n                if latent_image is not None and 'state_info' in latent_image and 'sigmas' in latent_image['state_info']:\n                    steps_len = max(sigmas.shape[-1] - 1, latent_image['state_info']['sigmas'].shape[-1] - 1)\n                else:\n                    steps_len = sigmas.shape[-1] - 1\n\n                x0_output = {}\n                try:\n                    callback = latent_preview.prepare_callback(work_model, steps_len, x0_output,\n                        shape=x.shape if hasattr(x, 'is_nested') and x.is_nested else None)\n                except TypeError:\n                    callback = latent_preview.prepare_callback(work_model, steps_len, x0_output)\n\n                noise_mask = latent_image_batch.get(\"noise_mask\", None)\n\n                if noise_mask is not None:\n                    stored_image = state_info.get('image_initial')\n                    x_initial = stored_image if stored_image is not None else x\n                    stored_noise = state_info.get('noise_initial')\n                    noise_initial = stored_noise if stored_noise is not None else noise\n                else:\n                    x_initial = x\n                    noise_initial = noise\n\n                state_info_out = {}\n                if 'BONGMATH' in sampler.extra_options:\n                    sampler.extra_options['state_info'] = state_info\n                    sampler.extra_options['state_info_out'] = state_info_out\n                    sampler.extra_options['image_initial'] = x_initial\n                    sampler.extra_options['noise_initial'] = noise_initial\n\n                if rebounds > 0:\n                    cfgs_cached = guider.cfgs\n                    steps_to_run_cached = sampler.extra_options['steps_to_run']\n                    eta_cached         = sampler.extra_options['eta']\n                    eta_substep_cached = 
sampler.extra_options['eta_substep']\n\n                    etas_cached         = sampler.extra_options['etas'].clone()\n                    etas_substep_cached = sampler.extra_options['etas_substep'].clone()\n\n                    unsample_etas = torch.full_like(etas_cached, unsample_eta)\n                    rk_type_cached = sampler.extra_options['rk_type']\n\n                    if sampler.extra_options['sampler_mode'] == \"unsample\":\n                        guider.cfgs = {\n                            'xt': unsample_cfg,\n                            'yt': unsample_cfg,\n                        }\n                        if unsample_eta != -1.0:\n                            sampler.extra_options['eta_substep']  = unsample_eta\n                            sampler.extra_options['eta']          = unsample_eta\n                            sampler.extra_options['etas_substep'] = unsample_etas\n                            sampler.extra_options['etas']         = unsample_etas\n                        if unsampler_name != \"none\":\n                            sampler.extra_options['rk_type']      = unsampler_name\n                        if unsample_steps_to_run > -1:\n                            sampler.extra_options['steps_to_run'] = unsample_steps_to_run\n                    else:\n                        guider.cfgs = cfgs_cached\n\n                    guider.cfgs = cfgs_cached\n                    sampler.extra_options['steps_to_run'] = steps_to_run_cached\n\n                    eta_decay           = eta_cached\n                    eta_substep_decay   = eta_substep_cached\n                    unsample_eta_decay  = unsample_eta\n\n                    etas_decay          = etas_cached\n                    etas_substep_decay  = etas_substep_cached\n                    unsample_etas_decay = unsample_etas\n\n                if isinstance(x, comfy.nested_tensor.NestedTensor):\n                    samples = guider.sample(noise, x._copy(), sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise_seed)\n                else:\n                    samples = guider.sample(noise, x.clone(), sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise_seed)\n\n                if rebounds > 0:\n                    noise_seed_cached   = sampler.extra_options['noise_seed']\n                    cfgs_cached         = guider.cfgs\n                    sampler_mode_cached = sampler.extra_options['sampler_mode']\n\n                    for restarts_iter in range(rebounds):\n                        sampler.extra_options['state_info'] = sampler.extra_options['state_info_out']\n\n                        sigmas = sampler.extra_options['state_info_out']['sigmas'] if sigmas is None else sigmas\n\n                        if   sampler.extra_options['sampler_mode'] == \"standard\":\n                            sampler.extra_options['sampler_mode'] = \"unsample\"\n                        elif sampler.extra_options['sampler_mode'] == \"unsample\":\n                            sampler.extra_options['sampler_mode'] = \"resample\"\n                        elif sampler.extra_options['sampler_mode'] == \"resample\":\n                            sampler.extra_options['sampler_mode'] = \"unsample\"\n\n                        sampler.extra_options['noise_seed'] = -1\n\n                        if sampler.extra_options['sampler_mode'] == \"unsample\":\n                            guider.cfgs = {\n                                'xt': unsample_cfg,\n    
                            'yt': unsample_cfg,\n                            }\n                            if unsample_eta != -1.0:\n                                sampler.extra_options['eta_substep']  = unsample_eta_decay\n                                sampler.extra_options['eta']          = unsample_eta_decay\n                                sampler.extra_options['etas_substep'] = unsample_etas\n                                sampler.extra_options['etas']         = unsample_etas\n                            else:\n                                sampler.extra_options['eta_substep']  = eta_substep_decay\n                                sampler.extra_options['eta']          = eta_decay\n                                sampler.extra_options['etas_substep'] = etas_substep_decay\n                                sampler.extra_options['etas']         = etas_decay\n                            if unsampler_name != \"none\":\n                                sampler.extra_options['rk_type']  = unsampler_name\n                            if unsample_steps_to_run > -1:\n                                sampler.extra_options['steps_to_run'] = unsample_steps_to_run\n                        else:\n                            guider.cfgs = cfgs_cached\n                            sampler.extra_options['eta_substep']  = eta_substep_decay\n                            sampler.extra_options['eta']          = eta_decay\n                            sampler.extra_options['etas_substep'] = etas_substep_decay\n                            sampler.extra_options['etas']         = etas_decay\n                            sampler.extra_options['rk_type']      = rk_type_cached\n                            sampler.extra_options['steps_to_run'] = steps_to_run_cached\n\n                        samples = guider.sample(noise, samples.clone(), sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=-1)\n\n                        eta_substep_decay   *= eta_decay_scale\n                        eta_decay           *= eta_decay_scale\n                        unsample_eta_decay  *= eta_decay_scale\n\n                        etas_substep_decay  *= eta_decay_scale\n                        etas_decay          *= eta_decay_scale\n                        unsample_etas_decay *= eta_decay_scale\n\n                    sampler.extra_options['noise_seed'] = noise_seed_cached\n                    guider.cfgs = cfgs_cached\n                    sampler.extra_options['sampler_mode'] = sampler_mode_cached\n                    sampler.extra_options['eta_substep']  = eta_substep_cached\n                    sampler.extra_options['eta']          = eta_cached\n                    sampler.extra_options['etas_substep'] = etas_substep_cached\n                    sampler.extra_options['etas']         = etas_cached\n\n                if noise_mask is not None:\n                    if hasattr(samples, 'is_nested') and samples.is_nested:\n                        blended = []\n                        x_initial_list = x_initial.unbind() if hasattr(x_initial, 'is_nested') and x_initial.is_nested else [x_initial]\n                        if hasattr(noise_mask, 'is_nested') and noise_mask.is_nested:\n                            mask_list = noise_mask.unbind()\n                        else:\n                            mask_list = [noise_mask]\n                        for idx, s in enumerate(samples.unbind()):\n                            xi = x_initial_list[idx] if idx < len(x_initial_list) else x_initial_list[0]\n        
                    m = mask_list[idx] if idx < len(mask_list) else mask_list[0]\n                            if s.ndim == m.ndim:\n                                reshaped_mask = comfy.utils.reshape_mask(m, s.shape).to(s.device)\n                                blended.append(s * reshaped_mask + xi.to(s.device) * (1.0 - reshaped_mask))\n                            else:\n                                blended.append(s)\n                        samples = comfy.nested_tensor.NestedTensor(blended)\n                    else:\n                        if hasattr(noise_mask, 'is_nested') and noise_mask.is_nested:\n                            noise_mask = noise_mask.unbind()[0]\n                        reshaped_mask = comfy.utils.reshape_mask(noise_mask, samples.shape).to(samples.device)\n                        samples = samples * reshaped_mask + x_initial.to(samples.device) * (1.0 - reshaped_mask)\n\n                samples = samples.to(comfy.model_management.intermediate_device())\n\n                out = latent_x.copy()\n                out[\"samples\"] = samples\n\n                if \"x0\" in x0_output:\n                    x0_out = work_model.model.process_latent_out(x0_output[\"x0\"].cpu())\n                    if hasattr(samples, 'is_nested') and samples.is_nested:\n                        latent_shapes = [t.shape for t in samples.unbind()]\n                        x0_out = comfy.nested_tensor.NestedTensor(\n                            comfy.utils.unpack_latents(x0_out, latent_shapes)\n                        )\n                    out_denoised = latent_x.copy()\n                    out_denoised[\"samples\"] = x0_out\n                else:\n                    out_denoised = out\n\n                out['positive'] = positive\n                out['negative'] = negative\n                out['model'] = work_model\n                out['sampler'] = sampler\n\n                if noise_mask is not None:\n                    state_info_out['image_initial'] = x_initial\n                    state_info_out['noise_initial'] = noise_initial\n\n                out['state_info'] = state_info_out\n\n                return (out, out_denoised, None)\n\n            out_samples          = []\n            out_denoised_samples = []\n            out_state_info       = []\n            \n            for batch_num in range(latent_image_batch['samples'].shape[0]):\n                latent_unbatch            = copy.deepcopy(latent_x)\n                if isinstance(latent_image_batch['samples'][batch_num], comfy.nested_tensor.NestedTensor):\n                    latent_unbatch['samples'] = latent_image_batch['samples'][batch_num]._copy()\n                else:\n                    latent_unbatch['samples'] = latent_image_batch['samples'][batch_num].clone().unsqueeze(0)\n                \n                if 'BONGMATH' in sampler.extra_options:\n                    sampler.extra_options['batch_num'] = batch_num\n\n\n                if noise_seed == -1 and sampler_mode in {\"unsample\", \"resample\"}:\n                    if latent_image.get('state_info', {}).get('last_rng', None) is not None:\n                        seed = torch.initial_seed() + batch_num\n                    else:\n                        seed = torch.initial_seed() + 1 + batch_num\n                else:\n                    if EO(\"lock_batch_seed\"):\n                        seed = noise_seed\n                    else:\n                        seed = noise_seed + batch_num\n                    torch     .manual_seed(seed)\n                    
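# Seed the CUDA RNG too so GPU-side noise generation matches the per-batch seed.\n                    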
torch.cuda.manual_seed(seed)\n\n                if hasattr(latent_unbatch[\"samples\"], 'is_nested') and latent_unbatch[\"samples\"].is_nested:\n                    x = latent_unbatch[\"samples\"]._copy().to(default_dtype)\n                else:\n                    x = latent_unbatch[\"samples\"].clone().to(default_dtype) # does this type carry into clown after passing through comfy?\n\n\n\n                if sde_noise is None and sampler_mode.startswith(\"unsample\"):\n                    sde_noise = []\n                else:\n                    sde_noise_steps = 1\n\n                for total_steps_iter in range (sde_noise_steps):\n\n                    if noise_type_init != \"none\" and noise_stdev != 0.0:\n                        RESplain(\"Initial latent noise seed: \", seed, debug=True)\n\n                    noise = generate_init_noise(\n                        x=x, seed=seed,\n                        noise_type_init=noise_type_init, noise_stdev=noise_stdev,\n                        noise_mean=noise_mean, noise_normalize=noise_normalize,\n                        sigma_max=sigma_max, sigma_min=sigma_min,\n                        alpha_init=alpha_init, k_init=k_init, EO=EO\n                    )\n\n                    noise_mask = latent_unbatch[\"noise_mask\"] if \"noise_mask\" in latent_unbatch else None\n\n                    x_input = x\n                    if noise_mask is not None and 'noise_initial' in state_info:\n                        stored_noise = state_info.get('noise_initial')\n                        if stored_noise is not None:\n                            if stored_noise.dim() > 3 and stored_noise.shape[0] > batch_num:\n                                stored_noise = stored_noise[batch_num]\n                            if stored_noise.shape == noise.shape:\n                                noise = stored_noise.to(noise.device, dtype=noise.dtype)\n                                RESplain(\"Using stored noise_initial from previous sampler\", debug=True)\n\n                        stored_image = state_info.get('image_initial')\n                        if stored_image is not None:\n                            if stored_image.dim() > 3 and stored_image.shape[0] > batch_num:\n                                stored_image = stored_image[batch_num]\n                            if stored_image.shape == x.shape:\n                                x_input = stored_image.to(x.device, dtype=x.dtype)\n                                RESplain(\"Using stored image_initial from previous sampler\", debug=True)\n\n                    if 'BONGMATH' in sampler.extra_options:\n                        sampler.extra_options['noise_initial'] = noise\n                        sampler.extra_options['image_initial'] = x_input\n\n                    x0_output = {}\n\n                    if latent_image is not None and 'state_info' in latent_image and 'sigmas' in latent_image['state_info']:\n                        steps_len = max(sigmas.shape[-1] - 1,    latent_image['state_info']['sigmas'].shape[-1]-1)\n                    else:\n                        steps_len = sigmas.shape[-1]-1\n                    callback     = latent_preview.prepare_callback(work_model, steps_len, x0_output)\n\n                    if 'BONGMATH' in sampler.extra_options: # verify the sampler is rk_sampler_beta()\n                        sampler.extra_options['state_info']     = copy.deepcopy(state_info)         ##############################\n                        if state_info != {} and state_info != {'data_prev_': 
None}:  #second condition is for ultracascade\n                            sampler.extra_options['state_info']['raw_x']            = state_info['raw_x']           [batch_num]\n                            sampler.extra_options['state_info']['data_prev_']       = state_info['data_prev_']      [batch_num]\n                            sampler.extra_options['state_info']['last_rng']         = state_info['last_rng']        [batch_num]\n                            sampler.extra_options['state_info']['last_rng_substep'] = state_info['last_rng_substep'][batch_num]\n                            if 'image_initial' in state_info and state_info['image_initial'].dim() > 3:\n                                sampler.extra_options['state_info']['image_initial'] = state_info['image_initial'][batch_num]\n                            if 'noise_initial' in state_info and state_info['noise_initial'].dim() > 3:\n                                sampler.extra_options['state_info']['noise_initial'] = state_info['noise_initial'][batch_num]\n                        #state_info     = copy.deepcopy(latent_image['state_info']) if 'state_info' in latent_image else {}\n                        state_info_out = {}\n                        sampler.extra_options['state_info_out'] = state_info_out\n                        \n                    if isinstance(pos_cond[0][0], list):\n                        pos_cond_tmp = pos_cond[batch_num]\n                        positive_tmp = positive[batch_num]\n                    else:\n                        pos_cond_tmp = pos_cond\n                        positive_tmp = positive\n                    \n                    for i in range(len(neg_cond)): # crude fix for copy.deepcopy converting superclass into real object\n                        if 'control' in neg_cond[i][1]:\n                            neg_cond[i][1]['control']          = negative[i][1]['control']\n                            if hasattr(negative[i][1]['control'], 'base'):\n                                neg_cond[i][1]['control'].base     = negative[i][1]['control'].base\n                    for i in range(len(pos_cond_tmp)): # crude fix for copy.deepcopy converting superclass into real object\n                        if 'control' in pos_cond_tmp[i][1]:\n                            pos_cond_tmp[i][1]['control']      = positive_tmp[i][1]['control']\n                            if hasattr(positive_tmp[i][1]['control'], 'base'):\n                                pos_cond_tmp[i][1]['control'].base = positive_tmp[i][1]['control'].base\n                    \n                    # SETUP REGIONAL COND\n                    \n                    if pos_cond_tmp[0][1] is not None: \n                        if 'callback_regional' in pos_cond_tmp[0][1]:\n                            pos_cond_tmp = pos_cond_tmp[0][1]['callback_regional'](work_model)\n                        \n                        if 'AttnMask' in pos_cond_tmp[0][1]:\n                            sampler.extra_options['AttnMask']   = pos_cond_tmp[0][1]['AttnMask']\n                            sampler.extra_options['RegContext'] = pos_cond_tmp[0][1]['RegContext']\n                            sampler.extra_options['RegParam']   = pos_cond_tmp[0][1]['RegParam']\n                            \n                            if isinstance(model.model.model_config, (comfy.supported_models.SDXL, comfy.supported_models.SD15)):\n                                latent_up_dummy = F.interpolate(latent_image['samples'].to(torch.float16), size=(latent_image['samples'].shape[-2] * 2, 
latent_image['samples'].shape[-1] * 2), mode=\"nearest\")\n                                sampler.extra_options['AttnMask'].set_latent(latent_up_dummy)\n                                sampler.extra_options['AttnMask'].generate()\n                                sampler.extra_options['AttnMask'].mask_up   = sampler.extra_options['AttnMask'].attn_mask.mask\n                                \n                                latent_down_dummy = F.interpolate(latent_image['samples'].to(torch.float16), size=(latent_image['samples'].shape[-2] // 2, latent_image['samples'].shape[-1] // 2), mode=\"nearest\")\n                                sampler.extra_options['AttnMask'].set_latent(latent_down_dummy)\n                                sampler.extra_options['AttnMask'].generate()\n                                sampler.extra_options['AttnMask'].mask_down = sampler.extra_options['AttnMask'].attn_mask.mask\n                                \n                                if isinstance(model.model.model_config, comfy.supported_models.SD15):\n                                    latent_down_dummy = F.interpolate(latent_image['samples'].to(torch.float16), size=(latent_image['samples'].shape[-2] // 4, latent_image['samples'].shape[-1] // 4), mode=\"nearest\")\n                                    sampler.extra_options['AttnMask'].set_latent(latent_down_dummy)\n                                    sampler.extra_options['AttnMask'].generate()\n                                    sampler.extra_options['AttnMask'].mask_down2 = sampler.extra_options['AttnMask'].attn_mask.mask\n                                    \n                            if isinstance(model.model.model_config, (comfy.supported_models.Stable_Cascade_C)):\n                                latent_up_dummy = F.interpolate(latent_image['samples'].to(torch.float16), size=(latent_image['samples'].shape[-2] * 2, latent_image['samples'].shape[-1] * 2), mode=\"nearest\")\n                                sampler.extra_options['AttnMask'].set_latent(latent_up_dummy)\n                                # cascade concats 4 + 4 tokens (clip_text_pooled, clip_img)\n                                sampler.extra_options['AttnMask'].context_lens = [context_len + 8 for context_len in sampler.extra_options['AttnMask'].context_lens] \n                                sampler.extra_options['AttnMask'].text_len = sum(sampler.extra_options['AttnMask'].context_lens)\n                            else:\n                                sampler.extra_options['AttnMask'].set_latent(latent_image['samples'])\n                            sampler.extra_options['AttnMask'].generate()\n                            \n                    if neg_cond[0][1] is not None: \n                        if 'callback_regional' in neg_cond[0][1]:\n                            neg_cond = neg_cond[0][1]['callback_regional'](work_model)\n                        \n                        if 'AttnMask' in neg_cond[0][1]:\n                            sampler.extra_options['AttnMask_neg']   = neg_cond[0][1]['AttnMask']\n                            sampler.extra_options['RegContext_neg'] = neg_cond[0][1]['RegContext']\n                            sampler.extra_options['RegParam_neg']   = neg_cond[0][1]['RegParam']\n                            \n                            if isinstance(model.model.model_config, (comfy.supported_models.SDXL, comfy.supported_models.SD15)):\n                                latent_up_dummy = F.interpolate(latent_image['samples'].to(torch.float16), 
size=(latent_image['samples'].shape[-2] * 2, latent_image['samples'].shape[-1] * 2), mode=\"nearest\")\n                                sampler.extra_options['AttnMask_neg'].set_latent(latent_up_dummy)\n                                sampler.extra_options['AttnMask_neg'].generate()\n                                sampler.extra_options['AttnMask_neg'].mask_up   = sampler.extra_options['AttnMask_neg'].attn_mask.mask\n                                \n                                latent_down_dummy = F.interpolate(latent_image['samples'].to(torch.float16), size=(latent_image['samples'].shape[-2] // 2, latent_image['samples'].shape[-1] // 2), mode=\"nearest\")\n                                sampler.extra_options['AttnMask_neg'].set_latent(latent_down_dummy)\n                                sampler.extra_options['AttnMask_neg'].generate()\n                                sampler.extra_options['AttnMask_neg'].mask_down = sampler.extra_options['AttnMask_neg'].attn_mask.mask\n                                \n                                if isinstance(model.model.model_config, comfy.supported_models.SD15):\n                                    latent_down_dummy = F.interpolate(latent_image['samples'].to(torch.float16), size=(latent_image['samples'].shape[-2] // 4, latent_image['samples'].shape[-1] // 4), mode=\"nearest\")\n                                    sampler.extra_options['AttnMask_neg'].set_latent(latent_down_dummy)\n                                    sampler.extra_options['AttnMask_neg'].generate()\n                                    sampler.extra_options['AttnMask_neg'].mask_down2 = sampler.extra_options['AttnMask_neg'].attn_mask.mask\n                            \n                            if isinstance(model.model.model_config, (comfy.supported_models.Stable_Cascade_C)):\n                                latent_up_dummy = F.interpolate(latent_image['samples'].to(torch.float16), size=(latent_image['samples'].shape[-2] * 2, latent_image['samples'].shape[-1] * 2), mode=\"nearest\")\n                                sampler.extra_options['AttnMask_neg'].set_latent(latent_up_dummy)\n                                # cascade concats 4 + 4 tokens (clip_text_pooled, clip_img)\n                                sampler.extra_options['AttnMask_neg'].context_lens = [context_len + 8 for context_len in sampler.extra_options['AttnMask_neg'].context_lens] \n                                sampler.extra_options['AttnMask_neg'].text_len = sum(sampler.extra_options['AttnMask_neg'].context_lens)\n                            else:\n                                sampler.extra_options['AttnMask_neg'].set_latent(latent_image['samples'])\n                            sampler.extra_options['AttnMask_neg'].generate()\n                    \n                    \n                    \n                    \n                    \n                    if guider is None:\n                        guider = SharkGuider(work_model)\n                        flow_cond = options_mgr.get('flow_cond', {})\n                        if flow_cond != {} and 'yt_positive' in flow_cond and 'yt_inv_positive' not in flow_cond:\n                            guider.set_conds(yt_positive=flow_cond.get('yt_positive'), yt_negative=flow_cond.get('yt_negative'),)\n                            guider.set_cfgs(yt=flow_cond.get('yt_cfg'), xt=cfg)\n                        elif flow_cond != {} and 'yt_positive' in flow_cond and 'yt_inv_positive' in flow_cond:\n                            
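# Inverse flow guides are present as well: register both cond sets and their cfgs on the SharkGuider.\n                            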
guider.set_conds(yt_positive=flow_cond.get('yt_positive'), yt_negative=flow_cond.get('yt_negative'), yt_inv_positive=flow_cond.get('yt_inv_positive'), yt_inv_negative=flow_cond.get('yt_inv_negative'),)\n                            guider.set_cfgs(yt=flow_cond.get('yt_cfg'), yt_inv=flow_cond.get('yt_inv_cfg'), xt=cfg)\n                        else:\n                            guider.set_cfgs(xt=cfg)\n                        \n                        guider.set_conds(xt_positive=pos_cond_tmp, xt_negative=neg_cond)\n                        \n                    elif isinstance(guider, SharkGuider):\n                        guider.set_cfgs(xt=cfg)\n                        guider.set_conds(xt_positive=pos_cond_tmp, xt_negative=neg_cond)\n                    else:\n                        try:\n                            guider.set_cfg(cfg)\n                        except Exception:\n                            RESplain(\"SharkWarning: guider.set_cfg failed; assuming cfg is already set correctly.\")\n                        try:\n                            guider.set_conds(pos_cond_tmp, neg_cond)\n                        except Exception:\n                            RESplain(\"SharkWarning: guider.set_conds failed; assuming conds are already set correctly.\")\n                    \n                    if rebounds > 0:\n                        cfgs_cached = guider.cfgs\n                        steps_to_run_cached = sampler.extra_options['steps_to_run']\n                        eta_cached         = sampler.extra_options['eta']\n                        eta_substep_cached = sampler.extra_options['eta_substep']\n                        \n                        etas_cached         = sampler.extra_options['etas'].clone()\n                        etas_substep_cached = sampler.extra_options['etas_substep'].clone()\n                        \n                        unsample_etas = torch.full_like(etas_cached, unsample_eta)\n                        rk_type_cached = sampler.extra_options['rk_type']\n                        \n                        if sampler.extra_options['sampler_mode'] == \"unsample\":\n                            guider.cfgs = {\n                                'xt': unsample_cfg,\n                                'yt': unsample_cfg,\n                            }\n                            if unsample_eta != -1.0:\n                                sampler.extra_options['eta_substep']  = unsample_eta\n                                sampler.extra_options['eta']          = unsample_eta\n                                sampler.extra_options['etas_substep'] = unsample_etas\n                                sampler.extra_options['etas']         = unsample_etas\n                            if unsampler_name != \"none\":\n                                sampler.extra_options['rk_type']      = unsampler_name\n                            if unsample_steps_to_run > -1:\n                                sampler.extra_options['steps_to_run'] = unsample_steps_to_run\n                                \n                        else:\n                            guider.cfgs = cfgs_cached\n                        \n                        guider.cfgs = cfgs_cached\n                        sampler.extra_options['steps_to_run'] = steps_to_run_cached\n                        \n                        eta_decay           = eta_cached\n                        eta_substep_decay   = eta_substep_cached\n                        unsample_eta_decay  = unsample_eta\n                        \n                        etas_decay          = 
etas_cached\n                        etas_substep_decay  = etas_substep_cached\n                        unsample_etas_decay = unsample_etas\n                    if isinstance(x_input, comfy.nested_tensor.NestedTensor):\n                        samples = guider.sample(noise, x_input._copy(), sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise_seed)\n                    else:\n                        samples = guider.sample(noise, x_input.clone(), sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise_seed)\n                    \n                    if rebounds > 0: \n                        noise_seed_cached   = sampler.extra_options['noise_seed']\n                        cfgs_cached         = guider.cfgs\n                        sampler_mode_cached = sampler.extra_options['sampler_mode']\n                        \n                        for restarts_iter in range(rebounds):\n                            sampler.extra_options['state_info'] = sampler.extra_options['state_info_out']\n                            \n                            #steps = sampler.extra_options['state_info_out']['sigmas'].shape[-1] - 3\n                            sigmas = sampler.extra_options['state_info_out']['sigmas'] if sigmas is None else sigmas\n                            #if len(sigmas) > 2 and sigmas[1] < sigmas[2] and sampler.extra_options['state_info_out']['sampler_mode'] == \"unsample\": # and sampler_mode == \"resample\":\n                            #    sigmas = torch.flip(sigmas, dims=[0])\n                                \n                            if   sampler.extra_options['sampler_mode'] == \"standard\":\n                                sampler.extra_options['sampler_mode'] = \"unsample\"\n                            elif sampler.extra_options['sampler_mode'] == \"unsample\":\n                                sampler.extra_options['sampler_mode'] = \"resample\"\n                            elif sampler.extra_options['sampler_mode'] == \"resample\":\n                                sampler.extra_options['sampler_mode'] = \"unsample\"\n                            \n                            sampler.extra_options['noise_seed'] = -1\n                            \n                            if sampler.extra_options['sampler_mode'] == \"unsample\":\n                                guider.cfgs = {\n                                    'xt': unsample_cfg,\n                                    'yt': unsample_cfg,\n                                }\n                                if unsample_eta != -1.0:\n                                    sampler.extra_options['eta_substep']  = unsample_eta_decay\n                                    sampler.extra_options['eta']          = unsample_eta_decay\n                                    sampler.extra_options['etas_substep'] = unsample_etas\n                                    sampler.extra_options['etas']         = unsample_etas\n                                else:\n                                    sampler.extra_options['eta_substep']  = eta_substep_decay\n                                    sampler.extra_options['eta']          = eta_decay\n                                    sampler.extra_options['etas_substep'] = etas_substep_decay\n                                    sampler.extra_options['etas']         = etas_decay\n                                if unsampler_name != \"none\":\n                                    sampler.extra_options['rk_type']  = 
unsampler_name\n                                if unsample_steps_to_run > -1:\n                                    sampler.extra_options['steps_to_run'] = unsample_steps_to_run\n                            else:\n                                guider.cfgs = cfgs_cached\n                                sampler.extra_options['eta_substep']  = eta_substep_decay\n                                sampler.extra_options['eta']          = eta_decay\n                                sampler.extra_options['etas_substep'] = etas_substep_decay\n                                sampler.extra_options['etas']         = etas_decay\n                                sampler.extra_options['rk_type']      = rk_type_cached\n                                sampler.extra_options['steps_to_run'] = steps_to_run_cached\n\n                                \n                            samples = guider.sample(noise, samples.clone(), sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=-1)\n\n                            eta_substep_decay   *= eta_decay_scale\n                            eta_decay           *= eta_decay_scale\n                            unsample_eta_decay  *= eta_decay_scale\n                            \n                            etas_substep_decay  *= eta_decay_scale\n                            etas_decay          *= eta_decay_scale\n                            unsample_etas_decay *= eta_decay_scale                        \n                        \n                        sampler.extra_options['noise_seed'] = noise_seed_cached\n                        guider.cfgs = cfgs_cached\n                        sampler.extra_options['sampler_mode'] = sampler_mode_cached\n                        sampler.extra_options['eta_substep']  = eta_substep_cached\n                        sampler.extra_options['eta']          = eta_cached\n                        sampler.extra_options['etas_substep'] = etas_substep_cached\n                        sampler.extra_options['etas']         = etas_cached\n                        sampler.extra_options['rk_type']      = rk_type_cached\n                        sampler.extra_options['steps_to_run'] = steps_to_run_cached   # TODO: verify this is carried on\n\n                    if noise_mask is not None:\n                        if 'BONGMATH' in sampler.extra_options:\n                            batch_state_info = sampler.extra_options.get('state_info', {})\n                            latent_for_mask = batch_state_info.get('image_initial', x)\n                        else:\n                            stored_image = state_info.get('image_initial')\n                            if stored_image is not None and stored_image.dim() > 3:\n                                latent_for_mask = stored_image[batch_num]\n                            elif stored_image is not None:\n                                latent_for_mask = stored_image\n                            else:\n                                latent_for_mask = x\n                        reshaped_mask = comfy.utils.reshape_mask(noise_mask, samples.shape).to(samples.device)\n                        samples = samples * reshaped_mask + latent_for_mask.to(samples.device) * (1.0 - reshaped_mask)\n\n                    out = latent_unbatch.copy()\n                    out[\"samples\"] = samples\n                    \n                    if \"x0\" in x0_output:\n                        out_denoised            = latent_unbatch.copy()\n                        out_denoised[\"samples\"] = 
work_model.model.process_latent_out(x0_output[\"x0\"].cpu())\n                    else:\n                        out_denoised            = out\n\n                    out_samples         .append(out         [\"samples\"])\n                    out_denoised_samples.append(out_denoised[\"samples\"])\n                    \n                    \n                    \n                    # ACCUMULATE UNSAMPLED SDE NOISE\n                    if total_steps_iter > 1: \n                        if 'raw_x' in state_info_out:\n                            sde_noise_out = state_info_out['raw_x']\n                        else:\n                            sde_noise_out = out[\"samples\"]  \n                        sde_noise.append(normalize_zscore(sde_noise_out, channelwise=True, inplace=True))    \n                    \n                    out_state_info.append(state_info_out)\n                    \n                    # INCREMENT BATCH LOOP\n                    if not EO(\"lock_batch_seed\"):\n                        seed += 1\n                    if latent_image is not None: #needed for ultracascade, where latent_image input is not really used for stage C/first stage\n                        if latent_image.get('state_info', {}).get('last_rng', None) is None:\n                            torch.manual_seed(seed)\n\n\n            gc.collect()\n\n            # STACK SDE NOISES, SAVE STATE INFO\n            state_info_out = out_state_info[0]\n            if 'raw_x' in out_state_info[0]:\n                state_info_out['raw_x']            = torch.stack([out_state_info[_]['raw_x']            for _ in range(len(out_state_info))])\n                state_info_out['data_prev_']       = torch.stack([out_state_info[_]['data_prev_']       for _ in range(len(out_state_info))])\n                state_info_out['last_rng']         = torch.stack([out_state_info[_]['last_rng']         for _ in range(len(out_state_info))])\n                state_info_out['last_rng_substep'] = torch.stack([out_state_info[_]['last_rng_substep'] for _ in range(len(out_state_info))])\n                if 'image_initial' in out_state_info[0]:\n                    state_info_out['image_initial'] = torch.stack([out_state_info[_]['image_initial'] for _ in range(len(out_state_info))])\n                if 'noise_initial' in out_state_info[0]:\n                    state_info_out['noise_initial'] = torch.stack([out_state_info[_]['noise_initial'] for _ in range(len(out_state_info))])\n            elif 'raw_x' in state_info:\n                state_info_out = state_info\n\n            out_samples             = [tensor.squeeze(0) for tensor in out_samples]\n            out_denoised_samples    = [tensor.squeeze(0) for tensor in out_denoised_samples]\n\n            out         ['samples'] = torch.stack(out_samples,          dim=0)\n            out_denoised['samples'] = torch.stack(out_denoised_samples, dim=0)\n\n            out['state_info']       = copy.deepcopy(state_info_out)\n            state_info              = {}\n            \n            out['positive'] = positive\n            out['negative'] = negative\n            out['model']    = work_model#.clone()\n            out['sampler']  = sampler\n            \n\n            return (out, out_denoised, sde_noise,)\n\n\n\n\nclass SharkSampler_Beta:\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                \"scheduler\":       (get_res4lyf_scheduler_list(), {\"default\": \"beta57\"},),\n                \"steps\":           (\"INT\",                       
 {\"default\": 30,  \"min\": 1,        \"max\": MAX_STEPS}),\n                \"steps_to_run\":    (\"INT\",                        {\"default\": -1,  \"min\": -1,       \"max\": MAX_STEPS}),\n                \"denoise\":         (\"FLOAT\",                      {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":0.01}),\n                \"cfg\":             (\"FLOAT\",                      {\"default\": 5.5, \"min\": -10000.0, \"max\": 10000.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Negative values use channelwise CFG.\" }),\n                \"seed\":            (\"INT\",                        {\"default\": 0,   \"min\": -1,       \"max\": 0xffffffffffffffff}),\n                \"sampler_mode\": (['unsample', 'standard', 'resample'], {\"default\": \"standard\"}),\n                },\n            \"optional\": {\n                \"model\":           (\"MODEL\",),\n                \"positive\":        (\"CONDITIONING\", ),\n                \"negative\":        (\"CONDITIONING\", ),\n                \"sampler\":         (\"SAMPLER\", ),\n                \"sigmas\":          (\"SIGMAS\", ),\n                \"latent_image\":    (\"LATENT\", ),     \n                \"options\":         (\"OPTIONS\", ),   \n                }\n            }\n\n    RETURN_TYPES = (\"LATENT\", \n                    \"LATENT\", \n                    \"OPTIONS\",)\n    \n    RETURN_NAMES = (\"output\", \n                    \"denoised\",\n                    \"options\",) \n    \n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/samplers\"\n    \n    def main(self, \n            model                                    = None,\n            cfg             : float                  =  5.5, \n            scheduler       : str                    = \"beta57\", \n            steps           : int                    = 30, \n            steps_to_run    : int                    = -1,\n            sampler_mode    : str                    = \"standard\",\n            denoise         : float                  =  1.0, \n            denoise_alt     : float                  =  1.0,\n            noise_type_init : str                    = \"gaussian\",\n            latent_image    : Optional[dict[Tensor]] = None,\n            \n            positive                                 = None,\n            negative                                 = None,\n            sampler                                  = None,\n            sigmas          : Optional[Tensor]       = None,\n            noise_stdev     : float                  =  1.0,\n            noise_mean      : float                  =  0.0,\n            noise_normalize : bool                   = True,\n            \n            d_noise         : float                  =  1.0,\n            alpha_init      : float                  = -1.0,\n            k_init          : float                  =  1.0,\n            cfgpp           : float                  =  0.0,\n            seed            : int                    = -1,\n            options                                  = None,\n            sde_noise                                = None,\n            sde_noise_steps : int                    =  1,\n        \n            extra_options   : str                    = \"\", \n            **kwargs,\n            ): \n        \n\n        options_mgr = OptionsManager(options, **kwargs)\n        \n        if denoise < 0:\n            denoise_alt = -denoise\n            denoise = 1.0\n        \n        #if 'steps_to_run' in sampler.extra_options:\n        #    
sampler.extra_options['steps_to_run'] = steps_to_run\n        # pull positive/negative/sampler/model from a chained latent when they aren't wired in directly\n        if latent_image is not None and 'positive' in latent_image and positive is None:\n            positive = latent_image['positive']\n        if latent_image is not None and 'negative' in latent_image and negative is None:\n            negative = latent_image['negative']\n        if latent_image is not None and 'sampler'  in latent_image and sampler  is None:\n            sampler  = latent_image['sampler']\n        if latent_image is not None and 'model'    in latent_image and model    is None:\n            model    = latent_image['model']\n            \n        #if model.model.model_config.unet_config.get('stable_cascade_stage') == 'b':\n        #    if 'noise_type_sde' in sampler.extra_options:\n        #        noise_type_sde         = \"pyramid-cascade_B\"\n        #        noise_type_sde_substep = \"pyramid-cascade_B\"\n        \n        output, denoised, sde_noise = SharkSampler().main(\n            model           = model, \n            cfg             = cfg, \n            scheduler       = scheduler,\n            steps           = steps, \n            steps_to_run    = steps_to_run,\n            denoise         = denoise,\n            latent_image    = latent_image, \n            positive        = positive,\n            negative        = negative, \n            sampler         = sampler, \n            cfgpp           = cfgpp, \n            noise_seed      = seed, \n            options         = options, \n            sde_noise       = sde_noise, \n            sde_noise_steps = sde_noise_steps, \n            noise_type_init = noise_type_init,\n            noise_stdev     = noise_stdev,\n            sampler_mode    = sampler_mode,\n            denoise_alt     = denoise_alt,\n            sigmas          = sigmas,\n\n            extra_options   = extra_options)\n        \n        return (output, denoised, options_mgr.as_dict())\n\n\n\n\n\nclass SharkChainsampler_Beta(SharkSampler_Beta):  \n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                \"steps_to_run\":    (\"INT\",                        {\"default\": -1,  \"min\": -1,       \"max\": MAX_STEPS}),\n                \"cfg\":             (\"FLOAT\",                      {\"default\": 5.5, \"min\": -10000.0, \"max\": 10000.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Negative values use channelwise CFG.\" }),\n                \"sampler_mode\": (['unsample', 'resample'], {\"default\": \"resample\"}),\n                },\n            \"optional\": {\n                \"model\":           (\"MODEL\",),\n                \"positive\":        (\"CONDITIONING\", ),\n                \"negative\":        (\"CONDITIONING\", ),\n                \"sampler\":         (\"SAMPLER\", ),\n                \"sigmas\":          (\"SIGMAS\", ),\n                \"latent_image\":    (\"LATENT\", ),     \n                \"options\":         (\"OPTIONS\", ),   \n                }\n            }\n\n    def main(self, \n            model                 = None,\n            steps_to_run          = -1, \n            cfg                   = 5.5, \n            latent_image          = None,\n            sigmas                = None,\n            sampler_mode          = \"\",\n            seed            : int = -1, \n             **kwargs):  \n        \n        # recover the step count from the sigma schedule saved by the previous sampling pass\n        steps = latent_image['state_info']['sigmas'].shape[-1] - 3\n        sigmas = latent_image['state_info']['sigmas'] if sigmas is None else sigmas\n        if len(sigmas) > 2 and sigmas[1] < sigmas[2] and latent_image['state_info']['sampler_mode'] == \"unsample\" and sampler_mode == \"resample\":\n         
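# the schedule stored by an unsample pass runs low-to-high; flip it so resampling denoises high-to-low\n         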
   sigmas = torch.flip(sigmas, dims=[0])\n        \n        return super().main(model=model, sampler_mode=sampler_mode, steps_to_run=steps_to_run, sigmas=sigmas, steps=steps, cfg=cfg, seed=seed, latent_image=latent_image, **kwargs)\n\n\n\n\n\nclass ClownSamplerAdvanced_Beta:\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\"required\":\n                    {\n                    \"noise_type_sde\":         (NOISE_GENERATOR_NAMES_SIMPLE, {\"default\": \"gaussian\"}),\n                    \"noise_type_sde_substep\": (NOISE_GENERATOR_NAMES_SIMPLE, {\"default\": \"gaussian\"}),\n                    \"noise_mode_sde\":         (NOISE_MODE_NAMES,             {\"default\": 'hard',                                                        \"tooltip\": \"How noise scales with the sigma schedule. Hard is the most aggressive, the others start strong and drop rapidly.\"}),\n                    \"noise_mode_sde_substep\": (NOISE_MODE_NAMES,             {\"default\": 'hard',                                                        \"tooltip\": \"How noise scales with the sigma schedule. Hard is the most aggressive, the others start strong and drop rapidly.\"}),\n                    \"overshoot_mode\":         (NOISE_MODE_NAMES,             {\"default\": 'hard',                                                        \"tooltip\": \"How step size overshoot scales with the sigma schedule. Hard is the most aggressive, the others start strong and drop rapidly.\"}),\n                    \"overshoot_mode_substep\": (NOISE_MODE_NAMES,             {\"default\": 'hard',                                                        \"tooltip\": \"How substep size overshoot scales with the sigma schedule. Hard is the most aggressive, the others start strong and drop rapidly.\"}),\n                    \"eta\":                    (\"FLOAT\",                      {\"default\": 0.5, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Calculated noise amount to be added, then removed, after each step.\"}),\n                    \"eta_substep\":            (\"FLOAT\",                      {\"default\": 0.5, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Calculated noise amount to be added, then removed, after each step.\"}),\n                    \"overshoot\":              (\"FLOAT\",                      {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Boost the size of each denoising step, then rescale to match the original. Has a softening effect.\"}),\n                    \"overshoot_substep\":      (\"FLOAT\",                      {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Boost the size of each denoising substep, then rescale to match the original. Has a softening effect.\"}),\n                    \"noise_scaling_weight\":   (\"FLOAT\",                      {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set to positive values to create a sharper, grittier, more detailed image. Set to negative values to soften and deepen the colors.\"}),\n                    \"noise_boost_step\":       (\"FLOAT\",                      {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set to positive values to create a sharper, grittier, more detailed image. 
Set to negative values to soften and deepen the colors.\"}),\n                    \"noise_boost_substep\":    (\"FLOAT\",                      {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set to positive values to create a sharper, grittier, more detailed image. Set to negative values to soften and deepen the colors.\"}),\n                    \"noise_anchor\":           (\"FLOAT\",                      {\"default\": 1.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Typically set to between 1.0 and 0.0. Lower values create a grittier, more detailed image.\"}),\n                    \"s_noise\":                (\"FLOAT\",                      {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01,                 \"tooltip\": \"Adds extra SDE noise. Values around 1.03-1.07 can lead to a moderate boost in detail and paint textures.\"}),\n                    \"s_noise_substep\":        (\"FLOAT\",                      {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01,                 \"tooltip\": \"Adds extra SDE noise. Values around 1.03-1.07 can lead to a moderate boost in detail and paint textures.\"}),\n                    \"d_noise\":                (\"FLOAT\",                      {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01,                 \"tooltip\": \"Downscales the sigma schedule. Values around 0.98-0.95 can lead to a large boost in detail and paint textures.\"}),\n                    \"momentum\":               (\"FLOAT\",                      {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01,                 \"tooltip\": \"Accelerate convergence with positive values when sampling, negative values when unsampling.\"}),\n                    \"noise_seed_sde\":         (\"INT\",                        {\"default\": -1, \"min\": -1, \"max\": 0xffffffffffffffff}),\n                    \"sampler_name\":           (get_sampler_name_list(),      {\"default\": get_default_sampler_name()}), \n\n                    \"implicit_type\":          (IMPLICIT_TYPE_NAMES,          {\"default\": \"predictor-corrector\"}), \n                    \"implicit_type_substeps\": (IMPLICIT_TYPE_NAMES,          {\"default\": \"predictor-corrector\"}), \n                    \"implicit_steps\":         (\"INT\",                        {\"default\": 0, \"min\": 0, \"max\": 10000}),\n                    \"implicit_substeps\":      (\"INT\",                        {\"default\": 0, \"min\": 0, \"max\": 10000}),\n                    \"bongmath\":               (\"BOOLEAN\",                    {\"default\": True}),\n                    },\n                \"optional\": \n                    {\n                    \"guides\":                 (\"GUIDES\", ),     \n                    \"automation\":             (\"AUTOMATION\", ),\n                    \"extra_options\":          (\"STRING\",                     {\"default\": \"\", \"multiline\": True}),   \n                    \"options\":                (\"OPTIONS\", ),   \n                    }\n                }\n\n    RETURN_TYPES = (\"SAMPLER\",)\n    RETURN_NAMES = (\"sampler\", ) \n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/samplers\"\n    EXPERIMENTAL = True\n    \n    def main(self, \n            noise_type_sde                : str = \"gaussian\",\n            noise_type_sde_substep        : str = \"gaussian\",\n            noise_mode_sde                : str = \"hard\",\n            
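# per the tooltips above: 'hard' is the most aggressive noise/overshoot scaling; the other modes start strong and drop rapidly\n            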
overshoot_mode                : str = \"hard\",\n            overshoot_mode_substep        : str = \"hard\",\n            \n            eta                           : float = 0.5,\n            eta_substep                   : float = 0.5,\n            momentum                      : float = 0.0,\n\n\n\n            noise_scaling_weight          : float = 0.0,\n            noise_scaling_type            : str   = \"sampler\",\n            noise_scaling_mode            : str   = \"linear\",\n            noise_scaling_eta             : float = 0.0,\n            noise_scaling_cycles          : int   = 1,\n            \n            noise_scaling_weights         : Optional[Tensor]       = None,\n            noise_scaling_etas            : Optional[Tensor]       = None,\n            \n            noise_boost_step              : float = 0.0,\n            noise_boost_substep           : float = 0.0,\n            noise_boost_normalize         : bool  = True,\n            noise_anchor                  : float = 1.0,\n            \n            s_noise                       : float = 1.0,\n            s_noise_substep               : float = 1.0,\n            d_noise                       : float = 1.0,\n            d_noise_start_step            : int   = 0,\n            d_noise_inv                   : float = 1.0,\n            d_noise_inv_start_step        : int   = 0,\n            \n            \n            \n            alpha_sde                     : float = -1.0,\n            k_sde                         : float = 1.0,\n            cfgpp                         : float = 0.0,\n            c1                            : float = 0.0,\n            c2                            : float = 0.5,\n            c3                            : float = 1.0,\n            noise_seed_sde                : int = -1,\n            sampler_name                  : str = \"res_2m\",\n            implicit_sampler_name         : str = \"gauss-legendre_2s\",\n            \n            implicit_substeps             : int = 0,\n            implicit_steps                : int = 0,\n            \n            rescale_floor                 : bool = True,\n            sigmas_override               : Optional[Tensor] = None,\n            \n            guides                        = None,\n            options                       = None,\n            sde_noise                     = None,\n            sde_noise_steps               : int = 1,\n            \n            extra_options                 : str = \"\",\n            automation                    = None,\n            etas                          : Optional[Tensor] = None,\n            etas_substep                  : Optional[Tensor] = None,\n            s_noises                      : Optional[Tensor] = None,\n            s_noises_substep              : Optional[Tensor] = None,\n            epsilon_scales                : Optional[Tensor] = None,\n            regional_conditioning_weights : Optional[Tensor] = None,\n            frame_weights_mgr             = None,\n            noise_mode_sde_substep        : str = \"hard\",\n            \n            overshoot                     : float = 0.0,\n            overshoot_substep             : float = 0.0,\n\n            bongmath                      : bool = True,\n            \n            implicit_type                 : str = \"predictor-corrector\",\n            implicit_type_substeps        : str = \"predictor-corrector\",\n            \n            rk_swap_step                  : int = MAX_STEPS,\n            
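# rk_swap_*: appear to control swapping to a different RK sampler type mid-run; forwarded into the ksampler options below\n            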
rk_swap_print                 : bool = False,\n            rk_swap_threshold             : float = 0.0,\n            rk_swap_type                  : str = \"\",\n            \n            steps_to_run                  : int = -1,\n            \n            sde_mask                      : Optional[Tensor] = None,\n            \n            **kwargs,\n            ): \n        \n        \n        \n            options_mgr = OptionsManager(options, **kwargs)\n            extra_options    += \"\\n\" + options_mgr.get('extra_options', \"\")\n            EO = ExtraOptions(extra_options)\n            default_dtype = EO(\"default_dtype\", torch.float64)\n\n    \n    \n            sampler_name, implicit_sampler_name = process_sampler_name(sampler_name)\n\n            implicit_steps_diag = implicit_substeps\n            implicit_steps_full = implicit_steps\n\n            if noise_mode_sde == \"none\":\n                eta = 0.0\n                noise_mode_sde = \"hard\"\n\n            noise_type_sde    = options_mgr.get('noise_type_sde'   , noise_type_sde)\n            noise_mode_sde    = options_mgr.get('noise_mode_sde'   , noise_mode_sde)\n            eta               = options_mgr.get('eta'              , eta)\n            eta_substep       = options_mgr.get('eta_substep'      , eta_substep)\n            \n            \n            \n            noise_scaling_weight   = options_mgr.get('noise_scaling_weight'  , noise_scaling_weight)\n            noise_scaling_type     = options_mgr.get('noise_scaling_type'    , noise_scaling_type)\n            noise_scaling_mode     = options_mgr.get('noise_scaling_mode'    , noise_scaling_mode)\n            noise_scaling_eta      = options_mgr.get('noise_scaling_eta'     , noise_scaling_eta)\n            noise_scaling_cycles   = options_mgr.get('noise_scaling_cycles'  , noise_scaling_cycles)\n            \n            noise_scaling_weights  = options_mgr.get('noise_scaling_weights' , noise_scaling_weights)\n            noise_scaling_etas     = options_mgr.get('noise_scaling_etas'    , noise_scaling_etas)\n            \n            noise_boost_step       = options_mgr.get('noise_boost_step'      , noise_boost_step)\n            noise_boost_substep    = options_mgr.get('noise_boost_substep'   , noise_boost_substep)\n            noise_boost_normalize  = options_mgr.get('noise_boost_normalize' , noise_boost_normalize)\n            noise_anchor           = options_mgr.get('noise_anchor'          , noise_anchor)\n            \n            s_noise                = options_mgr.get('s_noise'               , s_noise)\n            s_noise_substep        = options_mgr.get('s_noise_substep'       , s_noise_substep)\n            d_noise                = options_mgr.get('d_noise'               , d_noise)\n            d_noise_start_step     = options_mgr.get('d_noise_start_step'    , d_noise_start_step)\n            d_noise_inv            = options_mgr.get('d_noise_inv'           , d_noise_inv)\n            d_noise_inv_start_step = options_mgr.get('d_noise_inv_start_step', d_noise_inv_start_step)\n            \n            \n            \n            alpha_sde         = options_mgr.get('alpha_sde'        , alpha_sde)\n            k_sde             = options_mgr.get('k_sde'            , k_sde)\n            c1                = options_mgr.get('c1'               , c1)\n            c2                = options_mgr.get('c2'               , c2)\n            c3                = options_mgr.get('c3'               , c3)\n\n            frame_weights_mgr = 
options_mgr.get('frame_weights_mgr', frame_weights_mgr)\n            sde_noise         = options_mgr.get('sde_noise'        , sde_noise)\n            sde_noise_steps   = options_mgr.get('sde_noise_steps'  , sde_noise_steps)\n            \n            rk_swap_step      = options_mgr.get('rk_swap_step'     , rk_swap_step)\n            rk_swap_print     = options_mgr.get('rk_swap_print'    , rk_swap_print)\n            rk_swap_threshold = options_mgr.get('rk_swap_threshold', rk_swap_threshold)\n            rk_swap_type      = options_mgr.get('rk_swap_type'     , rk_swap_type)\n\n            steps_to_run      = options_mgr.get('steps_to_run'     , steps_to_run)\n            \n            noise_seed_sde    = options_mgr.get('noise_seed_sde'   , noise_seed_sde)\n            momentum          = options_mgr.get('momentum'         , momentum)\n\n            sde_mask          = options_mgr.get('sde_mask'         , sde_mask)\n\n\n            rescale_floor = EO(\"rescale_floor\")\n\n            if automation is not None:\n                etas              = automation['etas']              if 'etas'              in automation else None\n                etas_substep      = automation['etas_substep']      if 'etas_substep'      in automation else None\n                s_noises          = automation['s_noises']          if 's_noises'          in automation else None\n                s_noises_substep  = automation['s_noises_substep']  if 's_noises_substep'  in automation else None\n                epsilon_scales    = automation['epsilon_scales']    if 'epsilon_scales'    in automation else None\n                frame_weights_mgr = automation['frame_weights_mgr'] if 'frame_weights_mgr' in automation else None\n                \n            etas             = options_mgr.get('etas',         etas)\n            etas_substep     = options_mgr.get('etas_substep', etas_substep)\n            \n            s_noises         = options_mgr.get('s_noises',         s_noises)\n            s_noises_substep = options_mgr.get('s_noises_substep', s_noises_substep)\n\n            etas             = initialize_or_scale(etas,             eta,             MAX_STEPS).to(default_dtype)\n            etas_substep     = initialize_or_scale(etas_substep,     eta_substep,     MAX_STEPS).to(default_dtype)\n            s_noises         = initialize_or_scale(s_noises,         s_noise,         MAX_STEPS).to(default_dtype)\n            s_noises_substep = initialize_or_scale(s_noises_substep, s_noise_substep, MAX_STEPS).to(default_dtype)\n\n            etas             = F.pad(etas,             (0, MAX_STEPS), value=0.0)\n            etas_substep     = F.pad(etas_substep,     (0, MAX_STEPS), value=0.0)\n            s_noises         = F.pad(s_noises,         (0, MAX_STEPS), value=1.0)\n            s_noises_substep = F.pad(s_noises_substep, (0, MAX_STEPS), value=1.0)\n\n            if sde_noise is None:\n                sde_noise = []\n            else:\n                sde_noise = copy.deepcopy(sde_noise)\n                sde_noise = normalize_zscore(sde_noise, channelwise=True, inplace=True)\n\n\n            sampler = comfy.samplers.ksampler(\"rk_beta\", \n                {\n                    \"eta\"                           : eta,\n                    \"eta_substep\"                   : eta_substep,\n\n                    \"alpha\"                         : alpha_sde,\n                    \"k\"                             : k_sde,\n                    \"c1\"                            : c1,\n                    \"c2\"                     
       : c2,\n                    \"c3\"                            : c3,\n                    \"cfgpp\"                         : cfgpp,\n\n                    \"noise_sampler_type\"            : noise_type_sde,\n                    \"noise_sampler_type_substep\"    : noise_type_sde_substep,\n                    \"noise_mode_sde\"                : noise_mode_sde,\n                    \"noise_seed\"                    : noise_seed_sde,\n                    \"rk_type\"                       : sampler_name,\n                    \"implicit_sampler_name\"         : implicit_sampler_name,\n\n                    \"implicit_steps_diag\"           : implicit_steps_diag,\n                    \"implicit_steps_full\"           : implicit_steps_full,\n\n                    \"LGW_MASK_RESCALE_MIN\"          : rescale_floor,\n                    \"sigmas_override\"               : sigmas_override,\n                    \"sde_noise\"                     : sde_noise,\n\n                    \"extra_options\"                 : extra_options,\n                    \"sampler_mode\"                  : \"standard\",\n\n                    \"etas\"                          : etas,\n                    \"etas_substep\"                  : etas_substep,\n                    \n                    \n                    \n                    \"s_noises\"                      : s_noises,\n                    \"s_noises_substep\"              : s_noises_substep,\n                    \"epsilon_scales\"                : epsilon_scales,\n                    \"regional_conditioning_weights\" : regional_conditioning_weights,\n\n                    \"guides\"                        : guides,\n                    \"frame_weights_mgr\"             : frame_weights_mgr,\n                    \"noise_mode_sde_substep\"        : noise_mode_sde_substep,\n                    \n                    \n                    \n                    \"noise_scaling_weight\"          : noise_scaling_weight,\n                    \"noise_scaling_type\"            : noise_scaling_type,\n                    \"noise_scaling_mode\"            : noise_scaling_mode,\n                    \"noise_scaling_eta\"             : noise_scaling_eta,\n                    \"noise_scaling_cycles\"          : noise_scaling_cycles,\n                    \n                    \"noise_scaling_weights\"         : noise_scaling_weights,\n                    \"noise_scaling_etas\"            : noise_scaling_etas,\n                    \n                    \"noise_boost_step\"              : noise_boost_step,\n                    \"noise_boost_substep\"           : noise_boost_substep,\n                    \"noise_boost_normalize\"         : noise_boost_normalize,\n                    \"noise_anchor\"                  : noise_anchor,\n                    \n                    \"s_noise\"                       : s_noise,\n                    \"s_noise_substep\"               : s_noise_substep,\n                    \"d_noise\"                       : d_noise,\n                    \"d_noise_start_step\"            : d_noise_start_step,\n                    \"d_noise_inv\"                   : d_noise_inv,\n                    \"d_noise_inv_start_step\"        : d_noise_inv_start_step,\n\n\n\n                    \"overshoot_mode\"                : overshoot_mode,\n                    \"overshoot_mode_substep\"        : overshoot_mode_substep,\n                    \"overshoot\"                     : overshoot,\n  
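# remaining per-step and per-substep controls, passed straight through from the node inputs\n  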
                  \"overshoot_substep\"             : overshoot_substep,\n                    \"BONGMATH\"                      : bongmath,\n\n                    \"implicit_type\"                 : implicit_type,\n                    \"implicit_type_substeps\"        : implicit_type_substeps,\n                    \n                    \"rk_swap_step\"                  : rk_swap_step,\n                    \"rk_swap_print\"                 : rk_swap_print,\n                    \"rk_swap_threshold\"             : rk_swap_threshold,\n                    \"rk_swap_type\"                  : rk_swap_type,\n                    \n                    \"steps_to_run\"                  : steps_to_run,\n                    \n                    \"sde_mask\"                      : sde_mask,\n                    \n                    \"momentum\"                      : momentum,\n                })\n\n\n            return (sampler, )\n\n\n\n\n\n\n\nclass ClownsharKSampler_Beta:\n    @classmethod\n    def INPUT_TYPES(cls):\n        inputs = {\"required\":\n                    {\n                    \"eta\":          (\"FLOAT\",                      {\"default\": 0.5, \"min\": -100.0, \"max\": 100.0,     \"step\":0.01, \"round\": False, \"tooltip\": \"Calculated noise amount to be added, then removed, after each step.\"}),\n                    \"sampler_name\": (get_sampler_name_list     (), {\"default\": get_default_sampler_name()}), \n                    \"scheduler\":    (get_res4lyf_scheduler_list(), {\"default\": \"beta57\"},),\n                    \"steps\":        (\"INT\",                        {\"default\": 30,  \"min\":  1,     \"max\": MAX_STEPS}),\n                    \"steps_to_run\": (\"INT\",                        {\"default\": -1,  \"min\": -1,     \"max\": MAX_STEPS}),\n                    \"denoise\":      (\"FLOAT\",                      {\"default\": 1.0, \"min\": -10000, \"max\": MAX_STEPS, \"step\":0.01}),\n                    \"cfg\":          (\"FLOAT\",                      {\"default\": 5.5, \"min\": -100.0, \"max\": 100.0,     \"step\":0.01, \"round\": False, }),\n                    \"seed\":         (\"INT\",                        {\"default\": 0,   \"min\": -1,     \"max\": 0xffffffffffffffff}),\n                    \"sampler_mode\": (['unsample', 'standard', 'resample'], {\"default\": \"standard\"}),\n                    \"bongmath\":     (\"BOOLEAN\",                    {\"default\": True}),\n                    },\n                \"optional\": \n                    {\n                    \"model\":        (\"MODEL\",),\n                    \"positive\":     (\"CONDITIONING\",),\n                    \"negative\":     (\"CONDITIONING\",),\n                    \"latent_image\": (\"LATENT\",),\n                    \"sigmas\":       (\"SIGMAS\",), \n                    \"guides\":       (\"GUIDES\",), \n                    \"options\":      (\"OPTIONS\", {}),   \n                    }\n                }\n        \n        return inputs\n\n    RETURN_TYPES = (\"LATENT\", \n                    \"LATENT\",\n                    \"OPTIONS\",\n                    )\n    \n    RETURN_NAMES = (\"output\", \n                    \"denoised\",\n                    \"options\",\n                    ) \n    \n    FUNCTION = \"main\"\n    CATEGORY = \"RES4LYF/samplers\"\n    \n    def main(self, \n            model                                                  = None,\n            denoise                       : float                  = 1.0, \n            scheduler                     
: str                    = \"beta57\", \n            cfg                           : float                  = 1.0, \n            seed                          : int                    = -1, \n            positive                                               = None, \n            negative                                               = None, \n            latent_image                  : Optional[dict[Tensor]] = None, \n            steps                         : int                    = 30,\n            steps_to_run                  : int                    = -1,\n            bongmath                      : bool                   = True,\n            sampler_mode                  : str                    = \"standard\",\n            \n            noise_type_sde                : str                    = \"gaussian\", \n            noise_type_sde_substep        : str                    = \"gaussian\", \n            noise_mode_sde                : str                    = \"hard\",\n            noise_mode_sde_substep        : str                    = \"hard\",\n\n            \n            overshoot_mode                : str                    = \"hard\", \n            overshoot_mode_substep        : str                    = \"hard\",\n            overshoot                     : float                  = 0.0, \n            overshoot_substep             : float                  = 0.0,\n            \n            eta                           : float                  = 0.5, \n            eta_substep                   : float                  = 0.5,\n            momentum                      : float                  = 0.0,\n            \n            \n            \n            noise_scaling_weight         : float                  = 0.0,\n            noise_scaling_type            : str                    = \"sampler\",\n            noise_scaling_mode            : str                    = \"linear\",\n            noise_scaling_eta             : float                  = 0.0,\n            noise_scaling_cycles          : int                    = 1,\n            \n            noise_scaling_weights         : Optional[Tensor]       = None,\n            noise_scaling_etas            : Optional[Tensor]       = None,\n            \n            noise_boost_step              : float                  = 0.0,\n            noise_boost_substep           : float                  = 0.0,\n            noise_boost_normalize         : bool                   = True,\n            noise_anchor                  : float                  = 1.0,\n            \n            s_noise                       : float                  = 1.0,\n            s_noise_substep               : float                  = 1.0,\n            d_noise                       : float                  = 1.0,\n            d_noise_start_step            : int                    = 0,\n            d_noise_inv                   : float                  = 1.0,\n            d_noise_inv_start_step        : int                    = 0,\n            \n            \n            \n            alpha_sde                     : float                  = -1.0, \n            k_sde                         : float                  = 1.0,\n            cfgpp                         : float                  = 0.0,\n            c1                            : float                  = 0.0, \n            c2                            : float                  = 0.5, \n            c3                            : float                  = 1.0,\n            noise_seed_sde                : 
int                    = -1,\n            sampler_name                  : str                    = \"res_2m\", \n            implicit_sampler_name         : str                    = \"use_explicit\",\n            \n            implicit_type                 : str                    = \"bongmath\",\n            implicit_type_substeps        : str                    = \"bongmath\",\n            implicit_steps                : int                    = 0,\n            implicit_substeps             : int                    = 0, \n\n            sigmas                        : Optional[Tensor]       = None,\n            sigmas_override               : Optional[Tensor]       = None, \n            guides                                                 = None, \n            options                                                = None, \n            sde_noise                                              = None,\n            sde_noise_steps               : int                    = 1, \n            extra_options                 : str                    = \"\", \n            automation                                             = None, \n\n            epsilon_scales                : Optional[Tensor]       = None, \n            regional_conditioning_weights : Optional[Tensor]       = None,\n            frame_weights_mgr                                      = None, \n\n\n            rescale_floor                 : bool                   = True, \n            \n            rk_swap_step                  : int                    = MAX_STEPS,\n            rk_swap_print                 : bool                   = False,\n            rk_swap_threshold             : float                  = 0.0,\n            rk_swap_type                  : str                    = \"\",\n            \n            sde_mask                      : Optional[Tensor]       = None,\n\n            #start_at_step                 : int                    = 0,\n            #stop_at_step                  : int                    = MAX_STEPS,\n                        \n            **kwargs\n            ): \n        \n        options_mgr = OptionsManager(options, **kwargs)\n        extra_options    += \"\\n\" + options_mgr.get('extra_options', \"\")\n\n        #if model is None:\n        #    model = latent_image['model']\n        \n        \n        # defaults for ClownSampler\n        eta_substep = eta\n        \n        # defaults for SharkSampler\n        noise_type_init = \"gaussian\"\n        noise_stdev     = 1.0\n        denoise_alt     = 1.0\n        channelwise_cfg = False\n        \n        if denoise < 0:\n            denoise_alt = -denoise\n            denoise = 1.0\n        \n        is_chained = False\n\n        if latent_image is not None and 'positive' in latent_image and positive is None:\n            positive = latent_image['positive']\n            is_chained = True\n        if latent_image is not None and 'negative' in latent_image and negative is None:\n            negative = latent_image['negative']\n            is_chained = True\n        if latent_image is not None and 'model' in latent_image and model is None:\n            model = latent_image['model']\n            is_chained = True\n        \n        guider = options_mgr.get('guider', None)\n        if is_chained is False and guider is not None:\n            model = guider.model_patcher\n\n        if model.model.model_config.unet_config.get('stable_cascade_stage') == 'b':\n            noise_type_sde         = \"pyramid-cascade_B\"\n            noise_type_sde_substep = 
\"pyramid-cascade_B\"\n        \n        \n        #if options is not None:\n        #options_mgr = OptionsManager(options_inputs)\n        noise_seed_sde         = options_mgr.get('noise_seed_sde'        , noise_seed_sde)\n        \n        \n        noise_type_sde         = options_mgr.get('noise_type_sde'        , noise_type_sde)\n        noise_type_sde_substep = options_mgr.get('noise_type_sde_substep', noise_type_sde_substep)\n        \n        options_mgr.update('noise_type_sde',         noise_type_sde)\n        options_mgr.update('noise_type_sde_substep', noise_type_sde_substep)\n        \n        noise_mode_sde         = options_mgr.get('noise_mode_sde'        , noise_mode_sde)\n        noise_mode_sde_substep = options_mgr.get('noise_mode_sde_substep', noise_mode_sde_substep)\n        \n        overshoot_mode         = options_mgr.get('overshoot_mode'        , overshoot_mode)\n        overshoot_mode_substep = options_mgr.get('overshoot_mode_substep', overshoot_mode_substep)\n\n        eta                    = options_mgr.get('eta'                   , eta)\n        eta_substep            = options_mgr.get('eta_substep'           , eta_substep)\n        \n        options_mgr.update('eta',         eta)\n        options_mgr.update('eta_substep', eta_substep)\n\n        overshoot              = options_mgr.get('overshoot'             , overshoot)\n        overshoot_substep      = options_mgr.get('overshoot_substep'     , overshoot_substep)\n        \n        \n    \n        noise_scaling_weight   = options_mgr.get('noise_scaling_weight' , noise_scaling_weight)\n        noise_scaling_type     = options_mgr.get('noise_scaling_type'    , noise_scaling_type)\n        noise_scaling_mode     = options_mgr.get('noise_scaling_mode'    , noise_scaling_mode)\n        noise_scaling_eta      = options_mgr.get('noise_scaling_eta'     , noise_scaling_eta)\n        noise_scaling_cycles   = options_mgr.get('noise_scaling_cycles'  , noise_scaling_cycles)\n        \n        noise_scaling_weights  = options_mgr.get('noise_scaling_weights' , noise_scaling_weights)\n        noise_scaling_etas     = options_mgr.get('noise_scaling_etas'    , noise_scaling_etas)\n        \n        noise_boost_step       = options_mgr.get('noise_boost_step'      , noise_boost_step)\n        noise_boost_substep    = options_mgr.get('noise_boost_substep'   , noise_boost_substep)\n        noise_boost_normalize  = options_mgr.get('noise_boost_normalize' , noise_boost_normalize)\n        noise_anchor           = options_mgr.get('noise_anchor'          , noise_anchor)\n        \n        s_noise                = options_mgr.get('s_noise'               , s_noise)\n        s_noise_substep        = options_mgr.get('s_noise_substep'       , s_noise_substep)\n        d_noise                = options_mgr.get('d_noise'               , d_noise)\n        d_noise_start_step     = options_mgr.get('d_noise_start_step'          , d_noise_start_step)\n        d_noise_inv            = options_mgr.get('d_noise_inv'           , d_noise_inv)\n        d_noise_inv_start_step = options_mgr.get('d_noise_inv_start_step', d_noise_inv_start_step)\n        \n        \n        momentum               = options_mgr.get('momentum'              , momentum)\n\n        \n        implicit_type          = options_mgr.get('implicit_type'         , implicit_type)\n        implicit_type_substeps = options_mgr.get('implicit_type_substeps', implicit_type_substeps)\n        implicit_steps         = options_mgr.get('implicit_steps'        , implicit_steps)\n        
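# every node input above acts as a default; a matching key in the OPTIONS input takes precedence\n        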
implicit_substeps      = options_mgr.get('implicit_substeps'     , implicit_substeps)\n        \n        alpha_sde              = options_mgr.get('alpha_sde'             , alpha_sde)\n        k_sde                  = options_mgr.get('k_sde'                 , k_sde)\n        c1                     = options_mgr.get('c1'                    , c1)\n        c2                     = options_mgr.get('c2'                    , c2)\n        c3                     = options_mgr.get('c3'                    , c3)\n\n        frame_weights_mgr      = options_mgr.get('frame_weights_mgr'     , frame_weights_mgr)\n        \n        sde_noise              = options_mgr.get('sde_noise'             , sde_noise)\n        sde_noise_steps        = options_mgr.get('sde_noise_steps'       , sde_noise_steps)\n        \n        extra_options          = options_mgr.get('extra_options'         , extra_options)\n        \n        automation             = options_mgr.get('automation'            , automation)\n        \n        # SharkSampler Options\n        noise_type_init        = options_mgr.get('noise_type_init'       , noise_type_init)\n        noise_stdev            = options_mgr.get('noise_stdev'           , noise_stdev)\n        sampler_mode           = options_mgr.get('sampler_mode'          , sampler_mode)\n        denoise_alt            = options_mgr.get('denoise_alt'           , denoise_alt)\n        \n        channelwise_cfg        = options_mgr.get('channelwise_cfg'       , channelwise_cfg)\n        \n        options_mgr.update('noise_type_init', noise_type_init)\n        options_mgr.update('noise_stdev',     noise_stdev)\n        options_mgr.update('denoise_alt',     denoise_alt)\n        #options_mgr.update('channelwise_cfg', channelwise_cfg)\n        \n        sigmas                 = options_mgr.get('sigmas'                , sigmas)\n        \n        rk_swap_type           = options_mgr.get('rk_swap_type'          , rk_swap_type)\n        rk_swap_step           = options_mgr.get('rk_swap_step'          , rk_swap_step)\n        rk_swap_threshold      = options_mgr.get('rk_swap_threshold'     , rk_swap_threshold)\n        rk_swap_print          = options_mgr.get('rk_swap_print'         , rk_swap_print)\n        \n        sde_mask               = options_mgr.get('sde_mask'              , sde_mask)\n\n        \n        #start_at_step          = options_mgr.get('start_at_step'         , start_at_step)\n        #stop_at_ste            = options_mgr.get('stop_at_step'          , stop_at_step)\n                \n        if channelwise_cfg: # != 1.0:\n            cfg = -abs(cfg)  # set cfg negative for shark, to flag as cfg_cw\n\n\n\n\n        sampler, = ClownSamplerAdvanced_Beta().main(\n            noise_type_sde                = noise_type_sde,\n            noise_type_sde_substep        = noise_type_sde_substep,\n            noise_mode_sde                = noise_mode_sde,\n            noise_mode_sde_substep        = noise_mode_sde_substep,\n            \n            eta                           = eta,\n            eta_substep                   = eta_substep,\n            \n\n            \n            overshoot                     = overshoot,\n            overshoot_substep             = overshoot_substep,\n            \n            overshoot_mode                = overshoot_mode,\n            overshoot_mode_substep        = overshoot_mode_substep,\n            \n            \n            momentum                      = momentum,\n\n            alpha_sde                     = alpha_sde,\n            k_sde     
                    = k_sde,\n            cfgpp                         = cfgpp,\n            c1                            = c1,\n            c2                            = c2,\n            c3                            = c3,\n            sampler_name                  = sampler_name,\n            implicit_sampler_name         = implicit_sampler_name,\n\n            implicit_type                 = implicit_type,\n            implicit_type_substeps        = implicit_type_substeps,\n            implicit_steps                = implicit_steps,\n            implicit_substeps             = implicit_substeps,\n\n            rescale_floor                 = rescale_floor,\n            sigmas_override               = sigmas_override,\n            \n            noise_seed_sde                = noise_seed_sde,\n            \n            guides                        = guides,\n            options                       = options_mgr.as_dict(),\n\n            extra_options                 = extra_options,\n            automation                    = automation,\n\n\n\n            noise_scaling_weight          = noise_scaling_weight,\n            noise_scaling_type            = noise_scaling_type,\n            noise_scaling_mode            = noise_scaling_mode,\n            noise_scaling_eta             = noise_scaling_eta,\n            noise_scaling_cycles          = noise_scaling_cycles,\n            \n            noise_scaling_weights         = noise_scaling_weights,\n            noise_scaling_etas            = noise_scaling_etas,\n\n            noise_boost_step              = noise_boost_step,\n            noise_boost_substep           = noise_boost_substep,\n            noise_boost_normalize         = noise_boost_normalize,\n            noise_anchor                  = noise_anchor,\n            \n            s_noise                       = s_noise,\n            s_noise_substep               = s_noise_substep,\n            d_noise                       = d_noise,\n            d_noise_start_step            = d_noise_start_step,\n            d_noise_inv                   = d_noise_inv,\n            d_noise_inv_start_step        = d_noise_inv_start_step,\n            \n            \n            epsilon_scales                = epsilon_scales,\n            regional_conditioning_weights = regional_conditioning_weights,\n            frame_weights_mgr             = frame_weights_mgr,\n            \n            sde_noise                     = sde_noise,\n            sde_noise_steps               = sde_noise_steps,\n            \n            rk_swap_step                  = rk_swap_step,\n            rk_swap_print                 = rk_swap_print,\n            rk_swap_threshold             = rk_swap_threshold,\n            rk_swap_type                  = rk_swap_type,\n            \n            steps_to_run                  = steps_to_run,\n            \n            sde_mask                      = sde_mask,\n            \n            bongmath                      = bongmath,\n            )\n            \n        \n        output, denoised, sde_noise = SharkSampler().main(\n            model           = model, \n            cfg             = cfg, \n            scheduler       = scheduler,\n            steps           = steps, \n            steps_to_run    = steps_to_run,\n            denoise         = denoise,\n            latent_image    = latent_image, \n            positive        = positive,\n            negative        = negative, \n            sampler         = sampler, \n            cfgpp           = 
cfgpp, \n            noise_seed      = seed, \n            options         = options_mgr.as_dict(), \n            sde_noise       = sde_noise, \n            sde_noise_steps = sde_noise_steps, \n            noise_type_init = noise_type_init,\n            noise_stdev     = noise_stdev,\n            sampler_mode    = sampler_mode,\n            denoise_alt     = denoise_alt,\n            sigmas          = sigmas,\n\n            extra_options   = extra_options)\n        \n        return (output, denoised, options_mgr.as_dict(),) # {'model':model,},)\n\n\n\n\n\n\n\n\n\n\n\nclass ClownsharkChainsampler_Beta(ClownsharKSampler_Beta):  \n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                \"eta\":          (\"FLOAT\",                 {\"default\": 0.5, \"min\": -100.0, \"max\": 100.0,     \"step\":0.01, \"round\": False, \"tooltip\": \"Calculated noise amount to be added, then removed, after each step.\"}),\n                \"sampler_name\": (get_sampler_name_list(), {\"default\": get_default_sampler_name()}), \n                \"steps_to_run\": (\"INT\",                   {\"default\": -1,  \"min\": -1,       \"max\": MAX_STEPS}),\n                \"cfg\":          (\"FLOAT\",                 {\"default\": 5.5, \"min\": -10000.0, \"max\": 10000.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Negative values use channelwise CFG.\" }),\n                \"sampler_mode\": (['unsample', 'resample'],{\"default\": \"resample\"}),\n                \"bongmath\":     (\"BOOLEAN\",               {\"default\": True}),\n                },\n            \"optional\": {\n                \"model\":        (\"MODEL\",),\n                \"positive\":     (\"CONDITIONING\", ),\n                \"negative\":     (\"CONDITIONING\", ),\n                #\"sampler\":      (\"SAMPLER\", ),\n                \"sigmas\":       (\"SIGMAS\", ),\n                \"latent_image\": (\"LATENT\", ),     \n                \"guides\":       (\"GUIDES\", ),   \n                \"options\":      (\"OPTIONS\", ),   \n                }\n            }\n\n    def main(self, \n            eta                   =  0.5,\n            sampler_name          = \"res_2m\",\n            steps_to_run          = -1, \n            cfg                   =  5.5, \n            bongmath              = True,\n            seed            : int = -1, \n            latent_image          = None,\n            sigmas                = None,\n            sampler_mode          = \"\",\n            \n             **kwargs):  \n        \n        steps = latent_image['state_info']['sigmas'].shape[-1] - 3\n        sigmas = latent_image['state_info']['sigmas'] if sigmas is None else sigmas\n        if len(sigmas) > 2 and sigmas[1] < sigmas[2] and latent_image['state_info']['sampler_mode'] == \"unsample\" and sampler_mode == \"resample\":\n            sigmas = torch.flip(sigmas, dims=[0])\n        \n        return super().main(eta=eta, sampler_name=sampler_name, sampler_mode=sampler_mode, sigmas=sigmas, steps_to_run=steps_to_run, steps=steps, cfg=cfg, bongmath=bongmath, seed=seed, latent_image=latent_image, **kwargs)\n\n\n\n\n\n\nclass ClownSampler_Beta:\n    @classmethod\n    def INPUT_TYPES(cls):\n        inputs = {\"required\":\n                    {\n                    \"eta\":          (\"FLOAT\",                      {\"default\": 0.5, \"min\": -100.0, \"max\": 100.0,     \"step\":0.01, \"round\": False, \"tooltip\": \"Calculated noise amount to be added, then removed, after each step.\"}),\n               
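# minimal node: everything beyond eta, sampler_name, seed, and bongmath arrives via the optional OPTIONS input\n               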
     \"sampler_name\": (get_sampler_name_list     (), {\"default\": get_default_sampler_name()}), \n                    \"seed\":         (\"INT\",                        {\"default\": -1,   \"min\": -1,     \"max\": 0xffffffffffffffff}),\n                    \"bongmath\":     (\"BOOLEAN\",                    {\"default\": True}),\n                    },\n                \"optional\": \n                    {\n                    \"guides\":       (\"GUIDES\",), \n                    \"options\":      (\"OPTIONS\", {}),   \n                    }\n                }\n        \n        return inputs\n\n    RETURN_TYPES = (\"SAMPLER\",)\n    \n    RETURN_NAMES = (\"sampler\",) \n    \n    FUNCTION = \"main\"\n    CATEGORY = \"RES4LYF/samplers\"\n    \n    def main(self, \n            model                                                  = None,\n            denoise                       : float                  = 1.0, \n            scheduler                     : str                    = \"beta57\", \n            cfg                           : float                  = 1.0, \n            seed                          : int                    = -1, \n            positive                                               = None, \n            negative                                               = None, \n            latent_image                  : Optional[dict[Tensor]] = None, \n            steps                         : int                    = 30,\n            steps_to_run                  : int                    = -1,\n            bongmath                      : bool                   = True,\n            sampler_mode                  : str                    = \"standard\",\n            \n            noise_type_sde                : str                    = \"gaussian\", \n            noise_type_sde_substep        : str                    = \"gaussian\", \n            noise_mode_sde                : str                    = \"hard\",\n            noise_mode_sde_substep        : str                    = \"hard\",\n\n            \n            overshoot_mode                : str                    = \"hard\", \n            overshoot_mode_substep        : str                    = \"hard\",\n            overshoot                     : float                  = 0.0, \n            overshoot_substep             : float                  = 0.0,\n            \n            eta                           : float                  = 0.5, \n            eta_substep                   : float                  = 0.5,\n            \n            noise_scaling_weight          : float                  = 0.0,\n            noise_boost_step              : float                  = 0.0, \n            noise_boost_substep           : float                  = 0.0, \n            noise_anchor                  : float                  = 1.0,\n            \n            s_noise                       : float                  = 1.0, \n            s_noise_substep               : float                  = 1.0, \n            d_noise                       : float                  = 1.0, \n            d_noise_start_step            : int                    = 0,\n            d_noise_inv                   : float                  = 1.0,\n            d_noise_inv_start_step        : int                    = 0,\n\n            \n            alpha_sde                     : float                  = -1.0, \n            k_sde                         : float                  = 1.0,\n            cfgpp                         : float                  
= 0.0,\n            c1                            : float                  = 0.0, \n            c2                            : float                  = 0.5, \n            c3                            : float                  = 1.0,\n            noise_seed_sde                : int                    = -1,\n            sampler_name                  : str                    = \"res_2m\", \n            implicit_sampler_name         : str                    = \"use_explicit\",\n            \n            implicit_type                 : str                    = \"bongmath\",\n            implicit_type_substeps        : str                    = \"bongmath\",\n            implicit_steps                : int                    = 0,\n            implicit_substeps             : int                    = 0, \n\n            sigmas                        : Optional[Tensor]       = None,\n            sigmas_override               : Optional[Tensor]       = None, \n            guides                                                 = None, \n            options                                                = None, \n            sde_noise                                              = None,\n            sde_noise_steps               : int                    = 1, \n            extra_options                 : str                    = \"\", \n            automation                                             = None, \n\n            epsilon_scales                : Optional[Tensor]       = None, \n            regional_conditioning_weights : Optional[Tensor]       = None,\n            frame_weights_mgr                                      = None, \n\n\n            rescale_floor                 : bool                   = True, \n            \n            rk_swap_step                  : int                    = MAX_STEPS,\n            rk_swap_print                 : bool                   = False,\n            rk_swap_threshold             : float                  = 0.0,\n            rk_swap_type                  : str                    = \"\",\n            \n            sde_mask                      : Optional[Tensor]       = None,\n\n            \n            #start_at_step                 : int                    = 0,\n            #stop_at_step                  : int                    = MAX_STEPS,\n                        \n            **kwargs\n            ): \n        \n        options_mgr = OptionsManager(options, **kwargs)\n        extra_options    += \"\\n\" + options_mgr.get('extra_options', \"\")\n        \n        \n        # defaults for ClownSampler\n        eta_substep = eta\n        \n        # defaults for SharkSampler\n        noise_type_init = \"gaussian\"\n        noise_stdev     = 1.0\n        denoise_alt     = 1.0\n        channelwise_cfg = False #1.0\n        \n        \n        #if options is not None:\n        #options_mgr = OptionsManager(options_inputs)\n        noise_type_sde         = options_mgr.get('noise_type_sde'        , noise_type_sde)\n        noise_type_sde_substep = options_mgr.get('noise_type_sde_substep', noise_type_sde_substep)\n        \n        noise_mode_sde         = options_mgr.get('noise_mode_sde'        , noise_mode_sde)\n        noise_mode_sde_substep = options_mgr.get('noise_mode_sde_substep', noise_mode_sde_substep)\n        \n        overshoot_mode         = options_mgr.get('overshoot_mode'        , overshoot_mode)\n        overshoot_mode_substep = options_mgr.get('overshoot_mode_substep', overshoot_mode_substep)\n\n        eta                    = 
options_mgr.get('eta'                   , eta)\n        eta_substep            = options_mgr.get('eta_substep'           , eta_substep)\n\n        overshoot              = options_mgr.get('overshoot'             , overshoot)\n        overshoot_substep      = options_mgr.get('overshoot_substep'     , overshoot_substep)\n        \n        noise_scaling_weight   = options_mgr.get('noise_scaling_weight' , noise_scaling_weight)\n        noise_boost_step       = options_mgr.get('noise_boost_step'      , noise_boost_step)\n        noise_boost_substep    = options_mgr.get('noise_boost_substep'   , noise_boost_substep)\n        \n        noise_anchor           = options_mgr.get('noise_anchor'          , noise_anchor)\n\n        s_noise                = options_mgr.get('s_noise'               , s_noise)\n        s_noise_substep        = options_mgr.get('s_noise_substep'       , s_noise_substep)\n\n        d_noise                = options_mgr.get('d_noise'               , d_noise)\n        d_noise_start_step     = options_mgr.get('d_noise_start_step'    , d_noise_start_step)\n        d_noise_inv            = options_mgr.get('d_noise_inv'           , d_noise_inv)\n        d_noise_inv_start_step = options_mgr.get('d_noise_inv_start_step', d_noise_inv_start_step)\n        \n        implicit_type          = options_mgr.get('implicit_type'         , implicit_type)\n        implicit_type_substeps = options_mgr.get('implicit_type_substeps', implicit_type_substeps)\n        implicit_steps         = options_mgr.get('implicit_steps'        , implicit_steps)\n        implicit_substeps      = options_mgr.get('implicit_substeps'     , implicit_substeps)\n        \n        alpha_sde              = options_mgr.get('alpha_sde'             , alpha_sde)\n        k_sde                  = options_mgr.get('k_sde'                 , k_sde)\n        c1                     = options_mgr.get('c1'                    , c1)\n        c2                     = options_mgr.get('c2'                    , c2)\n        c3                     = options_mgr.get('c3'                    , c3)\n\n        frame_weights_mgr      = options_mgr.get('frame_weights_mgr'     , frame_weights_mgr)\n        \n        sde_noise              = options_mgr.get('sde_noise'             , sde_noise)\n        sde_noise_steps        = options_mgr.get('sde_noise_steps'       , sde_noise_steps)\n        \n        extra_options          = options_mgr.get('extra_options'         , extra_options)\n        \n        automation             = options_mgr.get('automation'            , automation)\n        \n        # SharkSampler Options\n        noise_type_init        = options_mgr.get('noise_type_init'       , noise_type_init)\n        noise_stdev            = options_mgr.get('noise_stdev'           , noise_stdev)\n        sampler_mode           = options_mgr.get('sampler_mode'          , sampler_mode)\n        denoise_alt            = options_mgr.get('denoise_alt'           , denoise_alt)\n        \n        channelwise_cfg        = options_mgr.get('channelwise_cfg'       , channelwise_cfg)\n        \n        sigmas                 = options_mgr.get('sigmas'                , sigmas)\n        \n        rk_swap_type           = options_mgr.get('rk_swap_type'          , rk_swap_type)\n        rk_swap_step           = options_mgr.get('rk_swap_step'          , rk_swap_step)\n        rk_swap_threshold      = options_mgr.get('rk_swap_threshold'     , rk_swap_threshold)\n        rk_swap_print          = options_mgr.get('rk_swap_print'         , rk_swap_print)\n        \n    
    sde_mask               = options_mgr.get('sde_mask'              , sde_mask)\n\n        \n        #start_at_step          = options_mgr.get('start_at_step'         , start_at_step)\n        #stop_at_step           = options_mgr.get('stop_at_step'          , stop_at_step)\n                \n        if channelwise_cfg: # != 1.0:\n            cfg = -abs(cfg)  # set cfg negative for shark, to flag as cfg_cw\n\n        noise_seed_sde = seed\n\n\n        sampler, = ClownSamplerAdvanced_Beta().main(\n            noise_type_sde                = noise_type_sde,\n            noise_type_sde_substep        = noise_type_sde_substep,\n            noise_mode_sde                = noise_mode_sde,\n            noise_mode_sde_substep        = noise_mode_sde_substep,\n            \n            eta                           = eta,\n            eta_substep                   = eta_substep,\n            \n            s_noise                       = s_noise,\n            s_noise_substep               = s_noise_substep,\n            \n            overshoot                     = overshoot,\n            overshoot_substep             = overshoot_substep,\n            \n            overshoot_mode                = overshoot_mode,\n            overshoot_mode_substep        = overshoot_mode_substep,\n            \n            d_noise                       = d_noise,\n            d_noise_start_step            = d_noise_start_step,\n            d_noise_inv                   = d_noise_inv,\n            d_noise_inv_start_step        = d_noise_inv_start_step,\n\n            alpha_sde                     = alpha_sde,\n            k_sde                         = k_sde,\n            cfgpp                         = cfgpp,\n            c1                            = c1,\n            c2                            = c2,\n            c3                            = c3,\n            sampler_name                  = sampler_name,\n            implicit_sampler_name         = implicit_sampler_name,\n\n            implicit_type                 = implicit_type,\n            implicit_type_substeps        = implicit_type_substeps,\n            implicit_steps                = implicit_steps,\n            implicit_substeps             = implicit_substeps,\n\n            rescale_floor                 = rescale_floor,\n            sigmas_override               = sigmas_override,\n            \n            noise_seed_sde                = noise_seed_sde,\n            \n            guides                        = guides,\n            options                       = options_mgr.as_dict(),\n\n            extra_options                 = extra_options,\n            automation                    = automation,\n\n            noise_scaling_weight          = noise_scaling_weight,\n            noise_boost_step              = noise_boost_step,\n            noise_boost_substep           = noise_boost_substep,\n            \n            epsilon_scales                = epsilon_scales,\n            regional_conditioning_weights = regional_conditioning_weights,\n            frame_weights_mgr             = frame_weights_mgr,\n            \n            sde_noise                     = sde_noise,\n            sde_noise_steps               = sde_noise_steps,\n            \n            rk_swap_step                  = rk_swap_step,\n            rk_swap_print                 = rk_swap_print,\n            rk_swap_threshold             = rk_swap_threshold,\n            rk_swap_type                  = rk_swap_type,\n            \n            steps_to_run                  
= steps_to_run,\n            \n            sde_mask                      = sde_mask,\n            \n            bongmath                      = bongmath,\n            )\n            \n        return (sampler,)\n    \n\n\n\n\n\n\n\n\nclass BongSampler:\n    @classmethod\n    def INPUT_TYPES(cls):\n        inputs = {\"required\":\n                    {\n                    \"model\":        (\"MODEL\",),\n                    \"seed\":         (\"INT\",                        {\"default\": 0,   \"min\": -1,     \"max\": 0xffffffffffffffff}),\n                    \"steps\":        (\"INT\",                        {\"default\": 30,  \"min\":  1,     \"max\": MAX_STEPS}),\n                    \"cfg\":          (\"FLOAT\",                      {\"default\": 5.5, \"min\": -100.0, \"max\": 100.0,     \"step\":0.01, \"round\": False, }),\n                    \"sampler_name\": ([\"res_2m\", \"res_3m\", \"res_2s\", \"res_3s\",\"res_2m_sde\", \"res_3m_sde\", \"res_2s_sde\", \"res_3s_sde\"], {\"default\": \"res_2s_sde\"}), \n                    \"scheduler\":    (get_res4lyf_scheduler_list(), {\"default\": \"beta57\"},),\n                    \"denoise\":      (\"FLOAT\",                      {\"default\": 1.0, \"min\": -10000, \"max\": MAX_STEPS, \"step\":0.01}),\n                    },\n                \"optional\": \n                    {\n                    \"positive\":     (\"CONDITIONING\",),\n                    \"negative\":     (\"CONDITIONING\",),\n                    \"latent_image\": (\"LATENT\",),\n                    }\n                }\n        \n        return inputs\n\n    RETURN_TYPES = (\"LATENT\", )\n    \n    RETURN_NAMES = (\"output\", )\n    \n    FUNCTION = \"main\"\n    CATEGORY = \"RES4LYF/samplers\"\n    \n    def main(self, \n            model                                                  = None,\n            denoise                       : float                  = 1.0, \n            scheduler                     : str                    = \"beta57\", \n            cfg                           : float                  = 1.0, \n            seed                          : int                    = 42, \n            positive                                               = None, \n            negative                                               = None, \n            latent_image                  : Optional[dict[str, Tensor]] = None, \n            steps                         : int                    = 30,\n            steps_to_run                  : int                    = -1,\n            bongmath                      : bool                   = True,\n            sampler_mode                  : str                    = \"standard\",\n            \n            noise_type_sde                : str                    = \"brownian\", \n            noise_type_sde_substep        : str                    = \"brownian\", \n            noise_mode_sde                : str                    = \"hard\",\n            noise_mode_sde_substep        : str                    = \"hard\",\n\n            \n            overshoot_mode                : str                    = \"hard\", \n            overshoot_mode_substep        : str                    = \"hard\",\n            overshoot                     : float                  = 0.0, \n            overshoot_substep             : float                  = 0.0,\n            \n            eta                           : float                  = 0.5, \n            eta_substep                   : float                  = 0.5,\n            d_noise  
                     : float                  = 1.0, \n            s_noise                       : float                  = 1.0, \n            s_noise_substep               : float                  = 1.0, \n            \n            alpha_sde                     : float                  = -1.0, \n            k_sde                         : float                  = 1.0,\n            cfgpp                         : float                  = 0.0,\n            c1                            : float                  = 0.0, \n            c2                            : float                  = 0.5, \n            c3                            : float                  = 1.0,\n            noise_seed_sde                : int                    = -1,\n            sampler_name                  : str                    = \"res_2m\", \n            implicit_sampler_name         : str                    = \"use_explicit\",\n            \n            implicit_type                 : str                    = \"bongmath\",\n            implicit_type_substeps        : str                    = \"bongmath\",\n            implicit_steps                : int                    = 0,\n            implicit_substeps             : int                    = 0, \n\n            sigmas                        : Optional[Tensor]       = None,\n            sigmas_override               : Optional[Tensor]       = None, \n            guides                                                 = None, \n            options                                                = None, \n            sde_noise                                              = None,\n            sde_noise_steps               : int                    = 1, \n            extra_options                 : str                    = \"\", \n            automation                                             = None, \n\n            epsilon_scales                : Optional[Tensor]       = None, \n            regional_conditioning_weights : Optional[Tensor]       = None,\n            frame_weights_mgr                                      = None, \n            noise_scaling_weight         : float                  = 0.0, \n            noise_boost_step              : float                  = 0.0, \n            noise_boost_substep           : float                  = 0.0, \n            noise_anchor                  : float                  = 1.0,\n\n            rescale_floor                 : bool                   = True, \n            \n            rk_swap_step                  : int                    = MAX_STEPS,\n            rk_swap_print                 : bool                   = False,\n            rk_swap_threshold             : float                  = 0.0,\n            rk_swap_type                  : str                    = \"\",\n            \n            #start_at_step                 : int                    = 0,\n            #stop_at_step                  : int                    = MAX_STEPS,\n                        \n            **kwargs\n            ): \n        \n        options_mgr = OptionsManager(options, **kwargs)\n        extra_options    += \"\\n\" + options_mgr.get('extra_options', \"\")\n        \n        if model.model.model_config.unet_config.get('stable_cascade_stage') == 'b':\n            noise_type_sde         = \"pyramid-cascade_B\"\n            noise_type_sde_substep = \"pyramid-cascade_B\"\n        \n        if sampler_name.endswith(\"_sde\"):\n            sampler_name = sampler_name[:-4]\n            eta = 0.5\n        else:\n            eta = 
0.0\n        \n        # defaults for ClownSampler\n        eta_substep = eta\n        \n        # defaults for SharkSampler\n        noise_type_init = \"gaussian\"\n        noise_stdev     = 1.0\n        denoise_alt     = 1.0\n        channelwise_cfg = False #1.0\n        \n        \n        #if options is not None:\n        #options_mgr = OptionsManager(options_inputs)\n        noise_type_sde         = options_mgr.get('noise_type_sde'        , noise_type_sde)\n        noise_type_sde_substep = options_mgr.get('noise_type_sde_substep', noise_type_sde_substep)\n        \n        noise_mode_sde         = options_mgr.get('noise_mode_sde'        , noise_mode_sde)\n        noise_mode_sde_substep = options_mgr.get('noise_mode_sde_substep', noise_mode_sde_substep)\n        \n        overshoot_mode         = options_mgr.get('overshoot_mode'        , overshoot_mode)\n        overshoot_mode_substep = options_mgr.get('overshoot_mode_substep', overshoot_mode_substep)\n\n        eta                    = options_mgr.get('eta'                   , eta)\n        eta_substep            = options_mgr.get('eta_substep'           , eta_substep)\n\n        overshoot              = options_mgr.get('overshoot'             , overshoot)\n        overshoot_substep      = options_mgr.get('overshoot_substep'     , overshoot_substep)\n        \n        noise_scaling_weight  = options_mgr.get('noise_scaling_weight' , noise_scaling_weight)\n\n        noise_boost_step       = options_mgr.get('noise_boost_step'      , noise_boost_step)\n        noise_boost_substep    = options_mgr.get('noise_boost_substep'   , noise_boost_substep)\n        \n        noise_anchor           = options_mgr.get('noise_anchor'          , noise_anchor)\n\n        s_noise                = options_mgr.get('s_noise'               , s_noise)\n        s_noise_substep        = options_mgr.get('s_noise_substep'       , s_noise_substep)\n\n        d_noise                = options_mgr.get('d_noise'               , d_noise)\n        \n        implicit_type          = options_mgr.get('implicit_type'         , implicit_type)\n        implicit_type_substeps = options_mgr.get('implicit_type_substeps', implicit_type_substeps)\n        implicit_steps         = options_mgr.get('implicit_steps'        , implicit_steps)\n        implicit_substeps      = options_mgr.get('implicit_substeps'     , implicit_substeps)\n        \n        alpha_sde              = options_mgr.get('alpha_sde'             , alpha_sde)\n        k_sde                  = options_mgr.get('k_sde'                 , k_sde)\n        c1                     = options_mgr.get('c1'                    , c1)\n        c2                     = options_mgr.get('c2'                    , c2)\n        c3                     = options_mgr.get('c3'                    , c3)\n\n        frame_weights_mgr      = options_mgr.get('frame_weights_mgr'     , frame_weights_mgr)\n        \n        sde_noise              = options_mgr.get('sde_noise'             , sde_noise)\n        sde_noise_steps        = options_mgr.get('sde_noise_steps'       , sde_noise_steps)\n        \n        extra_options          = options_mgr.get('extra_options'         , extra_options)\n        \n        automation             = options_mgr.get('automation'            , automation)\n        \n        # SharkSampler Options\n        noise_type_init        = options_mgr.get('noise_type_init'       , noise_type_init)\n        noise_stdev            = options_mgr.get('noise_stdev'           , noise_stdev)\n        sampler_mode           = 
options_mgr.get('sampler_mode'          , sampler_mode)\n        denoise_alt            = options_mgr.get('denoise_alt'           , denoise_alt)\n        \n        channelwise_cfg        = options_mgr.get('channelwise_cfg'       , channelwise_cfg)\n        \n        sigmas                 = options_mgr.get('sigmas'                , sigmas)\n        \n        rk_swap_type           = options_mgr.get('rk_swap_type'          , rk_swap_type)\n        rk_swap_step           = options_mgr.get('rk_swap_step'          , rk_swap_step)\n        rk_swap_threshold      = options_mgr.get('rk_swap_threshold'     , rk_swap_threshold)\n        rk_swap_print          = options_mgr.get('rk_swap_print'         , rk_swap_print)\n        \n        #start_at_step          = options_mgr.get('start_at_step'         , start_at_step)\n        #stop_at_step           = options_mgr.get('stop_at_step'          , stop_at_step)\n                \n        if channelwise_cfg: # != 1.0:\n            cfg = -abs(cfg)  # set cfg negative for shark, to flag as cfg_cw\n\n\n\n\n        sampler, = ClownSamplerAdvanced_Beta().main(\n            noise_type_sde                = noise_type_sde,\n            noise_type_sde_substep        = noise_type_sde_substep,\n            noise_mode_sde                = noise_mode_sde,\n            noise_mode_sde_substep        = noise_mode_sde_substep,\n            \n            eta                           = eta,\n            eta_substep                   = eta_substep,\n            \n            s_noise                       = s_noise,\n            s_noise_substep               = s_noise_substep,\n            \n            overshoot                     = overshoot,\n            overshoot_substep             = overshoot_substep,\n            \n            overshoot_mode                = overshoot_mode,\n            overshoot_mode_substep        = overshoot_mode_substep,\n            \n            d_noise                       = d_noise,\n            #d_noise_start_step            = d_noise_start_step,\n            #d_noise_inv                   = d_noise_inv,\n            #d_noise_inv_start_step        = d_noise_inv_start_step,\n\n            alpha_sde                     = alpha_sde,\n            k_sde                         = k_sde,\n            cfgpp                         = cfgpp,\n            c1                            = c1,\n            c2                            = c2,\n            c3                            = c3,\n            sampler_name                  = sampler_name,\n            implicit_sampler_name         = implicit_sampler_name,\n\n            implicit_type                 = implicit_type,\n            implicit_type_substeps        = implicit_type_substeps,\n            implicit_steps                = implicit_steps,\n            implicit_substeps             = implicit_substeps,\n\n            rescale_floor                 = rescale_floor,\n            sigmas_override               = sigmas_override,\n            \n            noise_seed_sde                = noise_seed_sde,\n            \n            guides                        = guides,\n            options                       = options_mgr.as_dict(),\n\n            extra_options                 = extra_options,\n            automation                    = automation,\n\n            noise_scaling_weight          = noise_scaling_weight,\n            noise_boost_step              = noise_boost_step,\n            noise_boost_substep           = noise_boost_substep,\n            \n            epsilon_scales                = epsilon_scales,\n            regional_conditioning_weights = regional_conditioning_weights,\n            frame_weights_mgr             = frame_weights_mgr,\n            \n            sde_noise                     = sde_noise,\n            sde_noise_steps               = sde_noise_steps,\n            \n            rk_swap_step                  = rk_swap_step,\n            rk_swap_print                 = rk_swap_print,\n            rk_swap_threshold             = rk_swap_threshold,\n            rk_swap_type                  = rk_swap_type,\n            \n            steps_to_run                  = steps_to_run,\n            \n            bongmath                      = bongmath,\n            )\n            \n        \n
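        # Two-stage flow: ClownSamplerAdvanced_Beta above packages the step math into a\n        # SAMPLER object, and SharkSampler below drives the model with it (initial noise,\n        # sigma schedule, CFG) and returns the sampled latent.\n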
        output, denoised, sde_noise = SharkSampler().main(\n            model           = model, \n            cfg             = cfg, \n            scheduler       = scheduler,\n            steps           = steps, \n            steps_to_run    = steps_to_run,\n            denoise         = denoise,\n            latent_image    = latent_image, \n            positive        = positive,\n            negative        = negative, \n            sampler         = sampler, \n            cfgpp           = cfgpp, \n            noise_seed      = seed, \n            options         = options_mgr.as_dict(), \n            sde_noise       = sde_noise, \n            sde_noise_steps = sde_noise_steps, \n            noise_type_init = noise_type_init,\n            noise_stdev     = noise_stdev,\n            sampler_mode    = sampler_mode,\n            denoise_alt     = denoise_alt,\n            sigmas          = sigmas,\n\n            extra_options   = extra_options)\n        \n        return (output, )"
  },
  {
    "path": "beta/samplers_extensions.py",
    "content": "import torch\r\nfrom torch import Tensor\r\nimport torch.nn.functional as F\r\n\r\nfrom dataclasses import dataclass, asdict\r\nfrom typing import Optional, Callable, Tuple, Dict, Any, Union\r\nimport copy\r\n\r\nfrom nodes import MAX_RESOLUTION\r\n\r\nfrom ..latents               import get_edge_mask\r\nfrom ..helper                import OptionsManager, FrameWeightsManager, initialize_or_scale, get_res4lyf_scheduler_list, parse_range_string, parse_tile_sizes, parse_range_string_int\r\n\r\nfrom .rk_coefficients_beta   import RK_SAMPLER_NAMES_BETA_FOLDERS, get_default_sampler_name, get_sampler_name_list, process_sampler_name\r\n\r\nfrom .noise_classes          import NOISE_GENERATOR_NAMES_SIMPLE\r\nfrom .rk_noise_sampler_beta  import NOISE_MODE_NAMES\r\nfrom .constants              import IMPLICIT_TYPE_NAMES, GUIDE_MODE_NAMES_BETA_SIMPLE, MAX_STEPS, FRAME_WEIGHTS_CONFIG_NAMES, FRAME_WEIGHTS_DYNAMICS_NAMES, FRAME_WEIGHTS_SCHEDULE_NAMES\r\n\r\n\r\nclass ClownSamplerSelector_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"sampler_name\": (get_sampler_name_list(),  {\"default\": get_default_sampler_name()}), \r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (RK_SAMPLER_NAMES_BETA_FOLDERS,)\r\n    RETURN_NAMES = (\"sampler_name\",) \r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            sampler_name = \"res_2m\",\r\n            ):\r\n        \r\n        sampler_name, implicit_sampler_name = process_sampler_name(sampler_name)\r\n        \r\n        sampler_name = sampler_name if implicit_sampler_name == \"use_explicit\" else implicit_sampler_name\r\n        \r\n        return (sampler_name,)\r\n\r\n\r\n\r\nclass ClownOptions_SDE_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"noise_type_sde\":         (NOISE_GENERATOR_NAMES_SIMPLE, {\"default\": \"gaussian\"}),\r\n                    \"noise_type_sde_substep\": (NOISE_GENERATOR_NAMES_SIMPLE, {\"default\": \"gaussian\"}),\r\n                    \"noise_mode_sde\":         (NOISE_MODE_NAMES,             {\"default\": 'hard',                                                        \"tooltip\": \"How noise scales with the sigma schedule. Hard is the most aggressive, the others start strong and drop rapidly.\"}),\r\n                    \"noise_mode_sde_substep\": (NOISE_MODE_NAMES,             {\"default\": 'hard',                                                        \"tooltip\": \"How noise scales with the sigma schedule. 
Hard is the most aggressive, the others start strong and drop rapidly.\"}),\r\n                    \"eta\":                    (\"FLOAT\",                      {\"default\": 0.5, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Calculated noise amount to be added, then removed, after each step.\"}),\r\n                    \"eta_substep\":            (\"FLOAT\",                      {\"default\": 0.5, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Calculated noise amount to be added, then removed, after each substep.\"}),\r\n                    \"seed\":                   (\"INT\",                        {\"default\": -1, \"min\": -1, \"max\": 0xffffffffffffffff}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"etas\":                   (\"SIGMAS\", ),\r\n                    \"etas_substep\":           (\"SIGMAS\", ),\r\n                    \"options\":                (\"OPTIONS\", ),   \r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            noise_type_sde         = \"gaussian\",\r\n            noise_type_sde_substep = \"gaussian\",\r\n            noise_mode_sde         = \"hard\",\r\n            noise_mode_sde_substep = \"hard\",\r\n            eta                    = 0.5,\r\n            eta_substep            = 0.5,\r\n            seed             : int = -1,\r\n            etas             : Optional[Tensor] = None,\r\n            etas_substep     : Optional[Tensor] = None,\r\n            options                = None,\r\n            ): \r\n        \r\n        options = options if options is not None else {}\r\n        \r\n        if noise_mode_sde == \"none\":\r\n            noise_mode_sde = \"hard\"\r\n            eta = 0.0\r\n            \r\n        if noise_mode_sde_substep == \"none\":\r\n            noise_mode_sde_substep = \"hard\"\r\n            eta_substep = 0.0\r\n            \r\n        if noise_type_sde == \"none\":\r\n            noise_type_sde = \"gaussian\"\r\n            eta = 0.0\r\n            \r\n        if noise_type_sde_substep == \"none\":\r\n            noise_type_sde_substep = \"gaussian\"\r\n            eta_substep = 0.0\r\n            \r\n        options['noise_type_sde']         = noise_type_sde\r\n        options['noise_type_sde_substep'] = noise_type_sde_substep\r\n        options['noise_mode_sde']         = noise_mode_sde\r\n        options['noise_mode_sde_substep'] = noise_mode_sde_substep\r\n        options['eta']                    = eta\r\n        options['eta_substep']            = eta_substep\r\n        options['noise_seed_sde']         = seed\r\n        \r\n        options['etas']                   = etas\r\n        options['etas_substep']           = etas_substep\r\n\r\n        return (options,)\r\n\r\n\r\n\r\nclass ClownOptions_StepSize_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"overshoot_mode\":         (NOISE_MODE_NAMES, {\"default\": 'hard',                                                        \"tooltip\": \"How step size overshoot scales with the sigma schedule. 
Hard is the most aggressive, the others start strong and drop rapidly.\"}),\r\n                    \"overshoot_mode_substep\": (NOISE_MODE_NAMES, {\"default\": 'hard',                                                        \"tooltip\": \"How substep size overshoot scales with the sigma schedule. Hard is the most aggressive, the others start strong and drop rapidly.\"}),\r\n                    \"overshoot\":              (\"FLOAT\",          {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Boost the size of each denoising step, then rescale to match the original. Has a softening effect.\"}),\r\n                    \"overshoot_substep\":      (\"FLOAT\",          {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Boost the size of each denoising substep, then rescale to match the original. Has a softening effect.\"}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"options\":                (\"OPTIONS\", ),   \r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            overshoot_mode         = \"hard\",\r\n            overshoot_mode_substep = \"hard\",\r\n            overshoot              = 0.0,\r\n            overshoot_substep      = 0.0,\r\n            options                = None,\r\n            ): \r\n        \r\n        options = options if options is not None else {}\r\n            \r\n        options['overshoot_mode']         = overshoot_mode\r\n        options['overshoot_mode_substep'] = overshoot_mode_substep\r\n        options['overshoot']              = overshoot\r\n        options['overshoot_substep']      = overshoot_substep\r\n\r\n        return (options,\r\n            )\r\n\r\n\r\n@dataclass\r\nclass DetailBoostOptions:\r\n    noise_scaling_weight : float = 0.0\r\n    noise_boost_step     : float = 0.0\r\n    noise_boost_substep  : float = 0.0\r\n    noise_anchor         : float = 1.0\r\n    s_noise              : float = 1.0\r\n    s_noise_substep      : float = 1.0\r\n    d_noise              : float = 1.0\r\n\r\nDETAIL_BOOST_METHODS = [\r\n    'sampler',\r\n    'sampler_normal',\r\n    'sampler_substep',\r\n    'sampler_substep_normal',\r\n    'model',\r\n    'model_alpha',\r\n    ]\r\n\r\nclass ClownOptions_DetailBoost_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"weight\":     (\"FLOAT\",              {\"default\": 1.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set to positive values to create a sharper, grittier, more detailed image. Set to negative values to soften and deepen the colors.\"}),\r\n                    \"method\":     (DETAIL_BOOST_METHODS, {\"default\": \"model\",                                                       \"tooltip\": \"Determines whether the sampler or the model underestimates the noise level.\"}),\r\n                    #\"noise_scaling_mode\":    (['linear'] + NOISE_MODE_NAMES,  {\"default\": 'hard',                                          \"tooltip\": \"Changes the steps where the effect is greatest. 
Most affect early steps, sinusoidal affects middle steps.\"}),\r\n                    \"mode\":       (NOISE_MODE_NAMES,     {\"default\": 'hard',                                                        \"tooltip\": \"Changes the steps where the effect is greatest. Most modes affect the early steps; sinusoidal affects the middle steps.\"}),\r\n                    \"eta\":        (\"FLOAT\",              {\"default\": 0.5, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"The strength of the effect of the noise_scaling_mode. Linear ignores this parameter.\"}),\r\n                    \"start_step\": (\"INT\",                {\"default\": 3,   \"min\": 0,      \"max\": MAX_STEPS}),\r\n                    \"end_step\":   (\"INT\",                {\"default\": 10,  \"min\": -1,     \"max\": MAX_STEPS}),\r\n\r\n                    #\"noise_scaling_cycles\":  (\"INT\",              {\"default\": 1, \"min\": 1, \"max\": MAX_STEPS}),\r\n\r\n                    #\"noise_boost_step\":      (\"FLOAT\",            {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set to positive values to create a sharper, grittier, more detailed image. Set to negative values to soften and deepen the colors.\"}),\r\n                    #\"noise_boost_substep\":   (\"FLOAT\",            {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set to positive values to create a sharper, grittier, more detailed image. Set to negative values to soften and deepen the colors.\"}),\r\n                    #\"sampler_scaling_normalize\":(\"BOOLEAN\",          {\"default\": False,                                                          \"tooltip\": \"Limit saturation and luminosity drift.\"}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"weights\": (\"SIGMAS\", ),\r\n                    \"etas\":    (\"SIGMAS\", ),\r\n                    \"options\":               (\"OPTIONS\", ),   \r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            weight      : float = 0.0,\r\n            method      : str   = \"sampler\",\r\n            mode        : str   = \"linear\",\r\n            eta         : float = 0.5,\r\n            start_step  : int   = 0,\r\n            end_step    : int   = -1,\r\n\r\n\r\n            noise_scaling_cycles      : int   = 1,\r\n\r\n            noise_boost_step          : float = 0.0,\r\n            noise_boost_substep       : float = 0.0,\r\n            sampler_scaling_normalize : bool  = False,\r\n\r\n            weights     : Optional[Tensor] = None,\r\n            etas        : Optional[Tensor] = None,\r\n            \r\n            options                        = None\r\n            ):\r\n        \r\n        noise_scaling_weight     = weight\r\n        noise_scaling_type       = method\r\n        noise_scaling_mode       = mode\r\n        noise_scaling_eta        = eta\r\n        noise_scaling_start_step = start_step\r\n        noise_scaling_end_step   = end_step\r\n        \r\n        noise_scaling_weights = weights\r\n        noise_scaling_etas    = etas\r\n        \r\n        \r\n        options = options if options is not None else {}\r\n        \r\n        default_dtype = torch.float64\r\n        default_device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')  # don't assume a CUDA build\r\n
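        \r\n        # Worked example (illustrative): with weight=1.0, start_step=3, end_step=10,\r\n        # and assuming initialize_or_scale(None, w, MAX_STEPS) yields a MAX_STEPS-long\r\n        # vector filled with w, the schedule built below is 0.0 for steps 0-2 (zeros\r\n        # prepended), 1.0 for steps 3-9 (truncated at end_step), then 0.0 (right-padded).\r\n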
        if noise_scaling_type.endswith(\"_normal\"):\r\n            sampler_scaling_normalize = True\r\n            noise_scaling_type = noise_scaling_type[:-7]\r\n        \r\n        if noise_scaling_end_step == -1:\r\n            noise_scaling_end_step = MAX_STEPS\r\n        \r\n        if noise_scaling_weights is None: \r\n            noise_scaling_weights = initialize_or_scale(None, noise_scaling_weight, MAX_STEPS).to(default_dtype).to(default_device)\r\n        \r\n        if noise_scaling_etas is None: \r\n            noise_scaling_etas = initialize_or_scale(None, noise_scaling_eta, MAX_STEPS).to(default_dtype).to(default_device)\r\n        \r\n        noise_scaling_prepend = torch.zeros((noise_scaling_start_step,), dtype=default_dtype, device=default_device)\r\n        \r\n        noise_scaling_weights = torch.cat((noise_scaling_prepend, noise_scaling_weights), dim=0)\r\n        noise_scaling_etas    = torch.cat((noise_scaling_prepend, noise_scaling_etas),    dim=0)\r\n\r\n        if noise_scaling_weights.shape[-1] > noise_scaling_end_step:\r\n            noise_scaling_weights = noise_scaling_weights[:noise_scaling_end_step]\r\n            \r\n        if noise_scaling_etas.shape[-1] > noise_scaling_end_step:\r\n            noise_scaling_etas = noise_scaling_etas[:noise_scaling_end_step]\r\n        \r\n        noise_scaling_weights = F.pad(noise_scaling_weights, (0, MAX_STEPS), value=0.0)\r\n        noise_scaling_etas    = F.pad(noise_scaling_etas,    (0, MAX_STEPS), value=0.0)\r\n        \r\n        options['noise_scaling_weight']  = noise_scaling_weight\r\n        options['noise_scaling_type']    = noise_scaling_type\r\n        options['noise_scaling_mode']    = noise_scaling_mode\r\n        options['noise_scaling_eta']     = noise_scaling_eta\r\n        options['noise_scaling_cycles']  = noise_scaling_cycles\r\n        \r\n        options['noise_scaling_weights'] = noise_scaling_weights\r\n        options['noise_scaling_etas']    = noise_scaling_etas\r\n        \r\n        options['noise_boost_step']      = noise_boost_step\r\n        options['noise_boost_substep']   = noise_boost_substep\r\n        options['noise_boost_normalize'] = sampler_scaling_normalize\r\n\r\n        \"\"\"options['DetailBoostOptions'] = DetailBoostOptions(\r\n            noise_scaling_weight = noise_scaling_weight,\r\n            noise_scaling_type    = noise_scaling_type,\r\n            noise_scaling_mode    = noise_scaling_mode,\r\n            noise_scaling_eta     = noise_scaling_eta,\r\n            \r\n            noise_boost_step      = noise_boost_step,\r\n            noise_boost_substep   = noise_boost_substep,\r\n            noise_boost_normalize = noise_boost_normalize,\r\n            \r\n            noise_anchor          = noise_anchor,\r\n            s_noise               = s_noise,\r\n            s_noise_substep       = s_noise_substep,\r\n            d_noise               = d_noise,\r\n            d_noise_start_step    = d_noise_start_step,\r\n        )\"\"\"\r\n\r\n        return (options,)\r\n\r\n\r\n\r\n\r\n
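# Usage sketch (illustrative, not executed on import): options nodes chain by feeding\r\n# one node's output dict into the next node's optional 'options' input, e.g.\r\n#\r\n#     opts, = ClownOptions_DetailBoost_Beta().main(weight=1.0, method='model')\r\n#     opts, = ClownOptions_SigmaScaling_Beta().main(s_noise=1.05, options=opts)\r\n#\r\n# Samplers read the merged keys back through OptionsManager.get(key, default), so a\r\n# connected options chain overrides the corresponding widget values.\r\n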
class ClownOptions_SigmaScaling_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"s_noise\":              (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01,                 \"tooltip\": \"Adds extra SDE noise. Values around 1.03-1.07 can lead to a moderate boost in detail and paint textures.\"}),\r\n                    \"s_noise_substep\":      (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01,                 \"tooltip\": \"Adds extra SDE noise. Values around 1.03-1.07 can lead to a moderate boost in detail and paint textures.\"}),\r\n                    \"noise_anchor_sde\":     (\"FLOAT\", {\"default\": 1.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Typically set to between 1.0 and 0.0. Lower values create a grittier, more detailed image.\"}),\r\n                    \r\n                    \"lying\":                (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01,                 \"tooltip\": \"Downscales the sigma schedule. Values around 0.98-0.95 can lead to a large boost in detail and paint textures.\"}),\r\n                    \"lying_inv\":            (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01,                 \"tooltip\": \"Upscales the sigma schedule. Will soften the image and deepen colors. Use after d_noise to counteract desaturation.\"}),\r\n                    \"lying_start_step\":     (\"INT\",   {\"default\": 0, \"min\": 0, \"max\": MAX_STEPS}),\r\n                    \"lying_inv_start_step\": (\"INT\",   {\"default\": 1, \"min\": 0, \"max\": MAX_STEPS}),\r\n\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"s_noises\":             (\"SIGMAS\", ),\r\n                    \"s_noises_substep\":     (\"SIGMAS\", ),\r\n                    \"options\":              (\"OPTIONS\", ),\r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            noise_anchor_sde        : float = 1.0,\r\n            \r\n            s_noise                 : float = 1.0,\r\n            s_noise_substep         : float = 1.0,\r\n            lying                   : float = 1.0,\r\n            lying_start_step        : int   = 0,\r\n            \r\n            lying_inv               : float = 1.0,\r\n            lying_inv_start_step    : int   = 1,\r\n            \r\n            s_noises                : Optional[Tensor] = None,\r\n            s_noises_substep        : Optional[Tensor] = None,\r\n            options                         = None\r\n            ):\r\n        \r\n        options = options if options is not None else {}\r\n        \r\n        default_dtype = torch.float64\r\n        default_device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')  # don't assume a CUDA build\r\n        \r\n        \r\n        \r\n        options['noise_anchor']           = noise_anchor_sde\r\n        options['s_noise']                = s_noise\r\n        options['s_noise_substep']        = s_noise_substep\r\n        options['d_noise']                = lying\r\n        options['d_noise_start_step']     = lying_start_step\r\n        options['d_noise_inv']            = lying_inv\r\n        options['d_noise_inv_start_step'] = lying_inv_start_step\r\n\r\n        options['s_noises']               = s_noises\r\n        options['s_noises_substep']       = s_noises_substep\r\n\r\n        return (options,)\r\n\r\n\r\n\r\nclass ClownOptions_FlowGuide:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"sync_eps\": 
(\"FLOAT\", {\"default\": 0.75, \"min\": -10000.0, \"max\": 10000.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Accelerate convergence with positive values when sampling, negative values when unsampling.\"}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"options\":               (\"OPTIONS\", ),   \r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            sync_eps = 0.75,\r\n            options  = None\r\n            ):\r\n        \r\n        options = options if options is not None else {}\r\n            \r\n        options['flow_sync_eps'] = sync_eps\r\n\r\n        return (options,)\r\n\r\n\r\n\r\nclass ClownOptions_Momentum_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"momentum\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Accelerate convergence with positive values when sampling, negative values when unsampling.\"}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"options\":               (\"OPTIONS\", ),   \r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            momentum = 0.0,\r\n            options  = None\r\n            ):\r\n        \r\n        options = options if options is not None else {}\r\n            \r\n        options['momentum'] = momentum\r\n\r\n        return (options,)\r\n\r\n\r\n\r\nclass ClownOptions_ImplicitSteps_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"implicit_type\":          (IMPLICIT_TYPE_NAMES, {\"default\": \"bongmath\"}), \r\n                    \"implicit_type_substeps\": (IMPLICIT_TYPE_NAMES, {\"default\": \"bongmath\"}), \r\n                    \"implicit_steps\":         (\"INT\",               {\"default\": 0, \"min\": 0, \"max\": 10000}),\r\n                    \"implicit_substeps\":      (\"INT\",               {\"default\": 0, \"min\": 0, \"max\": 10000}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"options\":                (\"OPTIONS\", ),   \r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            implicit_type          = \"bongmath\",\r\n            implicit_type_substeps = \"bongmath\",\r\n            implicit_steps         = 0,\r\n            implicit_substeps      = 0,\r\n            options                = None\r\n            ): \r\n        \r\n        options = options if options is not None else {}\r\n            \r\n        options['implicit_type']          = implicit_type\r\n        options['implicit_type_substeps'] = implicit_type_substeps\r\n        options['implicit_steps']         = implicit_steps\r\n        options['implicit_substeps']      = implicit_substeps\r\n\r\n        return (options,)\r\n\r\n\r\n\r\nclass 
ClownOptions_Cycles_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"cycles\"          : (\"FLOAT\", {\"default\": 0.0, \"min\":  0.0,   \"max\": 10000, \"step\":0.5,  \"round\": 0.5}),\r\n                    \"eta_decay_scale\" : (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01, \"tooltip\": \"Multiplies etas by this number after every cycle. May help drive convergence.\" }),\r\n                    \"unsample_eta\"    : (\"FLOAT\", {\"default\": 0.5, \"min\": -10000, \"max\": 10000, \"step\":0.01}),\r\n                    \"unsampler_override\"  : (get_sampler_name_list(), {\"default\": \"none\"}),\r\n                    \"unsample_steps_to_run\"  : (\"INT\", {\"default\": -1, \"min\":  -1,   \"max\": 10000, \"step\":1,  \"round\": 1}),\r\n                    \"unsample_cfg\"    : (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01}),\r\n                    \"unsample_bongmath\" : (\"BOOLEAN\", {\"default\": False}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"options\":    (\"OPTIONS\", ),   \r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            cycles          = 0,\r\n            unsample_eta    = 0.5,\r\n            eta_decay_scale = 1.0,\r\n            unsample_cfg    = 1.0,\r\n            unsampler_override  = \"none\",\r\n            unsample_steps_to_run = -1,\r\n            unsample_bongmath = False,\r\n            options         = None\r\n            ): \r\n        \r\n        options = options if options is not None else {}\r\n            \r\n        options['rebounds']        = int(cycles * 2)\r\n        options['unsample_eta']    = unsample_eta\r\n        options['unsampler_name']  = unsampler_override\r\n        options['eta_decay_scale'] = eta_decay_scale\r\n        options['unsample_steps_to_run'] = unsample_steps_to_run\r\n        options['unsample_cfg']    = unsample_cfg\r\n        options['unsample_bongmath'] = unsample_bongmath\r\n\r\n        return (options,)\r\n\r\n\r\n\r\n\r\nclass SharkOptions_StartStep_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"start_at_step\": (\"INT\", {\"default\": 0, \"min\": -1, \"max\": 10000, \"step\":1,}),\r\n\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"options\":    (\"OPTIONS\", ),   \r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            start_at_step = 0,\r\n            options       = None\r\n            ): \r\n        \r\n        options = options if options is not None else {}\r\n            \r\n        options['start_at_step'] = start_at_step\r\n\r\n        return (options,)\r\n\r\n\r\n\r\n\r\nclass ClownOptions_Tile_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"tile_width\" : (\"INT\", {\"default\": 1024, \"min\": -1, \"max\": 10000, \"step\":1,}),\r\n                    \"tile_height\": 
(\"INT\", {\"default\": 1024, \"min\": -1, \"max\": 10000, \"step\":1,}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"options\":    (\"OPTIONS\", ),   \r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            tile_height = 1024,\r\n            tile_width  = 1024,\r\n            options     = None\r\n            ): \r\n        \r\n        options = options if options is not None else {}\r\n        \r\n        tile_sizes = options.get('tile_sizes', [])\r\n        tile_sizes.append((tile_height, tile_width))\r\n        options['tile_sizes'] = tile_sizes\r\n\r\n        return (options,)\r\n\r\n\r\n\r\nclass ClownOptions_Tile_Advanced_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"tile_sizes\": (\"STRING\", {\"default\": \"1024,1024\", \"multiline\": True}),   \r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"options\":    (\"OPTIONS\", ),   \r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            tile_sizes = \"1024,1024\",\r\n            options    = None\r\n            ): \r\n        \r\n        options = options if options is not None else {}\r\n            \r\n        tiles_height_width = parse_tile_sizes(tile_sizes)\r\n        options['tile_sizes'] = [(tile[-1], tile[-2]) for tile in tiles_height_width]  # swap height and width to be consistent... width, height\r\n\r\n        return (options,)\r\n\r\n\r\n\r\n\r\nclass ClownOptions_ExtraOptions_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"extra_options\": (\"STRING\", {\"default\": \"\", \"multiline\": True}),   \r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"options\": (\"OPTIONS\", ),   \r\n                    }  \r\n            }\r\n        \r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n\r\n    def main(self,\r\n            extra_options = \"\",\r\n            options       = None\r\n            ):\r\n\r\n        options = options if options is not None else {}\r\n        \r\n        if 'extra_options' in options:\r\n            options['extra_options'] += '\\n' + extra_options\r\n        else:\r\n            options['extra_options']  = extra_options\r\n\r\n        return (options, )\r\n\r\n\r\n\r\n\r\nclass ClownOptions_DenoisedSampling_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"cycles\"          : (\"FLOAT\", {\"default\": 0.0, \"min\":  0.0,   \"max\": 10000, \"step\":0.5,  \"round\": 0.5}),\r\n                    \"eta_decay_scale\" : (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01, \"tooltip\": \"Multiplies etas by this number after every cycle. 
class ClownOptions_DenoisedSampling_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"cycles\"          : (\"FLOAT\", {\"default\": 0.0, \"min\":  0.0,   \"max\": 10000, \"step\":0.5,  \"round\": 0.5}),\r\n                    \"eta_decay_scale\" : (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01, \"tooltip\": \"Multiplies etas by this number after every cycle. May help drive convergence.\" }),\r\n                    \"unsample_eta\"    : (\"FLOAT\", {\"default\": 0.5, \"min\": -10000, \"max\": 10000, \"step\":0.01}),\r\n                    \"unsampler_override\"  : (get_sampler_name_list(), {\"default\": \"none\"}),\r\n                    \"unsample_steps_to_run\"  : (\"INT\", {\"default\": -1, \"min\":  -1,   \"max\": 10000, \"step\":1,  \"round\": 1}),\r\n                    \"unsample_cfg\"    : (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01}),\r\n                    \"unsample_bongmath\" : (\"BOOLEAN\", {\"default\": False}),                    \r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"options\": (\"OPTIONS\", ),   \r\n                    }  \r\n            }\r\n        \r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n\r\n    def main(self,\r\n            cycles          = 0.0,\r\n            unsample_eta    = 0.5,\r\n            eta_decay_scale = 1.0,\r\n            unsample_cfg    = 1.0,\r\n            unsampler_override  = \"none\",\r\n            unsample_steps_to_run = -1,\r\n            unsample_bongmath = False,\r\n            options         = None\r\n            ):\r\n\r\n        options = options if options is not None else {}\r\n        \r\n        # Same passthrough as ClownOptions_Cycles_Beta (the declared inputs are identical);\r\n        # these keys are assumed to be consumed by the same cycle/unsample machinery.\r\n        options['rebounds']        = int(cycles * 2)\r\n        options['unsample_eta']    = unsample_eta\r\n        options['unsampler_name']  = unsampler_override\r\n        options['eta_decay_scale'] = eta_decay_scale\r\n        options['unsample_steps_to_run'] = unsample_steps_to_run\r\n        options['unsample_cfg']    = unsample_cfg\r\n        options['unsample_bongmath'] = unsample_bongmath\r\n\r\n        return (options, )\r\n\r\n\r\n\r\n\r\nclass ClownOptions_Automation_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\": {},\r\n                \"optional\": {\r\n                    \"etas\":             (\"SIGMAS\", ),\r\n                    \"etas_substep\":     (\"SIGMAS\", ),\r\n                    \"s_noises\":         (\"SIGMAS\", ),\r\n                    \"s_noises_substep\": (\"SIGMAS\", ),\r\n                    \"epsilon_scales\":   (\"SIGMAS\", ),\r\n                    \"frame_weights\":    (\"SIGMAS\", ),\r\n                    \"options\":          (\"OPTIONS\",),  \r\n                    }  \r\n                }\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n\r\n    def main(self,\r\n            etas             = None,\r\n            etas_substep     = None,\r\n            s_noises         = None,\r\n            s_noises_substep = None,\r\n            epsilon_scales   = None,\r\n            frame_weights    = None,\r\n            options          = None\r\n            ):\r\n                \r\n        options = options if options is not None else {}\r\n            \r\n        options_mgr = OptionsManager(options)\r\n\r\n        frame_weights_mgr = options_mgr.get(\"frame_weights_mgr\")\r\n        if frame_weights_mgr is None and frame_weights is not None:\r\n            frame_weights_mgr = FrameWeightsManager()\r\n            frame_weights_mgr.set_custom_weights(\"frame_weights\", frame_weights)\r\n            \r\n        automation = {\r\n            \"etas\"              : etas,\r\n            \"etas_substep\"      : etas_substep,\r\n            \"s_noises\"          : s_noises,\r\n            \"s_noises_substep\"  : s_noises_substep,\r\n            \"epsilon_scales\"    : epsilon_scales,\r\n            \"frame_weights_mgr\" : frame_weights_mgr,\r\n        }\r\n        \r\n        options[\"automation\"] = automation\r\n\r\n        return (options, )\r\n\r\n\r\n\r\n\r\n\r\nclass SharkOptions_GuideCond_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return 
{\"required\": {},\r\n                \"optional\": {\r\n                    \"positive\" : (\"CONDITIONING\", ),\r\n                    \"negative\" : (\"CONDITIONING\", ),\r\n                    \"cfg\"      : (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01}),\r\n                    \"options\"  : (\"OPTIONS\",),  \r\n                    }  \r\n                }\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n\r\n    def main(self,\r\n            positive = None,\r\n            negative = None,\r\n            cfg      = 1.0,\r\n            options  = None,\r\n            ):\r\n                \r\n        options = options if options is not None else {}\r\n\r\n        flow_cond = {\r\n            \"yt_positive\" : positive,\r\n            \"yt_negative\" : negative,\r\n            \"yt_cfg\"      : cfg,\r\n        }\r\n        \r\n        options[\"flow_cond\"] = flow_cond\r\n\r\n        return (options, )\r\n\r\n\r\n\r\n\r\nclass SharkOptions_GuideConds_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\": {},\r\n                \"optional\": {\r\n                    \"positive_masked\"   : (\"CONDITIONING\", ),\r\n                    \"positive_unmasked\" : (\"CONDITIONING\", ),\r\n                    \"negative_masked\"   : (\"CONDITIONING\", ),\r\n                    \"negative_unmasked\" : (\"CONDITIONING\", ),\r\n                    \"cfg_masked\"        : (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01}),\r\n                    \"cfg_unmasked\"      : (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01}),\r\n                    \"options\"           : (\"OPTIONS\",),  \r\n                    }  \r\n                }\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n\r\n    def main(self,\r\n            positive_masked   = None,\r\n            negative_masked   = None,\r\n            cfg_masked        = 1.0,\r\n            positive_unmasked = None,\r\n            negative_unmasked = None,\r\n            cfg_unmasked      = 1.0,\r\n            options  = None,\r\n            ):\r\n                \r\n        options = options if options is not None else {}\r\n\r\n        flow_cond = {\r\n            \"yt_positive\"     : positive_masked,\r\n            \"yt_negative\"     : negative_masked,\r\n            \"yt_cfg\"          : cfg_masked,\r\n            \"yt_inv_positive\" : positive_unmasked,\r\n            \"yt_inv_negative\" : negative_unmasked,\r\n            \"yt_inv_cfg\"      : cfg_unmasked,\r\n        }\r\n        \r\n        options[\"flow_cond\"] = flow_cond\r\n\r\n        return (options, )\r\n\r\n\r\n\r\n\r\n\r\nclass SharkOptions_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": {\r\n                \"noise_type_init\": (NOISE_GENERATOR_NAMES_SIMPLE, {\"default\": \"gaussian\"}),\r\n                \"s_noise_init\":    (\"FLOAT\",                      {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":0.01, \"round\": False, }),\r\n                \"denoise_alt\":     (\"FLOAT\",                      {\"default\": 1.0, \"min\": -10000,   \"max\": 10000,   \"step\":0.01}),\r\n                \"channelwise_cfg\": (\"BOOLEAN\",                    {\"default\": 
False}),\r\n                },\r\n            \"optional\": {\r\n                \"options\":         (\"OPTIONS\", ),   \r\n                }\r\n            }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            noise_type_init = \"gaussian\",\r\n            s_noise_init    = 1.0,\r\n            denoise_alt     = 1.0,\r\n            channelwise_cfg = False,\r\n            options         = None\r\n            ): \r\n        \r\n        options = options if options is not None else {}\r\n            \r\n        options['noise_type_init']  = noise_type_init\r\n        options['noise_init_stdev'] = s_noise_init\r\n        options['denoise_alt']      = denoise_alt\r\n        options['channelwise_cfg']  = channelwise_cfg\r\n\r\n        return (options,)\r\n    \r\n\r\n\r\n\r\nclass SharkOptions_UltraCascade_Latent_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": {\r\n                \"width\":   (\"INT\", {\"default\": 60, \"min\": 1, \"max\": MAX_RESOLUTION, \"step\": 1}),\r\n                \"height\":  (\"INT\", {\"default\": 36, \"min\": 1, \"max\": MAX_RESOLUTION, \"step\": 1}),\r\n                },\r\n            \"optional\": {\r\n                \"options\": (\"OPTIONS\",),   \r\n                }\r\n            }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            width  : int = 60,\r\n            height : int = 36,\r\n            options       = None,\r\n            ): \r\n        \r\n        options = options if options is not None else {}\r\n            \r\n        options['ultracascade_latent_width']  = width\r\n        options['ultracascade_latent_height'] = height\r\n\r\n        return (options,)\r\n\r\n\r\n\r\n\r\nclass ClownOptions_SwapSampler_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": {\r\n                \"sampler_name\":       (get_sampler_name_list(), {\"default\": get_default_sampler_name()}), \r\n                \"swap_below_err\":     (\"FLOAT\",                 {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Swap samplers if the error per step falls below this threshold.\"}),\r\n                \"swap_at_step\":       (\"INT\",                   {\"default\": 30,  \"min\": 1,      \"max\": 10000}),\r\n                \"log_err_to_console\": (\"BOOLEAN\",               {\"default\": False}),\r\n                },\r\n            \"optional\": {\r\n                \"options\":            (\"OPTIONS\", ),   \r\n                }\r\n            }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            sampler_name       = \"res_3m\",\r\n            swap_below_err     = 0.0,\r\n            swap_at_step       = 30,\r\n            log_err_to_console = False,\r\n            options            = None,\r\n            ): \r\n        \r\n        sampler_name, implicit_sampler_name = process_sampler_name(sampler_name)\r\n        \r\n        sampler_name = sampler_name if implicit_sampler_name == \"use_explicit\" else implicit_sampler_name\r\n                \r\n        
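# usage sketch (illustration only, not part of the node graph): option nodes chain by threading one dict along, e.g.\r\n        #   opts = ClownOptions_SwapSampler_Beta().main(sampler_name=\"res_3m\", swap_at_step=10)[0]\r\n        #   opts = ClownOptions_ExtraOptions_Beta().main(extra_options=\"...\", options=opts)[0]\r\n        # each call returns a 1-tuple wrapping the accumulated options dict\r\n        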
options = options if options is not None else {}\r\n            \r\n        options['rk_swap_type']      = sampler_name\r\n        options['rk_swap_threshold'] = swap_below_err\r\n        options['rk_swap_step']      = swap_at_step\r\n        options['rk_swap_print']     = log_err_to_console\r\n\r\n        return (options,)\r\n\r\n\r\n\r\n\r\nclass ClownOptions_SDE_Mask_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": {\r\n                \"max\":               (\"FLOAT\",                                     {\"default\": 1.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Clamp the max value for the mask.\"}),\r\n                \"min\":               (\"FLOAT\",                                     {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Clamp the min value for the mask.\"}),\r\n                \"invert_mask\":       (\"BOOLEAN\",                                   {\"default\": False}),\r\n                },\r\n            \"optional\": {\r\n                \"mask\":              (\"MASK\", ),\r\n                \"options\":           (\"OPTIONS\", ),   \r\n                }\r\n            }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            max         = 1.0,    # shadows the builtins, but the names must match the ComfyUI inputs above\r\n            min         = 0.0,\r\n            invert_mask = False,\r\n            mask        = None,\r\n            options     = None,\r\n            ): \r\n        \r\n        options = copy.deepcopy(options) if options is not None else {}\r\n        \r\n        if mask is not None:    # mask is an optional input; skip everything if it is absent\r\n            if invert_mask:\r\n                mask = 1-mask\r\n            \r\n            # rescale the mask into [min, max]; a flat mask would divide by zero, so map it to max instead\r\n            mask_range = mask.max() - mask.min()\r\n            if mask_range > 0:\r\n                mask = ((mask - mask.min()) * (max - min)) / mask_range + min\r\n            else:\r\n                mask = torch.full_like(mask, max)\r\n            \r\n            options['sde_mask'] = mask\r\n\r\n        return (options,)\r\n\r\n\r\n\r\n\r\nclass ClownGuide_Mean_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"weight\":               (\"FLOAT\",                                     {\"default\": 0.75, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide.\"}),\r\n                    \"cutoff\":               (\"FLOAT\",                                     {\"default\": 1.0,  \"min\":  0.0,    \"max\": 1.0,   \"step\":0.01, \"round\": False, \"tooltip\": \"Disables the guide for the next step when the denoised image is similar to the guide. 
Higher values will strengthen the effect.\"}),\r\n                    \"weight_scheduler\":     ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"beta57\"},),\r\n                    \"start_step\":           (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"end_step\":             (\"INT\",                                       {\"default\": 15,   \"min\": -1,      \"max\": 10000}),\r\n                    \"invert_mask\":          (\"BOOLEAN\",                                   {\"default\": False}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"guide\":                (\"LATENT\", ),\r\n                    \"mask\":                 (\"MASK\", ),\r\n                    \"weights\":              (\"SIGMAS\", ),\r\n                    \"guides\":               (\"GUIDES\", ),\r\n                    }  \r\n                }\r\n        \r\n    RETURN_TYPES = (\"GUIDES\",)\r\n    RETURN_NAMES = (\"guides\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_extensions\"\r\n\r\n    def main(self,\r\n            weight_scheduler          = \"constant\",\r\n            start_step                = 0,\r\n            end_step                  = 30,\r\n            cutoff                    = 1.0,\r\n            guide                     = None,\r\n            weight                    = 0.0,\r\n\r\n            channelwise_mode          = False,\r\n            projection_mode           = False,\r\n            weights                   = None,\r\n            mask                      = None,\r\n            invert_mask               = False,\r\n            \r\n            guides                    = None,\r\n            ):\r\n        \r\n        default_dtype = torch.float64\r\n        \r\n        mask = 1-mask if mask is not None else None\r\n        \r\n        if end_step == -1:\r\n            end_step = MAX_STEPS\r\n        \r\n        if guide is not None:\r\n            raw_x = guide.get('state_info', {}).get('raw_x', None)\r\n            if raw_x is not None:\r\n                guide          = {'samples': guide['state_info']['raw_x'].clone()}\r\n            else:\r\n                guide          = {'samples': guide['samples'].clone()}\r\n        \r\n        if weight_scheduler == \"constant\": # and weights == None: \r\n            weights = initialize_or_scale(None, weight, end_step).to(default_dtype)\r\n            weights = F.pad(weights, (0, MAX_STEPS), value=0.0)\r\n            \r\n        guides = copy.deepcopy(guides) if guides is not None else {}\r\n        \r\n        guides['weight_mean']           = weight\r\n        guides['weights_mean']          = weights\r\n        guides['guide_mean']            = guide\r\n        guides['mask_mean']             = mask\r\n        \r\n        guides['weight_scheduler_mean'] = weight_scheduler\r\n        guides['start_step_mean']       = start_step\r\n        guides['end_step_mean']         = end_step\r\n        \r\n        guides['cutoff_mean']           = cutoff\r\n        \r\n        return (guides, )\r\n\r\n\r\n\r\nclass ClownGuide_FrequencySeparation:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"apply_to\"       : ([\"AdaIN\"], {\"default\": \"AdaIN\"}),\r\n                    \"method\"         : ([\"gaussian\", \"gaussian_pw\", \"median\", \"median_pw\",], {\"default\": 
\"median\"}),\r\n                    \"sigma\":             (\"FLOAT\", {\"default\": 3.0, \"min\":  -10000.0, \"max\": 10000.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Low values produce results closer to the guide image. No effect with median.\"}),\r\n                    \"kernel_size\":       (\"INT\",   {\"default\": 8,    \"min\":  1,      \"max\": 11111, \"step\": 1, \"tooltip\": \"Primary control with median. Set the Re___Patcher node to float32 or lower precision if you have OOMs. You may have them regardless at higher kernel sizes with median.\"}),\r\n                    \"inner_kernel_size\": (\"INT\",   {\"default\": 2,    \"min\":  1,      \"max\": 11111, \"step\": 1, \"tooltip\": \"Should be equal to, or less than, kernel_size.\"}),\r\n                    \"stride\":            (\"INT\",   {\"default\": 2,    \"min\":  1,      \"max\": 11111, \"step\": 1, \"tooltip\": \"Should be equal to, or less than, inner_kernel_size.\"}),\r\n\r\n\r\n                    \"lowpass_weight\":    (\"FLOAT\", {\"default\": 1.0, \"min\":  -10000.0, \"max\": 10000.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Typically should be set to 1.0. Lower values may sharpen the image, higher values may blur the image.\"}),\r\n                    \"highpass_weight\":   (\"FLOAT\", {\"default\": 1.0, \"min\":  -10000.0, \"max\": 10000.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Typically should be set to 1.0. Higher values may sharpen the image, lower values may blur the image.\"}),\r\n\r\n                    \"guides\":            (\"GUIDES\", ),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"mask\"           : (\"MASK\",),\r\n                    }  \r\n                }\r\n    \r\n    RETURN_TYPES = (\"GUIDES\",)\r\n    RETURN_NAMES = (\"guides\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_extensions\"\r\n    EXPERIMENTAL = True\r\n\r\n    def main(self,\r\n            apply_to       = \"AdaIN\",\r\n            method         = \"median\",\r\n            sigma          = 3.0,\r\n            kernel_size       = 9,\r\n            inner_kernel_size = 2,\r\n            stride            = 2,\r\n            lowpass_weight    = 1.0,\r\n            highpass_weight   = 1.0,\r\n            guides            = None,\r\n            mask              = None,\r\n            ):\r\n        \r\n        guides = copy.deepcopy(guides) if guides is not None else {}\r\n        \r\n        guides['freqsep_apply_to']       = apply_to\r\n        guides['freqsep_lowpass_method'] = method\r\n        guides['freqsep_sigma']          = sigma\r\n        guides['freqsep_kernel_size']    = kernel_size\r\n        guides['freqsep_inner_kernel_size']    = inner_kernel_size\r\n        guides['freqsep_stride']         = stride\r\n\r\n        guides['freqsep_lowpass_weight'] = lowpass_weight\r\n        guides['freqsep_highpass_weight']= highpass_weight\r\n        guides['freqsep_mask']           = mask\r\n\r\n        return (guides, )\r\n\r\n\r\n\r\n\r\nclass ClownGuide_Style_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"apply_to\":         ([\"positive\", \"negative\", \"denoised\"],           {\"default\": \"positive\", \"tooltip\": \"When using CFG, decides whether to apply the guide to the positive or negative conditioning.\"}),\r\n                    \"method\":           ([\"AdaIN\", \"WCT\", \"WCT2\", \"scattersort\",\"none\"], 
{\"default\": \"WCT\"}),\r\n                    \"weight\":           (\"FLOAT\",                                        {\"default\": 1.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide by multiplying all other weights by this value.\"}),\r\n                    \"synweight\":        (\"FLOAT\",                                        {\"default\": 1.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the relative strength of the guide on the opposite conditioning to what was selected: i.e., negative if positive in apply_to. Recommended to avoid CFG burn.\"}),\r\n                    \"weight_scheduler\": ([\"constant\"] + get_res4lyf_scheduler_list(),    {\"default\": \"constant\", \"tooltip\": \"Selecting any scheduler except constant will cause the strength to gradually decay to zero. Try beta57 vs. linear quadratic.\"},),\r\n                    \"start_step\":       (\"INT\",                                          {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"end_step\":         (\"INT\",                                          {\"default\": -1,   \"min\": -1,      \"max\": 10000}),\r\n                    \"invert_mask\":      (\"BOOLEAN\",                                      {\"default\": False}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"guide\":            (\"LATENT\", ),\r\n                    \"mask\":             (\"MASK\", ),\r\n                    \"weights\":          (\"SIGMAS\", ),\r\n                    \"guides\":           (\"GUIDES\", ),\r\n                    }  \r\n                }\r\n    \r\n    RETURN_TYPES = (\"GUIDES\",)\r\n    RETURN_NAMES = (\"guides\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_extensions\"\r\n    DESCRIPTION  = \"Transfer some visual aspects of style from a guide (reference) image. 
If nothing about style is specified in the prompt, it may just transfer the lighting and color scheme.\" + \\\r\n                \" If using CFG results in burn, or a very dark/bright image in the preview followed by a bad output, try duplicating and chaining this node, so that the guide may be applied to both positive and negative conditioning.\" + \\\r\n                \" Currently supported models: SD1.5, SDXL, Stable Cascade, SD3.5, AuraFlow, Flux, HiDream, WAN, and LTXV.\"\r\n\r\n    def main(self,\r\n            apply_to         = \"positive\",\r\n            method           = \"WCT\",\r\n            weight           = 1.0,\r\n            synweight        = 1.0,\r\n            weight_scheduler = \"constant\",\r\n            start_step       = 0,\r\n            end_step         = -1,\r\n            invert_mask      = False,\r\n            \r\n            guide            = None,\r\n            mask             = None,\r\n            weights          = None,\r\n            guides           = None,\r\n            ):\r\n        \r\n        default_dtype = torch.float64\r\n        \r\n        if invert_mask and mask is not None:\r\n            mask = 1-mask\r\n        mask = 1-mask if mask is not None else None    # guide masks are flipped once more by internal convention\r\n        \r\n        if end_step == -1:\r\n            end_step = MAX_STEPS\r\n        \r\n        if guide is not None:\r\n            raw_x = guide.get('state_info', {}).get('raw_x', None)\r\n            if raw_x is not None:\r\n                guide          = {'samples': guide['state_info']['raw_x'].clone()}\r\n            else:\r\n                guide          = {'samples': guide['samples'].clone()}\r\n        \r\n        if weight_scheduler == \"constant\": # and weights == None: \r\n            weights = initialize_or_scale(None, weight, end_step).to(default_dtype)\r\n            prepend = torch.zeros(start_step).to(weights)\r\n            weights = torch.cat([prepend, weights])\r\n            weights = F.pad(weights, (0, MAX_STEPS), value=0.0)\r\n            \r\n        guides = copy.deepcopy(guides) if guides is not None else {}\r\n        \r\n        guides['style_method'] = method\r\n        \r\n        if apply_to in {\"positive\", \"all\"}:\r\n        \r\n            guides['weight_style_pos']           = weight\r\n            guides['weights_style_pos']          = weights\r\n\r\n            guides['synweight_style_pos']        = synweight\r\n\r\n            guides['guide_style_pos']            = guide\r\n            guides['mask_style_pos']             = mask\r\n\r\n            guides['weight_scheduler_style_pos'] = weight_scheduler\r\n            guides['start_step_style_pos']       = start_step\r\n            guides['end_step_style_pos']         = end_step\r\n            \r\n        if apply_to in {\"negative\", \"all\"}:\r\n            guides['weight_style_neg']           = weight\r\n            guides['weights_style_neg']          = weights\r\n\r\n            guides['synweight_style_neg']        = synweight\r\n\r\n            guides['guide_style_neg']            = guide\r\n            guides['mask_style_neg']             = mask\r\n\r\n            guides['weight_scheduler_style_neg'] = weight_scheduler\r\n            guides['start_step_style_neg']       = start_step\r\n            guides['end_step_style_neg']         = end_step\r\n        \r\n        if apply_to in {\"denoised\", \"all\"}:\r\n        \r\n            guides['weight_style_denoised']           = weight\r\n            guides['weights_style_denoised']          = weights\r\n\r\n            guides['synweight_style_denoised']        = synweight\r\n\r\n            
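# note: the guide, mask, and weights objects are shared (not copied) across the pos/neg/denoised key sets\r\n            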
guides['guide_style_denoised']            = guide\r\n            guides['mask_style_denoised']             = mask\r\n\r\n            guides['weight_scheduler_style_denoised'] = weight_scheduler\r\n            guides['start_step_style_denoised']       = start_step\r\n            guides['end_step_style_denoised']         = end_step\r\n            \r\n        return (guides, )\r\n\r\n\r\n\r\n\r\n\r\n\r\nclass ClownGuide_Style_EdgeWidth:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"edge_width\":       (\"INT\",     {\"default\": 20,  \"min\":  1,   \"max\": 10000}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"guides\":           (\"GUIDES\", ),\r\n                    }  \r\n                }\r\n    \r\n    RETURN_TYPES = (\"GUIDES\",)\r\n    RETURN_NAMES = (\"guides\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_extensions\"\r\n    DESCRIPTION  = \"Set an edge mask for some style guide types such as scattersort. Can help mitigate seams.\"\r\n\r\n    def main(self,\r\n            edge_width = 20,\r\n            guides     = None,\r\n            ):\r\n        \r\n        guides = copy.deepcopy(guides) if guides is not None else {}\r\n        \r\n        if guides.get('mask_style_pos') is not None:\r\n            guides['mask_edge_style_pos'] = get_edge_mask(guides.get('mask_style_pos'), edge_width)\r\n            \r\n        if guides.get('mask_style_neg') is not None:\r\n            guides['mask_edge_style_neg'] = get_edge_mask(guides.get('mask_style_neg'), edge_width)\r\n        \r\n        return (guides, )\r\n\r\n\r\n\r\n\r\n\r\nclass ClownGuide_Style_TileSize:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"height\": (\"INT\",     {\"default\": 128,  \"min\":  16,   \"max\": 10000, \"step\": 16}),\r\n                    \"width\" : (\"INT\",     {\"default\": 128,  \"min\":  16,   \"max\": 10000, \"step\": 16}),\r\n                    \"padding\" : (\"INT\",     {\"default\": 64,  \"min\":  0,   \"max\": 10000, \"step\": 16}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"guides\":           (\"GUIDES\", ),\r\n                    }  \r\n                }\r\n    \r\n    RETURN_TYPES = (\"GUIDES\",)\r\n    RETURN_NAMES = (\"guides\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_extensions\"\r\n    DESCRIPTION  = \"Set a tile size for some style guide types such as scattersort. 
Can improve adherence to the input image.\"\r\n\r\n    def main(self,\r\n            height = 128,\r\n            width  = 128,\r\n            padding = 64,\r\n            guides = None,\r\n            ):\r\n        \r\n        guides = copy.deepcopy(guides) if guides is not None else {}\r\n        \r\n        guides['style_tile_height']  = height  // 16\r\n        guides['style_tile_width']   = width   // 16\r\n        guides['style_tile_padding'] = padding // 16\r\n\r\n        return (guides, )\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nclass ClownGuides_Sync:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"weight_masked\":               (\"FLOAT\",                                     {\"default\": 1.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide.\"}),\r\n                    \"weight_unmasked\":             (\"FLOAT\",                                     {\"default\": 1.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide_bkg.\"}),\r\n                    \"weight_scheduler_masked\":     ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"beta57\"},),\r\n                    \"weight_scheduler_unmasked\":   ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"constant\"},),\r\n                    \"weight_start_step_masked\":           (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"weight_start_step_unmasked\":         (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"weight_end_step_masked\":             (\"INT\",                                       {\"default\": 15,   \"min\": -1,      \"max\": 10000}),\r\n                    \"weight_end_step_unmasked\":           (\"INT\",                                       {\"default\": 15,   \"min\": -1,      \"max\": 10000}),\r\n                    \r\n                    \"sync_masked\":               (\"FLOAT\",                                     {\"default\": 1.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide.\"}),\r\n                    \"sync_unmasked\":             (\"FLOAT\",                                     {\"default\": 1.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide_bkg.\"}),\r\n                    \"sync_scheduler_masked\":     ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"beta57\"},),\r\n                    \"sync_scheduler_unmasked\":   ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"constant\"},),\r\n                    \"sync_start_step_masked\":           (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"sync_start_step_unmasked\":         (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"sync_end_step_masked\":             (\"INT\",                                       {\"default\": 15,   \"min\": -1,      \"max\": 10000}),\r\n                    \"sync_end_step_unmasked\":           (\"INT\",                                       {\"default\": 15,   \"min\": -1,      \"max\": 10000}),\r\n                    
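# sync weights are applied inverted downstream (sync = 1.0 means full guide strength); see the comment in main() below\r\n                    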
\"invert_mask\":                 (\"BOOLEAN\",                                   {\"default\": False}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"guide_masked\":                (\"LATENT\", ),\r\n                    \"guide_unmasked\":              (\"LATENT\", ),\r\n                    \"mask\":                        (\"MASK\", ),\r\n                    \"weights_masked\":              (\"SIGMAS\", ),\r\n                    \"weights_unmasked\":            (\"SIGMAS\", ),\r\n                    \"syncs_masked\":              (\"SIGMAS\", ),\r\n                    \"syncs_unmasked\":            (\"SIGMAS\", ),\r\n                    }  \r\n                }\r\n        \r\n    RETURN_TYPES = (\"GUIDES\",)\r\n    RETURN_NAMES = (\"guides\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_extensions\"\r\n    EXPERIMENTAL = True\r\n\r\n    def main(self,\r\n            weight_masked              = 0.0,\r\n            weight_unmasked            = 0.0,\r\n            weight_scheduler_masked    = \"constant\",\r\n            weight_scheduler_unmasked  = \"constant\",\r\n            weight_start_step_masked   = 0,\r\n            weight_start_step_unmasked = 0,\r\n            weight_end_step_masked     = 30,\r\n            weight_end_step_unmasked   = 30,\r\n\r\n            sync_masked                = 0.0,\r\n            sync_unmasked              = 0.0,\r\n            sync_scheduler_masked      = \"constant\",\r\n            sync_scheduler_unmasked    = \"constant\",\r\n            sync_start_step_masked     = 0,\r\n            sync_start_step_unmasked   = 0,\r\n            sync_end_step_masked       = 30,\r\n            sync_end_step_unmasked     = 30,\r\n\r\n            guide_masked               = None,\r\n            guide_unmasked             = None,\r\n            \r\n            weights_masked             = None,\r\n            weights_unmasked           = None,\r\n            syncs_masked               = None,\r\n            syncs_unmasked             = None,\r\n            mask                       = None,\r\n            unmask                     = None,\r\n            invert_mask                = False,\r\n            \r\n            guide_mode                 = \"sync\",\r\n            channelwise_mode           = False,\r\n            projection_mode            = False,\r\n            \r\n            cutoff_masked              = 1.0,\r\n            cutoff_unmasked            = 1.0,\r\n            ):\r\n\r\n        default_dtype = torch.float64\r\n        \r\n        if weight_end_step_masked   == -1:\r\n            weight_end_step_masked   = MAX_STEPS\r\n        if weight_end_step_unmasked == -1:\r\n            weight_end_step_unmasked = MAX_STEPS\r\n            \r\n        if sync_end_step_masked   == -1:\r\n            sync_end_step_masked   = MAX_STEPS\r\n        if sync_end_step_unmasked == -1:\r\n            sync_end_step_unmasked = MAX_STEPS\r\n        \r\n        if guide_masked is None:\r\n            weight_scheduler_masked = \"constant\"\r\n            weight_start_step_masked       = 0\r\n            weight_end_step_masked         = 30\r\n            weight_masked           = 0.0\r\n            weights_masked          = None\r\n            \r\n            sync_scheduler_masked = \"constant\"\r\n            sync_start_step_masked       = 0\r\n            sync_end_step_masked         = 30\r\n            sync_masked           = 0.0\r\n            syncs_masked          = None\r\n      
  \r\n        if guide_unmasked is None:\r\n            weight_scheduler_unmasked = \"constant\"\r\n            weight_start_step_unmasked       = 0\r\n            weight_end_step_unmasked         = 30\r\n            weight_unmasked           = 0.0\r\n            weights_unmasked          = None\r\n        \r\n            sync_scheduler_unmasked = \"constant\"\r\n            sync_start_step_unmasked       = 0\r\n            sync_end_step_unmasked         = 30\r\n            sync_unmasked           = 0.0\r\n            syncs_unmasked          = None\r\n        \r\n        if guide_masked is not None:\r\n            # the raw_x path is intentionally disabled here; always clone the plain samples\r\n            guide_masked   = {'samples': guide_masked['samples'].clone()}\r\n        \r\n        if guide_unmasked is not None:\r\n            guide_unmasked = {'samples': guide_unmasked['samples'].clone()}\r\n        \r\n        if invert_mask and mask is not None:\r\n            mask = 1-mask\r\n                \r\n        if projection_mode:\r\n            guide_mode = guide_mode + \"_projection\"\r\n        \r\n        if channelwise_mode:\r\n            guide_mode = guide_mode + \"_cw\"\r\n            \r\n        if guide_mode == \"unsample_cw\":\r\n            guide_mode = \"unsample\"\r\n        if guide_mode == \"resample_cw\":\r\n            guide_mode = \"resample\"\r\n        \r\n        if weight_scheduler_masked == \"constant\" and weights_masked is None: \r\n            weights_masked = initialize_or_scale(None, weight_masked, weight_end_step_masked).to(default_dtype)\r\n            prepend      = torch.zeros(weight_start_step_masked, dtype=default_dtype, device=weights_masked.device)\r\n            weights_masked = torch.cat((prepend, weights_masked), dim=0)\r\n            weights_masked = F.pad(weights_masked, (0, MAX_STEPS), value=0.0)\r\n        \r\n        if weight_scheduler_unmasked == \"constant\" and weights_unmasked is None: \r\n            weights_unmasked = initialize_or_scale(None, weight_unmasked, weight_end_step_unmasked).to(default_dtype)\r\n            prepend      = torch.zeros(weight_start_step_unmasked, dtype=default_dtype, device=weights_unmasked.device)\r\n            weights_unmasked = torch.cat((prepend, weights_unmasked), dim=0)\r\n            weights_unmasked = F.pad(weights_unmasked, (0, MAX_STEPS), value=0.0)\r\n        \r\n        # Values for the sync scheduler will be inverted in rk_guide_func_beta.py as it's easier to understand:\r\n        # makes it so that a sync weight of 1.0 = full guide strength (which previously was 0.0)\r\n        if sync_scheduler_masked == \"constant\" and syncs_masked is None: \r\n            syncs_masked = initialize_or_scale(None, sync_masked, sync_end_step_masked).to(default_dtype)\r\n            prepend      = torch.zeros(sync_start_step_masked, dtype=default_dtype, device=syncs_masked.device)\r\n            syncs_masked = torch.cat((prepend, syncs_masked), dim=0)\r\n            syncs_masked = F.pad(syncs_masked, (0, MAX_STEPS), value=0.0)\r\n        \r\n        if sync_scheduler_unmasked == \"constant\" and syncs_unmasked is None: \r\n     
       syncs_unmasked = initialize_or_scale(None, sync_unmasked, sync_end_step_unmasked).to(default_dtype)\r\n            prepend      = torch.zeros(sync_start_step_unmasked, dtype=default_dtype, device=syncs_unmasked.device)\r\n            syncs_unmasked = torch.cat((prepend, syncs_unmasked), dim=0)\r\n            syncs_unmasked = F.pad(syncs_unmasked, (0, MAX_STEPS), value=0.0)\r\n        \r\n        guides = {\r\n            \"guide_mode\"                : guide_mode,\r\n\r\n            \"guide_masked\"              : guide_masked,\r\n            \"guide_unmasked\"            : guide_unmasked,\r\n            \"mask\"                      : mask,\r\n            \"unmask\"                    : unmask,\r\n\r\n            \"weight_masked\"             : weight_masked,\r\n            \"weight_unmasked\"           : weight_unmasked,\r\n            \"weight_scheduler_masked\"   : weight_scheduler_masked,\r\n            \"weight_scheduler_unmasked\" : weight_scheduler_unmasked,\r\n            \"start_step_masked\"         : weight_start_step_masked,\r\n            \"start_step_unmasked\"       : weight_start_step_unmasked,\r\n            \"end_step_masked\"           : weight_end_step_masked,\r\n            \"end_step_unmasked\"         : weight_end_step_unmasked,\r\n            \r\n            \"weights_masked\"            : weights_masked,\r\n            \"weights_unmasked\"          : weights_unmasked,\r\n            \r\n            \"weight_masked_sync\"             : sync_masked,\r\n            \"weight_unmasked_sync\"           : sync_unmasked,\r\n            \"weight_scheduler_masked_sync\"   : sync_scheduler_masked,\r\n            \"weight_scheduler_unmasked_sync\" : sync_scheduler_unmasked,\r\n            \"start_step_masked_sync\"         : sync_start_step_masked,\r\n            \"start_step_unmasked_sync\"       : sync_start_step_unmasked,\r\n            \"end_step_masked_sync\"           : sync_end_step_masked,\r\n            \"end_step_unmasked_sync\"         : sync_end_step_unmasked,\r\n            \r\n            \"weights_masked_sync\"            : syncs_masked,\r\n            \"weights_unmasked_sync\"          : syncs_unmasked,\r\n            \r\n            \"cutoff_masked\"             : cutoff_masked,\r\n            \"cutoff_unmasked\"           : cutoff_unmasked\r\n        }\r\n        \r\n        \r\n        return (guides, )\r\n\r\n\r\n\r\n\r\n\r\n\r\nclass ClownGuides_Sync_Advanced:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"weight_masked\":               (\"FLOAT\",                                     {\"default\": 1.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide.\"}),\r\n                    \"weight_unmasked\":             (\"FLOAT\",                                     {\"default\": 1.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide_bkg.\"}),\r\n                    \"weight_scheduler_masked\":     ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"constant\"},),\r\n                    \"weight_scheduler_unmasked\":   ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"constant\"},),\r\n                    \"weight_start_step_masked\":    (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"weight_start_step_unmasked\":  (\"INT\",                        
               {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"weight_end_step_masked\":      (\"INT\",                                       {\"default\": 30,   \"min\": -1,      \"max\": 10000}),\r\n                    \"weight_end_step_unmasked\":    (\"INT\",                                       {\"default\": -1,   \"min\": -1,      \"max\": 10000}),\r\n                    \r\n                    \"sync_masked\":                 (\"FLOAT\",                                     {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide.\"}),\r\n                    \"sync_unmasked\":               (\"FLOAT\",                                     {\"default\": 1.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide_bkg.\"}),\r\n                    \"sync_scheduler_masked\":       ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"constant\"},),\r\n                    \"sync_scheduler_unmasked\":     ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"constant\"},),\r\n                    \"sync_start_step_masked\":      (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"sync_start_step_unmasked\":    (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"sync_end_step_masked\":        (\"INT\",                                       {\"default\": -1,   \"min\": -1,      \"max\": 10000}),\r\n                    \"sync_end_step_unmasked\":      (\"INT\",                                       {\"default\": -1,   \"min\": -1,      \"max\": 10000}),\r\n                    \r\n                    \"drift_x_data\":                (\"FLOAT\",                                     {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide.\"}),\r\n                    \"drift_x_sync\":                (\"FLOAT\",                                     {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide.\"}),\r\n                    \"drift_x_masked\":              (\"FLOAT\",                                     {\"default\": 1.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide.\"}),\r\n                    \"drift_x_unmasked\":            (\"FLOAT\",                                     {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide_bkg.\"}),\r\n                    \"drift_x_scheduler_masked\":    ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"constant\"},),\r\n                    \"drift_x_scheduler_unmasked\":  ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"constant\"},),\r\n                    \"drift_x_start_step_masked\":   (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"drift_x_start_step_unmasked\": (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"drift_x_end_step_masked\":     (\"INT\",                                       {\"default\": -1,   \"min\": -1,      \"max\": 10000}),\r\n     
               \"drift_x_end_step_unmasked\":   (\"INT\",                                       {\"default\": -1,   \"min\": -1,      \"max\": 10000}),\r\n                    \r\n                    \"drift_y_data\":                (\"FLOAT\",                                     {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide.\"}),\r\n                    \"drift_y_sync\":                (\"FLOAT\",                                     {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide.\"}),\r\n                    \"drift_y_guide\":               (\"FLOAT\",                                     {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide.\"}),\r\n                    \"drift_y_masked\":              (\"FLOAT\",                                     {\"default\": 1.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide.\"}),\r\n                    \"drift_y_unmasked\":            (\"FLOAT\",                                     {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide_bkg.\"}),\r\n                    \"drift_y_scheduler_masked\":    ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"constant\"},),\r\n                    \"drift_y_scheduler_unmasked\":  ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"constant\"},),\r\n                    \"drift_y_start_step_masked\":   (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"drift_y_start_step_unmasked\": (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"drift_y_end_step_masked\":     (\"INT\",                                       {\"default\": -1,   \"min\": -1,      \"max\": 10000}),\r\n                    \"drift_y_end_step_unmasked\":   (\"INT\",                                       {\"default\": -1,   \"min\": -1,      \"max\": 10000}),\r\n                    \r\n                    \"lure_x_masked\":               (\"FLOAT\",                                     {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide.\"}),\r\n                    \"lure_x_unmasked\":             (\"FLOAT\",                                     {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide_bkg.\"}),\r\n                    \"lure_x_scheduler_masked\":     ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"constant\"},),\r\n                    \"lure_x_scheduler_unmasked\":   ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"constant\"},),\r\n                    \"lure_x_start_step_masked\":    (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"lure_x_start_step_unmasked\":  (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"lure_x_end_step_masked\":      (\"INT\",                                       {\"default\": -1,   \"min\": -1,      \"max\": 10000}),\r\n    
                \"lure_x_end_step_unmasked\":    (\"INT\",                                       {\"default\": -1,   \"min\": -1,      \"max\": 10000}),\r\n                    \r\n                    \"lure_y_masked\":               (\"FLOAT\",                                     {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide.\"}),\r\n                    \"lure_y_unmasked\":             (\"FLOAT\",                                     {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide_bkg.\"}),\r\n                    \"lure_y_scheduler_masked\":     ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"constant\"},),\r\n                    \"lure_y_scheduler_unmasked\":   ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"constant\"},),\r\n                    \"lure_y_start_step_masked\":    (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"lure_y_start_step_unmasked\":  (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"lure_y_end_step_masked\":      (\"INT\",                                       {\"default\": -1,   \"min\": -1,      \"max\": 10000}),\r\n                    \"lure_y_end_step_unmasked\":    (\"INT\",                                       {\"default\": -1,   \"min\": -1,      \"max\": 10000}),\r\n                    \r\n                    \"lure_iter\":          (\"INT\",                                       {\"default\": 0,   \"min\": 0,      \"max\": 10000}),\r\n                    \"lure_sequence\":      ([\"x -> y\", \"y -> x\", \"xy -> xy\"],                                   {\"default\": \"y -> x\"}),\r\n                    \r\n                    \"invert_mask\":        (\"BOOLEAN\",                                   {\"default\": False}),\r\n                    \"invert_mask_sync\":   (\"BOOLEAN\",                                   {\"default\": False}),\r\n                    \"invert_mask_drift_x\": (\"BOOLEAN\",                                  {\"default\": False}),\r\n                    \"invert_mask_drift_y\": (\"BOOLEAN\",                                  {\"default\": False}),\r\n                    \"invert_mask_lure_x\": (\"BOOLEAN\",                                   {\"default\": False}),\r\n                    \"invert_mask_lure_y\": (\"BOOLEAN\",                                   {\"default\": False}),\r\n\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"guide_masked\":       (\"LATENT\", ),\r\n                    \"guide_unmasked\":     (\"LATENT\", ),\r\n                    \"mask\":               (\"MASK\", ),\r\n                    \"mask_sync\":          (\"MASK\", ),\r\n                    \"mask_drift_x\":        (\"MASK\", ),\r\n                    \"mask_drift_y\":        (\"MASK\", ),\r\n                    \"mask_lure_x\":        (\"MASK\", ),\r\n                    \"mask_lure_y\":        (\"MASK\", ),\r\n                    \"weights_masked\":     (\"SIGMAS\", ),\r\n                    \"weights_unmasked\":   (\"SIGMAS\", ),\r\n                    \"syncs_masked\":       (\"SIGMAS\", ),\r\n                    \"syncs_unmasked\":     (\"SIGMAS\", ),\r\n                    \"drift_xs_masked\":     (\"SIGMAS\", ),\r\n                    
\"drift_xs_unmasked\":   (\"SIGMAS\", ),\r\n                    \"drift_ys_masked\":     (\"SIGMAS\", ),\r\n                    \"drift_ys_unmasked\":   (\"SIGMAS\", ),\r\n                    \"lure_xs_masked\":     (\"SIGMAS\", ),\r\n                    \"lure_xs_unmasked\":   (\"SIGMAS\", ),\r\n                    \"lure_ys_masked\":     (\"SIGMAS\", ),\r\n                    \"lure_ys_unmasked\":   (\"SIGMAS\", ),\r\n                    }  \r\n                }\r\n        \r\n    RETURN_TYPES = (\"GUIDES\",)\r\n    RETURN_NAMES = (\"guides\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_extensions\"\r\n    EXPERIMENTAL = True\r\n\r\n    def main(self,\r\n            weight_masked              = 0.0,\r\n            weight_unmasked            = 0.0,\r\n            weight_scheduler_masked    = \"constant\",\r\n            weight_scheduler_unmasked  = \"constant\",\r\n            weight_start_step_masked   = 0,\r\n            weight_start_step_unmasked = 0,\r\n            weight_end_step_masked     = 30,\r\n            weight_end_step_unmasked   = 30,\r\n\r\n            sync_masked                = 0.0,\r\n            sync_unmasked              = 0.0,\r\n            sync_scheduler_masked      = \"constant\",\r\n            sync_scheduler_unmasked    = \"constant\",\r\n            sync_start_step_masked     = 0,\r\n            sync_start_step_unmasked   = 0,\r\n            sync_end_step_masked       = 30,\r\n            sync_end_step_unmasked     = 30,\r\n            \r\n            drift_x_data = 0.0,\r\n            drift_x_sync = 0.0,\r\n            drift_y_data = 0.0,\r\n            drift_y_sync = 0.0,\r\n            drift_y_guide = 0.0,\r\n\r\n            drift_x_masked                = 0.0,\r\n            drift_x_unmasked              = 0.0,\r\n            drift_x_scheduler_masked      = \"constant\",\r\n            drift_x_scheduler_unmasked    = \"constant\",\r\n            drift_x_start_step_masked     = 0,\r\n            drift_x_start_step_unmasked   = 0,\r\n            drift_x_end_step_masked       = 30,\r\n            drift_x_end_step_unmasked     = 30,\r\n            \r\n            drift_y_masked                = 0.0,\r\n            drift_y_unmasked              = 0.0,\r\n            drift_y_scheduler_masked      = \"constant\",\r\n            drift_y_scheduler_unmasked    = \"constant\",\r\n            drift_y_start_step_masked     = 0,\r\n            drift_y_start_step_unmasked   = 0,\r\n            drift_y_end_step_masked       = 30,\r\n            drift_y_end_step_unmasked     = 30,\r\n\r\n            lure_x_masked                = 0.0,\r\n            lure_x_unmasked              = 0.0,\r\n            lure_x_scheduler_masked      = \"constant\",\r\n            lure_x_scheduler_unmasked    = \"constant\",\r\n            lure_x_start_step_masked     = 0,\r\n            lure_x_start_step_unmasked   = 0,\r\n            lure_x_end_step_masked       = 30,\r\n            lure_x_end_step_unmasked     = 30,\r\n            \r\n            lure_y_masked                = 0.0,\r\n            lure_y_unmasked              = 0.0,\r\n            lure_y_scheduler_masked      = \"constant\",\r\n            lure_y_scheduler_unmasked    = \"constant\",\r\n            lure_y_start_step_masked     = 0,\r\n            lure_y_start_step_unmasked   = 0,\r\n            lure_y_end_step_masked       = 30,\r\n            lure_y_end_step_unmasked     = 30,\r\n\r\n            guide_masked               = None,\r\n            guide_unmasked             = None,\r\n            \r\n  
          weights_masked             = None,\r\n            weights_unmasked           = None,\r\n            syncs_masked               = None,\r\n            syncs_unmasked             = None,\r\n            drift_xs_masked             = None,\r\n            drift_xs_unmasked           = None,\r\n            drift_ys_masked             = None,\r\n            drift_ys_unmasked           = None,\r\n            lure_xs_masked             = None,\r\n            lure_xs_unmasked           = None,\r\n            lure_ys_masked             = None,\r\n            lure_ys_unmasked           = None,\r\n            \r\n            lure_iter                  = 0,\r\n            lure_sequence              = \"x -> y\",\r\n            \r\n            mask                       = None,\r\n            unmask                     = None,\r\n            mask_sync                  = None,\r\n            mask_drift_x                = None,\r\n            mask_drift_y                = None,\r\n            mask_lure_x                = None,\r\n            mask_lure_y                = None,\r\n\r\n            invert_mask                = False,\r\n            invert_mask_sync           = False,\r\n            invert_mask_drift_x         = False,\r\n            invert_mask_drift_y         = False,\r\n            invert_mask_lure_x         = False,\r\n            invert_mask_lure_y         = False,\r\n            \r\n            guide_mode                 = \"sync\",\r\n            channelwise_mode           = False,\r\n            projection_mode            = False,\r\n            \r\n            cutoff_masked              = 1.0,\r\n            cutoff_unmasked            = 1.0,\r\n            ):\r\n\r\n        default_dtype = torch.float64\r\n        \r\n        if weight_end_step_masked   == -1:\r\n            weight_end_step_masked   = MAX_STEPS\r\n        if weight_end_step_unmasked == -1:\r\n            weight_end_step_unmasked = MAX_STEPS\r\n        \r\n        if sync_end_step_masked   == -1:\r\n            sync_end_step_masked   = MAX_STEPS\r\n        if sync_end_step_unmasked == -1:\r\n            sync_end_step_unmasked = MAX_STEPS\r\n        \r\n        if drift_x_end_step_masked   == -1:\r\n            drift_x_end_step_masked   = MAX_STEPS\r\n        if drift_x_end_step_unmasked == -1:\r\n            drift_x_end_step_unmasked = MAX_STEPS\r\n        if drift_y_end_step_masked   == -1:\r\n            drift_y_end_step_masked   = MAX_STEPS\r\n        if drift_y_end_step_unmasked == -1:\r\n            drift_y_end_step_unmasked = MAX_STEPS\r\n        \r\n        if lure_x_end_step_masked   == -1:\r\n            lure_x_end_step_masked   = MAX_STEPS\r\n        if lure_x_end_step_unmasked == -1:\r\n            lure_x_end_step_unmasked = MAX_STEPS\r\n        if lure_y_end_step_masked   == -1:\r\n            lure_y_end_step_masked   = MAX_STEPS\r\n        if lure_y_end_step_unmasked == -1:\r\n            lure_y_end_step_unmasked = MAX_STEPS\r\n        \r\n        \r\n        \r\n        if guide_masked is None:\r\n            weight_scheduler_masked = \"constant\"\r\n            weight_start_step_masked       = 0\r\n            weight_end_step_masked         = 30\r\n            weight_masked           = 0.0\r\n            weights_masked          = None\r\n            \r\n            sync_scheduler_masked = \"constant\"\r\n            sync_start_step_masked       = 0\r\n            sync_end_step_masked         = 30\r\n            sync_masked           = 0.0\r\n            syncs_masked          = None\r\n        
\r\n            drift_x_scheduler_masked = \"constant\"\r\n            drift_x_start_step_masked       = 0\r\n            drift_x_end_step_masked         = 30\r\n            drift_x_masked           = 0.0\r\n            drift_xs_masked          = None\r\n        \r\n            drift_y_scheduler_masked = \"constant\"\r\n            drift_y_start_step_masked       = 0\r\n            drift_y_end_step_masked         = 30\r\n            drift_y_masked           = 0.0\r\n            drift_ys_masked          = None\r\n        \r\n            lure_x_scheduler_masked = \"constant\"\r\n            lure_x_start_step_masked       = 0\r\n            lure_x_end_step_masked         = 30\r\n            lure_x_masked           = 0.0\r\n            lure_xs_masked          = None\r\n        \r\n            lure_y_scheduler_masked = \"constant\"\r\n            lure_y_start_step_masked       = 0\r\n            lure_y_end_step_masked         = 30\r\n            lure_y_masked           = 0.0\r\n            lure_ys_masked          = None\r\n        \r\n        if guide_unmasked is None:\r\n            weight_scheduler_unmasked = \"constant\"\r\n            weight_start_step_unmasked       = 0\r\n            weight_end_step_unmasked         = 30\r\n            weight_unmasked           = 0.0\r\n            weights_unmasked          = None\r\n        \r\n            sync_scheduler_unmasked = \"constant\"\r\n            sync_start_step_unmasked       = 0\r\n            sync_end_step_unmasked         = 30\r\n            sync_unmasked           = 0.0\r\n            syncs_unmasked          = None\r\n        \r\n            drift_x_scheduler_unmasked = \"constant\"\r\n            drift_x_start_step_unmasked       = 0\r\n            drift_x_end_step_unmasked         = 30\r\n            drift_x_unmasked           = 0.0\r\n            drift_xs_unmasked          = None\r\n        \r\n            drift_y_scheduler_unmasked = \"constant\"\r\n            drift_y_start_step_unmasked       = 0\r\n            drift_y_end_step_unmasked         = 30\r\n            drift_y_unmasked           = 0.0\r\n            drift_ys_unmasked          = None\r\n        \r\n            lure_x_scheduler_unmasked = \"constant\"\r\n            lure_x_start_step_unmasked       = 0\r\n            lure_x_end_step_unmasked         = 30\r\n            lure_x_unmasked           = 0.0\r\n            lure_xs_unmasked          = None\r\n        \r\n            lure_y_scheduler_unmasked = \"constant\"\r\n            lure_y_start_step_unmasked       = 0\r\n            lure_y_end_step_unmasked         = 30\r\n            lure_y_unmasked           = 0.0\r\n            lure_ys_unmasked          = None\r\n        \r\n        \r\n        if guide_masked is not None:\r\n            raw_x = guide_masked.get('state_info', {}).get('raw_x', None)\r\n            if False: #raw_x is not None:\r\n                guide_masked   = {'samples': guide_masked['state_info']['raw_x'].clone()}\r\n            else:\r\n                guide_masked   = {'samples': guide_masked['samples'].clone()}\r\n        \r\n        if guide_unmasked is not None:\r\n            raw_x = guide_unmasked.get('state_info', {}).get('raw_x', None)\r\n            if False: #raw_x is not None:\r\n                guide_unmasked = {'samples': guide_unmasked['state_info']['raw_x'].clone()}\r\n            else:\r\n                guide_unmasked = {'samples': guide_unmasked['samples'].clone()}\r\n        \r\n        if invert_mask and mask is not None:\r\n            mask = 1-mask\r\n        if invert_mask_sync 
and mask_sync is not None:\r\n
            mask_sync = 1-mask_sync\r\n
        if invert_mask_drift_x and mask_drift_x is not None:\r\n
            mask_drift_x = 1-mask_drift_x\r\n
        if invert_mask_drift_y and mask_drift_y is not None:\r\n
            mask_drift_y = 1-mask_drift_y\r\n
        if invert_mask_lure_x and mask_lure_x is not None:\r\n
            mask_lure_x = 1-mask_lure_x\r\n
        if invert_mask_lure_y and mask_lure_y is not None:\r\n
            mask_lure_y = 1-mask_lure_y\r\n
        \r\n
        if projection_mode:\r\n
            guide_mode = guide_mode + "_projection"\r\n
        \r\n
        if channelwise_mode:\r\n
            guide_mode = guide_mode + "_cw"\r\n
            \r\n
        if guide_mode == "unsample_cw":\r\n
            guide_mode = "unsample"\r\n
        if guide_mode == "resample_cw":\r\n
            guide_mode = "resample"\r\n
        \r\n
        def constant_schedule(scheduler, weights, weight, start_step, end_step):\r\n
            # Build the default per-step schedule when no explicit SIGMAS were connected:\r\n
            # zeros before start_step, `weight` through end_step, zero-padded out to MAX_STEPS.\r\n
            if scheduler != "constant" or weights is not None:\r\n
                return weights\r\n
            weights = initialize_or_scale(None, weight, end_step).to(default_dtype)\r\n
            prepend = torch.zeros(start_step, dtype=default_dtype, device=weights.device)\r\n
            weights = torch.cat((prepend, weights), dim=0)\r\n
            return F.pad(weights, (0, MAX_STEPS), value=0.0)\r\n
        \r\n
        weights_masked    = constant_schedule(weight_scheduler_masked,    weights_masked,    weight_masked,    weight_start_step_masked,    weight_end_step_masked)\r\n
        weights_unmasked  = constant_schedule(weight_scheduler_unmasked,  weights_unmasked,  weight_unmasked,  weight_start_step_unmasked,  weight_end_step_unmasked)\r\n
        \r\n
        # Values for the sync scheduler will be inverted in rk_guide_func_beta.py as it's easier to understand:\r\n
        # makes it so that a sync weight of 1.0 = full guide strength (which previously was 0.0)\r\n
        syncs_masked      = constant_schedule(sync_scheduler_masked,      syncs_masked,      sync_masked,      sync_start_step_masked,      sync_end_step_masked)\r\n
        syncs_unmasked    = constant_schedule(sync_scheduler_unmasked,    syncs_unmasked,    sync_unmasked,    sync_start_step_unmasked,    sync_end_step_unmasked)\r\n
        \r\n
        drift_xs_masked   = constant_schedule(drift_x_scheduler_masked,   drift_xs_masked,   drift_x_masked,   drift_x_start_step_masked,   drift_x_end_step_masked)\r\n
        drift_xs_unmasked = constant_schedule(drift_x_scheduler_unmasked, drift_xs_unmasked, drift_x_unmasked, drift_x_start_step_unmasked, drift_x_end_step_unmasked)\r\n
        drift_ys_masked   = constant_schedule(drift_y_scheduler_masked,   drift_ys_masked,   drift_y_masked,   drift_y_start_step_masked,   drift_y_end_step_masked)\r\n
        drift_ys_unmasked = constant_schedule(drift_y_scheduler_unmasked, drift_ys_unmasked, drift_y_unmasked, drift_y_start_step_unmasked, drift_y_end_step_unmasked)\r\n
        \r\n
        lure_xs_masked    = constant_schedule(lure_x_scheduler_masked,    lure_xs_masked,    lure_x_masked,    lure_x_start_step_masked,    lure_x_end_step_masked)\r\n
        lure_xs_unmasked  = constant_schedule(lure_x_scheduler_unmasked,  lure_xs_unmasked,  lure_x_unmasked,  lure_x_start_step_unmasked,  lure_x_end_step_unmasked)\r\n
        lure_ys_masked    = constant_schedule(lure_y_scheduler_masked,    lure_ys_masked,    lure_y_masked,    lure_y_start_step_masked,    lure_y_end_step_masked)\r\n
        lure_ys_unmasked  = constant_schedule(lure_y_scheduler_unmasked,  lure_ys_unmasked,  lure_y_unmasked,  lure_y_start_step_unmasked,  lure_y_end_step_unmasked)\r\n
        \r\n
        guides = {\r\n
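            # Bundle everything for downstream use; sync/drift/lure settings reuse the weight_*/start_step_*/end_step_* key naming with a mode suffix.\r\n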
      \"guide_mode\"                        : guide_mode,\r\n        \r\n            \"guide_masked\"                      : guide_masked,\r\n            \"guide_unmasked\"                    : guide_unmasked,\r\n            \"mask\"                              : mask,\r\n            \"unmask\"                            : unmask,\r\n            \"mask_sync\"                         : mask_sync,\r\n            \"mask_lure_x\"                       : mask_lure_x,\r\n            \"mask_lure_y\"                       : mask_lure_y,\r\n        \r\n            \"weight_masked\"                     : weight_masked,\r\n            \"weight_unmasked\"                   : weight_unmasked,\r\n            \"weight_scheduler_masked\"           : weight_scheduler_masked,\r\n            \"weight_scheduler_unmasked\"         : weight_scheduler_unmasked,\r\n            \"start_step_masked\"                 : weight_start_step_masked,\r\n            \"start_step_unmasked\"               : weight_start_step_unmasked,\r\n            \"end_step_masked\"                   : weight_end_step_masked,\r\n            \"end_step_unmasked\"                 : weight_end_step_unmasked,\r\n            \r\n            \"weights_masked\"                    : weights_masked,\r\n            \"weights_unmasked\"                  : weights_unmasked,\r\n            \r\n            \"weight_masked_sync\"                : sync_masked,\r\n            \"weight_unmasked_sync\"              : sync_unmasked,\r\n            \"weight_scheduler_masked_sync\"      : sync_scheduler_masked,\r\n            \"weight_scheduler_unmasked_sync\"    : sync_scheduler_unmasked,\r\n            \"start_step_masked_sync\"            : sync_start_step_masked,\r\n            \"start_step_unmasked_sync\"          : sync_start_step_unmasked,\r\n            \"end_step_masked_sync\"              : sync_end_step_masked,\r\n            \"end_step_unmasked_sync\"            : sync_end_step_unmasked,\r\n            \r\n            \"weights_masked_sync\"               : syncs_masked,\r\n            \"weights_unmasked_sync\"             : syncs_unmasked,\r\n            \r\n            \"drift_x_data\"                      : drift_x_data,\r\n            \"drift_x_sync\"                      : drift_x_sync,\r\n            \"drift_y_data\"                      : drift_y_data,\r\n            \"drift_y_sync\"                      : drift_y_sync,\r\n            \"drift_y_guide\"                     : drift_y_guide,\r\n            \r\n            \"weight_masked_drift_x\"             : drift_x_masked,\r\n            \"weight_unmasked_drift_x\"           : drift_x_unmasked,\r\n            \"weight_scheduler_masked_drift_x\"   : drift_x_scheduler_masked,\r\n            \"weight_scheduler_unmasked_drift_x\" : drift_x_scheduler_unmasked,\r\n            \"start_step_masked_drift_x\"         : drift_x_start_step_masked,\r\n            \"start_step_unmasked_drift_x\"       : drift_x_start_step_unmasked,\r\n            \"end_step_masked_drift_x\"           : drift_x_end_step_masked,\r\n            \"end_step_unmasked_drift_x\"         : drift_x_end_step_unmasked,\r\n            \r\n            \"weights_masked_drift_x\"            : drift_xs_masked,\r\n            \"weights_unmasked_drift_x\"          : drift_xs_unmasked,\r\n            \r\n            \r\n            \"weight_masked_drift_y\"             : drift_y_masked,\r\n            \"weight_unmasked_drift_y\"           : drift_y_unmasked,\r\n            \"weight_scheduler_masked_drift_y\"   : 
drift_y_scheduler_masked,\r\n            \"weight_scheduler_unmasked_drift_y\" : drift_y_scheduler_unmasked,\r\n            \"start_step_masked_drift_y\"         : drift_y_start_step_masked,\r\n            \"start_step_unmasked_drift_y\"       : drift_y_start_step_unmasked,\r\n            \"end_step_masked_drift_y\"           : drift_y_end_step_masked,\r\n            \"end_step_unmasked_drift_y\"         : drift_y_end_step_unmasked,\r\n            \r\n            \"weights_masked_drift_y\"            : drift_ys_masked,\r\n            \"weights_unmasked_drift_y\"          : drift_ys_unmasked,\r\n            \r\n            \"weight_masked_lure_x\"              : lure_x_masked,\r\n            \"weight_unmasked_lure_x\"            : lure_x_unmasked,\r\n            \"weight_scheduler_masked_lure_x\"    : lure_x_scheduler_masked,\r\n            \"weight_scheduler_unmasked_lure_x\"  : lure_x_scheduler_unmasked,\r\n            \"start_step_masked_lure_x\"          : lure_x_start_step_masked,\r\n            \"start_step_unmasked_lure_x\"        : lure_x_start_step_unmasked,\r\n            \"end_step_masked_lure_x\"            : lure_x_end_step_masked,\r\n            \"end_step_unmasked_lure_x\"          : lure_x_end_step_unmasked,\r\n            \r\n            \"weights_masked_lure_x\"             : lure_xs_masked,\r\n            \"weights_unmasked_lure_x\"           : lure_xs_unmasked,\r\n            \r\n            \r\n            \"weight_masked_lure_y\"              : lure_y_masked,\r\n            \"weight_unmasked_lure_y\"            : lure_y_unmasked,\r\n            \"weight_scheduler_masked_lure_y\"    : lure_y_scheduler_masked,\r\n            \"weight_scheduler_unmasked_lure_y\"  : lure_y_scheduler_unmasked,\r\n            \"start_step_masked_lure_y\"          : lure_y_start_step_masked,\r\n            \"start_step_unmasked_lure_y\"        : lure_y_start_step_unmasked,\r\n            \"end_step_masked_lure_y\"            : lure_y_end_step_masked,\r\n            \"end_step_unmasked_lure_y\"          : lure_y_end_step_unmasked,\r\n            \r\n            \"weights_masked_lure_y\"             : lure_ys_masked,\r\n            \"weights_unmasked_lure_y\"           : lure_ys_unmasked,\r\n            \r\n            \"sync_lure_iter\"                    : lure_iter,\r\n            \"sync_lure_sequence\"                : lure_sequence,\r\n\r\n            \"cutoff_masked\"                     : cutoff_masked,\r\n            \"cutoff_unmasked\"                   : cutoff_unmasked\r\n        }\r\n        \r\n        \r\n        return (guides, )\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nclass ClownGuide_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"guide_mode\":           (GUIDE_MODE_NAMES_BETA_SIMPLE,       {\"default\": 'epsilon',                                                      \"tooltip\": \"Recommended: epsilon or mean/mean_std with sampler_mode = standard, and unsample/resample with sampler_mode = unsample/resample. Epsilon_dynamic_mean, etc. are only used with two latent inputs and a mask. Blend/hard_light/mean/mean_std etc. 
require low strengths, start with 0.01-0.02.\"}),\r\n                    \"channelwise_mode\":     (\"BOOLEAN\",                                   {\"default\": True}),\r\n                    \"projection_mode\":      (\"BOOLEAN\",                                   {\"default\": True}),\r\n                    \"weight\":               (\"FLOAT\",                                     {\"default\": 0.75, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide.\"}),\r\n                    \"cutoff\":               (\"FLOAT\",                                     {\"default\": 1.0,  \"min\": 0.0,    \"max\": 1.0,   \"step\":0.01, \"round\": False, \"tooltip\": \"Disables the guide for the next step when the denoised image is similar to the guide. Higher values will strengthen the effect.\"}),\r\n                    \"weight_scheduler\":     ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"beta57\"},),\r\n                    \"start_step\":           (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"end_step\":             (\"INT\",                                       {\"default\": 15,   \"min\": -1,      \"max\": 10000}),\r\n                    \"invert_mask\":          (\"BOOLEAN\",                                   {\"default\": False}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"guide\":                (\"LATENT\", ),\r\n                    \"mask\":                 (\"MASK\", ),\r\n                    \"weights\":              (\"SIGMAS\", ),\r\n                    }  \r\n                }\r\n        \r\n    RETURN_TYPES = (\"GUIDES\",)\r\n    RETURN_NAMES = (\"guides\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_extensions\"\r\n\r\n    def main(self,\r\n            weight_scheduler          = \"constant\",\r\n            weight_scheduler_unmasked = \"constant\",\r\n            start_step                = 0,\r\n            start_step_unmasked       = 0,\r\n            end_step                  = 30,\r\n            end_step_unmasked         = 30,\r\n            cutoff                    = 1.0,\r\n            cutoff_unmasked           = 1.0,\r\n            guide                     = None,\r\n            guide_unmasked            = None,\r\n            weight                    = 0.0,\r\n            weight_unmasked           = 0.0,\r\n\r\n            guide_mode                = \"epsilon\",\r\n            channelwise_mode          = False,\r\n            projection_mode           = False,\r\n            weights                   = None,\r\n            weights_unmasked          = None,\r\n            mask                      = None,\r\n            unmask                    = None,\r\n            invert_mask               = False,\r\n            ):\r\n        \r\n        CG = ClownGuides_Beta()\r\n        \r\n        mask = 1-mask if mask is not None else None\r\n        \r\n        if end_step == -1:\r\n            end_step = MAX_STEPS\r\n        \r\n        if guide is not None:\r\n            raw_x = guide.get('state_info', {}).get('raw_x', None)\r\n            \r\n            if False: # raw_x is not None:\r\n                guide          = {'samples': guide['state_info']['raw_x'].clone()}\r\n            else:\r\n                guide          = {'samples': guide['samples'].clone()}\r\n                \r\n        if guide_unmasked is not None:\r\n 
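           # Unwrap the LATENT to a bare {'samples': tensor} copy (the state_info raw_x path is currently disabled).\r\n 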
           raw_x = guide_unmasked.get('state_info', {}).get('raw_x', None)\r\n            if False: #raw_x is not None:\r\n                guide_unmasked = {'samples': guide_unmasked['state_info']['raw_x'].clone()}\r\n            else:\r\n                guide_unmasked = {'samples': guide_unmasked['samples'].clone()}\r\n        \r\n        guides, = CG.main(\r\n            weight_scheduler_masked   = weight_scheduler,\r\n            weight_scheduler_unmasked = weight_scheduler_unmasked,\r\n            start_step_masked         = start_step,\r\n            start_step_unmasked       = start_step_unmasked,\r\n            end_step_masked           = end_step,\r\n            end_step_unmasked         = end_step_unmasked,\r\n            cutoff_masked             = cutoff,\r\n            cutoff_unmasked           = cutoff_unmasked,\r\n            guide_masked              = guide,\r\n            guide_unmasked            = guide_unmasked,\r\n            weight_masked             = weight,\r\n            weight_unmasked           = weight_unmasked,\r\n\r\n            guide_mode                = guide_mode,\r\n            channelwise_mode          = channelwise_mode,\r\n            projection_mode           = projection_mode,\r\n            weights_masked            = weights,\r\n            weights_unmasked          = weights_unmasked,\r\n            mask                      = mask,\r\n            unmask                    = unmask,\r\n            invert_mask               = invert_mask\r\n        )\r\n\r\n        return (guides, )\r\n\r\n\r\n        #return (guides[0], )\r\n\r\n\r\n\r\n\r\nclass ClownGuides_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"guide_mode\":                  (GUIDE_MODE_NAMES_BETA_SIMPLE,                {\"default\": 'epsilon',                                                      \"tooltip\": \"Recommended: epsilon or mean/mean_std with sampler_mode = standard, and unsample/resample with sampler_mode = unsample/resample. Epsilon_dynamic_mean, etc. are only used with two latent inputs and a mask. Blend/hard_light/mean/mean_std etc. require low strengths, start with 0.01-0.02.\"}),\r\n                    \"channelwise_mode\":            (\"BOOLEAN\",                                   {\"default\": True}),\r\n                    \"projection_mode\":             (\"BOOLEAN\",                                   {\"default\": True}),\r\n                    \"weight_masked\":               (\"FLOAT\",                                     {\"default\": 0.75, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide.\"}),\r\n                    \"weight_unmasked\":             (\"FLOAT\",                                     {\"default\": 0.75, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide_bkg.\"}),\r\n                    \"cutoff_masked\":               (\"FLOAT\",                                     {\"default\": 1.0,  \"min\": 0.0,    \"max\": 1.0,   \"step\":0.01, \"round\": False, \"tooltip\": \"Disables the guide for the next step when the denoised image is similar to the guide. 
Higher values will strengthen the effect.\"}),\r\n                    \"cutoff_unmasked\":             (\"FLOAT\",                                     {\"default\": 1.0,  \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Disables the guide for the next step when the denoised image is similar to the guide. Higher values will strengthen the effect.\"}),\r\n                    \"weight_scheduler_masked\":     ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"beta57\"},),\r\n                    \"weight_scheduler_unmasked\":   ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"constant\"},),\r\n                    \"start_step_masked\":           (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"start_step_unmasked\":         (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"end_step_masked\":             (\"INT\",                                       {\"default\": 15,   \"min\": -1,      \"max\": 10000}),\r\n                    \"end_step_unmasked\":           (\"INT\",                                       {\"default\": 15,   \"min\": -1,      \"max\": 10000}),\r\n                    \"invert_mask\":                 (\"BOOLEAN\",                                   {\"default\": False}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"guide_masked\":                (\"LATENT\", ),\r\n                    \"guide_unmasked\":              (\"LATENT\", ),\r\n                    \"mask\":                        (\"MASK\", ),\r\n                    \"weights_masked\":              (\"SIGMAS\", ),\r\n                    \"weights_unmasked\":            (\"SIGMAS\", ),\r\n                    }  \r\n                }\r\n        \r\n    RETURN_TYPES = (\"GUIDES\",)\r\n    RETURN_NAMES = (\"guides\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_extensions\"\r\n\r\n    def main(self,\r\n            weight_scheduler_masked   = \"constant\",\r\n            weight_scheduler_unmasked = \"constant\",\r\n            start_step_masked         = 0,\r\n            start_step_unmasked       = 0,\r\n            end_step_masked           = 30,\r\n            end_step_unmasked         = 30,\r\n            cutoff_masked             = 1.0,\r\n            cutoff_unmasked           = 1.0,\r\n            guide_masked              = None,\r\n            guide_unmasked            = None,\r\n            weight_masked             = 0.0,\r\n            weight_unmasked           = 0.0,\r\n\r\n            guide_mode                = \"epsilon\",\r\n            channelwise_mode          = False,\r\n            projection_mode           = False,\r\n            weights_masked            = None,\r\n            weights_unmasked          = None,\r\n            mask                      = None,\r\n            unmask                    = None,\r\n            invert_mask               = False,\r\n            ):\r\n\r\n        default_dtype = torch.float64\r\n        \r\n        if end_step_masked   == -1:\r\n            end_step_masked   = MAX_STEPS\r\n        if end_step_unmasked == -1:\r\n            end_step_unmasked = MAX_STEPS\r\n        \r\n        if guide_masked is None:\r\n            weight_scheduler_masked = \"constant\"\r\n            start_step_masked       = 0\r\n            end_step_masked         = 30\r\n            
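# The cutoff and weights are cleared as well so a disconnected guide cannot influence sampling.\r\n            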
cutoff_masked           = 1.0\r\n            guide_masked            = None\r\n            weight_masked           = 0.0\r\n            weights_masked          = None\r\n            #mask                    = None\r\n        \r\n        if guide_unmasked is None:\r\n            weight_scheduler_unmasked = \"constant\"\r\n            start_step_unmasked       = 0\r\n            end_step_unmasked         = 30\r\n            cutoff_unmasked           = 1.0\r\n            guide_unmasked            = None\r\n            weight_unmasked           = 0.0\r\n            weights_unmasked          = None\r\n            #unmask                    = None\r\n        \r\n        if guide_masked is not None:\r\n            raw_x = guide_masked.get('state_info', {}).get('raw_x', None)\r\n            if False: #raw_x is not None:\r\n                guide_masked   = {'samples': guide_masked['state_info']['raw_x'].clone()}\r\n            else:\r\n                guide_masked   = {'samples': guide_masked['samples'].clone()}\r\n        \r\n        if guide_unmasked is not None:\r\n            raw_x = guide_unmasked.get('state_info', {}).get('raw_x', None)\r\n            if False: #raw_x is not None:\r\n                guide_unmasked = {'samples': guide_unmasked['state_info']['raw_x'].clone()}\r\n            else:\r\n                guide_unmasked = {'samples': guide_unmasked['samples'].clone()}\r\n        \r\n        if invert_mask and mask is not None:\r\n            mask = 1-mask\r\n                \r\n        if projection_mode:\r\n            guide_mode = guide_mode + \"_projection\"\r\n        \r\n        if channelwise_mode:\r\n            guide_mode = guide_mode + \"_cw\"\r\n            \r\n        if guide_mode == \"unsample_cw\":\r\n            guide_mode = \"unsample\"\r\n        if guide_mode == \"resample_cw\":\r\n            guide_mode = \"resample\"\r\n        \r\n        if weight_scheduler_masked == \"constant\" and weights_masked == None: \r\n            weights_masked = initialize_or_scale(None, weight_masked, end_step_masked).to(default_dtype)\r\n            prepend      = torch.zeros(start_step_masked, dtype=default_dtype, device=weights_masked.device)\r\n            weights_masked = torch.cat((prepend, weights_masked), dim=0)\r\n            weights_masked = F.pad(weights_masked, (0, MAX_STEPS), value=0.0)\r\n        \r\n        if weight_scheduler_unmasked == \"constant\" and weights_unmasked == None: \r\n            weights_unmasked = initialize_or_scale(None, weight_unmasked, end_step_unmasked).to(default_dtype)\r\n            prepend      = torch.zeros(start_step_unmasked, dtype=default_dtype, device=weights_unmasked.device)\r\n            weights_unmasked = torch.cat((prepend, weights_unmasked), dim=0)\r\n            weights_unmasked = F.pad(weights_unmasked, (0, MAX_STEPS), value=0.0)\r\n        \r\n        guides = {\r\n            \"guide_mode\"                : guide_mode,\r\n            \"weight_masked\"             : weight_masked,\r\n            \"weight_unmasked\"           : weight_unmasked,\r\n            \"weights_masked\"            : weights_masked,\r\n            \"weights_unmasked\"          : weights_unmasked,\r\n            \"guide_masked\"              : guide_masked,\r\n            \"guide_unmasked\"            : guide_unmasked,\r\n            \"mask\"                      : mask,\r\n            \"unmask\"                    : unmask,\r\n\r\n            \"weight_scheduler_masked\"   : weight_scheduler_masked,\r\n            \"weight_scheduler_unmasked\" : 
weight_scheduler_unmasked,\r\n            \"start_step_masked\"         : start_step_masked,\r\n            \"start_step_unmasked\"       : start_step_unmasked,\r\n            \"end_step_masked\"           : end_step_masked,\r\n            \"end_step_unmasked\"         : end_step_unmasked,\r\n            \"cutoff_masked\"             : cutoff_masked,\r\n            \"cutoff_unmasked\"           : cutoff_unmasked\r\n        }\r\n        \r\n        \r\n        return (guides, )\r\n\r\n\r\n\r\n\r\nclass ClownGuidesAB_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"guide_mode\":         (GUIDE_MODE_NAMES_BETA_SIMPLE,                {\"default\": 'epsilon',                                                      \"tooltip\": \"Recommended: epsilon or mean/mean_std with sampler_mode = standard, and unsample/resample with sampler_mode = unsample/resample. Epsilon_dynamic_mean, etc. are only used with two latent inputs and a mask. Blend/hard_light/mean/mean_std etc. require low strengths, start with 0.01-0.02.\"}),\r\n                    \"channelwise_mode\":   (\"BOOLEAN\",                                   {\"default\": False}),\r\n                    \"projection_mode\":    (\"BOOLEAN\",                                   {\"default\": False}),\r\n                    \"weight_A\":           (\"FLOAT\",                                     {\"default\": 0.75, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide.\"}),\r\n                    \"weight_B\":           (\"FLOAT\",                                     {\"default\": 0.75, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide_bkg.\"}),\r\n                    \"cutoff_A\":           (\"FLOAT\",                                     {\"default\": 1.0,  \"min\": 0.0,    \"max\": 1.0,   \"step\":0.01, \"round\": False, \"tooltip\": \"Disables the guide for the next step when the denoised image is similar to the guide. Higher values will strengthen the effect.\"}),\r\n                    \"cutoff_B\":           (\"FLOAT\",                                     {\"default\": 1.0,  \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Disables the guide for the next step when the denoised image is similar to the guide. 
Higher values will strengthen the effect.\"}),\r\n                    \"weight_scheduler_A\": ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"beta57\"},),\r\n                    \"weight_scheduler_B\": ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"constant\"},),\r\n                    \"start_step_A\":       (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"start_step_B\":       (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"end_step_A\":         (\"INT\",                                       {\"default\": 15,   \"min\": -1,      \"max\": 10000}),\r\n                    \"end_step_B\":         (\"INT\",                                       {\"default\": 15,   \"min\": -1,      \"max\": 10000}),\r\n                    \"invert_masks\":       (\"BOOLEAN\",                                   {\"default\": False}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"guide_A\":            (\"LATENT\", ),\r\n                    \"guide_B\":            (\"LATENT\", ),\r\n                    \"mask_A\":             (\"MASK\", ),\r\n                    \"mask_B\":             (\"MASK\", ),\r\n                    \"weights_A\":          (\"SIGMAS\", ),\r\n                    \"weights_B\":          (\"SIGMAS\", ),\r\n                    }  \r\n                }\r\n        \r\n    RETURN_TYPES = (\"GUIDES\",)\r\n    RETURN_NAMES = (\"guides\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_extensions\"\r\n\r\n    def main(self,\r\n            weight_scheduler_A = \"constant\",\r\n            weight_scheduler_B = \"constant\",\r\n            start_step_A       = 0,\r\n            start_step_B       = 0,\r\n            end_step_A         = 30,\r\n            end_step_B         = 30,\r\n            cutoff_A           = 1.0,\r\n            cutoff_B           = 1.0,\r\n            guide_A            = None,\r\n            guide_B            = None,\r\n            weight_A           = 0.0,\r\n            weight_B           = 0.0,\r\n\r\n            guide_mode         = \"epsilon\",\r\n            channelwise_mode   = False,\r\n            projection_mode    = False,\r\n            weights_A          = None,\r\n            weights_B          = None,\r\n            mask_A             = None,\r\n            mask_B             = None,\r\n            invert_masks       : bool = False,\r\n            ):\r\n        \r\n        default_dtype = torch.float64\r\n        \r\n        if end_step_A == -1:\r\n            end_step_A = MAX_STEPS\r\n        if end_step_B == -1:\r\n            end_step_B = MAX_STEPS\r\n        \r\n        if guide_A is not None:\r\n            raw_x = guide_A.get('state_info', {}).get('raw_x', None)\r\n            if False: #raw_x is not None:\r\n                guide_A          = {'samples': guide_A['state_info']['raw_x'].clone()}\r\n            else:\r\n                guide_A          = {'samples': guide_A['samples'].clone()}\r\n                \r\n        if guide_B is not None:\r\n            raw_x = guide_B.get('state_info', {}).get('raw_x', None)\r\n            if False: #raw_x is not None:\r\n                guide_B = {'samples': guide_B['state_info']['raw_x'].clone()}\r\n            else:\r\n                guide_B = {'samples': guide_B['samples'].clone()}\r\n        \r\n        if guide_A is None:\r\n       
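     # Only guide_B was connected: promote it (and its mask) to the masked slot.\r\n       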
     guide_A  = guide_B\r\n            guide_B  = None\r\n            mask_A   = mask_B\r\n            mask_B   = None\r\n            weight_B = 0.0\r\n            \r\n        if guide_B is None:\r\n            weight_B = 0.0\r\n            \r\n        if mask_A is None and mask_B is not None:\r\n            mask_A = 1-mask_B\r\n                        \r\n        if projection_mode:\r\n            guide_mode = guide_mode + \"_projection\"\r\n        \r\n        if channelwise_mode:\r\n            guide_mode = guide_mode + \"_cw\"\r\n            \r\n        if guide_mode == \"unsample_cw\":\r\n            guide_mode = \"unsample\"\r\n        if guide_mode == \"resample_cw\":\r\n            guide_mode = \"resample\"\r\n        \r\n        if weight_scheduler_A == \"constant\" and weights_A == None: \r\n            weights_A = initialize_or_scale(None, weight_A, end_step_A).to(default_dtype)\r\n            prepend      = torch.zeros(start_step_A, dtype=default_dtype, device=weights_A.device)\r\n            weights_A = torch.cat((prepend, weights_A), dim=0)\r\n            weights_A = F.pad(weights_A, (0, MAX_STEPS), value=0.0)\r\n        \r\n        if weight_scheduler_B == \"constant\" and weights_B == None: \r\n            weights_B = initialize_or_scale(None, weight_B, end_step_B).to(default_dtype)\r\n            prepend      = torch.zeros(start_step_B, dtype=default_dtype, device=weights_B.device)\r\n            weights_B = torch.cat((prepend, weights_B), dim=0)\r\n            weights_B = F.pad(weights_B, (0, MAX_STEPS), value=0.0)\r\n            \r\n        if invert_masks:\r\n            mask_A = 1-mask_A if mask_A is not None else None\r\n            mask_B = 1-mask_B if mask_B is not None else None\r\n    \r\n        guides = {\r\n            \"guide_mode\"                : guide_mode,\r\n            \"weight_masked\"             : weight_A,\r\n            \"weight_unmasked\"           : weight_B,\r\n            \"weights_masked\"            : weights_A,\r\n            \"weights_unmasked\"          : weights_B,\r\n            \"guide_masked\"              : guide_A,\r\n            \"guide_unmasked\"            : guide_B,\r\n            \"mask\"                      : mask_A,\r\n            \"unmask\"                    : mask_B,\r\n\r\n            \"weight_scheduler_masked\"   : weight_scheduler_A,\r\n            \"weight_scheduler_unmasked\" : weight_scheduler_B,\r\n            \"start_step_masked\"         : start_step_A,\r\n            \"start_step_unmasked\"       : start_step_B,\r\n            \"end_step_masked\"           : end_step_A,\r\n            \"end_step_unmasked\"         : end_step_B,\r\n            \"cutoff_masked\"             : cutoff_A,\r\n            \"cutoff_unmasked\"           : cutoff_B\r\n        }\r\n        \r\n        return (guides, )\r\n    \r\n\r\n\r\nclass ClownOptions_Combine:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"options\": (\"OPTIONS\",),\r\n            },\r\n        }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION = \"main\"\r\n    CATEGORY = \"RES4LYF/sampler_options\"\r\n\r\n    def main(self, options, **kwargs):\r\n        options_mgr = OptionsManager(options, **kwargs)\r\n        return (options_mgr.as_dict(),)\r\n\r\n\r\n\r\nclass ClownOptions_Frameweights:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"config_name\": (FRAME_WEIGHTS_CONFIG_NAMES, 
{\"default\": \"frame_weights\", \"tooltip\": \"Apply to specific type of per-frame weights.\"}),\r\n                \"dynamics\": (FRAME_WEIGHTS_DYNAMICS_NAMES, {\"default\": \"ease_out\", \"tooltip\": \"The function type used for the dynamic period. constant: no change, linear: steady change, ease_out: starts fast, ease_in: starts slow\"}),\r\n                \"schedule\": (FRAME_WEIGHTS_SCHEDULE_NAMES, {\"default\": \"moderate_early\", \"tooltip\": \"fast_early: fast change starts immediately, slow_late: slow change starts later\"}),\r\n                \"scale\": (\"FLOAT\", {\"default\": 0.5, \"min\": 0.0, \"max\": 1.0, \"step\": 0.01, \"tooltip\": \"The amount of change over the course of the frame weights. 1.0 means that the guides have no influence by the end.\"}),\r\n                \"reverse\": (\"BOOLEAN\", {\"default\": False, \"tooltip\": \"Reverse the frame weights\"}),\r\n            },\r\n            \"optional\": {\r\n                \"frame_weights\": (\"SIGMAS\", {\"tooltip\": \"Overrides all other settings EXCEPT reverse.\"}),\r\n                \"custom_string\": (\"STRING\", {\"tooltip\": \"Overrides all other settings EXCEPT reverse.\", \"multiline\": True}),\r\n                \"options\": (\"OPTIONS\",),\r\n            },\r\n        }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION = \"main\"\r\n    CATEGORY = \"RES4LYF/sampler_options\"\r\n\r\n    def main(self,\r\n            config_name,\r\n            dynamics,\r\n            schedule,\r\n            scale,\r\n            reverse,\r\n            frame_weights = None,\r\n            custom_string = None,\r\n            options       = None,\r\n            ):\r\n        \r\n        options_mgr = OptionsManager(options if options is not None else {})\r\n\r\n        frame_weights_mgr = options_mgr.get(\"frame_weights_mgr\")\r\n        if frame_weights_mgr is None:\r\n            frame_weights_mgr = FrameWeightsManager()\r\n\r\n        if custom_string is not None and custom_string.strip() == \"\":\r\n            custom_string = None\r\n        \r\n        frame_weights_mgr.add_weight_config(\r\n            config_name,\r\n            dynamics=dynamics,\r\n            schedule=schedule,\r\n            scale=scale,\r\n            is_reversed=reverse,\r\n            frame_weights=frame_weights,\r\n            custom_string=custom_string\r\n        )\r\n        \r\n        options_mgr.update(\"frame_weights_mgr\", frame_weights_mgr)\r\n        \r\n        return (options_mgr.as_dict(),)\r\n\r\n\r\nclass SharkOptions_GuiderInput:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\":\r\n                    {\"guider\": (\"GUIDER\", ),\r\n                    },\r\n                \"optional\":\r\n                    {\"options\": (\"OPTIONS\", ),\r\n                    }\r\n                }\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n\r\n    def main(self, guider, options=None):\r\n        options_mgr = OptionsManager(options if options is not None else {})\r\n        \r\n        if isinstance(guider, dict):\r\n            guider = guider.get('samples', None)\r\n            \r\n        if isinstance(guider, torch.Tensor):\r\n            guider = guider.detach().cpu()\r\n        \r\n        if options_mgr is None:\r\n            options_mgr = OptionsManager()\r\n            \r\n        options_mgr.update(\"guider\", guider)\r\n        \r\n  
        return (options_mgr.as_dict(), )\r\n
\r\n
\r\n
\r\n
\r\n
\r\n
class ClownGuide_AdaIN_MMDiT_Beta:\r\n
    @classmethod\r\n
    def INPUT_TYPES(cls):\r\n
        return {"required":\r\n
                    {\r\n
                    "weight":           ("FLOAT",                                     {"default": 1.0, "min":  -100.0, "max": 100.0, "step":0.01, "round": False, "tooltip": "Set the strength of the guide by multiplying all other weights by this value."}),\r\n
                    "weight_scheduler": (["constant"] + get_res4lyf_scheduler_list(), {"default": "constant"},),\r\n
                    "double_blocks"   : ("STRING",                                    {"default": "", "multiline": True}),\r\n
                    "double_weights"  : ("STRING",                                    {"default": "", "multiline": True}),\r\n
                    "single_blocks"   : ("STRING",                                    {"default": "20", "multiline": True}),\r\n
                    "single_weights"  : ("STRING",                                    {"default": "0.5", "multiline": True}),\r\n
                    "start_step":       ("INT",                                       {"default": 0,    "min":  0,      "max": 10000}),\r\n
                    "end_step":         ("INT",                                       {"default": 15,   "min": -1,      "max": 10000}),\r\n
                    "invert_mask":      ("BOOLEAN",                                   {"default": False}),\r\n
                    },\r\n
                "optional": \r\n
                    {\r\n
                    "guide":            ("LATENT", ),\r\n
                    "mask":             ("MASK", ),\r\n
                    "weights":          ("SIGMAS", ),\r\n
                    "guides":           ("GUIDES", ),\r\n
                    }  \r\n
                }\r\n
    \r\n
    RETURN_TYPES = ("GUIDES",)\r\n
    RETURN_NAMES = ("guides",)\r\n
    FUNCTION     = "main"\r\n
    CATEGORY     = "RES4LYF/sampler_extensions"\r\n
\r\n
    def main(self,\r\n
            weight           = 1.0,\r\n
            weight_scheduler = "constant",\r\n
            double_weights   = "0.1",\r\n
            single_weights   = "0.0", \r\n
            double_blocks    = "all",\r\n
            single_blocks    = "all", \r\n
            start_step       = 0,\r\n
            end_step         = 15,\r\n
            invert_mask      = False,\r\n
            \r\n
            guide            = None,\r\n
            mask             = None,\r\n
            weights          = None,\r\n
            guides           = None,\r\n
            ):\r\n
        \r\n
        default_dtype = torch.float64\r\n
        \r\n
        # Masks are flipped on intake (as in the other guide nodes); invert_mask restores the original orientation.\r\n
        mask = 1 - mask if mask is not None else None\r\n
        if invert_mask and mask is not None:\r\n
            mask = 1 - mask\r\n
        \r\n
        double_weights = parse_range_string(double_weights)\r\n
        single_weights = parse_range_string(single_weights)\r\n
        \r\n
        if len(double_weights) == 0:\r\n
            double_weights.append(0.0)\r\n
        if len(single_weights) == 0:\r\n
            single_weights.append(0.0)\r\n
            \r\n
        if len(double_weights) == 1:\r\n
            double_weights = double_weights * 100\r\n
        if len(single_weights) == 1:\r\n
            single_weights = single_weights * 100\r\n
            \r\n
        if isinstance(double_weights[0], int):\r\n
            double_weights = [float(val) for val in double_weights]\r\n
        if isinstance(single_weights[0], int):\r\n
            single_weights = 
[float(val) for val in single_weights]\r\n        \r\n        if double_blocks == \"all\":\r\n            double_blocks  = [val for val in range(100)]\r\n            if len(double_weights) == 1:\r\n                double_weights = [double_weights[0]] * 100\r\n        else:\r\n            double_blocks  = parse_range_string(double_blocks)\r\n            \r\n            weights_expanded = [0.0] * 100\r\n            for b, w in zip(double_blocks, double_weights):\r\n                weights_expanded[b] = w\r\n            double_weights = weights_expanded\r\n            \r\n        \r\n        if single_blocks == \"all\":\r\n            single_blocks = [val for val in range(100)]\r\n            if len(single_weights) == 1:\r\n                single_weights = [single_weights[0]] * 100\r\n        else:\r\n            single_blocks  = parse_range_string(single_blocks)\r\n            \r\n            weights_expanded = [0.0] * 100\r\n            for b, w in zip(single_blocks, single_weights):\r\n                weights_expanded[b] = w\r\n            single_weights = weights_expanded\r\n        \r\n        \r\n        \r\n        if end_step == -1:\r\n            end_step = MAX_STEPS\r\n        \r\n        if guide is not None:\r\n            raw_x = guide.get('state_info', {}).get('raw_x', None)\r\n            if raw_x is not None:\r\n                guide          = {'samples': guide['state_info']['raw_x'].clone()}\r\n            else:\r\n                guide          = {'samples': guide['samples'].clone()}\r\n        \r\n        if weight_scheduler == \"constant\": # and weights == None: \r\n            weights = initialize_or_scale(None, weight, end_step).to(default_dtype)\r\n            prepend = torch.zeros(start_step).to(weights)\r\n            weights = torch.cat([prepend, weights])\r\n            weights = F.pad(weights, (0, MAX_STEPS), value=0.0)\r\n        \r\n        guides = copy.deepcopy(guides) if guides is not None else {}\r\n        \r\n        guides['weight_adain']           = weight\r\n        guides['weights_adain']          = weights\r\n        \r\n        guides['blocks_adain_mmdit'] = {\r\n            \"double_weights\": double_weights,\r\n            \"single_weights\": single_weights,\r\n            \"double_blocks\" : double_blocks,\r\n            \"single_blocks\" : single_blocks,\r\n        }\r\n        \r\n        guides['guide_adain']            = guide\r\n        guides['mask_adain']             = mask\r\n\r\n        guides['weight_scheduler_adain'] = weight_scheduler\r\n        guides['start_step_adain']       = start_step\r\n        guides['end_step_adain']         = end_step\r\n        \r\n        return (guides, )\r\n\r\n\r\n\r\n\r\n\r\nclass ClownGuide_AttnInj_MMDiT_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"weight\":           (\"FLOAT\",                                     {\"default\": 1.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide by multiplying all other weights by this value.\"}),\r\n                    \"weight_scheduler\": ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"constant\"},),\r\n                    \"double_blocks\"   : (\"STRING\",                                    {\"default\": \"0,1,3\", \"multiline\": True}),\r\n                    \"double_weights\"  : (\"STRING\",                                    {\"default\": \"1.0\", \"multiline\": True}),\r\n                
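    # Block/weight strings take comma-separated indices or ranges (parsed via parse_range_string);\r\n
                    # e.g. double_blocks="0,1,3" with double_weights="1.0" applies 1.0 to blocks 0, 1 and 3 and 0.0 elsewhere.\r\n
                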
    \"single_blocks\"   : (\"STRING\",                                    {\"default\": \"20\", \"multiline\": True}),\r\n                    \"single_weights\"  : (\"STRING\",                                    {\"default\": \"0.5\", \"multiline\": True}),\r\n                    \r\n                    \"img_q\":            (\"FLOAT\",                                     {\"default\": 0.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set relative injection strength.\"}),\r\n                    \"img_k\":            (\"FLOAT\",                                     {\"default\": 0.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set relative injection strength.\"}),\r\n                    \"img_v\":            (\"FLOAT\",                                     {\"default\": 1.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set relative injection strength.\"}),\r\n\r\n                    \"txt_q\":            (\"FLOAT\",                                     {\"default\": 0.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set relative injection strength.\"}),\r\n                    \"txt_k\":            (\"FLOAT\",                                     {\"default\": 0.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set relative injection strength.\"}),\r\n                    \"txt_v\":            (\"FLOAT\",                                     {\"default\": 0.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set relative injection strength.\"}),\r\n\r\n                    \"img_q_norm\":       (\"FLOAT\",                                     {\"default\": 0.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set relative injection strength.\"}),\r\n                    \"img_k_norm\":       (\"FLOAT\",                                     {\"default\": 0.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set relative injection strength.\"}),\r\n                    \"img_v_norm\":       (\"FLOAT\",                                     {\"default\": 0.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set relative injection strength.\"}),\r\n\r\n                    \"txt_q_norm\":       (\"FLOAT\",                                     {\"default\": 0.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set relative injection strength.\"}),\r\n                    \"txt_k_norm\":       (\"FLOAT\",                                     {\"default\": 0.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set relative injection strength.\"}),\r\n                    \"txt_v_norm\":       (\"FLOAT\",                                     {\"default\": 0.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set relative injection strength.\"}),\r\n\r\n                    \"start_step\":       (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"end_step\":         (\"INT\",                                       {\"default\": 15,   \"min\": -1,      \"max\": 10000}),\r\n                    \"invert_mask\":      (\"BOOLEAN\",                                   {\"default\": False}),\r\n                    },\r\n                \"optional\": 
\r\n
                    {\r\n
                    "guide":            ("LATENT", ),\r\n
                    "mask":             ("MASK", ),\r\n
                    "weights":          ("SIGMAS", ),\r\n
                    "guides":           ("GUIDES", ),\r\n
                    }  \r\n
                }\r\n
    \r\n
    RETURN_TYPES = ("GUIDES",)\r\n
    RETURN_NAMES = ("guides",)\r\n
    FUNCTION     = "main"\r\n
    CATEGORY     = "RES4LYF/sampler_extensions"\r\n
\r\n
    def main(self,\r\n
            weight           = 1.0,\r\n
            weight_scheduler = "constant",\r\n
            double_weights   = "0.1",\r\n
            single_weights   = "0.0", \r\n
            double_blocks    = "all",\r\n
            single_blocks    = "all", \r\n
            \r\n
            img_q            = 0.0,\r\n
            img_k            = 0.0,\r\n
            img_v            = 0.0,\r\n
            \r\n
            txt_q            = 0.0,\r\n
            txt_k            = 0.0,\r\n
            txt_v            = 0.0,\r\n
            \r\n
            img_q_norm       = 0.0,\r\n
            img_k_norm       = 0.0,\r\n
            img_v_norm       = 0.0,\r\n
            \r\n
            txt_q_norm       = 0.0,\r\n
            txt_k_norm       = 0.0,\r\n
            txt_v_norm       = 0.0,\r\n
            \r\n
            start_step       = 0,\r\n
            end_step         = 15,\r\n
            invert_mask      = False,\r\n
            \r\n
            guide            = None,\r\n
            mask             = None,\r\n
            weights          = None,\r\n
            guides           = None,\r\n
            ):\r\n
        \r\n
        default_dtype = torch.float64\r\n
        \r\n
        # Masks are flipped on intake (as in the other guide nodes); invert_mask restores the original orientation.\r\n
        mask = 1 - mask if mask is not None else None\r\n
        if invert_mask and mask is not None:\r\n
            mask = 1 - mask\r\n
        \r\n
        double_weights = parse_range_string(double_weights)\r\n
        single_weights = parse_range_string(single_weights)\r\n
        \r\n
        if len(double_weights) == 0:\r\n
            double_weights.append(0.0)\r\n
        if len(single_weights) == 0:\r\n
            single_weights.append(0.0)\r\n
        \r\n
        if len(double_weights) == 1:\r\n
            double_weights = double_weights * 100\r\n
        if len(single_weights) == 1:\r\n
            single_weights = single_weights * 100\r\n
        \r\n
        if isinstance(double_weights[0], int):\r\n
            double_weights = [float(val) for val in double_weights]\r\n
        if isinstance(single_weights[0], int):\r\n
            single_weights = [float(val) for val in single_weights]\r\n
        \r\n
        if double_blocks == "all":\r\n
            double_blocks  = [val for val in range(100)]\r\n
            if len(double_weights) == 1:\r\n
                double_weights = [double_weights[0]] * 100\r\n
        else:\r\n
            double_blocks  = parse_range_string(double_blocks)\r\n
            \r\n
            weights_expanded = [0.0] * 100\r\n
            for b, w in zip(double_blocks, double_weights):\r\n
                weights_expanded[b] = w\r\n
            double_weights = weights_expanded\r\n
            \r\n
        \r\n
        if single_blocks == "all":\r\n
            single_blocks = [val for val in range(100)]\r\n
            if len(single_weights) == 1:\r\n
                single_weights = [single_weights[0]] * 100\r\n
        else:\r\n
            single_blocks  = parse_range_string(single_blocks)\r\n
            \r\n
            weights_expanded = [0.0] * 100\r\n
            for b, w in zip(single_blocks, single_weights):\r\n
                weights_expanded[b] = w\r\n
            single_weights = 
weights_expanded\r\n        \r\n        \r\n        \r\n        if end_step == -1:\r\n            end_step = MAX_STEPS\r\n        \r\n        if guide is not None:\r\n            raw_x = guide.get('state_info', {}).get('raw_x', None)\r\n            if raw_x is not None:\r\n                guide          = {'samples': guide['state_info']['raw_x'].clone()}\r\n            else:\r\n                guide          = {'samples': guide['samples'].clone()}\r\n        \r\n        if weight_scheduler == \"constant\": # and weights == None: \r\n            weights = initialize_or_scale(None, weight, end_step).to(default_dtype)\r\n            prepend = torch.zeros(start_step).to(weights)\r\n            weights = torch.cat([prepend, weights])\r\n            weights = F.pad(weights, (0, MAX_STEPS), value=0.0)\r\n        \r\n        guides = copy.deepcopy(guides) if guides is not None else {}\r\n        \r\n        guides['weight_attninj']           = weight\r\n        guides['weights_attninj']          = weights\r\n        \r\n        guides['blocks_attninj_mmdit'] = {\r\n            \"double_weights\": double_weights,\r\n            \"single_weights\": single_weights,\r\n            \"double_blocks\" : double_blocks,\r\n            \"single_blocks\" : single_blocks,\r\n        }\r\n        \r\n        guides['blocks_attninj_qkv'] = {\r\n            \"img_q\": img_q,\r\n            \"img_k\": img_k,\r\n            \"img_v\": img_v,\r\n            \"txt_q\": txt_q,\r\n            \"txt_k\": txt_k,\r\n            \"txt_v\": txt_v,\r\n            \r\n            \"img_q_norm\": img_q_norm,\r\n            \"img_k_norm\": img_k_norm,\r\n            \"img_v_norm\": img_v_norm,\r\n            \"txt_q_norm\": txt_q_norm,\r\n            \"txt_k_norm\": txt_k_norm,\r\n            \"txt_v_norm\": txt_v_norm,\r\n        }\r\n        \r\n        guides['guide_attninj']            = guide\r\n        guides['mask_attninj']             = mask\r\n\r\n        guides['weight_scheduler_attninj'] = weight_scheduler\r\n        guides['start_step_attninj']       = start_step\r\n        guides['end_step_attninj']         = end_step\r\n        \r\n        return (guides, )\r\n\r\n\r\n\r\n\r\n\r\n\r\nclass ClownGuide_StyleNorm_Advanced_HiDream:\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"weight\":           (\"FLOAT\",                                     {\"default\": 1.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide by multiplying all other weights by this value.\"}),\r\n                    \"weight_scheduler\": ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"constant\"},),\r\n                    \r\n                    \"double_blocks\"   : (\"STRING\",                                    {\"default\": \"all\", \"multiline\": True}),\r\n                    \"double_weights\"  : (\"STRING\",                                    {\"default\": \"1.0\", \"multiline\": True}),\r\n                    \"single_blocks\"   : (\"STRING\",                                    {\"default\": \"all\", \"multiline\": True}),\r\n                    \"single_weights\"  : (\"STRING\",                                    {\"default\": \"1.0\", \"multiline\": True}),\r\n\r\n                    \"mode\": ([\"scattersort\", \"AdaIN\"], {\"default\": \"scattersort\"},),\r\n                    \"noise_mode\": ([\"direct\", \"update\", \"smart\", \"recon\", \"bonanza\"], {\"default\": 
\"smart\"},),\r\n\r\n                    #\"shared_experts\":          (\"BOOLEAN\", {\"default\": False}),\r\n                    \r\n                    \"ff_1\"                   : (\"BOOLEAN\", {\"default\": False}),\r\n                    \"ff_1_silu\"              : (\"BOOLEAN\", {\"default\": False}),\r\n                    \"ff_3\"                   : (\"BOOLEAN\", {\"default\": False}),\r\n                    \"ff_13\"                  : (\"BOOLEAN\", {\"default\": False}),\r\n                    \"ff_2\"                   : (\"BOOLEAN\", {\"default\": False}),\r\n                    \r\n                    \"moe_gate\"               : (\"BOOLEAN\", {\"default\": False}),\r\n                    \"topk_weight\"            : (\"BOOLEAN\", {\"default\": False}),\r\n                    \"moe_ff_1\"               : (\"BOOLEAN\", {\"default\": False}),\r\n                    \"moe_ff_1_silu\"          : (\"BOOLEAN\", {\"default\": False}),\r\n                    \"moe_ff_3\"               : (\"BOOLEAN\", {\"default\": False}),\r\n                    \"moe_ff_13\"              : (\"BOOLEAN\", {\"default\": False}),\r\n                    \"moe_ff_2\"               : (\"BOOLEAN\", {\"default\": False}),\r\n                    \"moe_sum\"                : (\"BOOLEAN\", {\"default\": False}),\r\n                    \"moe_out\"                : (\"BOOLEAN\", {\"default\": False}),\r\n\r\n                    \"double_img_io\":           (\"BOOLEAN\", {\"default\": False}),\r\n                    \"double_img_norm0\":        (\"BOOLEAN\", {\"default\": False}),\r\n                    \"double_img_attn\":         (\"BOOLEAN\", {\"default\": False}),\r\n                    \"double_img_attn_gated\":   (\"BOOLEAN\", {\"default\": False}),\r\n                    \"double_img\":              (\"BOOLEAN\", {\"default\": False}),\r\n                    \"double_img_norm1\":        (\"BOOLEAN\", {\"default\": False}),\r\n                    \"double_img_ff_i\":         (\"BOOLEAN\", {\"default\": False}),\r\n\r\n                    \"double_txt_io\":           (\"BOOLEAN\", {\"default\": False}),\r\n                    \"double_txt_norm0\":        (\"BOOLEAN\", {\"default\": False}),\r\n                    \"double_txt_attn\":         (\"BOOLEAN\", {\"default\": False}),\r\n                    \"double_txt_attn_gated\":   (\"BOOLEAN\", {\"default\": False}),\r\n                    \"double_txt\":              (\"BOOLEAN\", {\"default\": False}),\r\n                    \"double_txt_norm1\":        (\"BOOLEAN\", {\"default\": False}),\r\n                    \"double_txt_ff_t\":         (\"BOOLEAN\", {\"default\": False}),\r\n\r\n                    \"single_img_io\":           (\"BOOLEAN\", {\"default\": False}),\r\n                    \"single_img_norm0\":        (\"BOOLEAN\", {\"default\": False}),\r\n                    \"single_img_attn\":         (\"BOOLEAN\", {\"default\": False}),\r\n                    \"single_img_attn_gated\":   (\"BOOLEAN\", {\"default\": False}),\r\n                    \"single_img\":              (\"BOOLEAN\", {\"default\": False}),\r\n                    \"single_img_norm1\":        (\"BOOLEAN\", {\"default\": False}),\r\n                    \"single_img_ff_i\":         (\"BOOLEAN\", {\"default\": False}),\r\n                    \r\n                    \"attn_img_q_norm\"       : (\"BOOLEAN\", {\"default\": False}),\r\n                    \"attn_img_k_norm\"       : (\"BOOLEAN\", {\"default\": False}),\r\n                    \"attn_img_v_norm\"       : (\"BOOLEAN\", 
{\"default\": False}),\r\n                    \"attn_txt_q_norm\"       : (\"BOOLEAN\", {\"default\": False}),\r\n                    \"attn_txt_k_norm\"       : (\"BOOLEAN\", {\"default\": False}),\r\n                    \"attn_txt_v_norm\"       : (\"BOOLEAN\", {\"default\": False}),\r\n                    \"attn_img_double\"       : (\"BOOLEAN\", {\"default\": False}),\r\n                    \"attn_txt_double\"       : (\"BOOLEAN\", {\"default\": False}),\r\n                    \"attn_img_single\"       : (\"BOOLEAN\", {\"default\": False}),\r\n                    \r\n                    \"proj_out\"           : (\"BOOLEAN\", {\"default\": False}),\r\n                    \r\n                    \"start_step\":       (\"INT\",      {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"end_step\":         (\"INT\",      {\"default\": 15,   \"min\": -1,      \"max\": 10000}),\r\n                    \"invert_mask\":      (\"BOOLEAN\",  {\"default\": False}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"guide\":            (\"LATENT\", ),\r\n                    \"mask\":             (\"MASK\", ),\r\n                    \"weights\":          (\"SIGMAS\", ),\r\n                    \"guides\":           (\"GUIDES\", ),\r\n                    }  \r\n                }\r\n    \r\n    RETURN_TYPES = (\"GUIDES\",)\r\n    RETURN_NAMES = (\"guides\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_extensions\"\r\n    EXPERIMENTAL = True\r\n\r\n    def main(self,\r\n            weight           = 1.0,\r\n            weight_scheduler = \"constant\",\r\n            mode             = \"scattersort\",\r\n            noise_mode       = \"smart\",\r\n            double_weights   = \"0.1\",\r\n            single_weights   = \"0.0\", \r\n            double_blocks    = \"all\",\r\n            single_blocks    = \"all\", \r\n            start_step       = 0,\r\n            end_step         = 15,\r\n            invert_mask      = False,\r\n\r\n            moe_gate                = False,\r\n            topk_weight             = False,\r\n            moe_out                 = False,\r\n            moe_sum                 = False,\r\n            ff_1                    = False,\r\n            ff_1_silu               = False,\r\n            ff_3                    = False,\r\n            ff_13                   = False,\r\n            ff_2                    = False,\r\n            shared_experts          = False,\r\n            \r\n            moe_ff_1                = False,\r\n            moe_ff_1_silu           = False,\r\n            moe_ff_3                = False,\r\n            moe_ff_13               = False,\r\n            moe_ff_2                = False,\r\n\r\n            double_img_io           = False,\r\n            double_img_norm0        = False,\r\n            double_img_attn         = False,\r\n            double_img_norm1        = False,\r\n            double_img_attn_gated   = False,\r\n            double_img              = False,\r\n            double_img_ff_i         = False,\r\n\r\n            double_txt_io           = False,\r\n            double_txt_norm0        = False,\r\n            double_txt_attn         = False,\r\n            double_txt_attn_gated   = False,\r\n            double_txt              = False,\r\n            double_txt_norm1        = False,\r\n            double_txt_ff_t         = False,\r\n\r\n            single_img_io           = False,\r\n            
single_img_norm0        = False,\r\n            single_img_attn         = False,\r\n            single_img_attn_gated   = False,\r\n            single_img              = False,\r\n            single_img_norm1        = False,\r\n            single_img_ff_i         = False,\r\n            \r\n            attn_img_q_norm         = False,\r\n            attn_img_k_norm         = False,\r\n            attn_img_v_norm         = False,\r\n            attn_txt_q_norm         = False,\r\n            attn_txt_k_norm         = False,\r\n            attn_txt_v_norm         = False,\r\n            attn_img_single         = False,\r\n            attn_img_double         = False,\r\n            attn_txt_double         = False,\r\n            \r\n            proj_out             = False,\r\n\r\n            guide            = None,\r\n            mask             = None,\r\n            weights          = None,\r\n            guides           = None,\r\n            ):\r\n        \r\n        default_dtype = torch.float64\r\n        \r\n        mask = 1-mask if mask is not None else None\r\n        \r\n        double_weights = parse_range_string(double_weights)\r\n        single_weights = parse_range_string(single_weights)\r\n        \r\n        if len(double_weights) == 0:\r\n            double_weights.append(0.0)\r\n        if len(single_weights) == 0:\r\n            single_weights.append(0.0)\r\n            \r\n        if len(double_weights) == 1:\r\n            double_weights = double_weights * 100\r\n        if len(single_weights) == 1:\r\n            single_weights = single_weights * 100\r\n            \r\n        if type(double_weights[0]) == int:\r\n            double_weights = [float(val) for val in double_weights]\r\n        if type(single_weights[0]) == int:\r\n            single_weights = [float(val) for val in single_weights]\r\n        \r\n        if double_blocks == \"all\":\r\n            double_blocks  = [val for val in range(100)]\r\n            if len(double_weights) == 1:\r\n                double_weights = [double_weights[0]] * 100\r\n        else:\r\n            double_blocks  = parse_range_string(double_blocks)\r\n            \r\n            weights_expanded = [0.0] * 100\r\n            for b, w in zip(double_blocks, double_weights):\r\n                weights_expanded[b] = w\r\n            double_weights = weights_expanded\r\n            \r\n        \r\n        if single_blocks == \"all\":\r\n            single_blocks = [val for val in range(100)]\r\n            if len(single_weights) == 1:\r\n                single_weights = [single_weights[0]] * 100\r\n        else:\r\n            single_blocks  = parse_range_string(single_blocks)\r\n            \r\n            weights_expanded = [0.0] * 100\r\n            for b, w in zip(single_blocks, single_weights):\r\n                weights_expanded[b] = w\r\n            single_weights = weights_expanded\r\n        \r\n        \r\n        \r\n        if end_step == -1:\r\n            end_step = MAX_STEPS\r\n        \r\n        if guide is not None:\r\n            raw_x = guide.get('state_info', {}).get('raw_x', None)\r\n            if raw_x is not None:\r\n                guide          = {'samples': guide['state_info']['raw_x'].clone()}\r\n            else:\r\n                guide          = {'samples': guide['samples'].clone()}\r\n        \r\n        if weight_scheduler == \"constant\": # and weights == None: \r\n            weights = initialize_or_scale(None, weight, end_step).to(default_dtype)\r\n            prepend = 
torch.zeros(start_step).to(weights)\r\n            weights = torch.cat([prepend, weights])\r\n            weights = F.pad(weights, (0, MAX_STEPS), value=0.0)\r\n        \r\n        guides = copy.deepcopy(guides) if guides is not None else {}\r\n        \r\n        guides['weight_adain']           = weight\r\n        guides['weights_adain']          = weights\r\n        \r\n        guides['blocks_adain_mmdit'] = {\r\n            \"double_weights\": double_weights,\r\n            \"single_weights\": single_weights,\r\n            \"double_blocks\" : double_blocks,\r\n            \"single_blocks\" : single_blocks,\r\n        }\r\n        guides['sort_and_scatter'] = {\r\n            \"mode\"                  : mode,\r\n            \"noise_mode\"            : noise_mode,\r\n\r\n            \"moe_gate\"              : moe_gate,\r\n            \"topk_weight\"           : topk_weight,\r\n            \"moe_sum\"               : moe_sum,\r\n            \"moe_out\"               : moe_out,\r\n\r\n            \"ff_1\"                  : ff_1,\r\n            \"ff_1_silu\"             : ff_1_silu,\r\n            \"ff_3\"                  : ff_3,\r\n            \"ff_13\"                 : ff_13,\r\n            \"ff_2\"                  : ff_2,\r\n            \r\n            \"moe_ff_1\"              : moe_ff_1,\r\n            \"moe_ff_1_silu\"         : moe_ff_1_silu,\r\n            \"moe_ff_3\"              : moe_ff_3,\r\n            \"moe_ff_13\"             : moe_ff_13,\r\n            \"moe_ff_2\"              : moe_ff_2,\r\n            \r\n            \"shared_experts\"        : shared_experts,\r\n\r\n            \"double_img_io\"         : double_img_io,\r\n            \"double_img_norm0\"      : double_img_norm0,\r\n            \"double_img_attn\"       : double_img_attn,\r\n            \"double_img_norm1\"      : double_img_norm1,\r\n            \"double_img_attn_gated\" : double_img_attn_gated,\r\n            \"double_img\"            : double_img,\r\n            \"double_img_ff_i\"       : double_img_ff_i,\r\n\r\n            \"double_txt_io\"         : double_txt_io,\r\n            \"double_txt_norm0\"      : double_txt_norm0,\r\n            \"double_txt_attn\"       : double_txt_attn,\r\n            \"double_txt_attn_gated\" : double_txt_attn_gated,\r\n            \"double_txt\"            : double_txt,\r\n            \"double_txt_norm1\"      : double_txt_norm1,\r\n            \"double_txt_ff_t\"       : double_txt_ff_t,\r\n\r\n            \"single_img_io\"         : single_img_io,\r\n            \"single_img_norm0\"      : single_img_norm0,\r\n            \"single_img_attn\"       : single_img_attn,\r\n            \"single_img_attn_gated\" : single_img_attn_gated,\r\n            \"single_img\"            : single_img,\r\n            \"single_img_norm1\"      : single_img_norm1,\r\n            \"single_img_ff_i\"       : single_img_ff_i,\r\n            \r\n            \"attn_img_q_norm\"       : attn_img_q_norm,\r\n            \"attn_img_k_norm\"       : attn_img_k_norm,\r\n            \"attn_img_v_norm\"       : attn_img_v_norm,\r\n            \"attn_txt_q_norm\"       : attn_txt_q_norm,\r\n            \"attn_txt_k_norm\"       : attn_txt_k_norm,\r\n            \"attn_txt_v_norm\"       : attn_txt_v_norm,\r\n            \"attn_img_single\"       : attn_img_single,\r\n            \"attn_img_double\"       : attn_img_double,\r\n            \r\n            \"proj_out\"           : proj_out,\r\n        }\r\n        \r\n        guides['guide_adain']            = guide\r\n        
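# 'sort_and_scatter' above carries the per-layer toggles; the remaining 'adain' keys mirror the 'attninj' layout used earlier\r\n        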
guides['mask_adain']             = mask\r\n\r\n        guides['weight_scheduler_adain'] = weight_scheduler\r\n        guides['start_step_adain']       = start_step\r\n        guides['end_step_adain']         = end_step\r\n        \r\n        return (guides, )\r\n\r\n\r\n\r\n\r\nfrom ..style_transfer import StyleMMDiT_Model, StyleUNet_Model, DEFAULT_BLOCK_WEIGHTS_MMDIT, DEFAULT_ATTN_WEIGHTS_MMDIT, DEFAULT_BASE_WEIGHTS_MMDIT\r\n\r\nSTYLE_MODES = [\r\n    \"none\", \r\n    #\"sinkhornsort\",\r\n    \"scattersort_dir\", \r\n    \"scattersort_dir2\",\r\n    \"scattersort\", \r\n    \"tiled_scattersort\",\r\n    \"AdaIN\", \r\n    \"tiled_AdaIN\", \r\n    \"WCT\",\r\n    \"WCT2\",\r\n    \"injection\",\r\n]\r\n\r\nclass ClownStyle_Boost:\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"noise_mode\":           ([\"direct\", \"update\", \"smart\", \"recon\", \"bonanza\"], {\"default\": \"update\"},),\r\n                    \"recon_lure\":           (STYLE_MODES,    {\"default\": \"WCT\", \"tooltip\": \"Only used if noise_mode = recon. Can increase the strength of the style.\"},),\r\n                    \"datashock\":            (STYLE_MODES,    {\"default\": \"scattersort\", \"tooltip\": \"Will drastically increase the strength at low denoise levels. Use with img2img workflows.\"},),\r\n                    \"datashock_weight\":     (\"FLOAT\",        {\"default\": 1.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide by multiplying all other weights by this value.\"}),\r\n                    \"datashock_start_step\": (\"INT\",          {\"default\": 0, \"min\": 0, \"max\": 10000, \"step\": 1, \"tooltip\": \"Start step for data shock.\"}),\r\n                    \"datashock_end_step\"  : (\"INT\",          {\"default\": 1, \"min\": 1, \"max\": 10000, \"step\": 1, \"tooltip\": \"End step for data shock.\"}),\r\n\r\n                    \"tile_h\" :     (\"INT\",   {\"default\": 128, \"min\": 16, \"max\": 10000, \"step\": 16, \"tooltip\": \"Tile size for tiled modes. Lower values will transfer composition more effectively. Dimensions of image must be divisible by this value.\"}),\r\n                    \"tile_w\" :     (\"INT\",   {\"default\": 128, \"min\": 16, \"max\": 10000, \"step\": 16, \"tooltip\": \"Tile size for tiled modes. Lower values will transfer composition more effectively. 
Dimensions of image must be divisible by this value.\"}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"guides\": (\"GUIDES\", ),\r\n                    #\"datashock_weights\": (\"SIGMAS\",),\r\n                    }  \r\n                }\r\n    \r\n    RETURN_TYPES = (\"GUIDES\",)\r\n    RETURN_NAMES = (\"guides\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_extensions\"\r\n\r\n    def main(self,\r\n            noise_mode  = \"update\",\r\n            recon_lure  = \"WCT\",    # aligned with the INPUT_TYPES default; 'default' is not a STYLE_MODES entry\r\n            datashock  = None,\r\n            datashock_weight = 1.0,\r\n            datashock_start_step  = None,\r\n            datashock_end_step    = None,\r\n            tile_h = 0,             # UI defaults are 128; these fallbacks only apply if main() is called directly\r\n            tile_w = 0,\r\n            guides      = None,\r\n            ):\r\n        guides = copy.deepcopy(guides) if guides is not None else {}\r\n        \r\n        StyleMMDiT = guides.get('StyleMMDiT')\r\n        \r\n        if StyleMMDiT is None:\r\n            StyleMMDiT = StyleMMDiT_Model()\r\n            \r\n            weights = {\r\n                \"h_tile\"  : tile_h // 16,\r\n                \"w_tile\"  : tile_w // 16,\r\n            }\r\n            StyleMMDiT.set_weights(**weights)\r\n        \r\n        StyleMMDiT.noise_mode = noise_mode\r\n        StyleMMDiT.recon_lure = recon_lure\r\n        StyleMMDiT.data_shock = datashock\r\n        StyleMMDiT.data_shock_weight = datashock_weight\r\n        StyleMMDiT.data_shock_start_step = datashock_start_step\r\n        StyleMMDiT.data_shock_end_step   = datashock_end_step\r\n        \r\n        guides['StyleMMDiT'] = StyleMMDiT\r\n        return (guides,)\r\n\r\n\r\n\r\n\r\n\r\nclass ClownStyle_MMDiT:\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"mode\":        (STYLE_MODES, {\"default\": \"scattersort\"},),\r\n\r\n                    \"proj_in\":     (\"FLOAT\", {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"proj_out\":    (\"FLOAT\", {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n\r\n                    \"tile_h\" :     (\"INT\",   {\"default\": 128, \"min\": 16, \"max\": 10000, \"step\": 16, \"tooltip\": \"Tile size for tiled modes. Lower values will transfer composition more effectively. Dimensions of image must be divisible by this value.\"}),\r\n                    \"tile_w\" :     (\"INT\",   {\"default\": 128, \"min\": 16, \"max\": 10000, \"step\": 16, \"tooltip\": \"Tile size for tiled modes. Lower values will transfer composition more effectively. 
Dimensions of image must be divisible by this value.\"}),\r\n\r\n                    #\"start_step\": (\"INT\", {\"default\": 0, \"min\": 16, \"max\": 10000, \"step\": 1, \"tooltip\": \"Start step for data shock.\"}),\r\n                    #\"end_step\"  : (\"INT\", {\"default\": 1, \"min\": 16, \"max\": 10000, \"step\": 1, \"tooltip\": \"End step for data shock.\"}),\r\n\r\n                    \"invert_mask\": (\"BOOLEAN\", {\"default\": False}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"positive\" :   (\"CONDITIONING\", ),\r\n                    \"negative\" :   (\"CONDITIONING\", ),\r\n                    \"guide\":       (\"LATENT\", ),\r\n                    \"mask\":        (\"MASK\", ),\r\n                    \"blocks\":      (\"BLOCKS\", ),\r\n                    \"guides\":      (\"GUIDES\", ),\r\n                    }  \r\n                }\r\n    \r\n    RETURN_TYPES = (\"GUIDES\",)\r\n    RETURN_NAMES = (\"guides\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_extensions\"\r\n\r\n    def main(self,\r\n            mode        = \"scattersort\",\r\n\r\n            proj_in     = 0.0,\r\n            proj_out    = 0.0,\r\n            tile_h      = 128,\r\n            tile_w      = 128,\r\n            invert_mask = False,\r\n            positive    = None,\r\n            negative    = None,\r\n            guide       = None,\r\n            mask        = None,\r\n            blocks      = None,\r\n            guides      = None,\r\n            ):\r\n        \r\n        #mask = 1-mask if mask is not None else None\r\n\r\n        if guide is not None:\r\n            raw_x = guide.get('state_info', {}).get('raw_x', None)\r\n            if raw_x is not None:\r\n                guide = {'samples': guide['state_info']['raw_x'].clone()}\r\n            else:\r\n                guide = {'samples': guide['samples'].clone()}\r\n        \r\n        guides = copy.deepcopy(guides) if guides is not None else {}\r\n        blocks = copy.deepcopy(blocks) if blocks is not None else {}\r\n\r\n        StyleMMDiT = blocks.get('StyleMMDiT')\r\n        \r\n        if StyleMMDiT is None:\r\n            StyleMMDiT = StyleMMDiT_Model()\r\n        \r\n        weights = {\r\n            \"proj_in\" : proj_in,\r\n            \"proj_out\": proj_out,\r\n            \r\n            \"h_tile\"  : tile_h // 16,\r\n            \"w_tile\"  : tile_w // 16,\r\n        }\r\n\r\n        StyleMMDiT.set_mode(mode)\r\n        StyleMMDiT.set_weights(**weights)\r\n        StyleMMDiT.set_conditioning(positive, negative)\r\n        StyleMMDiT.mask = [mask]\r\n        StyleMMDiT.guides = [guide]\r\n        \r\n        StyleMMDiT_ = guides.get('StyleMMDiT')\r\n        if StyleMMDiT_ is not None:\r\n            StyleMMDiT_.merge_weights(StyleMMDiT)\r\n        else:\r\n            StyleMMDiT_ = StyleMMDiT\r\n\r\n        guides['StyleMMDiT'] = StyleMMDiT_\r\n\r\n        return (guides, )\r\n\r\n\r\n\r\nclass ClownStyle_Block_MMDiT:\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"mode\":          (STYLE_MODES, {\"default\": \"scattersort\"},),\r\n                    \"apply_to\":      ([\"img\", \"img+txt\",\"img,txt\", \"txt\",], {\"default\": \"img+txt\"},),\r\n                    \"block_type\":    ([\"double\", \"double,single\", \"single\"], {\"default\": \"single\"},),\r\n                    \"block_list\":    (\"STRING\", {\"default\": \"all\", 
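# block_list also accepts 'even', 'odd', or an explicit index string parsed by parse_range_string_int (exact syntax per that helper, e.g. '0,2,5')\r\n                                     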
\"multiline\": True}),\r\n                    \"block_weights\": (\"STRING\", {\"default\": \"1.0\", \"multiline\": True}),\r\n                    \r\n                    \"attn_norm\":     (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"attn_norm_mod\": (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"attn\":          (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"attn_gated\":    (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"attn_res\":      (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n\r\n                    \"ff_norm\":       (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"ff_norm_mod\":   (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"ff\":            (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"ff_gated\":      (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"ff_res\":        (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n\r\n                    \"tile_h\":        (\"INT\",    {\"default\": 128, \"min\": 16, \"max\": 10000, \"step\": 16, \"tooltip\": \"Tile size for tiled modes. Lower values will transfer composition more effectively. Dimensions of image must be divisible by this value.\"}),\r\n                    \"tile_w\":        (\"INT\",    {\"default\": 128, \"min\": 16, \"max\": 10000, \"step\": 16, \"tooltip\": \"Tile size for tiled modes. Lower values will transfer composition more effectively. 
Dimensions of image must be divisible by this value.\"}),\r\n\r\n                    \"invert_mask\":   (\"BOOLEAN\",{\"default\": False}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"mask\":        (\"MASK\", ),\r\n                    \"blocks\":      (\"BLOCKS\", ),\r\n                    }  \r\n                }\r\n    \r\n    RETURN_TYPES = (\"BLOCKS\",)\r\n    RETURN_NAMES = (\"blocks\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_extensions\"\r\n\r\n    def main(self,\r\n            mode        = \"scattersort\",\r\n            noise_mode  = \"update\",\r\n            apply_to    = \"joint\",\r\n            block_type  = \"double\",\r\n            block_list    = \"all\",\r\n            block_weights = \"1.0\",\r\n            \r\n            attn_norm     = 0.0,\r\n            attn_norm_mod = 0.0,\r\n            attn          = 0.0,\r\n            attn_gated    = 0.0,\r\n            attn_res      = 0.0,\r\n            ff_norm       = 0.0,\r\n            ff_norm_mod   = 0.0,\r\n            ff            = 0.0,\r\n            ff_gated      = 0.0,\r\n            ff_res        = 0.0,\r\n\r\n            tile_h      = 128,\r\n            tile_w      = 128,\r\n\r\n            invert_mask = False,\r\n\r\n            Attn        = None,\r\n            MoE         = None,\r\n            FF          = None,\r\n\r\n            mask        = None,\r\n            blocks      = None,\r\n            ):\r\n        \r\n        #mask = 1-mask if mask is not None else None\r\n\r\n        blocks = copy.deepcopy(blocks) if blocks is not None else {}\r\n        \r\n        block_weights = parse_range_string(block_weights)\r\n        \r\n        if len(block_weights) == 0:\r\n            block_weights.append(0.0)\r\n            \r\n        if len(block_weights) == 1:\r\n            block_weights = block_weights * 100\r\n            \r\n        if type(block_weights[0]) == int:\r\n            block_weights = [float(val) for val in block_weights]\r\n        \r\n        if    \"all\" in block_list:\r\n            block_list = [val for val in range(100)]\r\n            if len(block_weights) == 1:\r\n                block_weights = [block_weights[0]] * 100\r\n        elif \"even\" in block_list:\r\n            block_list = [val for val in range(0, 100, 2)]\r\n            if len(block_weights) == 1:\r\n                block_weights = [block_weights[0]] * 100\r\n        elif  \"odd\" in block_list:\r\n            block_list = [val for val in range(1, 100, 2)]\r\n            if len(block_weights) == 1:\r\n                block_weights = [block_weights[0]] * 100\r\n        else:\r\n            block_list  = parse_range_string_int(block_list)\r\n            \r\n            weights_expanded = [0.0] * 100\r\n            for b, w in zip(block_list, block_weights):\r\n                weights_expanded[b] = w\r\n            block_weights = weights_expanded\r\n        \r\n        StyleMMDiT = blocks.get('StyleMMDiT')\r\n        if StyleMMDiT is None:\r\n            StyleMMDiT = StyleMMDiT_Model()\r\n        \r\n        weights = {\r\n            \"attn_norm\"    : attn_norm,\r\n            \"attn_norm_mod\": attn_norm_mod,\r\n            \"attn\"         : attn,\r\n            \"attn_gated\"   : attn_gated,\r\n            \"attn_res\"     : attn_res,\r\n            \"ff_norm\"      : ff_norm,\r\n            \"ff_norm_mod\"  : ff_norm_mod,\r\n            \"ff\"           : ff,\r\n            \"ff_gated\"     : ff_gated,\r\n          
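  # per the tooltips above: a weight of 0.0 skips the layer's style op entirely; 1.0 applies it without interpolation\r\n          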
  \"ff_res\"       : ff_res,\r\n            \r\n            \"h_tile\"       : tile_h // 16,\r\n            \"w_tile\"       : tile_w // 16,\r\n        }\r\n        \r\n        block_types = block_type.split(\",\")\r\n        \r\n        for block_type in block_types:\r\n        \r\n            if   block_type == \"double\":\r\n                style_blocks = StyleMMDiT.double_blocks\r\n            elif block_type == \"single\":\r\n                style_blocks = StyleMMDiT.single_blocks\r\n                \r\n            for bid in block_list:\r\n                block = style_blocks[bid]\r\n                scaled_weights = {\r\n                    k: (v * block_weights[bid]) if isinstance(v, float) else v\r\n                    for k, v in weights.items()\r\n                }\r\n\r\n                if \"img\" in apply_to  or block_type == \"single\":\r\n                    block.img.set_mode(mode)\r\n                    block.img.set_weights(**scaled_weights)\r\n                    block.img.apply_to = [apply_to]\r\n\r\n                if \"txt\" in apply_to and block_type == \"double\":\r\n                    mode = \"scattersort\" if mode == \"tiled_scattersort\" else mode\r\n                    mode = \"AdaIN\" if mode == \"tiled_AdaIN\" else mode\r\n                    block.txt.set_mode(mode)\r\n                    block.txt.set_weights(**scaled_weights)\r\n                    block.txt.apply_to = [apply_to]\r\n                \r\n                block.img.apply_to = [apply_to]\r\n                if hasattr(block, \"txt\"):\r\n                    block.txt.apply_to = [apply_to]\r\n                \r\n                block.mask = [mask]\r\n\r\n        blocks['StyleMMDiT'] = StyleMMDiT\r\n\r\n        return (blocks, )\r\n\r\n\r\n\r\nclass ClownStyle_Attn_MMDiT:\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"mode\":          (STYLE_MODES, {\"default\": \"scattersort\"},),\r\n                    \"apply_to\":      ([\"img\",\"img+txt\",\"img,txt\",\"txt\"], {\"default\": \"img+txt\"},),\r\n                    \"block_type\":    ([\"double\", \"double,single\", \"single\"], {\"default\": \"single\"},),\r\n                    \"block_list\":    (\"STRING\", {\"default\": \"all\", \"multiline\": True}),\r\n                    \"block_weights\": (\"STRING\", {\"default\": \"1.0\", \"multiline\": True}),\r\n                    \r\n                    \"q_proj\": (\"FLOAT\", {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"k_proj\": (\"FLOAT\", {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"v_proj\": (\"FLOAT\", {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"q_norm\": (\"FLOAT\", {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. 
Skips interpolation if set to 1.0.\"}),\r\n                    \"k_norm\": (\"FLOAT\", {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"out\":    (\"FLOAT\", {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n\r\n                    \"tile_h\": (\"INT\",   {\"default\": 128, \"min\": 16, \"max\": 10000, \"step\": 16, \"tooltip\": \"Tile size for tiled modes. Lower values will transfer composition more effectively. Dimensions of image must be divisible by this value.\"}),\r\n                    \"tile_w\": (\"INT\",   {\"default\": 128, \"min\": 16, \"max\": 10000, \"step\": 16, \"tooltip\": \"Tile size for tiled modes. Lower values will transfer composition more effectively. Dimensions of image must be divisible by this value.\"}),\r\n\r\n                    \"invert_mask\":   (\"BOOLEAN\", {\"default\": False}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"mask\":   (\"MASK\", ),\r\n                    \"blocks\": (\"BLOCKS\", ),\r\n                    }  \r\n                }\r\n    \r\n    RETURN_TYPES = (\"BLOCKS\",)\r\n    RETURN_NAMES = (\"blocks\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_extensions\"\r\n\r\n    def main(self,\r\n            mode        = \"scattersort\",\r\n            noise_mode  = \"update\",\r\n            apply_to    = \"joint\",\r\n            block_type  = \"double\",\r\n            block_list    = \"all\",\r\n            block_weights = \"1.0\",\r\n            \r\n            q_proj = 0.0,\r\n            k_proj = 0.0,\r\n            v_proj = 0.0,\r\n            q_norm = 0.0,\r\n            k_norm = 0.0,\r\n            out    = 0.0,\r\n            \r\n            tile_h = 128,\r\n            tile_w = 128,\r\n\r\n            invert_mask = False,\r\n\r\n            mask        = None,\r\n            blocks      = None,\r\n            ):\r\n        \r\n        #mask = 1-mask if mask is not None else None\r\n\r\n        blocks = copy.deepcopy(blocks) if blocks is not None else {}\r\n        \r\n        block_weights = parse_range_string(block_weights)\r\n        \r\n        if len(block_weights) == 0:\r\n            block_weights.append(0.0)\r\n            \r\n        if len(block_weights) == 1:\r\n            block_weights = block_weights * 100\r\n            \r\n        if type(block_weights[0]) == int:\r\n            block_weights = [float(val) for val in block_weights]\r\n        \r\n        if \"all\" in block_list:\r\n            block_list = [val for val in range(100)]\r\n            if len(block_weights) == 1:\r\n                block_weights = [block_weights[0]] * 100\r\n        elif \"even\" in block_list:\r\n            block_list = [val for val in range(0, 100, 2)]\r\n            if len(block_weights) == 1:\r\n                block_weights = [block_weights[0]] * 100\r\n        elif \"odd\" in block_list:\r\n            block_list = [val for val in range(1, 100, 2)]\r\n            if len(block_weights) == 1:\r\n                block_weights = [block_weights[0]] * 100\r\n        else:\r\n            block_list  = parse_range_string_int(block_list)\r\n            \r\n            weights_expanded = [0.0] * 100\r\n           
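# scatter the parsed weights into 100 slots indexed by block id (e.g. blocks '2,5' with weights '0.8, 0.4' set slots 2 and 5); unlisted blocks stay 0.0\r\n           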
 for b, w in zip(block_list, block_weights):\r\n                weights_expanded[b] = w\r\n            block_weights = weights_expanded\r\n        \r\n        StyleMMDiT = blocks.get('StyleMMDiT')\r\n        if StyleMMDiT is None:\r\n            StyleMMDiT = StyleMMDiT_Model()\r\n        \r\n        weights = {\r\n            \"q_proj\": q_proj,\r\n            \"k_proj\": k_proj,\r\n            \"v_proj\": v_proj,\r\n            \"q_norm\": q_norm,\r\n            \"k_norm\": k_norm,\r\n            \"out\"   : out,\r\n            \r\n            \"h_tile\": tile_h // 16,\r\n            \"w_tile\": tile_w // 16,\r\n        }\r\n        \r\n        block_types = block_type.split(\",\")\r\n        \r\n        for block_type in block_types:\r\n            \r\n            if   block_type == \"double\":\r\n                style_blocks = StyleMMDiT.double_blocks\r\n            elif block_type == \"single\":\r\n                style_blocks = StyleMMDiT.single_blocks\r\n            \r\n            for bid in block_list:\r\n                block = style_blocks[bid]\r\n                scaled_weights = {\r\n                    k: (v * block_weights[bid]) if isinstance(v, float) else v\r\n                    for k, v in weights.items()\r\n                }\r\n\r\n                if \"img\" in apply_to  or block_type == \"single\":\r\n                    block.img.ATTN.set_mode(mode)\r\n                    block.img.ATTN.set_weights(**scaled_weights)\r\n                    block.img.ATTN.apply_to = [apply_to]\r\n\r\n                if \"txt\" in apply_to and block_type == \"double\":\r\n                    # as in ClownStyle_Block_MMDiT: downgrade tiled modes for txt via a local variable\r\n                    # instead of mutating mode, which would leak into later img iterations\r\n                    txt_mode = \"scattersort\" if mode == \"tiled_scattersort\" else mode\r\n                    txt_mode = \"AdaIN\" if txt_mode == \"tiled_AdaIN\" else txt_mode\r\n                    block.txt.ATTN.set_mode(txt_mode)\r\n                    block.txt.ATTN.set_weights(**scaled_weights)\r\n                    block.txt.ATTN.apply_to = [apply_to]\r\n                \r\n                block.img.ATTN.apply_to = [apply_to]\r\n                if hasattr(block, \"txt\"):\r\n                    block.txt.ATTN.apply_to = [apply_to]\r\n                \r\n                block.attn_mask = [mask]\r\n\r\n        blocks['StyleMMDiT'] = StyleMMDiT\r\n\r\n        return (blocks, )\r\n\r\n\r\n\r\n\r\n\r\nclass ClownStyle_UNet:\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"mode\":        (STYLE_MODES, {\"default\": \"scattersort\"},),\r\n\r\n                    \"proj_in\":     (\"FLOAT\", {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"proj_out\":    (\"FLOAT\", {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n\r\n                    \"tile_h\" :     (\"INT\",   {\"default\": 128, \"min\": 16, \"max\": 10000, \"step\": 16, \"tooltip\": \"Tile size for tiled modes. Lower values will transfer composition more effectively. Dimensions of image must be divisible by this value.\"}),\r\n                    \"tile_w\" :     (\"INT\",   {\"default\": 128, \"min\": 16, \"max\": 10000, \"step\": 16, \"tooltip\": \"Tile size for tiled modes. 
Lower values will transfer composition more effectively. Dimensions of image must be divisible by this value.\"}),\r\n\r\n                    #\"start_step\": (\"INT\", {\"default\": 0, \"min\": 16, \"max\": 10000, \"step\": 1, \"tooltip\": \"Start step for data shock.\"}),\r\n                    #\"end_step\"  : (\"INT\", {\"default\": 1, \"min\": 16, \"max\": 10000, \"step\": 1, \"tooltip\": \"End step for data shock.\"}),\r\n\r\n                    \"invert_mask\": (\"BOOLEAN\", {\"default\": False}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"positive\" :   (\"CONDITIONING\", ),\r\n                    \"negative\" :   (\"CONDITIONING\", ),\r\n                    \"guide\":       (\"LATENT\", ),\r\n                    \"mask\":        (\"MASK\", ),\r\n                    \"blocks\":      (\"BLOCKS\", ),\r\n                    \"guides\":      (\"GUIDES\", ),\r\n                    }  \r\n                }\r\n    \r\n    RETURN_TYPES = (\"GUIDES\",)\r\n    RETURN_NAMES = (\"guides\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_extensions\"\r\n\r\n    def main(self,\r\n            mode        = \"scattersort\",\r\n\r\n            proj_in     = 0.0,\r\n            proj_out    = 0.0,\r\n            tile_h      = 128,\r\n            tile_w      = 128,\r\n            invert_mask = False,\r\n            positive    = None,\r\n            negative    = None,\r\n            guide       = None,\r\n            mask        = None,\r\n            blocks      = None,\r\n            guides      = None,\r\n            ):\r\n        \r\n        #mask = 1-mask if mask is not None else None\r\n\r\n        if guide is not None:\r\n            raw_x = guide.get('state_info', {}).get('raw_x', None)\r\n            if raw_x is not None:\r\n                guide = {'samples': guide['state_info']['raw_x'].clone()}\r\n            else:\r\n                guide = {'samples': guide['samples'].clone()}\r\n        \r\n        guides = copy.deepcopy(guides) if guides is not None else {}\r\n        blocks = copy.deepcopy(blocks) if blocks is not None else {}\r\n\r\n        StyleMMDiT = blocks.get('StyleMMDiT')\r\n        \r\n        if StyleMMDiT is None:\r\n            StyleMMDiT = StyleUNet_Model()\r\n        \r\n        weights = {\r\n            \"proj_in\" : proj_in,\r\n            \"proj_out\": proj_out,\r\n            \r\n            \"h_tile\"  : tile_h // 8,\r\n            \"w_tile\"  : tile_w // 8,\r\n        }\r\n\r\n        StyleMMDiT.set_mode(mode)\r\n        StyleMMDiT.set_weights(**weights)\r\n        StyleMMDiT.set_conditioning(positive, negative)\r\n        StyleMMDiT.mask = [mask]\r\n        StyleMMDiT.guides = [guide]\r\n        \r\n        StyleMMDiT_ = guides.get('StyleMMDiT')\r\n        if StyleMMDiT_ is not None:\r\n            StyleMMDiT_.merge_weights(StyleMMDiT)\r\n        else:\r\n            StyleMMDiT_ = StyleMMDiT\r\n\r\n        guides['StyleMMDiT'] = StyleMMDiT_\r\n\r\n        return (guides, )\r\n\r\n\r\n\r\n\r\nUNET_BLOCK_TYPES = [\r\n    \"input\", \r\n    \"middle\", \r\n    \"output\",\r\n    \"input,middle\",\r\n    \"input,output\",\r\n    \"middle,output\",\r\n    \"input,middle,output\",\r\n]\r\n\r\nclass ClownStyle_Block_UNet:\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"mode\":          (STYLE_MODES, {\"default\": \"scattersort\"},),\r\n                    #\"apply_to\":      ([\"img\", 
\"img+txt\",\"img,txt\", \"txt\",], {\"default\": \"img+txt\"},),\r\n                    \"block_type\":    (UNET_BLOCK_TYPES, {\"default\": \"input\"},),\r\n                    \"block_list\":    (\"STRING\", {\"default\": \"all\", \"multiline\": True}),\r\n                    \"block_weights\": (\"STRING\", {\"default\": \"1.0\", \"multiline\": True}),\r\n                    \r\n                    \"resample\": (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"res\":      (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"spatial\":  (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n\r\n                    \"tile_h\":        (\"INT\",    {\"default\": 128, \"min\": 16, \"max\": 10000, \"step\": 16, \"tooltip\": \"Tile size for tiled modes. Lower values will transfer composition more effectively. Dimensions of image must be divisible by this value.\"}),\r\n                    \"tile_w\":        (\"INT\",    {\"default\": 128, \"min\": 16, \"max\": 10000, \"step\": 16, \"tooltip\": \"Tile size for tiled modes. Lower values will transfer composition more effectively. Dimensions of image must be divisible by this value.\"}),\r\n\r\n                    \"invert_mask\":   (\"BOOLEAN\",{\"default\": False}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"mask\":        (\"MASK\", ),\r\n                    \"blocks\":      (\"BLOCKS\", ),\r\n                    }  \r\n                }\r\n    \r\n    RETURN_TYPES = (\"BLOCKS\",)\r\n    RETURN_NAMES = (\"blocks\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_extensions\"\r\n\r\n    def main(self,\r\n            mode        = \"scattersort\",\r\n            noise_mode  = \"update\",\r\n            apply_to    = \"\",\r\n            block_type  = \"input\",\r\n            block_list    = \"all\",\r\n            block_weights = \"1.0\",\r\n            \r\n            resample = 0.0,\r\n            res      = 0.0,\r\n            spatial  = 0.0,\r\n\r\n            tile_h      = 128,\r\n            tile_w      = 128,\r\n\r\n            invert_mask = False,\r\n\r\n            mask        = None,\r\n            blocks      = None,\r\n            ):\r\n        \r\n        #mask = 1-mask if mask is not None else None\r\n\r\n        blocks = copy.deepcopy(blocks) if blocks is not None else {}\r\n        \r\n        block_weights = parse_range_string(block_weights)\r\n        \r\n        if len(block_weights) == 0:\r\n            block_weights.append(0.0)\r\n            \r\n        if len(block_weights) == 1:\r\n            block_weights = block_weights * 100\r\n            \r\n        if type(block_weights[0]) == int:\r\n            block_weights = [float(val) for val in block_weights]\r\n        \r\n        if    \"all\" in block_list:\r\n            block_list = [val for val in range(100)]\r\n            if len(block_weights) == 1:\r\n                block_weights = [block_weights[0]] * 100\r\n        elif 
\"even\" in block_list:\r\n            block_list = [val for val in range(0, 100, 2)]\r\n            if len(block_weights) == 1:\r\n                block_weights = [block_weights[0]] * 100\r\n        elif  \"odd\" in block_list:\r\n            block_list = [val for val in range(1, 100, 2)]\r\n            if len(block_weights) == 1:\r\n                block_weights = [block_weights[0]] * 100\r\n        else:\r\n            block_list  = parse_range_string_int(block_list)\r\n            \r\n            weights_expanded = [0.0] * 100\r\n            for b, w in zip(block_list, block_weights):\r\n                weights_expanded[b] = w\r\n            block_weights = weights_expanded\r\n        \r\n        StyleMMDiT = blocks.get('StyleMMDiT')\r\n        if StyleMMDiT is None:\r\n            StyleMMDiT = StyleUNet_Model()\r\n        \r\n        weights = {\r\n            \"resample\": resample,\r\n            \"res\":      res,\r\n            \"spatial\":  spatial,\r\n\r\n            \"h_tile\"       : tile_h // 16,\r\n            \"w_tile\"       : tile_w // 16,\r\n        }\r\n        \r\n        block_types = block_type.split(\",\")\r\n        \r\n        for block_type in block_types:\r\n        \r\n            if   block_type == \"input\":\r\n                style_blocks = StyleMMDiT.input_blocks\r\n            elif block_type == \"middle\":\r\n                style_blocks = StyleMMDiT.middle_blocks\r\n            elif block_type == \"output\":\r\n                style_blocks = StyleMMDiT.output_blocks\r\n                \r\n            for bid in block_list:\r\n                block = style_blocks[bid]\r\n                scaled_weights = {\r\n                    k: (v * block_weights[bid]) if isinstance(v, float) else v\r\n                    for k, v in weights.items()\r\n                }\r\n\r\n                block.set_mode(mode)\r\n                block.set_weights(**scaled_weights)\r\n                block.apply_to = [apply_to]\r\n\r\n                block.mask = [mask]\r\n\r\n        blocks['StyleMMDiT'] = StyleMMDiT\r\n\r\n        return (blocks, )\r\n\r\n\r\n\r\n\r\nclass ClownStyle_Attn_UNet:\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"mode\":          (STYLE_MODES, {\"default\": \"scattersort\"},),\r\n                    \"apply_to\":      ([\"self\",\"self,cross\",\"cross\"], {\"default\": \"self\"},),\r\n                    \"block_type\":    (UNET_BLOCK_TYPES, {\"default\": \"input\"},),\r\n                    \"block_list\":    (\"STRING\", {\"default\": \"all\", \"multiline\": True}),\r\n                    \"block_weights\": (\"STRING\", {\"default\": \"1.0\", \"multiline\": True}),\r\n                    \r\n                    \"q_proj\": (\"FLOAT\", {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"k_proj\": (\"FLOAT\", {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"v_proj\": (\"FLOAT\", {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. 
Skips interpolation if set to 1.0.\"}),\r\n\r\n                    \"out\":    (\"FLOAT\", {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n\r\n                    \"tile_h\": (\"INT\",   {\"default\": 128, \"min\": 16, \"max\": 10000, \"step\": 16, \"tooltip\": \"Tile size for tiled modes. Lower values will transfer composition more effectively. Dimensions of image must be divisible by this value.\"}),\r\n                    \"tile_w\": (\"INT\",   {\"default\": 128, \"min\": 16, \"max\": 10000, \"step\": 16, \"tooltip\": \"Tile size for tiled modes. Lower values will transfer composition more effectively. Dimensions of image must be divisible by this value.\"}),\r\n\r\n                    \"invert_mask\":   (\"BOOLEAN\", {\"default\": False}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"mask\":   (\"MASK\", ),\r\n                    \"blocks\": (\"BLOCKS\", ),\r\n                    }  \r\n                }\r\n    \r\n    RETURN_TYPES = (\"BLOCKS\",)\r\n    RETURN_NAMES = (\"blocks\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_extensions\"\r\n\r\n    def main(self,\r\n            mode        = \"scattersort\",\r\n            noise_mode  = \"update\",\r\n            apply_to    = \"self\",\r\n            block_type  = \"input\",\r\n            block_list    = \"all\",\r\n            block_weights = \"1.0\",\r\n            \r\n            q_proj = 0.0,\r\n            k_proj = 0.0,\r\n            v_proj = 0.0,\r\n\r\n            out    = 0.0,\r\n            \r\n            tile_h = 128,\r\n            tile_w = 128,\r\n\r\n            invert_mask = False,\r\n\r\n            mask        = None,\r\n            blocks      = None,\r\n            ):\r\n        \r\n        #mask = 1-mask if mask is not None else None\r\n\r\n        blocks = copy.deepcopy(blocks) if blocks is not None else {}\r\n        \r\n        block_weights = parse_range_string(block_weights)\r\n        \r\n        if len(block_weights) == 0:\r\n            block_weights.append(0.0)\r\n            \r\n        if len(block_weights) == 1:\r\n            block_weights = block_weights * 100\r\n            \r\n        if type(block_weights[0]) == int:\r\n            block_weights = [float(val) for val in block_weights]\r\n        \r\n        if \"all\" in block_list:\r\n            block_list = [val for val in range(100)]\r\n            if len(block_weights) == 1:\r\n                block_weights = [block_weights[0]] * 100\r\n        elif \"even\" in block_list:\r\n            block_list = [val for val in range(0, 100, 2)]\r\n            if len(block_weights) == 1:\r\n                block_weights = [block_weights[0]] * 100\r\n        elif \"odd\" in block_list:\r\n            block_list = [val for val in range(1, 100, 2)]\r\n            if len(block_weights) == 1:\r\n                block_weights = [block_weights[0]] * 100\r\n        else:\r\n            block_list  = parse_range_string_int(block_list)\r\n            \r\n            weights_expanded = [0.0] * 100\r\n            for b, w in zip(block_list, block_weights):\r\n                weights_expanded[b] = w\r\n            block_weights = weights_expanded\r\n        \r\n        StyleMMDiT = blocks.get('StyleMMDiT')\r\n        if StyleMMDiT is None:\r\n            StyleMMDiT = StyleUNet_Model()\r\n        \r\n        
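# NOTE: tile dims here use // 8 (UNet latent downscale); ClownStyle_Block_UNet above uses // 16 like the MMDiT nodes, which may be an oversight\r\n        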
weights = {\r\n            \"q_proj\": q_proj,\r\n            \"k_proj\": k_proj,\r\n            \"v_proj\": v_proj,\r\n            \"out\"   : out,\r\n            \r\n            \"h_tile\": tile_h // 8,\r\n            \"w_tile\": tile_w // 8,\r\n        }\r\n        \r\n        block_types = block_type.split(\",\")\r\n        \r\n        for block_type in block_types:\r\n            \r\n            if   block_type == \"input\":\r\n                style_blocks = StyleMMDiT.input_blocks\r\n            elif block_type == \"middle\":\r\n                style_blocks = StyleMMDiT.middle_blocks\r\n            elif block_type == \"output\":\r\n                style_blocks = StyleMMDiT.output_blocks\r\n            \r\n            for bid in block_list:\r\n                block = style_blocks[bid]\r\n                scaled_weights = {\r\n                    k: (v * block_weights[bid]) if isinstance(v, float) else v\r\n                    for k, v in weights.items()\r\n                }\r\n\r\n                #for tfmr_block in block.spatial_block.TFMR:\r\n                tfmr_block = block.spatial_block.TFMR\r\n                if \"self\" in apply_to:\r\n                    tfmr_block.ATTN1.set_mode(mode)\r\n                    tfmr_block.ATTN1.set_weights(**scaled_weights)\r\n                    tfmr_block.ATTN1.apply_to = [apply_to]\r\n\r\n                if \"cross\" in apply_to:\r\n                    tfmr_block.ATTN2.set_mode(mode)\r\n                    tfmr_block.ATTN2.set_weights(**scaled_weights)\r\n                    tfmr_block.ATTN2.apply_to = [apply_to]\r\n                \r\n                block.attn_mask = [mask]\r\n\r\n        blocks['StyleMMDiT'] = StyleMMDiT\r\n\r\n        return (blocks, )\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nclass ClownStyle_ResBlock_UNet:\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"mode\":          (STYLE_MODES, {\"default\": \"scattersort\"},),\r\n                    #\"apply_to\":      ([\"img\", \"img+txt\",\"img,txt\", \"txt\",], {\"default\": \"img+txt\"},),\r\n                    \"block_type\":    (UNET_BLOCK_TYPES, {\"default\": \"input\"},),\r\n                    \"block_list\":    (\"STRING\", {\"default\": \"all\", \"multiline\": True}),\r\n                    \"block_weights\": (\"STRING\", {\"default\": \"1.0\", \"multiline\": True}),\r\n                    \r\n                    \"in_norm\": (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"in_silu\": (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"in_conv\": (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n\r\n                    \"emb_silu\": (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. 
Skips interpolation if set to 1.0.\"}),\r\n                    \"emb_linear\": (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"emb_res\": (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n\r\n                    \"out_norm\": (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"out_silu\": (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"out_conv\": (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n\r\n                    \"residual\": (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n\r\n                    \"tile_h\":        (\"INT\",    {\"default\": 128, \"min\": 16, \"max\": 10000, \"step\": 16, \"tooltip\": \"Tile size for tiled modes. Lower values will transfer composition more effectively. Dimensions of image must be divisible by this value.\"}),\r\n                    \"tile_w\":        (\"INT\",    {\"default\": 128, \"min\": 16, \"max\": 10000, \"step\": 16, \"tooltip\": \"Tile size for tiled modes. Lower values will transfer composition more effectively. 
Dimensions of image must be divisible by this value.\"}),\r\n\r\n                    \"invert_mask\":   (\"BOOLEAN\",{\"default\": False}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"mask\":        (\"MASK\", ),\r\n                    \"blocks\":      (\"BLOCKS\", ),\r\n                    }  \r\n                }\r\n    \r\n    RETURN_TYPES = (\"BLOCKS\",)\r\n    RETURN_NAMES = (\"blocks\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_extensions\"\r\n\r\n    def main(self,\r\n            mode        = \"scattersort\",\r\n            noise_mode  = \"update\",\r\n            apply_to    = \"\",\r\n            block_type  = \"input\",\r\n            block_list    = \"all\",\r\n            block_weights = \"1.0\",\r\n            \r\n            in_norm  = 0.0,\r\n            in_silu  = 0.0,\r\n            in_conv  = 0.0,\r\n            \r\n            emb_silu   = 0.0,\r\n            emb_linear = 0.0,\r\n            emb_res    = 0.0,\r\n            \r\n            out_norm = 0.0,\r\n            out_silu = 0.0,\r\n            out_conv = 0.0,\r\n            \r\n            residual = 0.0,\r\n\r\n            tile_h      = 128,\r\n            tile_w      = 128,\r\n\r\n            invert_mask = False,\r\n\r\n            mask        = None,\r\n            blocks      = None,\r\n            ):\r\n        \r\n        #mask = 1-mask if mask is not None else None\r\n\r\n        blocks = copy.deepcopy(blocks) if blocks is not None else {}\r\n        \r\n        block_weights = parse_range_string(block_weights)\r\n        \r\n        if len(block_weights) == 0:\r\n            block_weights.append(0.0)\r\n            \r\n        if len(block_weights) == 1:\r\n            block_weights = block_weights * 100\r\n            \r\n        if type(block_weights[0]) == int:\r\n            block_weights = [float(val) for val in block_weights]\r\n        \r\n        if    \"all\" in block_list:\r\n            block_list = [val for val in range(100)]\r\n            if len(block_weights) == 1:\r\n                block_weights = [block_weights[0]] * 100\r\n        elif \"even\" in block_list:\r\n            block_list = [val for val in range(0, 100, 2)]\r\n            if len(block_weights) == 1:\r\n                block_weights = [block_weights[0]] * 100\r\n        elif  \"odd\" in block_list:\r\n            block_list = [val for val in range(1, 100, 2)]\r\n            if len(block_weights) == 1:\r\n                block_weights = [block_weights[0]] * 100\r\n        else:\r\n            block_list  = parse_range_string_int(block_list)\r\n            \r\n            weights_expanded = [0.0] * 100\r\n            for b, w in zip(block_list, block_weights):\r\n                weights_expanded[b] = w\r\n            block_weights = weights_expanded\r\n        \r\n        StyleMMDiT = blocks.get('StyleMMDiT')\r\n        if StyleMMDiT is None:\r\n            StyleMMDiT = StyleUNet_Model()\r\n        \r\n        weights = {\r\n            \"in_norm\": in_norm,\r\n            \"in_silu\": in_silu,\r\n            \"in_conv\": in_conv,\r\n            \r\n            \"emb_silu\":   emb_silu,\r\n            \"emb_linear\": emb_linear,\r\n            \"emb_res\": emb_res,\r\n            \r\n            \"out_norm\": out_norm,\r\n            \"out_silu\": out_silu,\r\n            \"out_conv\": out_conv,\r\n            \r\n            \"residual\": residual,\r\n\r\n            \"h_tile\": tile_h // 8,\r\n            \"w_tile\": tile_w // 
8,\r\n        }\r\n        \r\n        block_types = block_type.split(\",\")\r\n        \r\n        for block_type in block_types:\r\n        \r\n            if   block_type == \"input\":\r\n                style_blocks = StyleMMDiT.input_blocks\r\n            elif block_type == \"middle\":\r\n                style_blocks = StyleMMDiT.middle_blocks\r\n            elif block_type == \"output\":\r\n                style_blocks = StyleMMDiT.output_blocks\r\n                \r\n            for bid in block_list:\r\n                block = style_blocks[bid]\r\n                scaled_weights = {\r\n                    k: (v * block_weights[bid]) if isinstance(v, float) else v\r\n                    for k, v in weights.items()\r\n                }\r\n\r\n                block.res_block.set_mode(mode)\r\n                block.res_block.set_weights(**scaled_weights)\r\n                block.res_block.apply_to = [apply_to]\r\n                block.res_block.mask = [mask]\r\n\r\n        blocks['StyleMMDiT'] = StyleMMDiT\r\n\r\n        return (blocks, )\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nclass ClownStyle_SpatialBlock_UNet:\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"mode\":          (STYLE_MODES, {\"default\": \"scattersort\"},),\r\n                    #\"apply_to\":      ([\"img\", \"img+txt\",\"img,txt\", \"txt\",], {\"default\": \"img+txt\"},),\r\n                    \"block_type\":    (UNET_BLOCK_TYPES, {\"default\": \"input\"},),\r\n                    \"block_list\":    (\"STRING\", {\"default\": \"all\", \"multiline\": True}),\r\n                    \"block_weights\": (\"STRING\", {\"default\": \"1.0\", \"multiline\": True}),\r\n                    \r\n                    \"norm_in\":     (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"proj_in\":     (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"transformer_block\": (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"transformer\": (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"proj_out\":    (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"res\":         (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n\r\n                    \"tile_h\":      (\"INT\",    {\"default\": 128, \"min\": 16, \"max\": 10000, \"step\": 16, \"tooltip\": \"Tile size for tiled modes. Lower values will transfer composition more effectively. 
Dimensions of image must be divisible by this value.\"}),\r\n                    \"tile_w\":      (\"INT\",    {\"default\": 128, \"min\": 16, \"max\": 10000, \"step\": 16, \"tooltip\": \"Tile size for tiled modes. Lower values will transfer composition more effectively. Dimensions of image must be divisible by this value.\"}),\r\n\r\n                    \"invert_mask\": (\"BOOLEAN\",{\"default\": False}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"mask\":        (\"MASK\", ),\r\n                    \"blocks\":      (\"BLOCKS\", ),\r\n                    }  \r\n                }\r\n    \r\n    RETURN_TYPES = (\"BLOCKS\",)\r\n    RETURN_NAMES = (\"blocks\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_extensions\"\r\n\r\n    def main(self,\r\n            mode        = \"scattersort\",\r\n            noise_mode  = \"update\",\r\n            apply_to    = \"\",\r\n            block_type  = \"input\",\r\n            block_list    = \"all\",\r\n            block_weights = \"1.0\",\r\n            \r\n            norm_in     = 0.0,\r\n            proj_in     = 0.0,\r\n            transformer_block = 0.0,\r\n            transformer = 0.0,\r\n            proj_out    = 0.0,\r\n            res         = 0.0,\r\n\r\n            tile_h      = 128,\r\n            tile_w      = 128,\r\n\r\n            invert_mask = False,\r\n\r\n            mask        = None,\r\n            blocks      = None,\r\n            ):\r\n        \r\n        spatial_norm_in     = norm_in\r\n        spatial_proj_in     = proj_in\r\n        spatial_transformer_block = transformer_block\r\n        spatial_transformer = transformer\r\n        spatial_proj_out    = proj_out\r\n        spatial_res         = res\r\n        \r\n        #mask = 1-mask if mask is not None else None\r\n\r\n        blocks = copy.deepcopy(blocks) if blocks is not None else {}\r\n        \r\n        block_weights = parse_range_string(block_weights)\r\n        \r\n        if len(block_weights) == 0:\r\n            block_weights.append(0.0)\r\n            \r\n        if len(block_weights) == 1:\r\n            block_weights = block_weights * 100\r\n            \r\n        if type(block_weights[0]) == int:\r\n            block_weights = [float(val) for val in block_weights]\r\n        \r\n        if    \"all\" in block_list:\r\n            block_list = [val for val in range(100)]\r\n            if len(block_weights) == 1:\r\n                block_weights = [block_weights[0]] * 100\r\n        elif \"even\" in block_list:\r\n            block_list = [val for val in range(0, 100, 2)]\r\n            if len(block_weights) == 1:\r\n                block_weights = [block_weights[0]] * 100\r\n        elif  \"odd\" in block_list:\r\n            block_list = [val for val in range(1, 100, 2)]\r\n            if len(block_weights) == 1:\r\n                block_weights = [block_weights[0]] * 100\r\n        else:\r\n            block_list  = parse_range_string_int(block_list)\r\n            \r\n            weights_expanded = [0.0] * 100\r\n            for b, w in zip(block_list, block_weights):\r\n                weights_expanded[b] = w\r\n            block_weights = weights_expanded\r\n        \r\n        StyleMMDiT = blocks.get('StyleMMDiT')\r\n        if StyleMMDiT is None:\r\n            StyleMMDiT = StyleUNet_Model()\r\n        \r\n        weights = {\r\n            \"spatial_norm_in\"    : spatial_norm_in,\r\n            \"spatial_proj_in\"    : spatial_proj_in,\r\n            
\"spatial_transformer_block\": spatial_transformer_block,\r\n            \"spatial_transformer\": spatial_transformer,\r\n            \"spatial_proj_out\"   : spatial_proj_out,\r\n            \"spatial_res\"        : spatial_res,\r\n\r\n            \"h_tile\": tile_h // 8,\r\n            \"w_tile\": tile_w // 8,\r\n        }\r\n        \r\n        block_types = block_type.split(\",\")\r\n        \r\n        for block_type in block_types:\r\n        \r\n            if   block_type == \"input\":\r\n                style_blocks = StyleMMDiT.input_blocks\r\n            elif block_type == \"middle\":\r\n                style_blocks = StyleMMDiT.middle_blocks\r\n            elif block_type == \"output\":\r\n                style_blocks = StyleMMDiT.output_blocks\r\n                \r\n            for bid in block_list:\r\n                block = style_blocks[bid]\r\n                scaled_weights = {\r\n                    k: (v * block_weights[bid]) if isinstance(v, float) else v\r\n                    for k, v in weights.items()\r\n                }\r\n\r\n                block.spatial_block.set_mode(mode)\r\n                block.spatial_block.set_weights(**scaled_weights)\r\n                block.spatial_block.apply_to = [apply_to]\r\n\r\n                block.spatial_block.mask = [mask]\r\n\r\n        blocks['StyleMMDiT'] = StyleMMDiT\r\n\r\n        return (blocks, )\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nclass ClownStyle_TransformerBlock_UNet:\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"mode\":          (STYLE_MODES, {\"default\": \"scattersort\"},),\r\n                    #\"apply_to\":      ([\"img\", \"img+txt\",\"img,txt\", \"txt\",], {\"default\": \"img+txt\"},),\r\n                    \"block_type\":    (UNET_BLOCK_TYPES, {\"default\": \"input\"},),\r\n                    \"block_list\":    (\"STRING\", {\"default\": \"all\", \"multiline\": True}),\r\n                    \"block_weights\": (\"STRING\", {\"default\": \"1.0\", \"multiline\": True}),\r\n                    \r\n                    \"norm1\": (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"norm2\": (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"norm3\": (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \r\n                    \"self_attn\":  (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"cross_attn\": (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. 
Skips interpolation if set to 1.0.\"}),\r\n                    \"ff\":         (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n\r\n                    \"self_attn_res\":  (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"cross_attn_res\": (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n                    \"ff_res\":         (\"FLOAT\",  {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Strength of effect on layer; skips extra calculation if set to 0.0. Skips interpolation if set to 1.0.\"}),\r\n\r\n                    \"tile_h\":        (\"INT\",    {\"default\": 128, \"min\": 16, \"max\": 10000, \"step\": 16, \"tooltip\": \"Tile size for tiled modes. Lower values will transfer composition more effectively. Dimensions of image must be divisible by this value.\"}),\r\n                    \"tile_w\":        (\"INT\",    {\"default\": 128, \"min\": 16, \"max\": 10000, \"step\": 16, \"tooltip\": \"Tile size for tiled modes. Lower values will transfer composition more effectively. Dimensions of image must be divisible by this value.\"}),\r\n\r\n                    \"invert_mask\":   (\"BOOLEAN\",{\"default\": False}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"mask\":        (\"MASK\", ),\r\n                    \"blocks\":      (\"BLOCKS\", ),\r\n                    }  \r\n                }\r\n    \r\n    RETURN_TYPES = (\"BLOCKS\",)\r\n    RETURN_NAMES = (\"blocks\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_extensions\"\r\n\r\n    def main(self,\r\n            mode        = \"scattersort\",\r\n            noise_mode  = \"update\",\r\n            apply_to    = \"\",\r\n            block_type  = \"input\",\r\n            block_list    = \"all\",\r\n            block_weights = \"1.0\",\r\n            \r\n            norm1      = 0.0,\r\n            norm2      = 0.0,\r\n            norm3      = 0.0,\r\n            \r\n            self_attn  = 0.0,\r\n            cross_attn = 0.0,\r\n            ff         = 0.0,\r\n            \r\n            self_attn_res  = 0.0,\r\n            cross_attn_res = 0.0,\r\n            ff_res         = 0.0,\r\n\r\n            tile_h      = 128,\r\n            tile_w      = 128,\r\n\r\n            invert_mask = False,\r\n\r\n            mask        = None,\r\n            blocks      = None,\r\n            ):\r\n        \r\n        #mask = 1-mask if mask is not None else None\r\n\r\n        blocks = copy.deepcopy(blocks) if blocks is not None else {}\r\n        \r\n        block_weights = parse_range_string(block_weights)\r\n        \r\n        if len(block_weights) == 0:\r\n            block_weights.append(0.0)\r\n            \r\n        if len(block_weights) == 1:\r\n            block_weights = block_weights * 100\r\n            \r\n        if type(block_weights[0]) == int:\r\n            block_weights = [float(val) for val in block_weights]\r\n        \r\n        if    \"all\" in 
block_list:\r\n            block_list = [val for val in range(100)]\r\n            if len(block_weights) == 1:\r\n                block_weights = [block_weights[0]] * 100\r\n        elif \"even\" in block_list:\r\n            block_list = [val for val in range(0, 100, 2)]\r\n            if len(block_weights) == 1:\r\n                block_weights = [block_weights[0]] * 100\r\n        elif  \"odd\" in block_list:\r\n            block_list = [val for val in range(1, 100, 2)]\r\n            if len(block_weights) == 1:\r\n                block_weights = [block_weights[0]] * 100\r\n        else:\r\n            block_list  = parse_range_string_int(block_list)\r\n            \r\n            weights_expanded = [0.0] * 100\r\n            for b, w in zip(block_list, block_weights):\r\n                weights_expanded[b] = w\r\n            block_weights = weights_expanded\r\n        \r\n        StyleMMDiT = blocks.get('StyleMMDiT')\r\n        if StyleMMDiT is None:\r\n            StyleMMDiT = StyleUNet_Model()\r\n        \r\n        weights = {\r\n            \"norm1\"     : norm1,\r\n            \"norm2\"     : norm2,\r\n            \"norm3\"     : norm3,\r\n            \r\n            \"self_attn\" : self_attn,\r\n            \"cross_attn\": cross_attn,\r\n            \"ff\"        : ff,\r\n            \r\n            \"self_attn_res\" : self_attn_res,\r\n            \"cross_attn_res\": cross_attn_res,\r\n            \"ff_res\"        : ff_res,\r\n\r\n            \"h_tile\": tile_h // 8,\r\n            \"w_tile\": tile_w // 8,\r\n        }\r\n        \r\n        block_types = block_type.split(\",\")\r\n        \r\n        for block_type in block_types:\r\n        \r\n            if   block_type == \"input\":\r\n                style_blocks = StyleMMDiT.input_blocks\r\n            elif block_type == \"middle\":\r\n                style_blocks = StyleMMDiT.middle_blocks\r\n            elif block_type == \"output\":\r\n                style_blocks = StyleMMDiT.output_blocks\r\n                \r\n            for bid in block_list:\r\n                block = style_blocks[bid]\r\n                scaled_weights = {\r\n                    k: (v * block_weights[bid]) if isinstance(v, float) else v\r\n                    for k, v in weights.items()\r\n                }\r\n\r\n                block.spatial_block.TFMR.set_mode(mode)\r\n                block.spatial_block.TFMR.set_weights(**scaled_weights)\r\n                block.spatial_block.TFMR.apply_to = [apply_to]\r\n\r\n                block.spatial_block.TFMR.mask = [mask]\r\n\r\n        blocks['StyleMMDiT'] = StyleMMDiT\r\n\r\n        return (blocks, )\r\n\r\n\r\n\r\n"
  },
  {
    "path": "chroma/layers.py",
    "content": "import torch\nfrom torch import Tensor, nn\n\n#from comfy.ldm.flux.math import attention\nfrom comfy.ldm.flux.layers import (\n    MLPEmbedder,\n    RMSNorm,\n    QKNorm,\n    SelfAttention,\n    ModulationOut,\n)\n\nfrom .math import attention, rope, apply_rope\n\nclass ChromaModulationOut(ModulationOut):\n    @classmethod\n    def from_offset(cls, tensor: torch.Tensor, offset: int = 0) -> ModulationOut:\n        return cls(\n            shift=tensor[:, offset : offset + 1, :],\n            scale=tensor[:, offset + 1 : offset + 2, :],\n            gate=tensor[:, offset + 2 : offset + 3, :],\n        )\n\n\n\n\nclass Approximator(nn.Module):\n    def __init__(self, in_dim: int, out_dim: int, hidden_dim: int, n_layers = 5, dtype=None, device=None, operations=None):\n        super().__init__()\n        self.in_proj = operations.Linear(in_dim, hidden_dim, bias=True, dtype=dtype, device=device)\n        self.layers = nn.ModuleList([MLPEmbedder(hidden_dim, hidden_dim, dtype=dtype, device=device, operations=operations) for x in range( n_layers)])\n        self.norms = nn.ModuleList([RMSNorm(hidden_dim, dtype=dtype, device=device, operations=operations) for x in range( n_layers)])\n        self.out_proj = operations.Linear(hidden_dim, out_dim, dtype=dtype, device=device)\n\n    @property\n    def device(self):\n        # Get the device of the module (assumes all parameters are on the same device)\n        return next(self.parameters()).device\n\n    def forward(self, x: Tensor) -> Tensor:\n        x = self.in_proj(x)\n\n        for layer, norms in zip(self.layers, self.norms):\n            x = x + layer(norms(x))\n\n        x = self.out_proj(x)\n\n        return x\n\n\nclass ReChromaDoubleStreamBlock(nn.Module):\n    def __init__(self, hidden_size: int, num_heads: int, mlp_ratio: float, qkv_bias: bool = False, flipped_img_txt=False, dtype=None, device=None, operations=None):\n        super().__init__()\n\n        mlp_hidden_dim = int(hidden_size * mlp_ratio)\n        self.num_heads = num_heads\n        self.hidden_size = hidden_size\n        self.img_norm1 = operations.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6, dtype=dtype, device=device)\n        self.img_attn = SelfAttention(dim=hidden_size, num_heads=num_heads, qkv_bias=qkv_bias, dtype=dtype, device=device, operations=operations)\n\n        self.img_norm2 = operations.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6, dtype=dtype, device=device)\n        self.img_mlp = nn.Sequential(\n            operations.Linear(hidden_size, mlp_hidden_dim, bias=True, dtype=dtype, device=device),\n            nn.GELU(approximate=\"tanh\"),\n            operations.Linear(mlp_hidden_dim, hidden_size, bias=True, dtype=dtype, device=device),\n        )\n\n        self.txt_norm1 = operations.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6, dtype=dtype, device=device)\n        self.txt_attn = SelfAttention(dim=hidden_size, num_heads=num_heads, qkv_bias=qkv_bias, dtype=dtype, device=device, operations=operations)\n\n        self.txt_norm2 = operations.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6, dtype=dtype, device=device)\n        self.txt_mlp = nn.Sequential(\n            operations.Linear(hidden_size, mlp_hidden_dim, bias=True, dtype=dtype, device=device),\n            nn.GELU(approximate=\"tanh\"),\n            operations.Linear(mlp_hidden_dim, hidden_size, bias=True, dtype=dtype, device=device),\n        )\n        self.flipped_img_txt = flipped_img_txt\n\n    def forward(self, img: Tensor, txt: 
Tensor, pe: Tensor, vec: Tensor, attn_mask=None):\n        (img_mod1, img_mod2), (txt_mod1, txt_mod2) = vec\n\n        # prepare image for attention\n        img_modulated = self.img_norm1(img)\n        img_modulated = (1 + img_mod1.scale) * img_modulated + img_mod1.shift\n        img_qkv = self.img_attn.qkv(img_modulated)\n        img_q, img_k, img_v = img_qkv.view(img_qkv.shape[0], img_qkv.shape[1], 3, self.num_heads, -1).permute(2, 0, 3, 1, 4)\n        img_q, img_k = self.img_attn.norm(img_q, img_k, img_v)\n\n        # prepare txt for attention\n        txt_modulated = self.txt_norm1(txt)\n        txt_modulated = (1 + txt_mod1.scale) * txt_modulated + txt_mod1.shift\n        txt_qkv = self.txt_attn.qkv(txt_modulated)\n        txt_q, txt_k, txt_v = txt_qkv.view(txt_qkv.shape[0], txt_qkv.shape[1], 3, self.num_heads, -1).permute(2, 0, 3, 1, 4)\n        txt_q, txt_k = self.txt_attn.norm(txt_q, txt_k, txt_v)\n\n        # run actual attention\n        attn = attention(torch.cat((txt_q, img_q), dim=2),\n                         torch.cat((txt_k, img_k), dim=2),\n                         torch.cat((txt_v, img_v), dim=2),\n                         pe=pe, mask=attn_mask)\n\n        txt_attn, img_attn = attn[:, : txt.shape[1]], attn[:, txt.shape[1] :]\n\n        # calculate the img blocks\n        img = img + img_mod1.gate * self.img_attn.proj(img_attn)\n        img = img + img_mod2.gate * self.img_mlp((1 + img_mod2.scale) * self.img_norm2(img) + img_mod2.shift)\n\n        # calculate the txt blocks\n        txt += txt_mod1.gate * self.txt_attn.proj(txt_attn)\n        txt += txt_mod2.gate * self.txt_mlp((1 + txt_mod2.scale) * self.txt_norm2(txt) + txt_mod2.shift)\n\n        if txt.dtype == torch.float16:\n            txt = torch.nan_to_num(txt, nan=0.0, posinf=65504, neginf=-65504)\n\n        return img, txt\n\n\nclass ReChromaSingleStreamBlock(nn.Module):\n    \"\"\"\n    A DiT block with parallel linear layers as described in\n    https://arxiv.org/abs/2302.05442 and adapted modulation interface.\n    \"\"\"\n\n    def __init__(\n        self,\n        hidden_size: int,\n        num_heads: int,\n        mlp_ratio: float = 4.0,\n        qk_scale: float = None,\n        dtype=None,\n        device=None,\n        operations=None\n    ):\n        super().__init__()\n        self.hidden_dim = hidden_size\n        self.num_heads = num_heads\n        head_dim = hidden_size // num_heads\n        self.scale = qk_scale or head_dim**-0.5\n\n        self.mlp_hidden_dim = int(hidden_size * mlp_ratio)\n        # qkv and mlp_in\n        self.linear1 = operations.Linear(hidden_size, hidden_size * 3 + self.mlp_hidden_dim, dtype=dtype, device=device)\n        # proj and mlp_out\n        self.linear2 = operations.Linear(hidden_size + self.mlp_hidden_dim, hidden_size, dtype=dtype, device=device)\n\n        self.norm = QKNorm(head_dim, dtype=dtype, device=device, operations=operations)\n\n        self.hidden_size = hidden_size\n        self.pre_norm = operations.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6, dtype=dtype, device=device)\n\n        self.mlp_act = nn.GELU(approximate=\"tanh\")\n\n    def forward(self, x: Tensor, pe: Tensor, vec: Tensor, attn_mask=None) -> Tensor:\n        mod = vec\n        x_mod = (1 + mod.scale) * self.pre_norm(x) + mod.shift\n        qkv, mlp = torch.split(self.linear1(x_mod), [3 * self.hidden_size, self.mlp_hidden_dim], dim=-1)\n\n        q, k, v = qkv.view(qkv.shape[0], qkv.shape[1], 3, self.num_heads, -1).permute(2, 0, 3, 1, 4)\n        q, k = self.norm(q, k, v)\n\n     
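   # attention and the MLP run in parallel: linear1 above produced both qkv and\n        # the mlp stream, and linear2 below projects their concatenation back down\n     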
   # compute attention\n        attn = attention(q, k, v, pe=pe, mask=attn_mask)\n        # compute activation in mlp stream, cat again and run second linear layer\n        output = self.linear2(torch.cat((attn, self.mlp_act(mlp)), 2))\n        x += mod.gate * output\n        if x.dtype == torch.float16:\n            x = torch.nan_to_num(x, nan=0.0, posinf=65504, neginf=-65504)\n        return x\n\n\nclass LastLayer(nn.Module):\n    def __init__(self, hidden_size: int, patch_size: int, out_channels: int, dtype=None, device=None, operations=None):\n        super().__init__()\n        self.norm_final = operations.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6, dtype=dtype, device=device)\n        self.linear = operations.Linear(hidden_size, out_channels, bias=True, dtype=dtype, device=device)\n\n    def forward(self, x: Tensor, vec: Tensor) -> Tensor:\n        shift, scale = vec\n        shift = shift.squeeze(1)\n        scale = scale.squeeze(1)\n        x = (1 + scale[:, None, :]) * self.norm_final(x) + shift[:, None, :]\n        x = self.linear(x)\n        return x\n"
  },
  {
    "path": "chroma/math.py",
    "content": "import torch\nfrom einops import rearrange\nfrom torch import Tensor\nfrom comfy.ldm.modules.attention import attention_pytorch\n\nimport comfy.model_management\n\ndef attention(q: Tensor, k: Tensor, v: Tensor, pe: Tensor, mask=None) -> Tensor:\n    q, k = apply_rope(q, k, pe)\n\n    heads = q.shape[1]\n    x = attention_pytorch(q, k, v, heads, skip_reshape=True, mask=mask)\n    #if mask is not None:\n    #    x = attention_pytorch(q, k, v, heads, skip_reshape=True, mask=mask)\n    #else:\n    #    from comfy.ldm.modules.attention import optimized_attention\n    #    x = optimized_attention(q, k, v, heads, skip_reshape=True, mask=None)\n    return x\n\n\ndef rope(pos: Tensor, dim: int, theta: int) -> Tensor:\n    assert dim % 2 == 0\n    if comfy.model_management.is_device_mps(pos.device) or comfy.model_management.is_intel_xpu() or comfy.model_management.is_directml_enabled():\n        device = torch.device(\"cpu\")\n    else:\n        device = pos.device\n\n    scale = torch.linspace(0, (dim - 2) / dim, steps=dim//2, dtype=torch.float64, device=device)\n    omega = 1.0 / (theta**scale)\n    out = torch.einsum(\"...n,d->...nd\", pos.to(dtype=torch.float32, device=device), omega)\n    out = torch.stack([torch.cos(out), -torch.sin(out), torch.sin(out), torch.cos(out)], dim=-1)\n    out = rearrange(out, \"b n d (i j) -> b n d i j\", i=2, j=2)\n    return out.to(dtype=torch.float32, device=pos.device)\n\n\ndef apply_rope(xq: Tensor, xk: Tensor, freqs_cis: Tensor):\n    xq_ = xq.float().reshape(*xq.shape[:-1], -1, 1, 2)\n    xk_ = xk.float().reshape(*xk.shape[:-1], -1, 1, 2)\n    xq_out = freqs_cis[..., 0] * xq_[..., 0] + freqs_cis[..., 1] * xq_[..., 1]\n    xk_out = freqs_cis[..., 0] * xk_[..., 0] + freqs_cis[..., 1] * xk_[..., 1]\n    return xq_out.reshape(*xq.shape).type_as(xq), xk_out.reshape(*xk.shape).type_as(xk)\n\n\n"
  },
  {
    "path": "chroma/model.py",
    "content": "#Original code can be found on: https://github.com/black-forest-labs/flux\n\nfrom dataclasses import dataclass\n\nimport torch\nimport torch.nn.functional as F\nfrom torch import Tensor, nn\nfrom einops import rearrange, repeat\nimport comfy.ldm.common_dit\n\nfrom ..helper import ExtraOptions\n\nfrom ..latents import tile_latent, untile_latent, gaussian_blur_2d, median_blur_2d\nfrom ..style_transfer import apply_scattersort_masked, apply_scattersort_tiled, adain_seq_inplace, adain_patchwise_row_batch_med, adain_patchwise_row_batch\n\nfrom comfy.ldm.flux.layers import (\n    EmbedND,\n    timestep_embedding,\n)\n\nfrom .layers import (\n    ReChromaDoubleStreamBlock,\n    LastLayer,\n    ReChromaSingleStreamBlock,\n    Approximator,\n    ChromaModulationOut,\n)\n\n\n@dataclass\nclass ChromaParams:\n    in_channels        : int\n    out_channels       : int\n    context_in_dim     : int\n    hidden_size        : int\n    mlp_ratio          : float\n    num_heads          : int\n    depth              : int\n    depth_single_blocks: int\n    axes_dim           : list\n    theta              : int\n    patch_size         : int\n    qkv_bias           : bool\n    in_dim             : int\n    out_dim            : int\n    hidden_dim         : int\n    n_layers           : int\n\n\n\n\nclass ReChroma(nn.Module):\n    \"\"\"\n    Transformer model for flow matching on sequences.\n    \"\"\"\n\n    def __init__(self, image_model=None, final_layer=True, dtype=None, device=None, operations=None, **kwargs):\n        super().__init__()\n        self.dtype        = dtype\n        params            = ChromaParams(**kwargs)\n        self.params       = params\n        self.patch_size   = params.patch_size\n        self.in_channels  = params.in_channels\n        self.out_channels = params.out_channels\n        if params.hidden_size % params.num_heads != 0:\n            raise ValueError(\n                f\"Hidden size {params.hidden_size} must be divisible by num_heads {params.num_heads}\"\n            )\n        pe_dim = params.hidden_size // params.num_heads\n        if sum(params.axes_dim) != pe_dim:\n            raise ValueError(f\"Got {params.axes_dim} but expected positional dim {pe_dim}\")\n        self.hidden_size = params.hidden_size\n        self.num_heads   = params.num_heads\n        self.in_dim      = params.in_dim\n        self.out_dim     = params.out_dim\n        self.hidden_dim  = params.hidden_dim\n        self.n_layers    = params.n_layers\n        self.pe_embedder = EmbedND(dim=pe_dim, theta=params.theta, axes_dim=params.axes_dim)\n        self.img_in      = operations.Linear(self.in_channels, self.hidden_size, bias=True, dtype=dtype, device=device)\n        self.txt_in      = operations.Linear(params.context_in_dim, self.hidden_size, dtype=dtype, device=device)\n        # set as nn identity for now, will overwrite it later.\n        self.distilled_guidance_layer = Approximator(\n                    in_dim=self.in_dim,\n                    hidden_dim=self.hidden_dim,\n                    out_dim=self.out_dim,\n                    n_layers=self.n_layers,\n                    dtype=dtype, device=device, operations=operations\n                )\n\n\n        self.double_blocks = nn.ModuleList(\n            [\n                ReChromaDoubleStreamBlock(\n                    self.hidden_size,\n                    self.num_heads,\n                    mlp_ratio=params.mlp_ratio,\n                    qkv_bias=params.qkv_bias,\n                    dtype=dtype, device=device, 
operations=operations\n                )\n                for _ in range(params.depth)\n            ]\n        )\n\n        self.single_blocks = nn.ModuleList(\n            [\n                ReChromaSingleStreamBlock(self.hidden_size, self.num_heads, mlp_ratio=params.mlp_ratio, dtype=dtype, device=device, operations=operations)\n                for _ in range(params.depth_single_blocks)\n            ]\n        )\n\n        if final_layer:\n            self.final_layer = LastLayer(self.hidden_size, 1, self.out_channels, dtype=dtype, device=device, operations=operations)\n\n        self.skip_mmdit = []\n        self.skip_dit = []\n        self.lite = False\n\n    def get_modulations(self, tensor: torch.Tensor, block_type: str, *, idx: int = 0):\n        # This function slices up the modulations tensor which has the following layout:\n        #   single     : num_single_blocks * 3 elements\n        #   double_img : num_double_blocks * 6 elements\n        #   double_txt : num_double_blocks * 6 elements\n        #   final      : 2 elements\n        if block_type == \"final\":\n            return (tensor[:, -2:-1, :], tensor[:, -1:, :])\n        single_block_count = self.params.depth_single_blocks\n        double_block_count = self.params.depth\n        offset = 3 * idx\n        if block_type == \"single\":\n            return ChromaModulationOut.from_offset(tensor, offset)\n        # Double block modulations are 6 elements so we double 3 * idx.\n        offset *= 2\n        if block_type in {\"double_img\", \"double_txt\"}:\n            # Advance past the single block modulations.\n            offset += 3 * single_block_count\n            if block_type == \"double_txt\":\n                # Advance past the double block img modulations.\n                offset += 6 * double_block_count\n            return (\n                ChromaModulationOut.from_offset(tensor, offset),\n                ChromaModulationOut.from_offset(tensor, offset + 3),\n            )\n        raise ValueError(\"Bad block_type\")\n\n\n    def forward_blocks(\n        self,\n        img       : Tensor,\n        img_ids   : Tensor,\n        txt       : Tensor,\n        txt_ids   : Tensor,\n        timesteps : Tensor,\n        guidance  : Tensor  = None,\n        control             = None,\n        update_cross_attn   = None,\n        transformer_options ={},\n        attn_mask : Tensor  = None,\n        UNCOND : bool = False,\n    ) -> Tensor:\n        patches_replace = transformer_options.get(\"patches_replace\", {})\n        if img.ndim != 3 or txt.ndim != 3:\n            raise ValueError(\"Input img and txt tensors must have 3 dimensions.\")\n\n        # running on sequences img\n        img = self.img_in(img)\n\n        # distilled vector guidance\n        mod_index_length = 344\n        distill_timestep = timestep_embedding(timesteps.detach().clone(), 16).to(img.device, img.dtype)\n        # guidance = guidance *\n        distil_guidance = timestep_embedding(guidance.detach().clone(), 16).to(img.device, img.dtype)\n\n        # get all modulation index\n        modulation_index = timestep_embedding(torch.arange(mod_index_length), 32).to(img.device, img.dtype)\n        # we need to broadcast the modulation index here so each batch has all of the index\n        modulation_index = modulation_index.unsqueeze(0).repeat(img.shape[0], 1, 1).to(img.device, img.dtype)\n        # and we need to broadcast timestep and guidance along too\n        timestep_guidance = torch.cat([distill_timestep, distil_guidance], 
dim=1).unsqueeze(1).repeat(1, mod_index_length, 1).to(img.dtype).to(img.device, img.dtype)\n        # then and only then we could concatenate it together\n        input_vec = torch.cat([timestep_guidance, modulation_index], dim=-1).to(img.device, img.dtype)\n\n        mod_vectors = self.distilled_guidance_layer(input_vec)\n\n        txt = self.txt_in(txt)\n\n        ids = torch.cat((txt_ids, img_ids), dim=1)\n        pe = self.pe_embedder(ids)\n        \n        weight    = -1 * transformer_options.get(\"regional_conditioning_weight\", 0.0)\n        floor     = -1 * transformer_options.get(\"regional_conditioning_floor\",  0.0)\n        mask_zero = None\n        mask = None\n        \n        text_len = txt.shape[1] # mask_obj[0].text_len\n        \n        if not UNCOND and 'AttnMask' in transformer_options: # and weight != 0:\n            AttnMask = transformer_options['AttnMask']\n            mask = transformer_options['AttnMask'].attn_mask.mask.to('cuda')\n            if mask_zero is None:\n                mask_zero = torch.ones_like(mask)\n                img_len = transformer_options['AttnMask'].img_len\n                #mask_zero[:text_len, :text_len] = mask[:text_len, :text_len]\n                mask_zero[:text_len, :] = mask[:text_len, :]\n                mask_zero[:, :text_len] = mask[:, :text_len]\n            if weight == 0:\n                mask = None\n            \n        if UNCOND and 'AttnMask_neg' in transformer_options: # and weight != 0:\n            AttnMask = transformer_options['AttnMask_neg']\n            mask = transformer_options['AttnMask_neg'].attn_mask.mask.to('cuda')\n            if mask_zero is None:\n                mask_zero = torch.ones_like(mask)\n                img_len = transformer_options['AttnMask_neg'].img_len\n                #mask_zero[:text_len, :text_len] = mask[:text_len, :text_len]\n                mask_zero[:text_len, :] = mask[:text_len, :]\n                mask_zero[:, :text_len] = mask[:, :text_len]\n            if weight == 0:\n                mask = None\n            \n        elif UNCOND and 'AttnMask' in transformer_options:\n            AttnMask = transformer_options['AttnMask']\n            mask = transformer_options['AttnMask'].attn_mask.mask.to('cuda')\n            if mask_zero is None:\n                mask_zero = torch.ones_like(mask)\n                img_len = transformer_options['AttnMask'].img_len\n                #mask_zero[:text_len, :text_len] = mask[:text_len, :text_len]\n                mask_zero[:text_len, :] = mask[:text_len, :]\n                mask_zero[:, :text_len] = mask[:, :text_len]\n            if weight == 0:\n                mask = None\n\n        if mask is not None and not type(mask[0][0].item()) == bool:\n            mask = mask.to(img.dtype)\n        if mask_zero is not None and not type(mask_zero[0][0].item()) == bool:\n            mask_zero = mask_zero.to(img.dtype)\n\n        total_layers = len(self.double_blocks) + len(self.single_blocks)\n        \n        attn_mask = mask if attn_mask is None else attn_mask\n        \n        blocks_replace = patches_replace.get(\"dit\", {})\n        for i, block in enumerate(self.double_blocks):\n            if i not in self.skip_mmdit:\n                double_mod = (\n                    self.get_modulations(mod_vectors, \"double_img\", idx=i),\n                    self.get_modulations(mod_vectors, \"double_txt\", idx=i),\n                )\n                if (\"double_block\", i) in blocks_replace:\n                    def block_wrap(args):\n                        out = {}\n                        out[\"img\"], out[\"txt\"] = block( img      
 = args[\"img\"],\n                                                        txt       = args[\"txt\"],\n                                                        vec       = args[\"vec\"],\n                                                        pe        = args[\"pe\"],\n                                                        attn_mask = args.get(\"attn_mask\"))\n                        return out\n\n                    out = blocks_replace[(\"double_block\", i)]({ \"img\"             : img,\n                                                                \"txt\"             : txt,\n                                                                \"vec\"             : double_mod,\n                                                                \"pe\"              : pe,\n                                                                \"attn_mask\"       : attn_mask},\n                                                                {\"original_block\" : block_wrap})\n                    txt = out[\"txt\"] \n                    img = out[\"img\"]\n                else:\n\n                    if   weight > 0 and mask is not None and     weight  <=      i/total_layers:\n                        img, txt = block(img=img, txt=txt, vec=double_mod, pe=pe, attn_mask=mask_zero)\n                        \n                    elif (weight < 0 and mask is not None and abs(weight) <= (1 - i/total_layers)):\n                        img_tmpZ, txt_tmpZ = img.clone(), txt.clone()\n\n                        img_tmpZ, txt = block(img=img_tmpZ, txt=txt_tmpZ, vec=double_mod, pe=pe, attn_mask=mask)\n                        img, txt_tmpZ = block(img=img     , txt=txt     , vec=double_mod, pe=pe, attn_mask=mask_zero)\n                        \n                    elif floor > 0 and mask is not None and     floor  >=      i/total_layers:\n                        mask_tmp = mask.clone()\n                        mask_tmp[text_len:, text_len:] = 1.0\n                        img, txt = block(img=img, txt=txt, vec=double_mod, pe=pe, attn_mask=mask_tmp)\n                        \n                    elif floor < 0 and mask is not None and abs(floor) >= (1 - i/total_layers):\n                        mask_tmp = mask.clone()\n                        mask_tmp[text_len:, text_len:] = 1.0\n                        img, txt = block(img=img, txt=txt, vec=double_mod, pe=pe, attn_mask=mask_tmp)\n                        \n                    elif update_cross_attn is not None and update_cross_attn['skip_cross_attn']:\n                        print(\"update_cross_attn not yet implemented for Chroma.\", flush=True)\n                        #img, txt_init = block(img, img_masks, txt, clip, rope, mask, update_cross_attn=update_cross_attn)\n                    \n                    else:\n                        img, txt = block(img=img, txt=txt, vec=double_mod, pe=pe, attn_mask=attn_mask)\n\n                    #img, txt = block(img=img, txt=txt, vec=double_mod, pe=pe, attn_mask=attn_mask)\n\n                if control is not None: # Controlnet\n                    control_i = control.get(\"input\")\n                    if i < len(control_i):\n                        add = control_i[i]\n                        if add is not None:\n                            img += add\n\n        img = torch.cat((txt, img), 1)\n\n        for i, block in enumerate(self.single_blocks):\n            if i not in self.skip_dit:\n                single_mod = self.get_modulations(mod_vectors, \"single\", idx=i)\n                if (\"single_block\", i) in 
blocks_replace:\n                    def block_wrap(args):\n                        out = {}\n                        out[\"img\"] = block( args[\"img\"],\n                                            vec=args[\"vec\"],\n                                            pe=args[\"pe\"],\n                                            attn_mask=args.get(\"attn_mask\"))\n                        return out\n\n                    out = blocks_replace[(\"single_block\", i)]({ \"img\"             : img,\n                                                                \"vec\"             : single_mod,\n                                                                \"pe\"              : pe,\n                                                                \"attn_mask\"       : attn_mask},\n                                                                {\"original_block\" : block_wrap})\n                    img = out[\"img\"]\n                else:\n\n                    if   weight > 0 and mask is not None and     weight  <=      (i+len(self.double_blocks))/total_layers:\n                        img = block(img, vec=single_mod, pe=pe, attn_mask=mask_zero)\n                        \n                    elif weight < 0 and mask is not None and abs(weight) <= (1 - (i+len(self.double_blocks))/total_layers):\n                        img = block(img, vec=single_mod, pe=pe, attn_mask=mask_zero)\n                        \n                    elif floor > 0 and mask is not None and     floor  >=      (i+len(self.double_blocks))/total_layers:\n                        mask_tmp = mask.clone()\n                        mask_tmp[text_len:, text_len:] = 1.0\n                        img = block(img, vec=single_mod, pe=pe, attn_mask=mask_tmp)\n                        \n                    elif floor < 0 and mask is not None and abs(floor) >= (1 - (i+len(self.double_blocks))/total_layers):\n                        mask_tmp = mask.clone()\n                        mask_tmp[text_len:, text_len:] = 1.0\n                        img = block(img, vec=single_mod, pe=pe, attn_mask=mask_tmp)\n                        \n                    else:\n                        img = block(img, vec=single_mod, pe=pe, attn_mask=attn_mask)\n\n                if control is not None: # Controlnet\n                    control_o = control.get(\"output\")\n                    if i < len(control_o):\n                        add = control_o[i]\n                        if add is not None:\n                            img[:, txt.shape[1] :, ...] 
+= add\n\n        img = img[:, txt.shape[1] :, ...]\n        final_mod = self.get_modulations(mod_vectors, \"final\")\n        img = self.final_layer(img, vec=final_mod)  # (N, T, patch_size ** 2 * out_channels)\n        return img\n\n    def forward_chroma_depr(self, x, timestep, context, guidance, control=None, transformer_options={}, **kwargs):\n        bs, c, h, w = x.shape\n        patch_size = 2\n        x = comfy.ldm.common_dit.pad_to_patch_size(x, (patch_size, patch_size))\n\n        img = rearrange(x, \"b c (h ph) (w pw) -> b (h w) (c ph pw)\", ph=patch_size, pw=patch_size)\n\n        h_len = ((h + (patch_size // 2)) // patch_size)\n        w_len = ((w + (patch_size // 2)) // patch_size)\n        img_ids = torch.zeros((h_len, w_len, 3), device=x.device, dtype=x.dtype)\n        img_ids[:, :, 1] = img_ids[:, :, 1] + torch.linspace(0, h_len - 1, steps=h_len, device=x.device, dtype=x.dtype).unsqueeze(1)\n        img_ids[:, :, 2] = img_ids[:, :, 2] + torch.linspace(0, w_len - 1, steps=w_len, device=x.device, dtype=x.dtype).unsqueeze(0)\n        img_ids = repeat(img_ids, \"h w c -> b (h w) c\", b=bs)\n\n        txt_ids = torch.zeros((bs, context.shape[1], 3), device=x.device, dtype=x.dtype)\n        out = self.forward_orig(img, img_ids, context, txt_ids, timestep, guidance, control, transformer_options, attn_mask=kwargs.get(\"attention_mask\", None))\n        return rearrange(out, \"b (h w) (c ph pw) -> b c (h ph) (w pw)\", h=h_len, w=w_len, ph=2, pw=2)[:,:,:h,:w]\n\n    \n    def _get_img_ids(self, x, bs, h_len, w_len, h_start, h_end, w_start, w_end):\n        img_ids          = torch.zeros(  (h_len,   w_len, 3),              device=x.device, dtype=x.dtype)\n        img_ids[..., 1] += torch.linspace(h_start, h_end - 1, steps=h_len, device=x.device, dtype=x.dtype)[:, None]\n        img_ids[..., 2] += torch.linspace(w_start, w_end - 1, steps=w_len, device=x.device, dtype=x.dtype)[None, :]\n        img_ids          = repeat(img_ids, \"h w c -> b (h w) c\", b=bs)\n        return img_ids\n\n\n\n    def forward(self,\n                x,\n                timestep,\n                context,\n                #y,\n                guidance,\n                control             = None,\n                transformer_options = {},\n                **kwargs\n                ):\n        x_orig = x.clone()\n        SIGMA = timestep[0].unsqueeze(0)\n        update_cross_attn = transformer_options.get(\"update_cross_attn\")\n        EO = transformer_options.get(\"ExtraOptions\", ExtraOptions(\"\"))\n        if EO is not None:\n            EO.mute = True\n\n        y0_style_pos        = transformer_options.get(\"y0_style_pos\")\n        y0_style_neg        = transformer_options.get(\"y0_style_neg\")\n\n        y0_style_pos_weight    = transformer_options.get(\"y0_style_pos_weight\", 0.0)\n        y0_style_pos_synweight = transformer_options.get(\"y0_style_pos_synweight\", 0.0)\n        y0_style_pos_synweight *= y0_style_pos_weight\n\n        y0_style_neg_weight    = transformer_options.get(\"y0_style_neg_weight\", 0.0)\n        y0_style_neg_synweight = transformer_options.get(\"y0_style_neg_synweight\", 0.0)\n        y0_style_neg_synweight *= y0_style_neg_weight\n        \n        weight    = -1 * transformer_options.get(\"regional_conditioning_weight\", 0.0)\n        floor     = -1 * transformer_options.get(\"regional_conditioning_floor\",  0.0)\n        \n        freqsep_lowpass_method = transformer_options.get(\"freqsep_lowpass_method\")\n        freqsep_sigma          = 
transformer_options.get(\"freqsep_sigma\")\n        freqsep_kernel_size    = transformer_options.get(\"freqsep_kernel_size\")\n        freqsep_inner_kernel_size    = transformer_options.get(\"freqsep_inner_kernel_size\")\n        freqsep_stride    = transformer_options.get(\"freqsep_stride\")\n        \n        freqsep_lowpass_weight = transformer_options.get(\"freqsep_lowpass_weight\")\n        freqsep_highpass_weight= transformer_options.get(\"freqsep_highpass_weight\")\n        freqsep_mask           = transformer_options.get(\"freqsep_mask\")\n        \n        out_list = []\n        for i in range(len(transformer_options['cond_or_uncond'])):\n            UNCOND = transformer_options['cond_or_uncond'][i] == 1\n            \n            if update_cross_attn is not None:\n                update_cross_attn['UNCOND'] = UNCOND\n                \n            img = x\n            bs, c, h, w = x.shape\n            patch_size  = 2\n            img           = comfy.ldm.common_dit.pad_to_patch_size(img, (patch_size, patch_size))    # 1,16,192,192\n            \n            transformer_options['original_shape'] = img.shape\n            transformer_options['patch_size']     = patch_size\n\n            h_len = ((h + (patch_size // 2)) // patch_size) # h_len 96\n            w_len = ((w + (patch_size // 2)) // patch_size) # w_len 96\n\n            img = rearrange(img, \"b c (h ph) (w pw) -> b (h w) (c ph pw)\", ph=patch_size, pw=patch_size) # img 1,9216,64     1,16,128,128 -> 1,4096,64\n\n            context_tmp = None\n            \n            if not UNCOND and 'AttnMask' in transformer_options: # and weight != 0:\n                AttnMask = transformer_options['AttnMask']\n                mask = transformer_options['AttnMask'].attn_mask.mask.to('cuda')\n\n                if weight == 0:\n                    context_tmp = transformer_options['RegContext'].context.to(context.dtype).to(context.device)\n                    mask = None\n                else:\n                    context_tmp = transformer_options['RegContext'].context.to(context.dtype).to(context.device)\n                \n            if UNCOND and 'AttnMask_neg' in transformer_options: # and weight != 0:\n                AttnMask = transformer_options['AttnMask_neg']\n                mask = transformer_options['AttnMask_neg'].attn_mask.mask.to('cuda')\n\n                if weight == 0:\n                    context_tmp = transformer_options['RegContext_neg'].context.to(context.dtype).to(context.device)\n                    mask = None\n                else:\n                    context_tmp = transformer_options['RegContext_neg'].context.to(context.dtype).to(context.device)\n\n            elif UNCOND and 'AttnMask' in transformer_options:\n                AttnMask = transformer_options['AttnMask']\n                mask = transformer_options['AttnMask'].attn_mask.mask.to('cuda')\n                A       = context\n                B       = transformer_options['RegContext'].context\n                context_tmp = A.repeat(1,    (B.shape[1] // A.shape[1]) + 1, 1)[:,   :B.shape[1], :]\n\n            if context_tmp is None:\n                context_tmp = context[i][None,...].clone()\n\n            txt_ids      = torch.zeros((bs, context_tmp.shape[1], 3), device=img.device, dtype=img.dtype)      # txt_ids        1, 256,3\n            img_ids_orig = self._get_img_ids(img, bs, h_len, w_len, 0, h_len, 0, w_len)                  # img_ids_orig = 1,9216,3\n\n\n            out_tmp = self.forward_blocks(img       [i][None,...].clone(), \n             
                           img_ids_orig[i][None,...].clone(), \n                                        context_tmp,\n                                        txt_ids     [i][None,...].clone(), \n                                        timestep    [i][None,...].clone(), \n                                        #y           [i][None,...].clone(),\n                                        guidance    [i][None,...].clone(),\n                                        control, \n                                        update_cross_attn=update_cross_attn,\n                                        transformer_options=transformer_options,\n                                        UNCOND = UNCOND,\n                                        )  # context 1,256,4096   y 1,768\n            out_list.append(out_tmp)\n            \n        out = torch.stack(out_list, dim=0).squeeze(dim=1)\n        \n        eps = rearrange(out, \"b (h w) (c ph pw) -> b c (h ph) (w pw)\", h=h_len, w=w_len, ph=2, pw=2)[:,:,:h,:w]\n        \n        \n        \n        \n        \n        \n        \n        \n        \n        \n        dtype = eps.dtype if self.style_dtype is None else self.style_dtype\n        \n        if y0_style_pos is not None:\n            y0_style_pos_weight    = transformer_options.get(\"y0_style_pos_weight\")\n            y0_style_pos_synweight = transformer_options.get(\"y0_style_pos_synweight\")\n            y0_style_pos_synweight *= y0_style_pos_weight\n            y0_style_pos_mask = transformer_options.get(\"y0_style_pos_mask\")\n            y0_style_pos_mask_edge = transformer_options.get(\"y0_style_pos_mask_edge\")\n\n            y0_style_pos = y0_style_pos.to(dtype)\n            x   = x_orig.clone().to(dtype)\n            eps = eps.to(dtype)\n            eps_orig = eps.clone()\n            \n            sigma = SIGMA #t_orig[0].to(torch.float32) / 1000\n            denoised = x - sigma * eps\n            \n            denoised_embed = self.Retrojector.embed(denoised)\n            y0_adain_embed = self.Retrojector.embed(y0_style_pos)\n            \n            if transformer_options['y0_style_method'] == \"scattersort\":\n                tile_h, tile_w = transformer_options.get('y0_style_tile_height'), transformer_options.get('y0_style_tile_width')\n                pad = transformer_options.get('y0_style_tile_padding')\n                if pad is not None and tile_h is not None and tile_w is not None:\n                    \n                    denoised_spatial = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_spatial = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    \n                    if EO(\"scattersort_median_LP\"):\n                        denoised_spatial_LP = median_blur_2d(denoised_spatial, kernel_size=EO(\"scattersort_median_LP\",7))\n                        y0_adain_spatial_LP = median_blur_2d(y0_adain_spatial, kernel_size=EO(\"scattersort_median_LP\",7))\n                        \n                        denoised_spatial_HP = denoised_spatial - denoised_spatial_LP\n                        y0_adain_spatial_HP = y0_adain_spatial - y0_adain_spatial_LP\n                        \n                        denoised_spatial_LP = apply_scattersort_tiled(denoised_spatial_LP, y0_adain_spatial_LP, tile_h, tile_w, pad)\n                        \n                        denoised_spatial = denoised_spatial_LP + denoised_spatial_HP\n                        denoised_embed = rearrange(denoised_spatial, \"b c h w 
-> b (h w) c\")\n                    else:\n                        denoised_spatial = apply_scattersort_tiled(denoised_spatial, y0_adain_spatial, tile_h, tile_w, pad)\n                    \n                    denoised_embed = rearrange(denoised_spatial, \"b c h w -> b (h w) c\")\n                    \n                else:\n                    denoised_embed = apply_scattersort_masked(denoised_embed, y0_adain_embed, y0_style_pos_mask, y0_style_pos_mask_edge, h_len, w_len)\n\n\n\n            elif transformer_options['y0_style_method'] == \"AdaIN\":\n                if freqsep_mask is not None:\n                    freqsep_mask = freqsep_mask.view(1, 1, *freqsep_mask.shape[-2:]).float()\n                    freqsep_mask = F.interpolate(freqsep_mask.float(), size=(h_len, w_len), mode='nearest-exact')\n                \n                if hasattr(self, \"adain_tile\"):\n                    tile_h, tile_w = self.adain_tile\n                    \n                    denoised_pretile = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_pretile = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    \n                    if self.adain_flag:\n                        h_off = tile_h // 2\n                        w_off = tile_w // 2\n                        denoised_pretile = denoised_pretile[:,:,h_off:-h_off, w_off:-w_off]\n                        self.adain_flag = False\n                    else:\n                        h_off = 0\n                        w_off = 0\n                        self.adain_flag = True\n                    \n                    tiles,    orig_shape, grid, strides = tile_latent(denoised_pretile, tile_size=(tile_h,tile_w))\n                    y0_tiles, orig_shape, grid, strides = tile_latent(y0_adain_pretile, tile_size=(tile_h,tile_w))\n                    \n                    tiles_out = []\n                    for i in range(tiles.shape[0]):\n                        tile = tiles[i].unsqueeze(0)\n                        y0_tile = y0_tiles[i].unsqueeze(0)\n                        \n                        tile    = rearrange(tile,    \"b c h w -> b (h w) c\", h=tile_h, w=tile_w)\n                        y0_tile = rearrange(y0_tile, \"b c h w -> b (h w) c\", h=tile_h, w=tile_w)\n                        \n                        tile = adain_seq_inplace(tile, y0_tile)\n                        tiles_out.append(rearrange(tile, \"b (h w) c -> b c h w\", h=tile_h, w=tile_w))\n                    \n                    tiles_out_tensor = torch.cat(tiles_out, dim=0)\n                    tiles_out_tensor = untile_latent(tiles_out_tensor, orig_shape, grid, strides)\n\n                    if h_off == 0:\n                        denoised_pretile = tiles_out_tensor\n                    else:\n                        denoised_pretile[:,:,h_off:-h_off, w_off:-w_off] = tiles_out_tensor\n                    denoised_embed = rearrange(denoised_pretile, \"b c h w -> b (h w) c\", h=h_len, w=w_len)\n\n                elif freqsep_lowpass_method is not None and freqsep_lowpass_method.endswith(\"pw\"): #EO(\"adain_pw\"):\n\n                    denoised_spatial = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_spatial = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n\n                    if   freqsep_lowpass_method == \"median_pw\":\n                        denoised_spatial_new = adain_patchwise_row_batch_realmedblur(denoised_spatial.clone(), 
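# hedged note: \"median_pw\" swaps each pixel's local median (lowpass) for the style's while\n                            # keeping the content residual (highpass); see adain_patchwise_row_batch_realmedblur below.\n                            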
y0_adain_spatial.clone().repeat(denoised_spatial.shape[0],1,1,1), sigma=freqsep_sigma, kernel_size=freqsep_kernel_size, use_median_blur=True, lowpass_weight=freqsep_lowpass_weight, highpass_weight=freqsep_highpass_weight)\n                    elif freqsep_lowpass_method == \"gaussian_pw\": \n                        denoised_spatial_new = adain_patchwise_row_batch(denoised_spatial.clone(), y0_adain_spatial.clone().repeat(denoised_spatial.shape[0],1,1,1), sigma=freqsep_sigma, kernel_size=freqsep_kernel_size)\n                    \n                    denoised_embed = rearrange(denoised_spatial_new, \"b c h w -> b (h w) c\", h=h_len, w=w_len)\n\n                elif freqsep_lowpass_method is not None: \n                    denoised_spatial = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_spatial = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    \n                    if   freqsep_lowpass_method == \"median\":\n                        denoised_spatial_LP = median_blur_2d(denoised_spatial, kernel_size=freqsep_kernel_size)\n                        y0_adain_spatial_LP = median_blur_2d(y0_adain_spatial, kernel_size=freqsep_kernel_size)\n                    elif freqsep_lowpass_method == \"gaussian\":\n                        denoised_spatial_LP = gaussian_blur_2d(denoised_spatial, sigma=freqsep_sigma, kernel_size=freqsep_kernel_size)\n                        y0_adain_spatial_LP = gaussian_blur_2d(y0_adain_spatial, sigma=freqsep_sigma, kernel_size=freqsep_kernel_size)\n                    \n                    denoised_spatial_HP = denoised_spatial - denoised_spatial_LP\n                    \n                    if EO(\"adain_fs_uhp\"):\n                        y0_adain_spatial_HP = y0_adain_spatial - y0_adain_spatial_LP\n                        \n                        denoised_spatial_ULP = gaussian_blur_2d(denoised_spatial, sigma=EO(\"adain_fs_uhp_sigma\", 1.0), kernel_size=EO(\"adain_fs_uhp_kernel_size\", 3))\n                        y0_adain_spatial_ULP = gaussian_blur_2d(y0_adain_spatial, sigma=EO(\"adain_fs_uhp_sigma\", 1.0), kernel_size=EO(\"adain_fs_uhp_kernel_size\", 3))\n                        \n                        denoised_spatial_UHP = denoised_spatial_HP  - denoised_spatial_ULP\n                        y0_adain_spatial_UHP = y0_adain_spatial_HP  - y0_adain_spatial_ULP\n                        \n                        #denoised_spatial_HP  = y0_adain_spatial_ULP + denoised_spatial_UHP\n                        denoised_spatial_HP  = denoised_spatial_ULP + y0_adain_spatial_UHP\n                    \n                    denoised_spatial_new = freqsep_lowpass_weight * y0_adain_spatial_LP + freqsep_highpass_weight * denoised_spatial_HP\n                    denoised_embed = rearrange(denoised_spatial_new, \"b c h w -> b (h w) c\", h=h_len, w=w_len)\n\n                else:\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    \n                for adain_iter in range(EO(\"style_iter\", 0)):\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    denoised_embed = self.Retrojector.embed(self.Retrojector.unembed(denoised_embed))\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    \n            elif transformer_options['y0_style_method'] == \"WCT\":\n                self.StyleWCT.set(y0_adain_embed)\n                denoised_embed = 
self.StyleWCT.get(denoised_embed)\n                \n                if transformer_options.get('y0_standard_guide') is not None:\n                    y0_standard_guide = transformer_options.get('y0_standard_guide')\n                    \n                    y0_standard_guide_embed = self.Retrojector.embed(y0_standard_guide)\n                    f_cs = self.StyleWCT.get(y0_standard_guide_embed)\n                    self.y0_standard_guide = self.Retrojector.unembed(f_cs)\n\n                if transformer_options.get('y0_inv_standard_guide') is not None:\n                    y0_inv_standard_guide = transformer_options.get('y0_inv_standard_guide')\n\n                    y0_inv_standard_guide_embed = self.Retrojector.embed(y0_inv_standard_guide)\n                    f_cs = self.StyleWCT.get(y0_inv_standard_guide_embed)\n                    self.y0_inv_standard_guide = self.Retrojector.unembed(f_cs)\n\n            denoised_approx = self.Retrojector.unembed(denoised_embed)\n            \n            eps = (x - denoised_approx) / sigma\n\n            if not UNCOND:\n                if eps.shape[0] == 2:\n                    eps[1] = eps_orig[1] + y0_style_pos_weight * (eps[1] - eps_orig[1])\n                    eps[0] = eps_orig[0] + y0_style_pos_synweight * (eps[0] - eps_orig[0])\n                else:\n                    eps[0] = eps_orig[0] + y0_style_pos_weight * (eps[0] - eps_orig[0])\n            elif eps.shape[0] == 1 and UNCOND:\n                eps[0] = eps_orig[0] + y0_style_pos_synweight * (eps[0] - eps_orig[0])\n            \n            eps = eps.float()\n        \n        if y0_style_neg is not None:\n            y0_style_neg_weight    = transformer_options.get(\"y0_style_neg_weight\")\n            y0_style_neg_synweight = transformer_options.get(\"y0_style_neg_synweight\")\n            y0_style_neg_synweight *= y0_style_neg_weight\n            y0_style_neg_mask = transformer_options.get(\"y0_style_neg_mask\")\n            y0_style_neg_mask_edge = transformer_options.get(\"y0_style_neg_mask_edge\")\n            \n            y0_style_neg = y0_style_neg.to(dtype)\n            x   = x_orig.clone().to(dtype)\n            eps = eps.to(dtype)\n            eps_orig = eps.clone()\n            \n            sigma = SIGMA #t_orig[0].to(torch.float32) / 1000\n            denoised = x - sigma * eps\n\n            denoised_embed = self.Retrojector.embed(denoised)\n            y0_adain_embed = self.Retrojector.embed(y0_style_neg)\n            \n            if transformer_options['y0_style_method'] == \"scattersort\":\n                tile_h, tile_w = transformer_options.get('y0_style_tile_height'), transformer_options.get('y0_style_tile_width')\n                pad = transformer_options.get('y0_style_tile_padding')\n                if pad is not None and tile_h is not None and tile_w is not None:\n                    \n                    denoised_spatial = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_spatial = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    \n                    denoised_spatial = apply_scattersort_tiled(denoised_spatial, y0_adain_spatial, tile_h, tile_w, pad)\n                    \n                    denoised_embed = rearrange(denoised_spatial, \"b c h w -> b (h w) c\")\n\n                else:\n                    denoised_embed = apply_scattersort_masked(denoised_embed, y0_adain_embed, y0_style_neg_mask, y0_style_neg_mask_edge, h_len, w_len)\n            \n            \n       
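     # AdaIN (sketch): matches the per-channel mean/std of the denoised embedding to the style\n            # embedding, i.e. x_cs = (x - mu_x) / std_x * std_s + mu_s; see adain_seq_inplace below.\n       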
     elif transformer_options['y0_style_method'] == \"AdaIN\":\n                denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                for adain_iter in range(EO(\"style_iter\", 0)):\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    denoised_embed = self.Retrojector.embed(self.Retrojector.unembed(denoised_embed))\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    \n            elif transformer_options['y0_style_method'] == \"WCT\":\n                self.StyleWCT.set(y0_adain_embed)\n                denoised_embed = self.StyleWCT.get(denoised_embed)\n\n            denoised_approx = self.Retrojector.unembed(denoised_embed)\n\n            if UNCOND:\n                eps = (x - denoised_approx) / sigma\n                eps[0] = eps_orig[0] + y0_style_neg_weight * (eps[0] - eps_orig[0])\n                if eps.shape[0] == 2:\n                    eps[1] = eps_orig[1] + y0_style_neg_synweight * (eps[1] - eps_orig[1])\n            elif eps.shape[0] == 1 and not UNCOND:\n                eps = (x - denoised_approx) / sigma\n                eps[0] = eps_orig[0] + y0_style_neg_synweight * (eps[0] - eps_orig[0])\n            \n            eps = eps.float()\n            \n        return eps\n\n
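\n# reference sketch (illustrative only, not called anywhere): a minimal, hedged demo of the\n# AdaIN moment matching used by the style methods above. WCT goes further and also matches\n# the full channel covariance (whiten with the content covariance, re-color with the style's).\ndef _adain_reference_example():\n    torch.manual_seed(0)\n    content = torch.randn(1, 16, 8)            # toy [batch, tokens, channels] sequence\n    style   = torch.randn(1, 16, 8) * 2.0 + 3.0\n    out     = adain_seq(content, style)        # (x - mu_x) / std_x * std_s + mu_s, per channel\n    assert torch.allclose(out.mean(dim=1), style.mean(dim=1), atol=1e-4)  # token means now match\n    assert torch.allclose(out.std(dim=1),  style.std(dim=1),  atol=1e-3)  # and stds\n    return out\n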
\n\ndef adain_seq(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:\n    return ((content - content.mean(1, keepdim=True)) / (content.std(1, keepdim=True) + eps)) * (style.std(1, keepdim=True) + eps) + style.mean(1, keepdim=True)\n\n\n\ndef adain_seq_inplace(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:\n    mean_c = content.mean(1, keepdim=True)\n    std_c  = content.std (1, keepdim=True).add_(eps)  # in-place add\n    mean_s = style.mean  (1, keepdim=True)\n    std_s  = style.std   (1, keepdim=True).add_(eps)\n\n    content.sub_(mean_c).div_(std_c).mul_(std_s).add_(mean_s)  # in-place chain\n    return content\n\n\n\n\n\n\n\n\n\ndef gaussian_blur_2d(img: torch.Tensor, sigma: float, kernel_size: int = None) -> torch.Tensor:\n    B, C, H, W = img.shape\n    dtype = img.dtype\n    device = img.device\n\n    if kernel_size is None:\n        kernel_size = int(2 * math.ceil(3 * sigma) + 1)\n\n    if kernel_size % 2 == 0:\n        kernel_size += 1\n\n    coords = torch.arange(kernel_size, dtype=torch.float64) - kernel_size // 2\n    g = torch.exp(-0.5 * (coords / sigma) ** 2)\n    g = g / g.sum()\n\n    kernel_2d = g[:, None] * g[None, :]\n    kernel_2d = kernel_2d.to(dtype=dtype, device=device)\n\n    kernel = kernel_2d.expand(C, 1, kernel_size, kernel_size)\n\n    pad = kernel_size // 2\n    img_padded = F.pad(img, (pad, pad, pad, pad), mode='reflect')\n\n    return F.conv2d(img_padded, kernel, groups=C)\n\n\ndef median_blur_2d(img: torch.Tensor, kernel_size: int = 3) -> torch.Tensor:\n    if kernel_size % 2 == 0:\n        kernel_size += 1\n    pad = kernel_size // 2\n\n    B, C, H, W = img.shape\n    img_padded = F.pad(img, (pad, pad, pad, pad), mode='reflect')\n\n    unfolded = img_padded.unfold(2, kernel_size, 1).unfold(3, kernel_size, 1)\n    # unfolded: [B, C, H, W, kH, kW] → flatten to patches\n    patches = unfolded.contiguous().view(B, C, H, W, -1)\n    median = patches.median(dim=-1).values\n    return median\n\n\ndef adain_patchwise(content: torch.Tensor, style: torch.Tensor, sigma: float = 1.0, kernel_size: int = None, eps: float = 1e-5) -> torch.Tensor:\n\n    B, C, H, W = content.shape\n    device     = content.device\n    dtype      = content.dtype\n\n    if kernel_size is None:\n        kernel_size = int(2 * math.ceil(3 * sigma) + 1)\n    if kernel_size % 2 == 0:\n        kernel_size += 1\n\n    pad    = kernel_size // 2\n    coords = torch.arange(kernel_size, dtype=torch.float64, device=device) - pad\n    gauss  = torch.exp(-0.5 * (coords / sigma) ** 2)\n    gauss /= gauss.sum()\n    kernel_2d = (gauss[:, None] * gauss[None, :]).to(dtype=dtype)\n\n    weight = kernel_2d.view(1, 1, kernel_size, kernel_size)\n\n    content_padded = F.pad(content, (pad, pad, pad, pad), mode='reflect')\n    style_padded   = F.pad(style,   (pad, pad, pad, pad), mode='reflect')\n    result = torch.zeros_like(content)\n\n    for i in range(H):\n        for j in range(W):\n            c_patch = 
content_padded[:, :, i:i + kernel_size, j:j + kernel_size]\n            s_patch =   style_padded[:, :, i:i + kernel_size, j:j + kernel_size]\n            w = weight.expand_as(c_patch)\n\n            c_mean =  (c_patch              * w).sum(dim=(-1, -2), keepdim=True)\n            c_std  = ((c_patch - c_mean)**2 * w).sum(dim=(-1, -2), keepdim=True).sqrt() + eps\n            s_mean =  (s_patch              * w).sum(dim=(-1, -2), keepdim=True)\n            s_std  = ((s_patch - s_mean)**2 * w).sum(dim=(-1, -2), keepdim=True).sqrt() + eps\n\n            normed =  (c_patch[:, :, pad:pad+1, pad:pad+1] - c_mean) / c_std\n            stylized = normed * s_std + s_mean\n            result[:, :, i, j] = stylized.squeeze(-1).squeeze(-1)\n\n    return result\n\n\n\n\ndef adain_patchwise_row_batch(content: torch.Tensor, style: torch.Tensor, sigma: float = 1.0, kernel_size: int = None, eps: float = 1e-5) -> torch.Tensor:\n\n    B, C, H, W = content.shape\n    device, dtype = content.device, content.dtype\n\n    if kernel_size is None:\n        kernel_size = int(2 * math.ceil(3 * sigma) + 1)\n    if kernel_size % 2 == 0:\n        kernel_size += 1\n\n    pad = kernel_size // 2\n    coords = torch.arange(kernel_size, dtype=torch.float64, device=device) - pad\n    gauss = torch.exp(-0.5 * (coords / sigma) ** 2)\n    gauss = (gauss / gauss.sum()).to(dtype)\n    kernel_2d = (gauss[:, None] * gauss[None, :])\n\n    weight = kernel_2d.view(1, 1, kernel_size, kernel_size)\n\n    content_padded = F.pad(content, (pad, pad, pad, pad), mode='reflect')\n    style_padded = F.pad(style, (pad, pad, pad, pad), mode='reflect')\n    result = torch.zeros_like(content)\n\n    for i in range(H):\n        c_row_patches = torch.stack([\n            content_padded[:, :, i:i+kernel_size, j:j+kernel_size]\n            for j in range(W)\n        ], dim=0)  # [W, B, C, k, k]\n\n        s_row_patches = torch.stack([\n            style_padded[:, :, i:i+kernel_size, j:j+kernel_size]\n            for j in range(W)\n        ], dim=0)\n\n        w = weight.expand_as(c_row_patches[0])\n\n        c_mean = (c_row_patches * w).sum(dim=(-1, -2), keepdim=True)\n        c_std  = ((c_row_patches - c_mean) ** 2 * w).sum(dim=(-1, -2), keepdim=True).sqrt() + eps\n        s_mean = (s_row_patches * w).sum(dim=(-1, -2), keepdim=True)\n        s_std  = ((s_row_patches - s_mean) ** 2 * w).sum(dim=(-1, -2), keepdim=True).sqrt() + eps\n\n        center = kernel_size // 2\n        central = c_row_patches[:, :, :, center:center+1, center:center+1]\n        normed = (central - c_mean) / c_std\n        stylized = normed * s_std + s_mean\n\n        result[:, :, i, :] = stylized.squeeze(-1).squeeze(-1).permute(1, 2, 0)  # [B,C,W]\n\n    return result\n\n\n\n\n\n\n\ndef adain_patchwise_row_batch_medblur(content: torch.Tensor, style: torch.Tensor, sigma: float = 1.0, kernel_size: int = None, eps: float = 1e-5, mask: torch.Tensor = None, use_median_blur: bool = False) -> torch.Tensor:\n    B, C, H, W = content.shape\n    device, dtype = content.device, content.dtype\n\n    if kernel_size is None:\n        kernel_size = int(2 * math.ceil(3 * abs(sigma)) + 1)\n    if kernel_size % 2 == 0:\n        kernel_size += 1\n\n    pad = kernel_size // 2\n\n    content_padded = F.pad(content, (pad, pad, pad, pad), mode='reflect')\n    style_padded = F.pad(style, (pad, pad, pad, pad), mode='reflect')\n    result = torch.zeros_like(content)\n\n    scaling = torch.ones((B, 1, H, W), device=device, dtype=dtype)\n    sigma_scale = torch.ones((H, W), device=device, 
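# per-pixel width scale for the local Gaussian; it drops toward 0 near mask edges (set below)\n                                   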
dtype=torch.float32)\n    if mask is not None:\n        with torch.no_grad():\n            padded_mask = F.pad(mask.float(), (pad, pad, pad, pad), mode=\"reflect\")\n            blurred_mask = F.avg_pool2d(padded_mask, kernel_size=kernel_size, stride=1, padding=pad)\n            blurred_mask = blurred_mask[..., pad:-pad, pad:-pad]\n            edge_proximity = blurred_mask * (1.0 - blurred_mask)\n            scaling = 1.0 - (edge_proximity / 0.25).clamp(0.0, 1.0)\n            sigma_scale = scaling[0, 0]  # assuming single-channel mask broadcasted across B, C\n\n    if not use_median_blur:\n        coords = torch.arange(kernel_size, dtype=torch.float64, device=device) - pad\n        base_gauss = torch.exp(-0.5 * (coords / sigma) ** 2)\n        base_gauss = (base_gauss / base_gauss.sum()).to(dtype)\n        gaussian_table = {}\n        for s in sigma_scale.unique():\n            sig = float((sigma * s + eps).clamp(min=1e-3))\n            gauss_local = torch.exp(-0.5 * (coords / sig) ** 2)\n            gauss_local = (gauss_local / gauss_local.sum()).to(dtype)\n            kernel_2d = gauss_local[:, None] * gauss_local[None, :]\n            gaussian_table[s.item()] = kernel_2d\n\n    for i in range(H):\n        row_result = torch.zeros(B, C, W, dtype=dtype, device=device)\n        for j in range(W):\n            c_patch = content_padded[:, :, i:i+kernel_size, j:j+kernel_size]\n            s_patch = style_padded[:, :, i:i+kernel_size, j:j+kernel_size]\n\n            if use_median_blur:\n                c_flat = c_patch.reshape(B, C, -1)\n                s_flat = s_patch.reshape(B, C, -1)\n\n                c_median = c_flat.median(dim=-1, keepdim=True).values\n                s_median = s_flat.median(dim=-1, keepdim=True).values\n\n                c_std = (c_flat - c_median).abs().mean(dim=-1, keepdim=True) + eps\n                s_std = (s_flat - s_median).abs().mean(dim=-1, keepdim=True) + eps\n\n                center = kernel_size // 2\n                central = c_patch[:, :, center, center].unsqueeze(-1)\n\n                normed = (central - c_median) / c_std\n                stylized = normed * s_std + s_median\n            else:\n                k = gaussian_table[float(sigma_scale[i, j].item())]\n                local_weight = k.view(1, 1, kernel_size, kernel_size).expand(B, C, kernel_size, kernel_size)\n\n                c_mean = (c_patch * local_weight).sum(dim=(-1, -2), keepdim=True)\n                c_std = ((c_patch - c_mean) ** 2 * local_weight).sum(dim=(-1, -2), keepdim=True).sqrt() + eps\n                s_mean = (s_patch * local_weight).sum(dim=(-1, -2), keepdim=True)\n                s_std = ((s_patch - s_mean) ** 2 * local_weight).sum(dim=(-1, -2), keepdim=True).sqrt() + eps\n\n                center = kernel_size // 2\n                central = c_patch[:, :, center:center+1, center:center+1]\n                normed = (central - c_mean) / c_std\n                stylized = normed * s_std + s_mean\n\n            local_scaling = scaling[:, :, i, j].view(B, 1, 1, 1)\n            stylized = central * (1 - local_scaling) + stylized * local_scaling\n\n            row_result[:, :, j] = stylized.squeeze(-1).squeeze(-1)\n        result[:, :, i, :] = row_result\n\n    return result\n\n\n\n\n\n\n\ndef adain_patchwise_row_batch_realmedblur(content: torch.Tensor, style: torch.Tensor, sigma: float = 1.0, kernel_size: int = None, eps: float = 1e-5, mask: torch.Tensor = None, use_median_blur: bool = False, lowpass_weight=1.0, highpass_weight=1.0) -> torch.Tensor:\n    B, C, H, W = 
content.shape\n    device, dtype = content.device, content.dtype\n\n    if kernel_size is None:\n        kernel_size = int(2 * math.ceil(3 * abs(sigma)) + 1)\n    if kernel_size % 2 == 0:\n        kernel_size += 1\n\n    pad = kernel_size // 2\n\n    content_padded = F.pad(content, (pad, pad, pad, pad), mode='reflect')\n    style_padded = F.pad(style, (pad, pad, pad, pad), mode='reflect')\n    result = torch.zeros_like(content)\n\n    scaling = torch.ones((B, 1, H, W), device=device, dtype=dtype)\n    sigma_scale = torch.ones((H, W), device=device, dtype=torch.float32)\n    if mask is not None:\n        with torch.no_grad():\n            padded_mask = F.pad(mask.float(), (pad, pad, pad, pad), mode=\"reflect\")\n            blurred_mask = F.avg_pool2d(padded_mask, kernel_size=kernel_size, stride=1, padding=pad)\n            blurred_mask = blurred_mask[..., pad:-pad, pad:-pad]\n            edge_proximity = blurred_mask * (1.0 - blurred_mask)\n            scaling = 1.0 - (edge_proximity / 0.25).clamp(0.0, 1.0)\n            sigma_scale = scaling[0, 0]  # assuming single-channel mask broadcasted across B, C\n\n    if not use_median_blur:\n        coords = torch.arange(kernel_size, dtype=torch.float64, device=device) - pad\n        base_gauss = torch.exp(-0.5 * (coords / sigma) ** 2)\n        base_gauss = (base_gauss / base_gauss.sum()).to(dtype)\n        gaussian_table = {}\n        for s in sigma_scale.unique():\n            sig = float((sigma * s + eps).clamp(min=1e-3))\n            gauss_local = torch.exp(-0.5 * (coords / sig) ** 2)\n            gauss_local = (gauss_local / gauss_local.sum()).to(dtype)\n            kernel_2d = gauss_local[:, None] * gauss_local[None, :]\n            gaussian_table[s.item()] = kernel_2d\n\n    for i in range(H):\n        row_result = torch.zeros(B, C, W, dtype=dtype, device=device)\n        for j in range(W):\n            c_patch = content_padded[:, :, i:i+kernel_size, j:j+kernel_size]\n            s_patch = style_padded[:, :, i:i+kernel_size, j:j+kernel_size]\n\n            if use_median_blur:\n                # Median blur with residual restoration\n                unfolded_c = c_patch.reshape(B, C, -1)\n                unfolded_s = s_patch.reshape(B, C, -1)\n\n                c_median = unfolded_c.median(dim=-1, keepdim=True).values\n                s_median = unfolded_s.median(dim=-1, keepdim=True).values\n\n                center = kernel_size // 2\n                central = c_patch[:, :, center, center].view(B, C, 1)\n                residual = central - c_median\n                stylized = lowpass_weight * s_median + residual * highpass_weight\n            else:\n                k = gaussian_table[float(sigma_scale[i, j].item())]\n                local_weight = k.view(1, 1, kernel_size, kernel_size).expand(B, C, kernel_size, kernel_size)\n\n                c_mean = (c_patch * local_weight).sum(dim=(-1, -2), keepdim=True)\n                c_std = ((c_patch - c_mean) ** 2 * local_weight).sum(dim=(-1, -2), keepdim=True).sqrt() + eps\n                s_mean = (s_patch * local_weight).sum(dim=(-1, -2), keepdim=True)\n                s_std = ((s_patch - s_mean) ** 2 * local_weight).sum(dim=(-1, -2), keepdim=True).sqrt() + eps\n\n                center = kernel_size // 2\n                central = c_patch[:, :, center:center+1, center:center+1]\n                normed = (central - c_mean) / c_std\n                stylized = normed * s_std + s_mean\n\n            local_scaling = scaling[:, :, i, j].view(B, 1, 1)\n            stylized = central * (1 - 
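# feathered blend: local_scaling falls to 0 near mask edges, keeping the original value there\n                                                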
local_scaling) + stylized * local_scaling\n\n            row_result[:, :, j] = stylized.squeeze(-1)\n        result[:, :, i, :] = row_result\n\n    return result\n\n\n\n\n\n\n\n\n\n\n\n\n\ndef patchwise_sort_transfer9(src: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:\n    \"\"\"\n    src, ref: [B, C, N] where N = K*K\n    Returns: [B, C, N] with values from ref permuted to match the sort-order of src.\n    \"\"\"\n    src_sorted, src_idx  = src.sort(dim=-1)\n    ref_sorted, _        = ref.sort(dim=-1)\n    out = torch.zeros_like(src)\n    out.scatter_(dim=-1, index=src_idx, src=ref_sorted)\n    return out\n\ndef masked_patchwise_sort_transfer9(\n    src       : torch.Tensor,         # [B, C, N]\n    ref       : torch.Tensor,         # [B, C, N]\n    mask_flat : torch.Tensor          # [B, N]  bool\n) -> torch.Tensor:\n    \"\"\"\n    Only rearranges the positions where mask_flat[b] is True... to be implemented fully later. \n    \"\"\"\n    B,C,N = src.shape\n    out = src.clone()\n    for b in range(B):\n        valid = mask_flat[b]             # [N] boolean\n        if valid.sum() == 0:\n            continue\n        sc = src[b,:,valid]              # [C, M]\n        ss = ref[b,:,valid]              # [C, M]\n        sc_s, idx = sc.sort(dim=-1)      # sort each channel's masked values\n        ss_s, _   = ss.sort(dim=-1)\n        res = torch.zeros_like(sc)\n        res.scatter_(dim=-1, index=idx, src=ss_s)\n        out[b,:,valid] = res\n    return out\n\ndef adain_patchwise_strict_sortmatch9(\n    content           : torch.Tensor,        # [B,C,H,W]\n    style             : torch.Tensor,        # [B,C,H,W]\n    kernel_size       : int,\n    inner_kernel_size : int = 1,\n    stride            : int = 1,\n    mask              : torch.Tensor = None  # [B,1,H,W]\n) -> torch.Tensor:\n    \"\"\"\n    Strict sort-matching: for each KxK patch, content values are replaced by the style patch's\n    sorted values (rank matching), and only the central inner_kernel_size window is written back.\n    \"\"\"\n    B,C,H,W = content.shape\n    assert inner_kernel_size <= kernel_size\n    pad       = kernel_size//2\n    inner_off = (kernel_size - inner_kernel_size)//2\n\n    # reflect-pad\n    cp = F.pad(content, (pad,)*4, mode='reflect')\n    sp = F.pad(style,   (pad,)*4, mode='reflect')\n    out = content.clone()\n\n    if mask is not None:\n        mask = mask[:,0].bool()  # [B,H,W]\n\n    for i in range(0, H, stride):\n        for j in range(0, W, stride):\n            pc = cp[:, :, i:i+kernel_size, j:j+kernel_size]   # [B,C,K,K]\n            ps = sp[:, :, i:i+kernel_size, j:j+kernel_size]\n\n            Bc = pc.reshape(B, C, -1)\n            Bs = ps.reshape(B, C, -1)\n\n            matched_flat = patchwise_sort_transfer9(Bc, Bs)\n            matched = matched_flat.view(B, C, kernel_size, kernel_size)\n\n            y0, x0 = inner_off, inner_off\n            y1, x1 = y0 + inner_kernel_size, x0 + inner_kernel_size\n            inner = matched[:, :, y0:y1, x0:x1]  # [B,C,inner,inner]\n\n            dst_y0 = i + y0 - pad\n            dst_x0 = j + x0 - pad\n            dst_y1 = dst_y0 + inner_kernel_size\n            dst_x1 = dst_x0 + inner_kernel_size\n\n            oy0 = max(dst_y0, 0); ox0 = max(dst_x0, 0)\n            oy1 = min(dst_y1, H); ox1 = min(dst_x1, W)\n\n            iy0 = oy0 - dst_y0; ix0 = ox0 - dst_x0\n            iy1 = iy0 + (oy1 - oy0); ix1 = ix0 + (ox1 - ox0)\n\n            if mask is None:\n                out[:, :, oy0:oy1, ox0:ox1] = inner[:, :, iy0:iy1, ix0:ix1]\n            else:\n                ibm = mask[:, oy0:oy1, ox0:ox1]  # [B,inner,inner]\n                for b in range(B):\n                    sel = ibm[b]  # [inner,inner] mask tile aligned with the output window\n                    if sel.any():\n                    
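# boolean-mask write: only positions selected by the region mask receive the sort-matched\n                        # values; unmasked positions keep the original content.\n                    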
    out[b:b+1, :, oy0:oy1, ox0:ox1][:, :,sel]   =   inner[b:b+1, :, iy0:iy1, ix0:ix1][:, :, sel]\n    return out\n\n\n"
  },
  {
    "path": "conditioning.py",
    "content": "import torch\r\nimport torch.nn.functional as F\r\nimport math\r\n\r\nfrom torch  import Tensor\r\nfrom typing import Optional, Callable, Tuple, Dict, Any, Union, TYPE_CHECKING, TypeVar, List\r\n\r\nfrom dataclasses import dataclass, field\r\n\r\nimport copy\r\nimport base64\r\nimport pickle # used strictly for serializing conditioning in the ConditioningToBase64 and Base64ToConditioning nodes for API use. (Offloading T5 processing to another machine to avoid model shuffling.)\r\n\r\nimport comfy.supported_models\r\nimport node_helpers\r\nimport gc\r\n\r\n\r\nfrom .sigmas  import get_sigmas\r\n\r\nfrom .helper  import initialize_or_scale, precision_tool, get_res4lyf_scheduler_list, pad_tensor_list_to_max_len\r\nfrom .latents import get_orthogonal, get_collinear\r\nfrom .res4lyf import RESplain\r\nfrom .beta.constants import MAX_STEPS\r\nfrom .attention_masks import FullAttentionMask, FullAttentionMaskHiDream, CrossAttentionMask, SplitAttentionMask, RegionalContext\r\n\r\nfrom .flux.redux import ReReduxImageEncoder\r\n\r\ndef multiply_nested_tensors(structure, scalar):\r\n    if isinstance(structure, torch.Tensor):\r\n        return structure * scalar\r\n    elif isinstance(structure, list):\r\n        return [multiply_nested_tensors(item, scalar) for item in structure]\r\n    elif isinstance(structure, dict):\r\n        return {key: multiply_nested_tensors(value, scalar) for key, value in structure.items()}\r\n    else:\r\n        return structure\r\n\r\n\r\n\r\ndef pad_to_same_tokens(x1, x2, pad_value=0.0):\r\n    T1, T2 = x1.shape[1], x2.shape[1]\r\n    if T1 == T2:\r\n        return x1, x2\r\n    max_T = max(T1, T2)\r\n    x1_padded = F.pad(x1, (0, 0, 0, max_T - T1), value=pad_value)\r\n    x2_padded = F.pad(x2, (0, 0, 0, max_T - T2), value=pad_value)\r\n    return x1_padded, x2_padded\r\n\r\n\r\n\r\nclass ConditioningOrthoCollin:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\": {\r\n            \"conditioning_0\": (\"CONDITIONING\", ), \r\n            \"conditioning_1\": (\"CONDITIONING\", ),\r\n            \"t5_strength\"   : (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01}),\r\n            \"clip_strength\" : (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01}),\r\n            }}\r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    RETURN_NAMES = (\"conditioning\",)\r\n    FUNCTION     = \"combine\"\r\n    CATEGORY     = \"RES4LYF/conditioning\"\r\n    EXPERIMENTAL = True\r\n\r\n    def combine(self, conditioning_0, conditioning_1, t5_strength, clip_strength):\r\n\r\n        t5_0_1_collin         = get_collinear (conditioning_0[0][0], conditioning_1[0][0])\r\n        t5_1_0_ortho          = get_orthogonal(conditioning_1[0][0], conditioning_0[0][0])\r\n\r\n        t5_combined           = t5_0_1_collin + t5_1_0_ortho\r\n        \r\n        t5_1_0_collin         = get_collinear (conditioning_1[0][0], conditioning_0[0][0])\r\n        t5_0_1_ortho          = get_orthogonal(conditioning_0[0][0], conditioning_1[0][0])\r\n\r\n        t5_B_combined         = t5_1_0_collin + t5_0_1_ortho\r\n\r\n        pooled_0_1_collin     = get_collinear (conditioning_0[0][1]['pooled_output'].unsqueeze(0), conditioning_1[0][1]['pooled_output'].unsqueeze(0)).squeeze(0)\r\n        pooled_1_0_ortho      = get_orthogonal(conditioning_1[0][1]['pooled_output'].unsqueeze(0), conditioning_0[0][1]['pooled_output'].unsqueeze(0)).squeeze(0)\r\n\r\n        pooled_combined       = pooled_0_1_collin + pooled_1_0_ortho\r\n 
       \r\n        #conditioning_0[0][0] = conditioning_0[0][0] + t5_strength * (t5_combined - conditioning_0[0][0])\r\n        \r\n        #conditioning_0[0][0] = t5_strength * t5_combined + (1-t5_strength) * t5_B_combined\r\n        \r\n        conditioning_0[0][0]  = t5_strength * t5_0_1_collin + (1-t5_strength) * t5_1_0_collin\r\n        \r\n        conditioning_0[0][1]['pooled_output'] = conditioning_0[0][1]['pooled_output'] + clip_strength * (pooled_combined - conditioning_0[0][1]['pooled_output'])\r\n\r\n        return (conditioning_0, )\r\n\r\n\r\n\r\nclass CLIPTextEncodeFluxUnguided:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\": {\r\n            \"clip\"  : (\"CLIP\", ),\r\n            \"clip_l\": (\"STRING\", {\"multiline\": True, \"dynamicPrompts\": True}),\r\n            \"t5xxl\" : (\"STRING\", {\"multiline\": True, \"dynamicPrompts\": True}),\r\n            }}\r\n    RETURN_TYPES = (\"CONDITIONING\",\"INT\",\"INT\",)\r\n    RETURN_NAMES = (\"conditioning\", \"clip_l_end\", \"t5xxl_end\",)\r\n    FUNCTION     = \"encode\"\r\n    CATEGORY     = \"RES4LYF/conditioning\"\r\n    EXPERIMENTAL = True\r\n\r\n\r\n    def encode(self, clip, clip_l, t5xxl):\r\n        tokens = clip.tokenize(clip_l)\r\n        tokens[\"t5xxl\"] = clip.tokenize(t5xxl)[\"t5xxl\"]\r\n\r\n        clip_l_end = 0\r\n        for i in range(len(tokens['l'][0])):\r\n            if tokens['l'][0][i][0] == 49407:   # CLIP-L end-of-text token\r\n                clip_l_end = i\r\n                break\r\n        t5xxl_end = 0\r\n        for i in range(len(tokens['t5xxl'][0])):   # was tokens['l']: iterating the wrong token stream, which could IndexError when the two prompts differ in length\r\n            if tokens['t5xxl'][0][i][0] == 1:   # T5 end-of-sequence token\r\n                t5xxl_end = i\r\n                break\r\n\r\n        output = clip.encode_from_tokens(tokens, return_pooled=True, return_dict=True)\r\n        cond = output.pop(\"cond\")\r\n        conditioning = [[cond, output]]\r\n        conditioning[0][1]['clip_l_end'] = clip_l_end\r\n        conditioning[0][1]['t5xxl_end'] = t5xxl_end\r\n        return (conditioning, clip_l_end, t5xxl_end,)\r\n\r\n\r\nclass StyleModelApplyStyle: \r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": {\r\n                \"conditioning\":       (\"CONDITIONING\", ),\r\n                \"style_model\":        (\"STYLE_MODEL\", ),\r\n                \"clip_vision_output\": (\"CLIP_VISION_OUTPUT\", ),\r\n                \"strength\":           (\"FLOAT\", {\"default\": 1.0, \"min\": -10.0, \"max\": 10.0, \"step\": 0.001}),\r\n            }\r\n        }\r\n        \r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    RETURN_NAMES = (\"conditioning\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/conditioning\"\r\n    DESCRIPTION  = \"Use with Flux Redux.\"\r\n    EXPERIMENTAL = True\r\n\r\n    def main(self, clip_vision_output, style_model, conditioning, strength=1.0):\r\n        c = style_model.model.feature_match(conditioning, clip_vision_output)\r\n        #cond = style_model.get_cond(clip_vision_output).flatten(start_dim=0, end_dim=1).unsqueeze(dim=0)\r\n        #cond = strength * cond\r\n        #c = []\r\n        #for t in conditioning:\r\n        #    n = [torch.cat((t[0], cond), dim=1), t[1].copy()]\r\n        #    c.append(n)\r\n        return (c, )\r\n\r\n\r\nclass ConditioningZeroAndTruncate: \r\n    # needs updating to ensure dims are correct for arbitrary models without hardcoding. 
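\r\n    # A hedged sketch of the model-agnostic version the comment above asks for: take the dims from the conditioning itself rather than hardcoding (1, 154, 4096) and (1, 2048). Illustrative and unused; '_zero_truncate_sketch' is a hypothetical name.\r\n    @staticmethod\r\n    def _zero_truncate_sketch(conditioning, max_tokens=154):\r\n        c = []\r\n        for t in conditioning:\r\n            d = t[1].copy()\r\n            if d.get(\"pooled_output\", None) is not None:\r\n                d[\"pooled_output\"] = torch.zeros_like(d[\"pooled_output\"])\r\n            c.append([torch.zeros_like(t[0][:, :max_tokens, :]), d])   # zeroed and truncated to max_tokens\r\n        return c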
\r\n    # the vanilla ConditioningZeroOut node doesn't truncate, and SD3.5M degrades badly when an over-long embedding is used as the negative conditioning, even if it is zeroed out\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return { \"required\": {\"conditioning\": (\"CONDITIONING\", )}}\r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    RETURN_NAMES = (\"conditioning\",)\r\n    FUNCTION     = \"zero_out\"\r\n    CATEGORY     = \"RES4LYF/conditioning\"\r\n    DESCRIPTION  = \"Use for negative conditioning with SD3.5. ConditioningZeroOut does not truncate the embedding, \\\r\n                    which results in severe degradation of image quality with SD3.5 when the token limit is exceeded.\"\r\n\r\n    def zero_out(self, conditioning):\r\n        c = []\r\n        for t in conditioning:\r\n            d = t[1].copy()\r\n            if d.get(\"pooled_output\", None) is not None:\r\n                d[\"pooled_output\"] = torch.zeros((1,2048), dtype=t[0].dtype, device=t[0].device)\r\n            n = [torch.zeros((1,154,4096), dtype=t[0].dtype, device=t[0].device), d]   # previously only assigned inside the if-block, so a cond without pooled_output raised NameError\r\n            c.append(n)\r\n        return (c, )\r\n\r\n\r\nclass ConditioningTruncate: \r\n    # needs updating to ensure dims are correct for arbitrary models without hardcoding. \r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return { \"required\": {\"conditioning\": (\"CONDITIONING\", )}}\r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    RETURN_NAMES = (\"conditioning\",)\r\n    FUNCTION     = \"truncate\"\r\n    CATEGORY     = \"RES4LYF/conditioning\"\r\n    DESCRIPTION  = \"Use for positive conditioning with SD3.5. Tokens beyond 77 result in degradation of image quality.\"\r\n    EXPERIMENTAL = True\r\n\r\n    def truncate(self, conditioning):   # renamed from zero_out: this node truncates, it does not zero\r\n        c = []\r\n        for t in conditioning:\r\n            d = t[1].copy()\r\n            if d.get(\"pooled_output\", None) is not None:\r\n                d[\"pooled_output\"] = d[\"pooled_output\"][:, :2048]\r\n            n = [t[0][:, :154, :4096], d]   # previously only assigned inside the if-block, so a cond without pooled_output raised NameError\r\n            c.append(n)\r\n        return (c, )\r\n\r\n\r\nclass ConditioningMultiply:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\": {\"conditioning\": (\"CONDITIONING\", ), \r\n                            \"multiplier\": (\"FLOAT\", {\"default\": 1.0, \"min\": -1000000000.0, \"max\": 1000000000.0, \"step\": 0.01})\r\n                            }}\r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    RETURN_NAMES = (\"conditioning\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/conditioning\"\r\n\r\n    def main(self, conditioning, multiplier):\r\n        c = multiply_nested_tensors(conditioning, multiplier)\r\n        return (c,)\r\n\r\n\r\n\r\nclass ConditioningAdd:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\": {\"conditioning_1\": (\"CONDITIONING\", ), \r\n                            \"conditioning_2\": (\"CONDITIONING\", ), \r\n                            \"multiplier\": (\"FLOAT\", {\"default\": 1.0, \"min\": -1000000000.0, \"max\": 1000000000.0, \"step\": 0.01})\r\n                            }}\r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    RETURN_NAMES = (\"conditioning\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/conditioning\"\r\n\r\n    def main(self, conditioning_1, conditioning_2, multiplier):\r\n        \r\n        conditioning_1[0][0]                  += multiplier * conditioning_2[0][0]\r\n       
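\r\n        # NOTE: this node adds in place, mutating conditioning_1 (and whatever upstream node produced it).\r\n        # A non-mutating sketch, kept commented out in the style of this file ('out' is hypothetical):\r\n        #   out = copy.deepcopy(conditioning_1)\r\n        #   out[0][0] = conditioning_1[0][0] + multiplier * conditioning_2[0][0]\r\n        #   out[0][1]['pooled_output'] = conditioning_1[0][1]['pooled_output'] + multiplier * conditioning_2[0][1]['pooled_output']\r\n        #   return (out,)\r\n       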
 conditioning_1[0][1]['pooled_output'] += multiplier * conditioning_2[0][1]['pooled_output'] \r\n        \r\n        return (conditioning_1,)\r\n\r\n\r\n\r\n\r\nclass ConditioningCombine:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\": {\"conditioning_1\": (\"CONDITIONING\", ), \"conditioning_2\": (\"CONDITIONING\", )}}\r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    RETURN_NAMES = (\"conditioning\",)\r\n    FUNCTION     = \"combine\"\r\n    CATEGORY     = \"RES4LYF/conditioning\"\r\n\r\n    def combine(self, conditioning_1, conditioning_2):\r\n        return (conditioning_1 + conditioning_2, )\r\n\r\n\r\n\r\nclass ConditioningAverage :\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": {\r\n                \"conditioning_to\":          (\"CONDITIONING\", ), \r\n                \"conditioning_from\":        (\"CONDITIONING\", ),\r\n                \"conditioning_to_strength\": (\"FLOAT\", {\"default\": 1.0, \"min\": 0.0, \"max\": 1.0, \"step\": 0.01})\r\n                }\r\n            }\r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    RETURN_NAMES = (\"conditioning\",)\r\n    CATEGORY     = \"RES4LYF/conditioning\"\r\n    FUNCTION     = \"addWeighted\"\r\n\r\n    def addWeighted(self, conditioning_to, conditioning_from, conditioning_to_strength):\r\n        out = []\r\n\r\n        if len(conditioning_from) > 1:\r\n            RESplain(\"Warning: ConditioningAverage conditioning_from contains more than 1 cond, only the first one will actually be applied to conditioning_to.\")\r\n\r\n        cond_from = conditioning_from[0][0]\r\n        pooled_output_from = conditioning_from[0][1].get(\"pooled_output\", None)\r\n\r\n        for i in range(len(conditioning_to)):\r\n            t1 = conditioning_to[i][0]\r\n            pooled_output_to = conditioning_to[i][1].get(\"pooled_output\", pooled_output_from)\r\n            t0 = cond_from[:,:t1.shape[1]]\r\n            if t0.shape[1] < t1.shape[1]:\r\n                t0 = torch.cat([t0] + [torch.zeros((1, (t1.shape[1] - t0.shape[1]), t1.shape[2]))], dim=1)\r\n\r\n            tw = torch.mul(t1, conditioning_to_strength) + torch.mul(t0, (1.0 - conditioning_to_strength))\r\n            t_to = conditioning_to[i][1].copy()\r\n            if pooled_output_from is not None and pooled_output_to is not None:\r\n                t_to[\"pooled_output\"] = torch.mul(pooled_output_to, conditioning_to_strength) + torch.mul(pooled_output_from, (1.0 - conditioning_to_strength))\r\n            elif pooled_output_from is not None:\r\n                t_to[\"pooled_output\"] = pooled_output_from\r\n\r\n            n = [tw, t_to]\r\n            out.append(n)\r\n        return (out, )\r\n    \r\nclass ConditioningSetTimestepRange:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\": {\"conditioning\": (\"CONDITIONING\", ),\r\n                            \"start\": (\"FLOAT\", {\"default\": 0.0, \"min\": 0.0, \"max\": 1.0, \"step\": 0.001}),\r\n                            \"end\":   (\"FLOAT\", {\"default\": 1.0, \"min\": 0.0, \"max\": 1.0, \"step\": 0.001})\r\n                            }}\r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    RETURN_NAMES = (\"conditioning\",)\r\n    FUNCTION     = \"set_range\"\r\n    CATEGORY     = \"RES4LYF/conditioning\"\r\n\r\n    def set_range(self, conditioning, start, end):\r\n        c = node_helpers.conditioning_set_values(conditioning, {\"start_percent\": start,\r\n                                                          
      \"end_percent\": end})\r\n        return (c, )\r\n\r\nclass ConditioningAverageScheduler: # don't think this is implemented correctly. needs to be reworked\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n                \"required\": {\r\n                    \"conditioning_0\": (\"CONDITIONING\", ), \r\n                    \"conditioning_1\": (\"CONDITIONING\", ),\r\n                    \"ratio\": (\"SIGMAS\", ),\r\n                    }\r\n            }\r\n    \r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    RETURN_NAMES = (\"conditioning\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/conditioning\"\r\n    EXPERIMENTAL = True\r\n\r\n    @staticmethod\r\n    def addWeighted(conditioning_to, conditioning_from, conditioning_to_strength): #this function borrowed from comfyui\r\n        out = []\r\n\r\n        if len(conditioning_from) > 1:\r\n            RESplain(\"Warning: ConditioningAverage conditioning_from contains more than 1 cond, only the first one will actually be applied to conditioning_to.\")\r\n\r\n        cond_from = conditioning_from[0][0]\r\n        pooled_output_from = conditioning_from[0][1].get(\"pooled_output\", None)\r\n\r\n        for i in range(len(conditioning_to)):\r\n            t1 = conditioning_to[i][0]\r\n            pooled_output_to = conditioning_to[i][1].get(\"pooled_output\", pooled_output_from)\r\n            t0 = cond_from[:,:t1.shape[1]]\r\n            if t0.shape[1] < t1.shape[1]:\r\n                t0 = torch.cat([t0] + [torch.zeros((1, (t1.shape[1] - t0.shape[1]), t1.shape[2]))], dim=1)\r\n\r\n            tw = torch.mul(t1, conditioning_to_strength) + torch.mul(t0, (1.0 - conditioning_to_strength))\r\n            t_to = conditioning_to[i][1].copy()\r\n            if pooled_output_from is not None and pooled_output_to is not None:\r\n                t_to[\"pooled_output\"] = torch.mul(pooled_output_to, conditioning_to_strength) + torch.mul(pooled_output_from, (1.0 - conditioning_to_strength))\r\n            elif pooled_output_from is not None:\r\n                t_to[\"pooled_output\"] = pooled_output_from\r\n\r\n            n = [tw, t_to]\r\n            out.append(n)\r\n        return out\r\n\r\n    @staticmethod\r\n    def create_percent_array(steps):\r\n        step_size = 1.0 / steps\r\n        return [{\"start_percent\": i * step_size, \"end_percent\": (i + 1) * step_size} for i in range(steps)]\r\n\r\n    def main(self, conditioning_0, conditioning_1, ratio):\r\n        steps = len(ratio)\r\n\r\n        percents = self.create_percent_array(steps)\r\n\r\n        cond = []\r\n        for i in range(steps):\r\n            average = self.addWeighted(conditioning_0, conditioning_1, ratio[i].item())\r\n            cond += node_helpers.conditioning_set_values(average, {\"start_percent\": percents[i][\"start_percent\"], \"end_percent\": percents[i][\"end_percent\"]})\r\n\r\n        return (cond,)\r\n\r\n\r\n\r\nclass StableCascade_StageB_Conditioning64:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": { \r\n                \"conditioning\": (\"CONDITIONING\",),\r\n                \"stage_c\":      (\"LATENT\",),\r\n                }\r\n            }\r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    RETURN_NAMES = (\"conditioning\",)\r\n    FUNCTION     = \"set_prior\"\r\n    CATEGORY     = \"RES4LYF/conditioning\"\r\n\r\n    @precision_tool.cast_tensor\r\n    def set_prior(self, conditioning, stage_c):\r\n        c = []\r\n        for t in conditioning:\r\n     
       d = t[1].copy()\r\n            d['stable_cascade_prior'] = stage_c['samples']\r\n            n = [t[0], d]\r\n            c.append(n)\r\n        return (c, )\r\n\r\n\r\n\r\nclass Conditioning_Recast64:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\": { \"cond_0\": (\"CONDITIONING\",),\r\n                            },\r\n                \"optional\": { \"cond_1\": (\"CONDITIONING\",),}\r\n                }\r\n    RETURN_TYPES = (\"CONDITIONING\",\"CONDITIONING\",)\r\n    RETURN_NAMES = (\"cond_0_recast\",\"cond_1_recast\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/precision\"\r\n    EXPERIMENTAL = True\r\n\r\n    @precision_tool.cast_tensor\r\n    def main(self, cond_0, cond_1 = None):\r\n        cond_0[0][0] = cond_0[0][0].to(torch.float64)\r\n        if 'pooled_output' in cond_0[0][1]:\r\n            cond_0[0][1][\"pooled_output\"] = cond_0[0][1][\"pooled_output\"].to(torch.float64)\r\n        \r\n        if cond_1 is not None:\r\n            cond_1[0][0] = cond_1[0][0].to(torch.float64)\r\n            if 'pooled_output' in cond_1[0][1]:   # was cond_0[0][1]: copy-paste bug that keyed cond_1's update off cond_0\r\n                cond_1[0][1][\"pooled_output\"] = cond_1[0][1][\"pooled_output\"].to(torch.float64)\r\n\r\n        return (cond_0, cond_1,)\r\n\r\n\r\nclass ConditioningToBase64:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": {\r\n                \"conditioning\": (\"CONDITIONING\",),\r\n            },\r\n            \"hidden\": {\r\n                \"unique_id\": \"UNIQUE_ID\",\r\n                \"extra_pnginfo\": \"EXTRA_PNGINFO\",\r\n            },\r\n        }\r\n\r\n    RETURN_TYPES   = (\"STRING\",)\r\n    RETURN_NAMES   = (\"string\",)\r\n    FUNCTION       = \"notify\"\r\n    OUTPUT_NODE    = True\r\n    OUTPUT_IS_LIST = (True,)\r\n    CATEGORY       = \"RES4LYF/utilities\"\r\n\r\n    def notify(self, unique_id=None, extra_pnginfo=None, conditioning=None):\r\n        \r\n        conditioning_pickle = pickle.dumps(conditioning)\r\n        conditioning_base64 = base64.b64encode(conditioning_pickle).decode('utf-8')\r\n        text = [conditioning_base64]\r\n        \r\n        if unique_id is not None and extra_pnginfo is not None:\r\n            if not isinstance(extra_pnginfo, list):\r\n                RESplain(\"Error: extra_pnginfo is not a list\")\r\n            elif (\r\n                not isinstance(extra_pnginfo[0], dict)\r\n                or \"workflow\" not in extra_pnginfo[0]\r\n            ):\r\n                RESplain(\"Error: extra_pnginfo[0] is not a dict or missing 'workflow' key\")\r\n            else:\r\n                workflow = extra_pnginfo[0][\"workflow\"]\r\n                node = next(\r\n                    (x for x in workflow[\"nodes\"] if str(x[\"id\"]) == str(unique_id[0])),\r\n                    None,\r\n                )\r\n                if node:\r\n                    node[\"widgets_values\"] = [text]\r\n\r\n        return {\"ui\": {\"text\": text}, \"result\": (text,)}\r\n\r\nclass Base64ToConditioning:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": {\r\n                \"data\": (\"STRING\", {\"default\": \"\"}),\r\n            }\r\n        }\r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    RETURN_NAMES = (\"conditioning\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/utilities\"\r\n\r\n    def main(self, data):\r\n        # unpickling runs arbitrary code if the string is untrusted; only feed this node data produced by ConditioningToBase64 on a machine you control\r\n        conditioning_pickle = base64.b64decode(data)\r\n        conditioning = 
pickle.loads(conditioning_pickle)\r\n        return (conditioning,)\r\n\r\n\r\n\r\n\r\nclass ConditioningDownsampleT5:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": { \r\n                \"conditioning\": (\"CONDITIONING\",),\r\n                \"token_limit\" : (\"INT\", {'default': 128, 'min': 1, 'max': 16384}),\r\n            },\r\n            \"optional\": {\r\n            }\r\n        }\r\n        \r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    RETURN_NAMES = (\"conditioning\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/conditioning\"\r\n    EXPERIMENTAL = True\r\n\r\n    def main(self, conditioning, token_limit):\r\n        \r\n        conditioning[0][0] = downsample_tokens(conditioning[0][0], token_limit)\r\n        return (conditioning, )\r\n\r\n\r\n\r\n\r\n\"\"\"class ConditioningBatch4:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": { \r\n                \"conditioning_0\": (\"CONDITIONING\",),\r\n                },\r\n            \"optional\": {\r\n                \"conditioning_1\": (\"CONDITIONING\",),\r\n                \"conditioning_2\": (\"CONDITIONING\",),\r\n                \"conditioning_3\": (\"CONDITIONING\",),\r\n            }\r\n            }\r\n        \r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    RETURN_NAMES = (\"conditioning\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/conditioning\"\r\n\r\n    def main(self, conditioning_0, conditioning_1=None, conditioning_2=None, conditioning_3=None, ):\r\n        c = copy.deepcopy(conditioning_0)\r\n        \r\n        if conditioning_1 is not None:\r\n            c.append(conditioning_1[0])\r\n            \r\n        if conditioning_2 is not None:\r\n            c.append(conditioning_2[0])\r\n            \r\n        if conditioning_3 is not None:\r\n            c.append(conditioning_3[0])\r\n\r\n        return (c, )\"\"\"\r\n\r\n\r\nclass ConditioningBatch4:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": { \r\n                \"conditioning_0\": (\"CONDITIONING\",),\r\n                },\r\n            \"optional\": {\r\n                \"conditioning_1\": (\"CONDITIONING\",),\r\n                \"conditioning_2\": (\"CONDITIONING\",),\r\n                \"conditioning_3\": (\"CONDITIONING\",),\r\n            }\r\n            }\r\n        \r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    RETURN_NAMES = (\"conditioning\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/conditioning\"\r\n\r\n    def main(self, conditioning_0, conditioning_1=None, conditioning_2=None, conditioning_3=None, ):\r\n        c = []\r\n        c.append(conditioning_0)\r\n        \r\n        if conditioning_1 is not None:\r\n            c.append(conditioning_1)\r\n            \r\n        if conditioning_2 is not None:\r\n            c.append(conditioning_2)\r\n            \r\n        if conditioning_3 is not None:\r\n            c.append(conditioning_3)\r\n\r\n        return (c, )\r\n\r\n\r\n\r\nclass ConditioningBatch8:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": { \r\n                \"conditioning_0\": (\"CONDITIONING\",),\r\n                },\r\n            \"optional\": {\r\n                \"conditioning_1\": (\"CONDITIONING\",),\r\n                \"conditioning_2\": (\"CONDITIONING\",),\r\n                \"conditioning_3\": (\"CONDITIONING\",),\r\n                
\"conditioning_4\": (\"CONDITIONING\",),\r\n                \"conditioning_5\": (\"CONDITIONING\",),\r\n                \"conditioning_6\": (\"CONDITIONING\",),\r\n                \"conditioning_7\": (\"CONDITIONING\",),\r\n            }\r\n            }\r\n        \r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    RETURN_NAMES = (\"conditioning\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/conditioning\"\r\n\r\n    def main(self, conditioning_0, conditioning_1=None, conditioning_2=None, conditioning_3=None, conditioning_4=None, conditioning_5=None, conditioning_6=None, conditioning_7=None, ):\r\n        c = []\r\n        c.append(conditioning_0)\r\n        \r\n        if conditioning_1 is not None:\r\n            c.append(conditioning_1)\r\n            \r\n        if conditioning_2 is not None:\r\n            c.append(conditioning_2)\r\n            \r\n        if conditioning_3 is not None:\r\n            c.append(conditioning_3)\r\n            \r\n        if conditioning_4 is not None:\r\n            c.append(conditioning_4)\r\n            \r\n        if conditioning_5 is not None:\r\n            c.append(conditioning_5)\r\n            \r\n        if conditioning_6 is not None:\r\n            c.append(conditioning_6)\r\n            \r\n        if conditioning_7 is not None:\r\n            c.append(conditioning_7)\r\n            \r\n        return (c, )\r\n\r\n\r\n\r\nclass EmptyConditioningGenerator:\r\n    def __init__(self, model=None, conditioning=None, device=None, dtype=None):\r\n        \"\"\" device, dtype currently unused \"\"\"\r\n        if model is not None:\r\n                    \r\n            self.device = device\r\n            self.dtype  = dtype\r\n        \r\n            import comfy.supported_models\r\n            self.model_config = model.model.model_config\r\n\r\n            self.llama3_shape = None\r\n            self.pooled_len    = 0\r\n\r\n            if isinstance(self.model_config, comfy.supported_models.SD3):\r\n                self.text_len_base = 154\r\n                self.text_channels = 4096\r\n                self.pooled_len    = 2048\r\n            elif isinstance(self.model_config, (comfy.supported_models.Flux, comfy.supported_models.FluxSchnell, comfy.supported_models.Chroma)):\r\n                self.text_len_base = 256\r\n                self.text_channels = 4096\r\n                self.pooled_len    = 768\r\n            elif isinstance(self.model_config, comfy.supported_models.AuraFlow):\r\n                self.text_len_base = 256\r\n                self.text_channels = 2048\r\n                #self.pooled_len    = 1\r\n            elif isinstance(self.model_config, comfy.supported_models.Stable_Cascade_C):\r\n                self.text_len_base = 77\r\n                self.text_channels = 1280\r\n                self.pooled_len    = 1280\r\n            elif isinstance(self.model_config, comfy.supported_models.WAN21_T2V) or isinstance(self.model_config, comfy.supported_models.WAN21_I2V):\r\n                self.text_len_base = 512\r\n                self.text_channels = 5120 # sometimes needs to be 4096, like when initializing in samplers_py in shark?\r\n                #self.pooled_len    = 1\r\n            elif isinstance(self.model_config, comfy.supported_models.HiDream):\r\n                self.text_len_base = 128\r\n                self.text_channels = 4096 # sometimes needs to be 4096, like when initializing in samplers_py in shark?\r\n                self.pooled_len    = 2048\r\n                self.llama3_shape  = 
torch.Size([1,32,128,4096])\r\n            elif isinstance(self.model_config, comfy.supported_models.LTXV):\r\n                self.text_len_base = 128\r\n                self.text_channels = 4096\r\n                #self.pooled_len    = 1\r\n            elif isinstance(self.model_config, comfy.supported_models.SD15):\r\n                self.text_len_base = 77\r\n                self.text_channels = 768\r\n                self.pooled_len    = 768\r\n            elif isinstance(self.model_config, comfy.supported_models.SDXL):\r\n                self.text_len_base = 77\r\n                self.text_channels = 2048\r\n                self.pooled_len    = 1280\r\n            elif isinstance(self.model_config, comfy.supported_models.HunyuanVideo) or \\\r\n                isinstance (self.model_config, comfy.supported_models.HunyuanVideoI2V) or \\\r\n                isinstance (self.model_config, comfy.supported_models.HunyuanVideoSkyreelsI2V):\r\n                self.text_len_base = 128\r\n                self.text_channels = 4096\r\n                #self.pooled_len    = 1\r\n            else:\r\n                raise ValueError(f\"Unknown model config: {type(self.model_config)}\")\r\n        elif conditioning is not None:\r\n            self.device        = conditioning[0][0].device\r\n            self.dtype         = conditioning[0][0].dtype\r\n            self.text_len_base = conditioning[0][0].shape[-2]\r\n            if 'pooled_output' in conditioning[0][1]:\r\n                self.pooled_len = conditioning[0][1]['pooled_output'].shape[-1]\r\n            else:\r\n                self.pooled_len = 0\r\n            self.text_channels = conditioning[0][0].shape[-1]\r\n            \r\n    def get_empty_conditioning(self):\r\n        if self.llama3_shape is not None and self.pooled_len > 0:\r\n            return [[\r\n                torch.zeros((1, self.text_len_base, self.text_channels)),\r\n                {\r\n                    'pooled_output'      : torch.zeros((1, self.pooled_len)),\r\n                    'conditioning_llama3': torch.zeros(self.llama3_shape),\r\n                }\r\n            ]]\r\n        elif self.pooled_len > 0:\r\n            return [[\r\n                torch.zeros((1, self.text_len_base, self.text_channels)),\r\n                {\r\n                    'pooled_output': torch.zeros((1, self.pooled_len)),\r\n                }\r\n            ]]\r\n        else:\r\n            return [[\r\n                torch.zeros((1, self.text_len_base, self.text_channels)),\r\n            ]]\r\n\r\n    def get_empty_conditionings(self, count):\r\n        return [self.get_empty_conditioning() for _ in range(count)]\r\n    \r\n    def zero_none_conditionings_(self, *conds):\r\n        if len(conds) == 1 and isinstance(conds[0], (list, tuple)):\r\n            conds = conds[0]\r\n        for i, cond in enumerate(conds):\r\n            conds[i] = self.get_empty_conditioning() if cond is None else cond\r\n        return conds\r\n\r\n\"\"\"def zero_conditioning_from_list(conds):\r\n    for cond in conds:\r\n        if cond is not None:\r\n            for i in range(len(cond)):\r\n                pooled     = cond[i][1].get('pooled_output')\r\n                pooled_len = pooled.shape[-1] if pooled is not None else 1    # 1 default pooled_output len for those without it\r\n                \r\n                cond_zero  = [[\r\n                    torch.zeros_like(cond[i][0]),\r\n                    {\"pooled_output\": torch.zeros((1,pooled_len), dtype=cond[i][0].dtype, 
device=cond[i][0].device)},\r\n                ]]\r\n                \r\n            return cond_zero\"\"\"\r\n\r\ndef zero_conditioning_from_list(conds):\r\n    \"\"\"Return a single zeroed conditioning shaped like the first non-None entry in conds.\r\n    A zeroed llama3 context is always included (defaulting to the HiDream-style shape\r\n    when absent); models that don't use it appear to simply ignore the extra key.\"\"\"\r\n    for cond in conds:\r\n        if cond is not None:\r\n            for i in range(len(cond)):   # shapes are taken from the last entry; in practice there is usually only one\r\n                pooled = cond[i][1].get('pooled_output')\r\n                llama3 = cond[i][1].get('conditioning_llama3')\r\n\r\n                pooled_len = pooled.shape[-1] if pooled is not None else 1\r\n                llama3_shape = llama3.shape if llama3 is not None else (1, 32, 128, 4096)\r\n\r\n                cond_zero = [[\r\n                    torch.zeros_like(cond[i][0]),\r\n                    {\r\n                        \"pooled_output\": torch.zeros((1, pooled_len), dtype=cond[i][0].dtype, device=cond[i][0].device),\r\n                        \"conditioning_llama3\": torch.zeros(llama3_shape, dtype=cond[i][0].dtype, device=cond[i][0].device),\r\n                    },\r\n                ]]\r\n\r\n            return cond_zero\r\n\r\nclass TemporalMaskGenerator:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"switch_frame\": (\"INT\", {\"default\": 33, \"min\": 1, \"step\": 4, \"max\": 0xffffffffffffffff}),\r\n                    \"frames\":       (\"INT\", {\"default\": 65, \"min\": 1, \"step\": 4, \"max\": 0xffffffffffffffff}),\r\n                    \"invert_mask\":             (\"BOOLEAN\",                                  {\"default\": False}),\r\n\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (\"MASK\",)\r\n    RETURN_NAMES = (\"temporal_mask\",) \r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/masks\"\r\n    EXPERIMENTAL = True\r\n    \r\n    def main(self,\r\n            switch_frame = 33,\r\n            frames = 65,\r\n            invert_mask = False,\r\n            ):\r\n        \r\n        switch_frame = switch_frame // 4   # map pixel-frame indices to latent frames (these video models compress time roughly 4x)\r\n        frames = frames // 4 + 1\r\n        \r\n        temporal_mask = torch.ones((frames, 2, 2))\r\n        \r\n        temporal_mask[switch_frame:,...] 
= 0.0\r\n        \r\n        if invert_mask:\r\n            temporal_mask = 1 - temporal_mask\r\n        \r\n        return (temporal_mask,)\r\n\r\n\r\n\r\n\r\nclass TemporalSplitAttnMask_Midframe:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"self_attn_midframe\":  (\"INT\", {\"default\": 33, \"min\": 1, \"step\": 4, \"max\": 0xffffffffffffffff}),\r\n                    \"cross_attn_midframe\": (\"INT\", {\"default\": 33, \"min\": 1, \"step\": 4, \"max\": 0xffffffffffffffff}),\r\n                    \"self_attn_invert\":    (\"BOOLEAN\",                                  {\"default\": False}),\r\n                    \"cross_attn_invert\":   (\"BOOLEAN\",                                  {\"default\": False}),\r\n                    \"frames\":             (\"INT\", {\"default\": 65, \"min\": 1, \"step\": 4, \"max\": 0xffffffffffffffff}),\r\n\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (\"MASK\",)\r\n    RETURN_NAMES = (\"temporal_mask\",) \r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/masks\"\r\n    EXPERIMENTAL = True\r\n    \r\n    def main(self,\r\n            self_attn_midframe = 33,\r\n            cross_attn_midframe = 33,\r\n            self_attn_invert = False,\r\n            cross_attn_invert = False,\r\n            frames = 65,\r\n            ):\r\n\r\n        frames = frames // 4 + 1\r\n        \r\n        temporal_self_mask  = torch.ones((frames, 2, 2))\r\n        temporal_cross_mask = torch.ones((frames, 2, 2))\r\n\r\n        \r\n        self_attn_midframe  = self_attn_midframe  // 4\r\n        cross_attn_midframe = cross_attn_midframe // 4\r\n        \r\n        temporal_self_mask[self_attn_midframe  :,...] = 0.0\r\n        temporal_cross_mask[cross_attn_midframe:,...] 
= 0.0\r\n        \r\n        if self_attn_invert:\r\n            temporal_self_mask  = 1 - temporal_self_mask\r\n            \r\n        if cross_attn_invert:\r\n            temporal_cross_mask = 1 - temporal_cross_mask\r\n        \r\n        temporal_attn_masks = torch.stack([temporal_cross_mask, temporal_self_mask])\r\n        \r\n        return (temporal_attn_masks,)\r\n\r\n\r\n\r\n\r\nclass TemporalSplitAttnMask:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"self_attn_start\":  (\"INT\", {\"default\": 1,  \"min\": 1, \"step\": 4, \"max\": 0xffffffffffffffff}),\r\n                    \"self_attn_stop\":   (\"INT\", {\"default\": 33, \"min\": 1, \"step\": 4, \"max\": 0xffffffffffffffff}),\r\n                    \"cross_attn_start\": (\"INT\", {\"default\": 1,  \"min\": 1, \"step\": 4, \"max\": 0xffffffffffffffff}),\r\n                    \"cross_attn_stop\":  (\"INT\", {\"default\": 33, \"min\": 1, \"step\": 4, \"max\": 0xffffffffffffffff}),\r\n\r\n                    #\"frames\":           (\"INT\", {\"default\": 65, \"min\": 1, \"step\": 4, \"max\": 0xffffffffffffffff}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (\"MASK\",)\r\n    RETURN_NAMES = (\"temporal_mask\",) \r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/masks\"\r\n    \r\n    def main(self,\r\n            self_attn_start  = 0,\r\n            self_attn_stop   = 33,\r\n            cross_attn_start = 0,\r\n            cross_attn_stop  = 33,\r\n            #frames           = 65,\r\n            ):\r\n\r\n        #frames = frames // 4 + 1\r\n        \r\n        self_attn_start  = self_attn_start  // 4 #+ 1\r\n        self_attn_stop   = self_attn_stop   // 4 + 1\r\n        cross_attn_start = cross_attn_start // 4 #+ 1\r\n        cross_attn_stop  = cross_attn_stop  // 4 + 1\r\n        \r\n        max_stop = max(self_attn_stop, cross_attn_stop)\r\n        \r\n        temporal_self_mask  = torch.zeros((max_stop, 1, 1))\r\n        temporal_cross_mask = torch.zeros((max_stop, 1, 1))\r\n\r\n        temporal_self_mask [ self_attn_start: self_attn_stop,...] = 1.0\r\n        temporal_cross_mask[cross_attn_start:cross_attn_stop,...] 
= 1.0\r\n        \r\n        temporal_attn_masks = torch.stack([temporal_cross_mask, temporal_self_mask])\r\n        \r\n        return (temporal_attn_masks,)\r\n\r\n\r\n\r\n\r\nclass TemporalCrossAttnMask:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"cross_attn_start\": (\"INT\", {\"default\": 1,  \"min\": 1, \"step\": 4, \"max\": 0xffffffffffffffff}),\r\n                    \"cross_attn_stop\":  (\"INT\", {\"default\": 33, \"min\": 1, \"step\": 4, \"max\": 0xffffffffffffffff}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (\"MASK\",)\r\n    RETURN_NAMES = (\"temporal_mask\",) \r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/masks\"\r\n    \r\n    def main(self,\r\n            cross_attn_start = 0,\r\n            cross_attn_stop  = 33,\r\n            ):\r\n        \r\n        cross_attn_start = cross_attn_start // 4 #+ 1\r\n        cross_attn_stop  = cross_attn_stop  // 4 + 1\r\n        \r\n        temporal_self_mask  = torch.zeros((cross_attn_stop, 1, 1))  # dummy to satisfy stack\r\n        temporal_cross_mask = torch.zeros((cross_attn_stop, 1, 1))\r\n\r\n        temporal_cross_mask[cross_attn_start:cross_attn_stop,...] = 1.0\r\n        \r\n        temporal_attn_masks = torch.stack([temporal_cross_mask, temporal_self_mask])\r\n        \r\n        return (temporal_attn_masks,)\r\n\r\n\r\n\r\n\r\n@dataclass\r\nclass RegionalParameters:\r\n    weights    : List[float] = field(default_factory=list)\r\n    floors     : List[float] = field(default_factory=list)\r\n\r\n\r\n\r\nREG_MASK_TYPE_2 = [\r\n    \"gradient\",\r\n    \"gradient_masked\",\r\n    \"gradient_unmasked\",\r\n    \"boolean\",\r\n    \"boolean_masked\",\r\n    \"boolean_unmasked\",\r\n]\r\n\r\nREG_MASK_TYPE_3 = [\r\n    \"gradient\",\r\n    \"gradient_A\",\r\n    \"gradient_B\",\r\n    \"gradient_unmasked\",\r\n    \"gradient_AB\",\r\n    \"gradient_A,unmasked\",\r\n    \"gradient_B,unmasked\",\r\n\r\n    \"boolean\",\r\n    \"boolean_A\",\r\n    \"boolean_B\",\r\n    \"boolean_unmasked\",\r\n    \"boolean_AB\",\r\n    \"boolean_A,unmasked\",\r\n    \"boolean_B,unmasked\",\r\n]\r\n\r\nREG_MASK_TYPE_AB = [\r\n    \"gradient\",\r\n    \"gradient_A\",\r\n    \"gradient_B\",\r\n    \"boolean\",\r\n    \"boolean_A\",\r\n    \"boolean_B\",\r\n]\r\n\r\nREG_MASK_TYPE_ABC = [\r\n    \"gradient\",\r\n    \"gradient_A\",\r\n    \"gradient_B\",\r\n    \"gradient_C\",\r\n    \"gradient_AB\",\r\n    \"gradient_AC\",\r\n    \"gradient_BC\",\r\n\r\n    \"boolean\",\r\n    \"boolean_A\",\r\n    \"boolean_B\",\r\n    \"boolean_C\",\r\n    \"boolean_AB\",\r\n    \"boolean_AC\",\r\n    \"boolean_BC\",\r\n\r\n]\r\n\r\n\r\n\r\n\r\nclass ClownRegionalConditioning_AB:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": { \r\n                \"weight\":                  (\"FLOAT\",                                     {\"default\": 1.0, \"min\":  -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"region_bleed\":            (\"FLOAT\",                                     {\"default\": 0.0, \"min\":  -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"region_bleed_start_step\": (\"INT\",                                       {\"default\": 0,   \"min\":  0,        \"max\": 10000}),\r\n                \"weight_scheduler\":        ([\"constant\"] + 
get_res4lyf_scheduler_list(), {\"default\": \"constant\"},),\r\n                \"start_step\":              (\"INT\",                                       {\"default\": 0,   \"min\":  0,        \"max\": 10000}),\r\n                \"end_step\":                (\"INT\",                                       {\"default\": -1,  \"min\": -1,        \"max\": 10000}),\r\n                \"mask_type\":               (REG_MASK_TYPE_AB,                            {\"default\": \"boolean\"}),\r\n                \"edge_width\":              (\"INT\",                                       {\"default\": 0,  \"min\": 0,          \"max\": 10000}),\r\n                \"invert_mask\":             (\"BOOLEAN\",                                   {\"default\": False}),\r\n            }, \r\n            \"optional\": {\r\n                \"conditioning_A\":          (\"CONDITIONING\", ),\r\n                \"conditioning_B\":          (\"CONDITIONING\", ),\r\n                \"mask_A\":                  (\"MASK\", ),\r\n                \"mask_B\":                  (\"MASK\", ),\r\n                \"weights\":                 (\"SIGMAS\", ),\r\n                \"region_bleeds\":           (\"SIGMAS\", ),\r\n            }\r\n        }\r\n\r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    RETURN_NAMES = (\"conditioning\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/conditioning\"\r\n\r\n    def create_callback(self, **kwargs):\r\n        def callback(model):\r\n            kwargs[\"model\"] = model  \r\n            pos_cond, = self.prepare_regional_cond(**kwargs)\r\n            return pos_cond\r\n        return callback\r\n\r\n    def main(self,\r\n            weight                   : float  = 1.0,\r\n            start_sigma              : float  = 0.0,\r\n            end_sigma                : float  = 1.0,\r\n            weight_scheduler                  = None,\r\n            start_step               : int    = 0,\r\n            end_step                 : int    = -1,\r\n            conditioning_A                    = None,\r\n            conditioning_B                    = None,\r\n            weights                  : Tensor = None,\r\n            region_bleeds            : Tensor = None,\r\n            region_bleed             : float  = 0.0,\r\n            region_bleed_start_step  : int    = 0,\r\n            mask_type                : str    = \"boolean\",\r\n            edge_width               : int    = 0,\r\n            mask_A                            = None,\r\n            mask_B                            = None,\r\n            invert_mask              : bool   = False\r\n            ) -> Tuple[Tensor]:\r\n        \r\n        mask   = mask_A\r\n        unmask = mask_B\r\n        \r\n        if end_step == -1:\r\n            end_step = MAX_STEPS\r\n\r\n        callback = self.create_callback(weight                   = weight,\r\n                                        start_sigma              = start_sigma,\r\n                                        end_sigma                = end_sigma,\r\n                                        weight_scheduler         = weight_scheduler,\r\n                                        start_step               = start_step,\r\n                                        end_step                 = end_step,\r\n                                        weights                  = weights,\r\n                                        region_bleeds            = region_bleeds,\r\n                                        region_bleed             = 
region_bleed,\r\n                                        region_bleed_start_step  = region_bleed_start_step,\r\n                                        mask_type                = mask_type,\r\n                                        edge_width               = edge_width,\r\n                                        mask                     = mask,\r\n                                        unmask                   = unmask,\r\n                                        invert_mask              = invert_mask,\r\n                                        conditioning_A           = conditioning_A,\r\n                                        conditioning_B           = conditioning_B,\r\n                                        )\r\n\r\n        cond = zero_conditioning_from_list([conditioning_A, conditioning_B])\r\n        \r\n        # the real regional conditioning is built lazily: the sampler is expected to invoke callback(model) once the model is available\r\n        cond[0][1]['callback_regional'] = callback\r\n        \r\n        return (cond,)\r\n\r\n\r\n\r\n    def prepare_regional_cond(self,\r\n                                model,\r\n                                weight                   : float  = 1.0,\r\n                                start_sigma              : float  = 0.0,\r\n                                end_sigma                : float  = 1.0,\r\n                                weight_scheduler                  = None,\r\n                                start_step               : int    = 0,\r\n                                end_step                 : int    = -1,\r\n                                conditioning_A                    = None,\r\n                                conditioning_B                    = None,\r\n                                weights                  : Tensor = None,\r\n                                region_bleeds            : Tensor = None,\r\n                                region_bleed             : float  = 0.0,\r\n                                region_bleed_start_step  : int    = 0,\r\n                                mask_type                : str    = \"gradient\",\r\n                                edge_width               : int    = 0,\r\n                                mask                              = None,\r\n                                unmask                            = None,\r\n                                invert_mask              : bool   = False,\r\n                                ) -> Tuple[Tensor]:\r\n\r\n        default_dtype  = torch.float64\r\n        default_device = torch.device(\"cuda\") \r\n        \r\n        if end_step == -1:\r\n            end_step = MAX_STEPS\r\n        \r\n        if weights is None and weight_scheduler != \"constant\":\r\n            total_steps = end_step - start_step\r\n            weights     = get_sigmas(model, weight_scheduler, total_steps, 1.0).to(dtype=default_dtype, device=default_device) #/ model.inner_model.inner_model.model_sampling.sigma_max  #scaling doesn't matter as this is a flux-only node\r\n            prepend     = torch.zeros(start_step,                                  dtype=default_dtype, device=default_device)\r\n            weights     = torch.cat((prepend, weights), dim=0)\r\n        \r\n        if invert_mask:\r\n            if mask is not None:\r\n                mask   = 1-mask\r\n            if unmask is not None:   # was inverted unconditionally, which crashed when only mask_A was connected\r\n                unmask = 1-unmask\r\n\r\n        floor, floors = region_bleed, region_bleeds\r\n        \r\n        weights = initialize_or_scale(weights, weight, end_step).to(default_dtype).to(default_device)\r\n        weights = F.pad(weights, (0, MAX_STEPS), value=0.0)\r\n        \r\n        prepend = torch.full((region_bleed_start_step,),  0.0, 
dtype=default_dtype, device=default_device)\r\n        floors  = initialize_or_scale(floors,  floor,  end_step).to(default_dtype).to(default_device)\r\n        floors  = F.pad(floors,  (0, MAX_STEPS), value=0.0)\r\n        floors  = torch.cat((prepend, floors), dim=0)\r\n\r\n        if (conditioning_A is None) and (conditioning_B is None):\r\n            cond = None\r\n\r\n        elif mask is not None:\r\n            EmptyCondGen = EmptyConditioningGenerator(model)\r\n            conditioning_A, conditioning_B = EmptyCondGen.zero_none_conditionings_([conditioning_A, conditioning_B])\r\n            \r\n            cond = copy.deepcopy(conditioning_A)\r\n            \r\n            if isinstance(model.model.model_config, (comfy.supported_models.WAN21_T2V, comfy.supported_models.WAN21_I2V)):\r\n                if model.model.diffusion_model.blocks[0].self_attn.winderz_type != \"false\":\r\n                    AttnMask = CrossAttentionMask(mask_type, edge_width)\r\n                else:\r\n                    AttnMask = SplitAttentionMask(mask_type, edge_width)\r\n            elif isinstance(model.model.model_config, comfy.supported_models.HiDream):\r\n                AttnMask = FullAttentionMaskHiDream(mask_type, edge_width)\r\n            elif isinstance(model.model.model_config, (comfy.supported_models.SDXL, comfy.supported_models.SD15, comfy.supported_models.Stable_Cascade_C)):\r\n                AttnMask = SplitAttentionMask(mask_type, edge_width)\r\n            else:\r\n                AttnMask = FullAttentionMask(mask_type, edge_width)\r\n\r\n            RegContext = RegionalContext()\r\n            \r\n            if isinstance(model.model.model_config, comfy.supported_models.HiDream):\r\n\r\n                AttnMask.add_region_sizes(\r\n                    [\r\n                        conditioning_A[0][0].shape[-2],\r\n                        conditioning_A[0][1]['conditioning_llama3'][0,0,...].shape[-2],\r\n                        conditioning_A[0][1]['conditioning_llama3'][0,0,...].shape[-2],\r\n                    ],\r\n                    mask)\r\n                AttnMask.add_region_sizes(\r\n                    [\r\n                        conditioning_B[0][0].shape[-2],\r\n                        conditioning_B[0][1]['conditioning_llama3'][0,0,...].shape[-2],\r\n                        conditioning_B[0][1]['conditioning_llama3'][0,0,...].shape[-2],\r\n                    ],\r\n                    unmask)\r\n\r\n                RegContext.add_region_llama3(conditioning_A[0][1]['conditioning_llama3'])\r\n                RegContext.add_region_llama3(conditioning_B[0][1]['conditioning_llama3'])\r\n            else:\r\n                AttnMask.add_region(conditioning_A[0][0],   mask)\r\n                AttnMask.add_region(conditioning_B[0][0], unmask)\r\n            \r\n            RegContext.add_region(conditioning_A[0][0], conditioning_A[0][1].get('pooled_output'))\r\n            RegContext.add_region(conditioning_B[0][0], conditioning_B[0][1].get('pooled_output'))\r\n            \r\n            if 'clip_vision_output' in conditioning_A[0][1]: # For WAN... 
dicey results\r\n                RegContext.add_region_clip_fea(conditioning_A[0][1]['clip_vision_output'].penultimate_hidden_states)\r\n                RegContext.add_region_clip_fea(conditioning_B[0][1]['clip_vision_output'].penultimate_hidden_states)\r\n            if 'unclip_conditioning' in conditioning_A[0][1]:\r\n                RegContext.add_region_clip_fea(conditioning_A[0][1]['unclip_conditioning'][0]['clip_vision_output'].image_embeds) #['penultimate_hidden_states'])\r\n            if 'unclip_conditioning' in conditioning_B[0][1]:\r\n                RegContext.add_region_clip_fea(conditioning_B[0][1]['unclip_conditioning'][0]['clip_vision_output'].image_embeds) #['penultimate_hidden_states'])\r\n                \r\n            cond[0][1]['AttnMask'] = AttnMask\r\n            cond[0][1]['RegContext'] = RegContext\r\n            \r\n            cond = merge_with_base(base=cond, others=[conditioning_A, conditioning_B])\r\n            \r\n            if 'pooled_output' in cond[0][1] and cond[0][1]['pooled_output'] is not None:\r\n                cond[0][1]['pooled_output'] = (conditioning_A[0][1]['pooled_output'] + conditioning_B[0][1]['pooled_output']) / 2\r\n            \r\n            #if 'conditioning_llama3' in cond[0][1] and cond[0][1]['conditioning_llama3'] is not None:\r\n            #    cond[0][1]['conditioning_llama3'] = (conditioning_A[0][1]['conditioning_llama3'] + conditioning_B[0][1]['conditioning_llama3']) / 2\r\n            #cond[0] = list(cond[0])\r\n            #cond[0][0] = (conditioning_A[0][0] + conditioning_B[0][0]) / 2\r\n            #cond[0] = tuple(cond[0])\r\n            \r\n        else:\r\n            cond = conditioning_A\r\n            \r\n        if cond is not None:   # cond is None when neither conditioning input is connected; guard the subscript\r\n            cond[0][1]['RegParam'] = RegionalParameters(weights, floors)\r\n        \r\n        return (cond,)\r\n\r\n\r\n\r\n\r\n\r\nclass ClownRegionalConditioning_ABC:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": { \r\n                \"weight\":                  (\"FLOAT\",                                     {\"default\": 1.0,  \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"region_bleed\":            (\"FLOAT\",                                     {\"default\": 0.0,  \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"region_bleed_start_step\": (\"INT\",                                       {\"default\": 0,    \"min\":  0,       \"max\": 10000}),\r\n                \"weight_scheduler\":        ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"constant\"},),\r\n                \"start_step\":              (\"INT\",                                       {\"default\": 0,    \"min\":  0,       \"max\": 10000}),\r\n                \"end_step\":                (\"INT\",                                       {\"default\": 100,  \"min\": -1,       \"max\": 10000}),\r\n                \"mask_type\":               (REG_MASK_TYPE_ABC,                           {\"default\": \"boolean\"}),\r\n                \"edge_width\":              (\"INT\",                                       {\"default\": 0,    \"min\": 0,        \"max\": 10000}),\r\n                \"invert_mask\":             (\"BOOLEAN\",                                   {\"default\": False}),\r\n            }, \r\n            \"optional\": {\r\n                \"conditioning_A\":          (\"CONDITIONING\", ),\r\n                \"conditioning_B\":          (\"CONDITIONING\", ),\r\n                \"conditioning_C\":          (\"CONDITIONING\", 
),\r\n                \"mask_A\":                  (\"MASK\", ),\r\n                \"mask_B\":                  (\"MASK\", ),\r\n                \"mask_C\":                  (\"MASK\", ),\r\n                \"weights\":                 (\"SIGMAS\", ),\r\n                \"region_bleeds\":           (\"SIGMAS\", ),\r\n            }\r\n        }\r\n\r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    RETURN_NAMES = (\"conditioning\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/conditioning\"\r\n\r\n    def create_callback(self, **kwargs):\r\n        def callback(model):\r\n            kwargs[\"model\"] = model  \r\n            pos_cond, = self.prepare_regional_cond(**kwargs)\r\n            return pos_cond\r\n        return callback\r\n\r\n    def main(self,\r\n            weight                   : float  = 1.0,\r\n            start_sigma              : float  = 0.0,\r\n            end_sigma                : float  = 1.0,\r\n            weight_scheduler                  = None,\r\n            start_step               : int    = 0,\r\n            end_step                 : int    = -1,\r\n            conditioning_A                    = None,\r\n            conditioning_B                    = None,\r\n            conditioning_C                    = None,\r\n            weights                  : Tensor = None,\r\n            region_bleeds            : Tensor = None,\r\n            region_bleed             : float  = 0.0,\r\n            region_bleed_start_step  : int    = 0,\r\n\r\n            mask_type                : str    = \"boolean\",\r\n            edge_width               : int    = 0,\r\n            mask_A                              = None,\r\n            mask_B                              = None,\r\n            mask_C                              = None,\r\n            invert_mask              : bool   = False\r\n            ) -> Tuple[Tensor]:\r\n\r\n        if end_step == -1:\r\n            end_step = MAX_STEPS\r\n        \r\n        callback = self.create_callback(weight                   = weight,\r\n                                        start_sigma              = start_sigma,\r\n                                        end_sigma                = end_sigma,\r\n                                        weight_scheduler         = weight_scheduler,\r\n                                        start_step               = start_step,\r\n                                        end_step                 = end_step,\r\n                                        weights                  = weights,\r\n                                        region_bleeds            = region_bleeds,\r\n                                        region_bleed             = region_bleed,\r\n                                        region_bleed_start_step  = region_bleed_start_step,\r\n                                        mask_type                = mask_type,\r\n                                        edge_width               = edge_width,\r\n                                        mask_A                   = mask_A,\r\n                                        mask_B                   = mask_B,\r\n                                        mask_C                   = mask_C,\r\n                                        invert_mask              = invert_mask,\r\n                                        conditioning_A           = conditioning_A,\r\n                                        conditioning_B           = conditioning_B,\r\n                                        conditioning_C           = 
conditioning_C,\r\n                                        )\r\n\r\n        cond = zero_conditioning_from_list([conditioning_A, conditioning_B, conditioning_C])\r\n        \r\n        # as in the _AB variant, the real regional conditioning is built lazily via callback(model)\r\n        cond[0][1]['callback_regional'] = callback\r\n        \r\n        return (cond,)\r\n\r\n\r\n\r\n    def prepare_regional_cond(self,\r\n                                model,\r\n                                weight                   : float  = 1.0,\r\n                                start_sigma              : float  = 0.0,\r\n                                end_sigma                : float  = 1.0,\r\n                                weight_scheduler                  = None,\r\n                                start_step               : int    =  0,\r\n                                end_step                 : int    = -1,\r\n                                conditioning_A                    = None,\r\n                                conditioning_B                    = None,\r\n                                conditioning_C                    = None,\r\n                                weights                  : Tensor = None,\r\n                                region_bleeds            : Tensor = None,\r\n                                region_bleed             : float  = 0.0,\r\n                                region_bleed_start_step  : int    = 0,\r\n                                mask_type                : str    = \"boolean\",\r\n                                edge_width               : int    = 0,\r\n                                mask_A                            = None,\r\n                                mask_B                            = None,\r\n                                mask_C                            = None,\r\n                                invert_mask              : bool   = False,\r\n                                ) -> Tuple[Tensor]:\r\n\r\n        default_dtype  = torch.float64\r\n        default_device = torch.device(\"cuda\") \r\n        \r\n        if end_step == -1:\r\n            end_step = MAX_STEPS\r\n        \r\n        if weights is None and weight_scheduler != \"constant\":\r\n            total_steps = end_step - start_step\r\n            weights     = get_sigmas(model, weight_scheduler, total_steps, 1.0).to(dtype=default_dtype, device=default_device) #/ model.inner_model.inner_model.model_sampling.sigma_max  #scaling doesn't matter as this is a flux-only node\r\n            prepend     = torch.zeros(start_step,                                  dtype=default_dtype, device=default_device)\r\n            weights     = torch.cat((prepend, weights), dim=0)\r\n        \r\n        if invert_mask and mask_A is not None:\r\n            mask_A = 1-mask_A\r\n            \r\n        if invert_mask and mask_B is not None:\r\n            mask_B = 1-mask_B\r\n        \r\n        mask_AB_inv = mask_C\r\n        if invert_mask and mask_AB_inv is not None:\r\n            mask_AB_inv = 1-mask_AB_inv\r\n        \r\n        floor, floors = region_bleed, region_bleeds\r\n        \r\n        weights = initialize_or_scale(weights, weight, end_step).to(default_dtype).to(default_device)   # .to(default_device) added to match the _AB variant\r\n        weights = F.pad(weights, (0, MAX_STEPS), value=0.0)\r\n        \r\n        prepend = torch.full((region_bleed_start_step,),  0.0, dtype=default_dtype, device=default_device)\r\n        floors  = initialize_or_scale(floors,  floor,  end_step).to(default_dtype).to(default_device)\r\n        floors  = F.pad(floors,  (0, MAX_STEPS), value=0.0)\r\n        floors  = torch.cat((prepend, floors), dim=0)\r\n\r\n        if 
(conditioning_A is None) and (conditioning_B is None) and (conditioning_C is None):\r\n            conditioning = None\r\n\r\n        elif mask_A is not None:\r\n            \r\n            EmptyCondGen = EmptyConditioningGenerator(model)\r\n            conditioning_A, conditioning_B, conditioning_C = EmptyCondGen.zero_none_conditionings_([conditioning_A, conditioning_B, conditioning_C])\r\n\r\n            conditioning = copy.deepcopy(conditioning_A)\r\n            \r\n            if isinstance(model.model.model_config, (comfy.supported_models.WAN21_T2V, comfy.supported_models.WAN21_I2V)):\r\n                if model.model.diffusion_model.blocks[0].self_attn.winderz_type != \"false\":\r\n                    AttnMask = CrossAttentionMask(mask_type, edge_width)\r\n                else:\r\n                    AttnMask = SplitAttentionMask(mask_type, edge_width)\r\n            elif isinstance(model.model.model_config, comfy.supported_models.HiDream):\r\n                AttnMask = FullAttentionMaskHiDream(mask_type, edge_width)\r\n            elif isinstance(model.model.model_config, (comfy.supported_models.SDXL, comfy.supported_models.SD15, comfy.supported_models.Stable_Cascade_C)):\r\n                AttnMask = SplitAttentionMask(mask_type, edge_width)\r\n            else:\r\n                AttnMask = FullAttentionMask(mask_type, edge_width)\r\n                \r\n            RegContext = RegionalContext()\r\n            \r\n            if isinstance(model.model.model_config, comfy.supported_models.HiDream):\r\n                AttnMask.add_region_sizes(\r\n                    [\r\n                        conditioning_A[0][0].shape[-2],\r\n                        conditioning_A[0][1]['conditioning_llama3'][0,0,...].shape[-2],\r\n                        conditioning_A[0][1]['conditioning_llama3'][0,0,...].shape[-2],\r\n                    ],\r\n                    mask_A)\r\n                AttnMask.add_region_sizes(\r\n                    [\r\n                        conditioning_B[0][0].shape[-2],\r\n                        conditioning_B[0][1]['conditioning_llama3'][0,0,...].shape[-2],\r\n                        conditioning_B[0][1]['conditioning_llama3'][0,0,...].shape[-2],\r\n                    ],\r\n                    mask_B)\r\n                AttnMask.add_region_sizes(\r\n                    [\r\n                        conditioning_C[0][0].shape[-2],\r\n                        conditioning_C[0][1]['conditioning_llama3'][0,0,...].shape[-2],\r\n                        conditioning_C[0][1]['conditioning_llama3'][0,0,...].shape[-2],\r\n                    ],\r\n                    mask_AB_inv)\r\n                \r\n                RegContext.add_region_llama3(conditioning_A[0][1]['conditioning_llama3'])\r\n                RegContext.add_region_llama3(conditioning_B[0][1]['conditioning_llama3'])\r\n                RegContext.add_region_llama3(conditioning_C[0][1]['conditioning_llama3'])\r\n            else:\r\n                AttnMask.add_region(conditioning_A[0][0], mask_A)\r\n                AttnMask.add_region(conditioning_B[0][0], mask_B)\r\n                AttnMask.add_region(conditioning_C[0][0], mask_AB_inv)\r\n            \r\n            RegContext.add_region(conditioning_A[0][0], conditioning_A[0][1].get('pooled_output'))\r\n            RegContext.add_region(conditioning_B[0][0], conditioning_B[0][1].get('pooled_output'))\r\n            RegContext.add_region(conditioning_C[0][0], conditioning_C[0][1].get('pooled_output'))\r\n            \r\n            #if 'pooled_output' in 
conditioning_A[0][1]:\r\n            #    RegContext.pooled_output = conditioning_A[0][1]['pooled_output'] + conditioning_B[0][1]['pooled_output'] + conditioning_C[0][1]['pooled_output']\r\n            \r\n            conditioning[0][1]['AttnMask']   = AttnMask\r\n            conditioning[0][1]['RegContext'] = RegContext\r\n            \r\n            conditioning = merge_with_base(base=conditioning, others=[conditioning_A, conditioning_B, conditioning_C])\r\n            \r\n            if 'pooled_output' in conditioning[0][1] and conditioning[0][1]['pooled_output'] is not None:\r\n                conditioning[0][1]['pooled_output'] = (conditioning_A[0][1]['pooled_output'] + conditioning_B[0][1]['pooled_output'] + conditioning_C[0][1]['pooled_output']) / 3\r\n            \r\n        else:\r\n            conditioning = conditioning_A\r\n\r\n        conditioning[0][1]['RegParam'] = RegionalParameters(weights, floors)\r\n        \r\n        return (conditioning,)\r\n\r\n\r\n\r\nclass ClownRegionalConditioning2(ClownRegionalConditioning_AB):\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": { \r\n                \"weight\":                  (\"FLOAT\",                                     {\"default\": 1.0, \"min\":  -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"region_bleed\":            (\"FLOAT\",                                     {\"default\": 0.0, \"min\":  -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"region_bleed_start_step\": (\"INT\",                                       {\"default\": 0,   \"min\":  0,        \"max\": 10000}),\r\n                \"weight_scheduler\":        ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"constant\"},),\r\n                \"start_step\":              (\"INT\",                                       {\"default\": 0,   \"min\":  0,        \"max\": 10000}),\r\n                \"end_step\":                (\"INT\",                                       {\"default\": -1,  \"min\": -1,        \"max\": 10000}),\r\n                \"mask_type\":               (REG_MASK_TYPE_2,                             {\"default\": \"boolean\"}),\r\n                \"edge_width\":              (\"INT\",                                       {\"default\": 0,  \"min\": -10000,          \"max\": 10000}),\r\n                \"invert_mask\":             (\"BOOLEAN\",                                   {\"default\": False}),\r\n            }, \r\n            \"optional\": {\r\n                \"conditioning_masked\":     (\"CONDITIONING\", ),\r\n                \"conditioning_unmasked\":   (\"CONDITIONING\", ),\r\n                \"mask\":                    (\"MASK\", ),\r\n                \"weights\":                 (\"SIGMAS\", ),\r\n                \"region_bleeds\":           (\"SIGMAS\", ),\r\n            }\r\n        }\r\n\r\n    def main(self, conditioning_masked, conditioning_unmasked, mask, **kwargs):\r\n        return super().main(\r\n            conditioning_A = conditioning_masked,\r\n            conditioning_B = conditioning_unmasked,\r\n            mask_A         =   mask,\r\n            mask_B         = 1-mask,\r\n            **kwargs\r\n        )    \r\n\r\n\r\n\r\nclass ClownRegionalConditioning3(ClownRegionalConditioning_ABC):\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": { \r\n                \"weight\":                  (\"FLOAT\",                                     {\"default\": 1.0,  \"min\": 
-10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"region_bleed\":            (\"FLOAT\",                                     {\"default\": 0.0,  \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"region_bleed_start_step\": (\"INT\",                                       {\"default\": 0,    \"min\":  0,       \"max\": 10000}),\r\n                \"weight_scheduler\":        ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"constant\"},),\r\n                \"start_step\":              (\"INT\",                                       {\"default\": 0,    \"min\":  0,       \"max\": 10000}),\r\n                \"end_step\":                (\"INT\",                                       {\"default\": 100,  \"min\": -1,       \"max\": 10000}),\r\n                \"mask_type\":               (REG_MASK_TYPE_3,                             {\"default\": \"boolean\"}),\r\n                \"edge_width\":              (\"INT\",                                       {\"default\": 0,    \"min\": 0,        \"max\": 10000}),\r\n                \"invert_mask\":             (\"BOOLEAN\",                                   {\"default\": False}),\r\n            },\r\n            \"optional\": {\r\n                \"conditioning_A\":          (\"CONDITIONING\", ),\r\n                \"conditioning_B\":          (\"CONDITIONING\", ),\r\n                \"conditioning_unmasked\":   (\"CONDITIONING\", ),\r\n                \"mask_A\":                  (\"MASK\", ),\r\n                \"mask_B\":                  (\"MASK\", ),\r\n                \"weights\":                 (\"SIGMAS\", ),\r\n                \"region_bleeds\":           (\"SIGMAS\", ),\r\n            }\r\n        }\r\n\r\n    def main(self, conditioning_unmasked, mask_A, mask_B, **kwargs):\r\n        \r\n        mask_AB_inv = torch.ones_like(mask_A) - mask_A - mask_B\r\n        mask_AB_inv[mask_AB_inv < 0] = 0\r\n        \r\n        return super().main(\r\n            conditioning_C = conditioning_unmasked,\r\n            mask_A         = mask_A,\r\n            mask_B         = mask_B,\r\n            mask_C         = mask_AB_inv,\r\n            **kwargs\r\n        )    \r\n\r\n\r\n\r\n\r\nclass ClownRegionalConditioning:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"spineless\":    (\"BOOLEAN\", {\"default\": False}),\r\n                    \"edge_width\":   (\"INT\",     {\"default\": 0,  \"min\": -10000,  \"max\": 10000}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"cond_regions\": (\"COND_REGIONS\", ),\r\n                    \"conditioning\": (\"CONDITIONING\", ),\r\n                    \"mask\":         (\"MASK\", ),\r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (\"COND_REGIONS\",)\r\n    RETURN_NAMES = (\"cond_regions\",) \r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/conditioning\"\r\n    \r\n    def main(self,\r\n            spineless    = False,\r\n            edge_width   = 0,\r\n            cond_regions = None,\r\n            conditioning = None,\r\n            mask         = None,\r\n            ):\r\n        \r\n        cond_reg = [] if cond_regions is None else copy.deepcopy(cond_regions)\r\n        \r\n        if mask is None:\r\n            mask = torch.ones_like(cond_reg[0]['mask'])\r\n            for i in range(len(cond_reg)):\r\n                if mask.dtype == torch.bool:\r\n           
         mask &= ~cond_reg[i]['mask'].to(cond_reg[0]['mask'].dtype) # keep only the area no region claims (mirrors the float branch below)\r\n                else:\r\n                    mask = mask - cond_reg[i]['mask'].to(cond_reg[0]['mask'].dtype)\r\n                    mask[mask < 0] = 0.0\r\n                    \r\n        \r\n        cond_reg.append(\r\n            {\r\n                'use_self_attn_mask': not spineless,\r\n                'edge_width'        : edge_width,\r\n                'conditioning'      : conditioning,\r\n                'mask'              : mask,\r\n            }\r\n        )\r\n\r\n        return (cond_reg,)\r\n\r\n\r\n\r\n\r\n\r\nclass ClownRegionalConditionings:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": { \r\n                \"weight\":                  (\"FLOAT\",                                     {\"default\": 1.0, \"min\":  -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"region_bleed\":            (\"FLOAT\",                                     {\"default\": 0.0, \"min\":  -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"region_bleed_start_step\": (\"INT\",                                       {\"default\": 0,   \"min\":  0,        \"max\": 10000}),\r\n                \"weight_scheduler\":        ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"constant\"},),\r\n                \"start_step\":              (\"INT\",                                       {\"default\": 0,   \"min\":  0,        \"max\": 10000}),\r\n                \"end_step\":                (\"INT\",                                       {\"default\": -1,  \"min\": -1,        \"max\": 10000}),\r\n                \"mask_type\":               ([\"gradient\", \"boolean\"],                     {\"default\": \"boolean\"}),\r\n                \"invert_masks\":            (\"BOOLEAN\",                                   {\"default\": False}),\r\n            },\r\n            \"optional\": {\r\n                \"cond_regions\":            (\"COND_REGIONS\", ),\r\n                \"weights\":                 (\"SIGMAS\", ),\r\n                \"region_bleeds\":           (\"SIGMAS\", ),\r\n            }\r\n        }\r\n\r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    RETURN_NAMES = (\"conditioning\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/conditioning\"\r\n\r\n    def create_callback(self, **kwargs):\r\n        def callback(model):\r\n            kwargs[\"model\"] = model  \r\n            pos_cond, = self.prepare_regional_cond(**kwargs)\r\n            return pos_cond\r\n        return callback\r\n\r\n    def main(self,\r\n            weight                   : float  = 1.0,\r\n            start_sigma              : float  = 0.0,\r\n            end_sigma                : float  = 1.0,\r\n            weight_scheduler                  = None,\r\n            start_step               : int    = 0,\r\n            end_step                 : int    = -1,\r\n            cond_regions                      = None,\r\n            weights                  : Tensor = None,\r\n            region_bleeds            : Tensor = None,\r\n            region_bleed             : float  = 0.0,\r\n            region_bleed_start_step  : int    = 0,\r\n            mask_type                : str    = \"boolean\",\r\n            invert_masks             : bool   = False\r\n            ) -> Tuple[Tensor]:\r\n                \r\n        if end_step == -1:\r\n            end_step = MAX_STEPS\r\n\r\n        callback = self.create_callback(weight                   
= weight,\r\n                                        start_sigma              = start_sigma,\r\n                                        end_sigma                = end_sigma,\r\n                                        weight_scheduler         = weight_scheduler,\r\n                                        start_step               = start_step,\r\n                                        end_step                 = end_step,\r\n                                        weights                  = weights,\r\n                                        region_bleeds            = region_bleeds,\r\n                                        region_bleed             = region_bleed,\r\n                                        region_bleed_start_step  = region_bleed_start_step,\r\n                                        mask_type                = mask_type,\r\n                                        invert_masks             = invert_masks,\r\n                                        cond_regions             = cond_regions,\r\n                                        )\r\n\r\n        cond_list = [region['conditioning'] for region in cond_regions]\r\n        conditioning = zero_conditioning_from_list(cond_list)\r\n        \r\n        conditioning[0][1]['callback_regional'] = callback\r\n        \r\n        return (conditioning,)\r\n\r\n\r\n\r\n    def prepare_regional_cond(self,\r\n                                model,\r\n                                weight                   : float  = 1.0,\r\n                                start_sigma              : float  = 0.0,\r\n                                end_sigma                : float  = 1.0,\r\n                                weight_scheduler                  = None,\r\n                                start_step               : int    = 0,\r\n                                end_step                 : int    = -1,\r\n                                weights                  : Tensor = None,\r\n                                region_bleeds            : Tensor = None,\r\n                                region_bleed             : float  = 0.0,\r\n                                region_bleed_start_step  : int    = 0,\r\n                                mask_type                : str    = \"gradient\",\r\n                                cond_regions                      = None,\r\n                                invert_masks             : bool   = False,\r\n                                ) -> Tuple[Tensor]:\r\n\r\n        default_dtype  = torch.float64\r\n        default_device = torch.device(\"cuda\") \r\n        \r\n        cond_list               = [region['conditioning']       for region in cond_regions]\r\n        mask_list               = [region['mask']               for region in cond_regions]\r\n        edge_width_list         = [region['edge_width']         for region in cond_regions]\r\n        use_self_attn_mask_list = [region['use_self_attn_mask'] for region in cond_regions]\r\n        if end_step == -1:\r\n            end_step = MAX_STEPS\r\n        if weights is None and weight_scheduler != \"constant\":\r\n            total_steps = end_step - start_step\r\n            weights     = get_sigmas(model, weight_scheduler, total_steps, 1.0).to(dtype=default_dtype, device=default_device) #/ model.inner_model.inner_model.model_sampling.sigma_max  #scaling doesn't matter as this is a flux-only node\r\n            prepend     = torch.zeros(start_step,                                  dtype=default_dtype, device=default_device)\r\n            weights     = 
torch.cat((prepend, weights), dim=0)\r\n        \r\n        if invert_masks:\r\n            for i in range(len(mask_list)):\r\n                if mask_list[i].dtype == torch.bool:\r\n                    mask_list[i] = ~mask_list[i]\r\n                else:\r\n                    mask_list[i] = 1 - mask_list[i]\r\n                    \r\n        floor, floors = region_bleed, region_bleeds\r\n        \r\n        weights = initialize_or_scale(weights, weight, end_step).to(default_dtype).to(default_device)\r\n        weights = F.pad(weights, (0, MAX_STEPS), value=0.0)\r\n        \r\n        prepend = torch.full((region_bleed_start_step,),  0.0, dtype=default_dtype, device=default_device)\r\n        floors  = initialize_or_scale(floors,  floor,  end_step).to(default_dtype).to(default_device)\r\n        floors  = F.pad(floors,  (0, MAX_STEPS), value=0.0)\r\n        floors  = torch.cat((prepend, floors), dim=0)\r\n\r\n        EmptyCondGen = EmptyConditioningGenerator(model)\r\n        cond_list = EmptyCondGen.zero_none_conditionings_(cond_list)\r\n        \r\n        conditioning = copy.deepcopy(cond_list[0])\r\n        \r\n        if isinstance(model.model.model_config, comfy.supported_models.WAN21_T2V) or isinstance(model.model.model_config, comfy.supported_models.WAN21_I2V):\r\n            if model.model.diffusion_model.blocks[0].self_attn.winderz_type != \"false\":\r\n                AttnMask = CrossAttentionMask  (mask_type, edge_width_list=edge_width_list, use_self_attn_mask_list=use_self_attn_mask_list)\r\n            else:\r\n                AttnMask = SplitAttentionMask  (mask_type, edge_width_list=edge_width_list, use_self_attn_mask_list=use_self_attn_mask_list)\r\n        elif isinstance(model.model.model_config, comfy.supported_models.HiDream):\r\n            AttnMask = FullAttentionMaskHiDream(mask_type, edge_width_list=edge_width_list, use_self_attn_mask_list=use_self_attn_mask_list)\r\n        elif isinstance(model.model.model_config, comfy.supported_models.SDXL) or isinstance(model.model.model_config, comfy.supported_models.SD15):\r\n            AttnMask = SplitAttentionMask(mask_type, edge_width_list=edge_width_list, use_self_attn_mask_list=use_self_attn_mask_list)\r\n        else:\r\n            AttnMask = FullAttentionMask       (mask_type, edge_width_list=edge_width_list, use_self_attn_mask_list=use_self_attn_mask_list)\r\n\r\n        RegContext = RegionalContext()\r\n        \r\n        for cond, mask in zip(cond_list, mask_list):\r\n            if isinstance(model.model.model_config, comfy.supported_models.HiDream):\r\n                \r\n                AttnMask.add_region_sizes(\r\n                    [\r\n                        cond[0][0].shape[-2],\r\n                        cond[0][1]['conditioning_llama3'][0,0,...].shape[-2],\r\n                        cond[0][1]['conditioning_llama3'][0,0,...].shape[-2],\r\n                    ],\r\n                    mask)\r\n\r\n                RegContext.add_region_llama3(cond[0][1]['conditioning_llama3'])\r\n            else:\r\n                AttnMask.add_region(cond[0][0],   mask)\r\n            \r\n            RegContext.add_region(cond[0][0])\r\n            \r\n            if 'clip_vision_output' in cond[0][1]: # For WAN... 
dicey results\r\n                RegContext.add_region_clip_fea(cond[0][1]['clip_vision_output'].penultimate_hidden_states)\r\n            \r\n        conditioning[0][1]['AttnMask']   = AttnMask\r\n        conditioning[0][1]['RegContext'] = RegContext\r\n        conditioning[0][1]['RegParam']   = RegionalParameters(weights, floors)\r\n        \r\n        conditioning = merge_with_base(base=conditioning, others=cond_list)\r\n        \r\n        if 'pooled_output' in conditioning[0][1] and conditioning[0][1]['pooled_output'] is not None:\r\n            conditioning[0][1]['pooled_output'] = torch.stack([cond_tmp[0][1]['pooled_output'] for cond_tmp in cond_list]).mean(dim=0)\r\n\r\n            #conditioning[0][1]['pooled_output'] = cond_list[0][0][1]['pooled_output']\r\n\r\n        return (conditioning,)\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\ndef merge_with_base(\r\n    base   : List[     Tuple[torch.Tensor, Dict[str, Any]]],\r\n    others : List[List[Tuple[torch.Tensor, Dict[str, Any]]]],\r\n    dim    : int = -2\r\n) ->         List[     Tuple[torch.Tensor, Dict[str, Any]]]:\r\n    \"\"\"\r\n    Merge `base` plus an arbitrary list of other conditioning objects:\r\n        - base: zero out its tensors, for use as an accumulator\r\n        - For each level ℓ:\r\n            • Collect the base’s zeroed tensor + all others’ ℓ-tensors.\r\n            • Pad them along `dim` to the same length and sum.\r\n            • Replace merged[ℓ][0] with that sum.\r\n        - For each tensor-valued key in the base’s info-dict at level ℓ:\r\n            • Gather a zeroed tensor + that key from all others.\r\n            • Pad & sum, and store back under that key.\r\n        - Any non-tensor entries in the base’s info are preserved untouched.\r\n    \"\"\"\r\n    max_levels = max(len(base), *(len(p) for p in others))\r\n\r\n    for lvl in range(max_levels):\r\n        if lvl >= len(base): # if base lacks this level, skip entirely\r\n            continue\r\n\r\n        # --- tokens merge ---\r\n        base_tokens, base_info = base[lvl]\r\n        zero_tokens = torch.zeros_like(base_tokens)\r\n        toks = [zero_tokens]\r\n\r\n        # zero-out any tensor fields in base_info\r\n        for key, val in base_info.items():\r\n            if isinstance(val, torch.Tensor):\r\n                base_info[key] = torch.zeros_like(val)\r\n\r\n        # collect same-level tokens from each other\r\n        for pos in others:\r\n            if lvl < len(pos):\r\n                toks.append(pos[lvl][0])\r\n\r\n        toks = pad_tensor_list_to_max_len(toks, dim=dim)\r\n        base_tokens = sum(toks)\r\n        base[lvl] = (base_tokens, base_info)\r\n\r\n        # --- info-dict tensor merge ---\r\n        for key, val in list(base_info.items()):\r\n            if not isinstance(val, torch.Tensor):\r\n                continue\r\n            pieces = [val]  # zeroed base tensor\r\n            for pos in others:\r\n                if lvl < len(pos):\r\n                    info_i = pos[lvl][1]\r\n                    if key in info_i and isinstance(info_i[key], torch.Tensor):\r\n                        pieces.append(info_i[key])\r\n            pieces = pad_tensor_list_to_max_len(pieces, dim=dim)\r\n            base[lvl][1][key] = sum(pieces)\r\n\r\n    return base\r\n\r\n\r\n\r\n\r\n\r\ndef best_hw(n): # get factor pair closest to a true square\r\n    best = (1, n)\r\n    min_diff = n\r\n    for i in range(1, int(n**0.5) + 1):\r\n        if n % i == 0:\r\n            j = n // i\r\n            if abs(i - j) < min_diff:\r\n             
   best = (i, j)\r\n                min_diff = abs(i - j)\r\n    return best\r\n\r\ndef downsample_tokens(cond: torch.Tensor, target_tokens: int, mode=\"bicubic\") -> torch.Tensor:\r\n    B, T, D = cond.shape\r\n\r\n    def next_square(n: int):\r\n        root = math.ceil(n**0.5)\r\n        return root * root\r\n\r\n    padded_len = next_square(T)\r\n    pad_amount = padded_len - T\r\n    if pad_amount > 0:\r\n        pad_tensor = torch.zeros(B, pad_amount, D, dtype=cond.dtype, device=cond.device)\r\n        cond = torch.cat([cond, pad_tensor], dim=1)\r\n\r\n    side_len = int(math.sqrt(padded_len))\r\n    cond_reshaped = cond.view(B, side_len, side_len, D).permute(0, 3, 1, 2)  # [B, D, H, W]\r\n\r\n    H_target, W_target = best_hw(target_tokens)\r\n    cond_interp = F.interpolate(cond_reshaped, size=(H_target, W_target), mode=mode)\r\n\r\n    cond_final = cond_interp.permute(0, 2, 3, 1).reshape(B, -1, D)\r\n    cond_final = cond_final[:, :target_tokens, :]\r\n\r\n    return cond_final\r\n\r\n\r\n\r\n\r\n\r\nclass CrossAttn_EraseReplace_HiDream:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": {\r\n            \"clip\": (\"CLIP\", ),\r\n            \"t5xxl_erase\":   (\"STRING\", {\"multiline\": True, \"dynamicPrompts\": True}),\r\n            \"llama_erase\":   (\"STRING\", {\"multiline\": True, \"dynamicPrompts\": True}),\r\n            \"t5xxl_replace\": (\"STRING\", {\"multiline\": True, \"dynamicPrompts\": True}),\r\n            \"llama_replace\": (\"STRING\", {\"multiline\": True, \"dynamicPrompts\": True}),\r\n            \"t5xxl_erase_token\":   (\"STRING\", {\"multiline\": True, \"dynamicPrompts\": True}),\r\n            \"llama_erase_token\":   (\"STRING\", {\"multiline\": True, \"dynamicPrompts\": True}),\r\n            \"t5xxl_replace_token\": (\"STRING\", {\"multiline\": True, \"dynamicPrompts\": True}),\r\n            \"llama_replace_token\": (\"STRING\", {\"multiline\": True, \"dynamicPrompts\": True}),\r\n            }}\r\n    RETURN_TYPES = (\"CONDITIONING\",\"CONDITIONING\",)\r\n    RETURN_NAMES = (\"positive\",    \"negative\",)\r\n    FUNCTION = \"encode\"\r\n\r\n    CATEGORY = \"advanced/conditioning\"\r\n    EXPERIMENTAL = True\r\n\r\n    def encode(self, clip, t5xxl_erase, llama_erase, t5xxl_replace, llama_replace, t5xxl_erase_token, llama_erase_token, t5xxl_replace_token, llama_replace_token):\r\n\r\n        tokens_erase      = clip.tokenize(\"\")\r\n        tokens_erase[\"l\"] = clip.tokenize(\"\")[\"l\"]\r\n        tokens_replace      = clip.tokenize(\"\")\r\n        tokens_replace[\"l\"] = clip.tokenize(\"\")[\"l\"]\r\n        \r\n        tokens_erase  [\"t5xxl\"] = clip.tokenize(t5xxl_erase)  [\"t5xxl\"]\r\n        tokens_erase  [\"llama\"] = clip.tokenize(llama_erase)  [\"llama\"]\r\n        tokens_replace[\"t5xxl\"] = clip.tokenize(t5xxl_replace)[\"t5xxl\"]\r\n        tokens_replace[\"llama\"] = clip.tokenize(llama_replace)[\"llama\"]\r\n        \r\n        \r\n        tokens_erase_token      = clip.tokenize(\"\")\r\n        tokens_erase_token[\"l\"] = clip.tokenize(\"\")[\"l\"]\r\n        tokens_replace_token      = clip.tokenize(\"\")\r\n        tokens_replace_token[\"l\"] = clip.tokenize(\"\")[\"l\"]\r\n        \r\n        tokens_erase_token  [\"t5xxl\"] = clip.tokenize(t5xxl_erase_token)  [\"t5xxl\"]\r\n        tokens_erase_token  [\"llama\"] = clip.tokenize(llama_erase_token)  [\"llama\"]\r\n        tokens_replace_token[\"t5xxl\"] = clip.tokenize(t5xxl_replace_token)[\"t5xxl\"]\r\n        tokens_replace_token[\"llama\"] = 
clip.tokenize(llama_replace_token)[\"llama\"]\r\n        \r\n        \r\n        encoded_erase   = clip.encode_from_tokens_scheduled(tokens_erase)\r\n        encoded_replace = clip.encode_from_tokens_scheduled(tokens_replace)\r\n        \r\n        return (encoded_replace, encoded_erase, )\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nclass CrossAttn_EraseReplace_Flux:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": {\r\n            \"clip\": (\"CLIP\", ),\r\n            \"t5xxl_erase\":   (\"STRING\", {\"multiline\": True, \"dynamicPrompts\": True}),\r\n            \"t5xxl_replace\": (\"STRING\", {\"multiline\": True, \"dynamicPrompts\": True}),\r\n            \"t5xxl_erase_token\":   (\"STRING\", {\"multiline\": True, \"dynamicPrompts\": True}),\r\n            \"t5xxl_replace_token\": (\"STRING\", {\"multiline\": True, \"dynamicPrompts\": True}),\r\n            }}\r\n    RETURN_TYPES = (\"CONDITIONING\",\"CONDITIONING\",)\r\n    RETURN_NAMES = (\"positive\",    \"negative\",)\r\n    FUNCTION = \"encode\"\r\n\r\n    CATEGORY = \"advanced/conditioning\"\r\n    EXPERIMENTAL = True\r\n\r\n    def encode(self, clip, t5xxl_erase, t5xxl_replace, t5xxl_erase_token, t5xxl_replace_token):\r\n        # Flux CLIP carries no llama stream, so only the t5xxl prompts (plus empty clip-l tokens) are set here\r\n\r\n        tokens_erase      = clip.tokenize(\"\")\r\n        tokens_erase[\"l\"] = clip.tokenize(\"\")[\"l\"]\r\n        tokens_replace      = clip.tokenize(\"\")\r\n        tokens_replace[\"l\"] = clip.tokenize(\"\")[\"l\"]\r\n        \r\n        tokens_erase  [\"t5xxl\"] = clip.tokenize(t5xxl_erase)  [\"t5xxl\"]\r\n        tokens_replace[\"t5xxl\"] = clip.tokenize(t5xxl_replace)[\"t5xxl\"]\r\n        \r\n        \r\n        tokens_erase_token      = clip.tokenize(\"\")\r\n        tokens_erase_token[\"l\"] = clip.tokenize(\"\")[\"l\"]\r\n        tokens_replace_token      = clip.tokenize(\"\")\r\n        tokens_replace_token[\"l\"] = clip.tokenize(\"\")[\"l\"]\r\n        \r\n        tokens_erase_token  [\"t5xxl\"] = clip.tokenize(t5xxl_erase_token)  [\"t5xxl\"]\r\n        tokens_replace_token[\"t5xxl\"] = clip.tokenize(t5xxl_replace_token)[\"t5xxl\"]\r\n        \r\n        \r\n        encoded_erase   = clip.encode_from_tokens_scheduled(tokens_erase)\r\n        encoded_replace = clip.encode_from_tokens_scheduled(tokens_replace)\r\n        \r\n        return (encoded_replace, encoded_erase, )\r\n\r\n\r\n\r\n\r\n"
  },
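  {
    "path": "examples/downsample_tokens_sketch.py",
    "content": "# Illustrative sketch, not part of the original node pack: a self-contained demo of the\n# pad-to-square token downsampling implemented by downsample_tokens()/best_hw() above.\n# The file name and the dummy shapes below are assumptions made for this example.\nimport math\n\nimport torch\nimport torch.nn.functional as F\n\n\ndef best_hw(n):  # factor pair of n closest to a true square\n    best, min_diff = (1, n), n\n    for i in range(1, int(n**0.5) + 1):\n        if n % i == 0 and abs(i - n // i) < min_diff:\n            best, min_diff = (i, n // i), abs(i - n // i)\n    return best\n\n\nB, T, D = 1, 77, 64                      # e.g. a CLIP-sized sequence of 77 tokens\ncond = torch.randn(B, T, D)\n\npadded_len = math.ceil(T**0.5) ** 2      # 77 -> 81, i.e. a 9x9 grid\ncond = torch.cat([cond, torch.zeros(B, padded_len - T, D)], dim=1)\n\nside = int(math.sqrt(padded_len))\ngrid = cond.view(B, side, side, D).permute(0, 3, 1, 2)    # [B, D, 9, 9]\n\ntarget = 36\nh, w = best_hw(target)                                    # 36 -> (6, 6)\nsmall = F.interpolate(grid, size=(h, w), mode=\"bicubic\", align_corners=False)\nout = small.permute(0, 2, 3, 1).reshape(B, -1, D)[:, :target, :]\nprint(out.shape)                                          # torch.Size([1, 36, 64])\n"
  },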
  {
    "path": "example_workflows/chroma regional antiblur.json",
    "content": "{\"last_node_id\":726,\"last_link_id\":2104,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[1280,-650],\"size\":[75,26],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":2098}],\"outputs\":[{\"name\":\"\",\"type\":\"MODEL\",\"links\":[1967],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":490,\"type\":\"Reroute\",\"pos\":[1280,-610],\"size\":[75,26],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":2099}],\"outputs\":[{\"name\":\"\",\"type\":\"CLIP\",\"links\":[1939,2092,2101],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":14,\"type\":\"Reroute\",\"pos\":[1280,-570],\"size\":[75,26],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":2100}],\"outputs\":[{\"name\":\"\",\"type\":\"VAE\",\"links\":[18,1328],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":398,\"type\":\"SaveImage\",\"pos\":[1379.9996337890625,-267.2835998535156],\"size\":[341.7508850097656,561.0067749023438],\"flags\":{},\"order\":21,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":1329}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"SaveImage\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"ComfyUI\"]},{\"id\":701,\"type\":\"Note\",\"pos\":[80,-520],\"size\":[342.05950927734375,88],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"I usually just lazily draw masks in Load Image nodes (with some random image loaded), but for the sake of reproducibility, here's another approach.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":712,\"type\":\"Note\",\"pos\":[-210,-520],\"size\":[245.76409912109375,91.6677017211914],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"So long as these masks are all the same size, the regional conditioning nodes will handle resizing to the image size for you.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":676,\"type\":\"InvertMask\",\"pos\":[20,-370],\"size\":[142.42074584960938,26],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":2073}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[2083],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"InvertMask\"},\"widgets_values\":[]},{\"id\":7,\"type\":\"VAEEncodeAdvanced\",\"pos\":[719.6110229492188,16.752899169921875],\"size\":[261.2217712402344,279.3136901855469],\"flags\":{},\"order\":16,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":18}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":1},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[],\"slot_index\":2},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[1399],\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[\"false\",1024,1024,\"red\",false,\"16_channels\"]},{\"id\":710,\"type\":\"MaskPreview\",\"pos\":[180,-190],\"size\":[210,246],\"flags\":{},\"order\":17,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":2054}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"MaskPreview\"},\"widgets_values\":[]},{\"id\":397,\"type\":\"VAEDecode\",\"pos\":[1382.3662109375,-374.17059326171875],\"size\":[210,46],\"flags\":{},\"order\":20,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":2096},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":1328}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1329],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[]},{\"id\":715,\"type\":\"SolidMask\",\"pos\":[-220,-370],\"size\":[210,106],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[2073],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SolidMask\"},\"widgets_values\":[1,1024,1024]},{\"id\":716,\"type\":\"SolidMask\",\"pos\":[-220,-220],\"size\":[210,106],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[2065],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SolidMask\"},\"widgets_values\":[1,384,864]},{\"id\":709,\"type\":\"MaskComposite\",\"pos\":[190,-370],\"size\":[210,126],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[{\"name\":\"destination\",\"localized_name\":\"destination\",\"type\":\"MASK\",\"link\":2083},{\"name\":\"source\",\"localized_name\":\"source\",\"type\":\"MASK\",\"link\":2065}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[2054,2091],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"MaskComposite\"},\"widgets_values\":[256,160,\"add\"]},{\"id\":704,\"type\":\"Note\",\"pos\":[101.74818420410156,112.67951965332031],\"size\":[290.7107238769531,155.35317993164062],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ClownRegionalConditionings:\\n\\nTry raising or lowering weight, and changing the weight scheduler from beta57 to Karras (weakens more quickly), or to linear quadratic (stronger late).\\n\\nTry changing region_bleed_start_step (earlier will make the image blend together more), and end_step.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":703,\"type\":\"Note\",\"pos\":[423.10699462890625,-96.14085388183594],\"size\":[241.9689483642578,386.7543640136719],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"edge_width also creates some overlap around the edges of the mask.\\n\\nboolean_masked means that the masked area can \\\"see\\\" the rest of the image, but the unmasked area cannot. \\\"boolean\\\" would mean neither area could see the rest of the image.\\n\\nTry setting to boolean_unmasked and see what happens!\\n\\nIf you still have blur, try reducing edge_width (and if you have seams, try increasing it, or setting end_step to something like 20). \\n\\nAlso verify that you can generate the background prompt alone without blur (if you can't, this won't work). And don't get stuck on one seed.\\n\\nVaguely human-shaped masks also tend to work better than the blocky one used here.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":401,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1010,-370],\"size\":[340.55120849609375,666.8208618164062],\"flags\":{},\"order\":19,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":1967},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2104},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2102},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1399},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2096],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[0.5,\"multistep/res_2m\",\"bong_tangent\",30,-1,1,4,3,\"fixed\",\"standard\",true]},{\"id\":723,\"type\":\"CLIPTextEncode\",\"pos\":[460,-240],\"size\":[210,88],\"flags\":{\"collapsed\":false},\"order\":14,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":2092}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2093],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"a college 
campus\"]},{\"id\":662,\"type\":\"CLIPTextEncode\",\"pos\":[460,-370],\"size\":[210,88],\"flags\":{\"collapsed\":false},\"order\":13,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1939}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2094],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"a woman wearing a red flannel shirt and a cute shark plush blue hat\"]},{\"id\":724,\"type\":\"ClownModelLoader\",\"pos\":[615.2467651367188,-699.0204467773438],\"size\":[361.6804504394531,266],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2097],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[2099],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[2100],\"slot_index\":2}],\"properties\":{\"Node name for S&R\":\"ClownModelLoader\"},\"widgets_values\":[\"chroma-unlocked-v29.5.safetensors\",\"fp8_e4m3fn_fast\",\"t5xxl_fp8_e4m3fn_scaled.safetensors\",\".none\",\".none\",\".none\",\"chroma\",\"ae.sft\"]},{\"id\":725,\"type\":\"ReChromaPatcher\",\"pos\":[1030.2850341796875,-698.6190795898438],\"size\":[210,82],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":2097}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2098],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ReChromaPatcher\"},\"widgets_values\":[\"float64\",true]},{\"id\":726,\"type\":\"CLIPTextEncode\",\"pos\":[772.4685668945312,350.9657897949219],\"size\":[210,88],\"flags\":{\"collapsed\":false},\"order\":15,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":2101}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2102],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"low quality, bad quality, mutated, low detail, blurry, out of focus, jpeg artifacts\"]},{\"id\":722,\"type\":\"ClownRegionalConditioning2\",\"pos\":[690,-370],\"size\":[287.75750732421875,330],\"flags\":{},\"order\":18,\"mode\":0,\"inputs\":[{\"name\":\"conditioning_masked\",\"localized_name\":\"conditioning_masked\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2094},{\"name\":\"conditioning_unmasked\",\"localized_name\":\"conditioning_unmasked\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2093},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":2091},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"region_bleeds\",\"localized_name\":\"region_bleeds\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"links\":[2104],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ClownRegionalConditioning2\"},\"widgets_values\":[1,0,0,\"constant\",0,10,\"boolean_masked\",32,false]}],\"links\":[[18,14,0,7,4,\"VAE\"],[1328,14,0,397,1,\"VAE\"],[1329,397,0,398,0,\"IMAGE\"],[1399,7,3,401,3,\"LATENT\"],[1939,490,0,662,0,\"CLIP\"],[1967,13,0,401,0,\"MODEL\"],[2054,709,0,710,0,\"MASK\"],[2065,716,0,709,1,\"MASK\"],[2073,715,0,676,0,\"MASK\"],[2083,676,0,709,0,\"MASK\"],[2091,709,0,722,2,\"MASK\"],[2092,490,0,723,0,\"CLIP\"],[2093,723,0,722,1,\"CONDITIONING\"],[2094,662,0,722,0,\"CONDITIONING\"],[2096,401,0,397,0,\"LATENT\"],[2097,724,0,725,0,\"MODEL\"],[2098,725,0,13,0,\"*\"],[2099,724,1,490,0,\"*\"],[2100,724,2,14,0,\"*\"],[2101,490,0,726,0,\"CLIP\"],[2102,726,0,401,2,\"CONDITIONING\"],[2104,722,0,401,1,\"CONDITIONING\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.5863092971715371,\"offset\":[2215.7489179851177,830.3089944212893]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"ue_links\":[],\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true},\"version\":0.4}"
  },
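  {
    "path": "examples/antiblur_mask_sketch.py",
    "content": "# Illustrative sketch, not part of the original node pack: rebuilds in plain torch the\n# mask that \"chroma regional antiblur.json\" assembles with SolidMask -> InvertMask ->\n# MaskComposite(\"add\"): a 384x864 rectangle placed at x=256, y=160 on a 1024x1024\n# canvas. Sizes and placement are read off the workflow's widget values; treat them as\n# assumptions of this example.\nimport torch\n\nH = W = 1024\nbase = torch.ones(1, H, W)             # SolidMask(value=1, width=1024, height=1024)\nbase = 1 - base                        # InvertMask -> all zeros\n\nrect_w, rect_h = 384, 864              # SolidMask(value=1, width=384, height=864)\nx, y = 256, 160                        # MaskComposite x, y, operation \"add\"\nbase[:, y:y + rect_h, x:x + rect_w] += 1.0\nmask = base.clamp(0.0, 1.0)\n\n# This is the MASK wired into ClownRegionalConditioning2: conditioning_masked applies\n# inside the rectangle, while conditioning_unmasked receives mask_B = 1 - mask.\nprint(int(mask.sum().item()), rect_w * rect_h)   # 331776 331776\n"
  },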
  {
    "path": "example_workflows/chroma txt2img.json",
    "content": "{\"last_node_id\":727,\"last_link_id\":2113,\"nodes\":[{\"id\":398,\"type\":\"SaveImage\",\"pos\":[1379.9996337890625,-267.2835998535156],\"size\":[341.7508850097656,561.0067749023438],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":1329}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"SaveImage\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"ComfyUI\"]},{\"id\":397,\"type\":\"VAEDecode\",\"pos\":[1382.3662109375,-374.17059326171875],\"size\":[210,46],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":2096},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":2112}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1329],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[]},{\"id\":401,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1010,-370],\"size\":[340.55120849609375,666.8208618164062],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":2108},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2107},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2102},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2113},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2096],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[0.5,\"multistep/res_2m\",\"bong_tangent\",30,-1,1,4,3,\"fixed\",\"standard\",true]},{\"id\":662,\"type\":\"CLIPTextEncode\",\"pos\":[770.2921752929688,-373.6678771972656],\"size\":[210,88],\"flags\":{\"collapsed\":false},\"order\":2,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":2109}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2107],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"a woman wearing a red flannel shirt and a cute shark plush blue hat\"]},{\"id\":726,\"type\":\"CLIPTextEncode\",\"pos\":[772.46923828125,-238.8079376220703],\"size\":[210,88],\"flags\":{\"collapsed\":false},\"order\":3,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":2110}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2102],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"low quality, bad quality, mutated, low detail, blurry, out of focus, jpeg artifacts\"]},{\"id\":727,\"type\":\"EmptyLatentImage\",\"pos\":[771.9976196289062,-98.32988739013672],\"size\":[213.03683471679688,106],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"LATENT\",\"localized_name\":\"LATENT\",\"type\":\"LATENT\",\"links\":[2113],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"EmptyLatentImage\"},\"widgets_values\":[1024,1024,1]},{\"id\":724,\"type\":\"ClownModelLoader\",\"pos\":[380.5105285644531,-376.99224853515625],\"size\":[361.6804504394531,266],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2108],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[2109,2110],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[2112],\"slot_index\":2}],\"properties\":{\"Node name for S&R\":\"ClownModelLoader\"},\"widgets_values\":[\"chroma-unlocked-v37-detail-calibrated.safetensors\",\"fp8_e4m3fn_fast\",\"t5xxl_fp8_e4m3fn_scaled.safetensors\",\".none\",\".none\",\".none\",\"chroma\",\"ae.sft\"]}],\"links\":[[1329,397,0,398,0,\"IMAGE\"],[2096,401,0,397,0,\"LATENT\"],[2102,726,0,401,2,\"CONDITIONING\"],[2107,662,0,401,1,\"CONDITIONING\"],[2108,724,0,401,0,\"MODEL\"],[2109,724,1,662,0,\"CLIP\"],[2110,724,1,726,0,\"CLIP\"],[2112,724,2,397,1,\"VAE\"],[2113,727,0,401,3,\"LATENT\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.5863092971715371,\"offset\":[1675.8567061174099,917.6014919421251]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"ue_links\":[],\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true},\"version\":0.4}"
  },
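  {
    "path": "examples/three_region_leftover_mask_sketch.py",
    "content": "# Illustrative sketch, not part of the original node pack: the leftover-region\n# arithmetic ClownRegionalConditioning3.main uses to derive mask_C for the \"unmasked\"\n# conditioning from two user-supplied region masks. The tiny 8x8 canvas and the strip\n# masks are assumptions for this example.\nimport torch\n\nH = W = 8\nmask_A = torch.zeros(1, H, W); mask_A[:, :, :3] = 1.0   # left strip\nmask_B = torch.zeros(1, H, W); mask_B[:, :, 5:] = 1.0   # right strip\n\n# mask_AB_inv = 1 - A - B, clamped at zero: everything neither region claims\nmask_AB_inv = torch.ones_like(mask_A) - mask_A - mask_B\nmask_AB_inv[mask_AB_inv < 0] = 0\n\n# Wherever A and B do not overlap, the three masks partition the canvas exactly.\nprint((mask_A + mask_B + mask_AB_inv).unique())         # tensor([1.])\n"
  },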
  {
    "path": "example_workflows/comparison ksampler vs csksampler chain workflows.json",
    "content": "{\"last_node_id\":1423,\"last_link_id\":3992,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[17750,830],\"size\":[75,26],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3988}],\"outputs\":[{\"name\":\"\",\"type\":\"MODEL\",\"links\":[1395],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":402,\"type\":\"QuadrupleCLIPLoader\",\"pos\":[17300,870],\"size\":[407.7720031738281,130],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"CLIP\",\"localized_name\":\"CLIP\",\"type\":\"CLIP\",\"links\":[1552],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"QuadrupleCLIPLoader\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"clip_l_hidream.safetensors\",\"clip_g_hidream.safetensors\",\"t5xxl_fp8_e4m3fn_scaled.safetensors\",\"llama_3.1_8b_instruct_fp8_scaled.safetensors\"]},{\"id\":403,\"type\":\"UNETLoader\",\"pos\":[17390,740],\"size\":[320.7802429199219,82],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MODEL\",\"localized_name\":\"MODEL\",\"type\":\"MODEL\",\"links\":[3988],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"UNETLoader\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"hidream_i1_full_fp8.safetensors\",\"fp8_e4m3fn\"]},{\"id\":404,\"type\":\"VAELoader\",\"pos\":[17500,1060],\"size\":[210,58],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"VAE\",\"localized_name\":\"VAE\",\"type\":\"VAE\",\"links\":[1344],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAELoader\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"ae.sft\"]},{\"id\":1381,\"type\":\"Reroute\",\"pos\":[18770,-310],\"size\":[75,26],\"flags\":{},\"order\":23,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3961}],\"outputs\":[{\"name\":\"\",\"type\":\"CONDITIONING\",\"links\":[3881]}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1383,\"type\":\"Reroute\",\"pos\":[18770,-420],\"size\":[75,26],\"flags\":{},\"order\":27,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3877}],\"outputs\":[{\"name\":\"\",\"type\":\"MODEL\",\"links\":[3879],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1388,\"type\":\"Reroute\",\"pos\":[18750,410],\"size\":[75,26],\"flags\":{},\"order\":28,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3886}],\"outputs\":[{\"name\":\"\",\"type\":\"MODEL\",\"links\":[3887,3891,3896,3901],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1393,\"type\":\"SaveImage\",\"pos\":[20400,450],\"size\":[457.3382263183594,422.2065124511719],\"flags\":{},\"order\":51,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":3908}],\"outputs\":[],\"properties\":{\"Node name for 
S&R\":\"SaveImage\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"ComfyUI\"]},{\"id\":1399,\"type\":\"Reroute\",\"pos\":[18790,1920],\"size\":[75,26],\"flags\":{},\"order\":22,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3967}],\"outputs\":[{\"name\":\"\",\"type\":\"CONDITIONING\",\"links\":[3925,3933]}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1401,\"type\":\"Reroute\",\"pos\":[18780,1870],\"size\":[75,26],\"flags\":{},\"order\":30,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3916}],\"outputs\":[{\"name\":\"\",\"type\":\"MODEL\",\"links\":[3924,3931,3932],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1408,\"type\":\"FlipSigmas\",\"pos\":[19150,2270],\"size\":[140,26],\"flags\":{},\"order\":42,\"mode\":0,\"inputs\":[{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"link\":3941}],\"outputs\":[{\"name\":\"SIGMAS\",\"localized_name\":\"SIGMAS\",\"type\":\"SIGMAS\",\"links\":[3929]}],\"properties\":{\"Node name for S&R\":\"FlipSigmas\"},\"widgets_values\":[]},{\"id\":1394,\"type\":\"SamplerCustom\",\"pos\":[18940,1910],\"size\":[253.52972412109375,230],\"flags\":{},\"order\":46,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":3924},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"link\":3925},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"link\":3926},{\"name\":\"sampler\",\"localized_name\":\"sampler\",\"type\":\"SAMPLER\",\"link\":3928},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"link\":3929},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":3979}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3938],\"slot_index\":0},{\"name\":\"denoised_output\",\"localized_name\":\"denoised_output\",\"type\":\"LATENT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"SamplerCustom\"},\"widgets_values\":[false,0,\"fixed\",1]},{\"id\":1411,\"type\":\"SplitSigmas\",\"pos\":[19030,2350],\"size\":[210,78],\"flags\":{},\"order\":38,\"mode\":0,\"inputs\":[{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"link\":3940}],\"outputs\":[{\"name\":\"high_sigmas\",\"localized_name\":\"high_sigmas\",\"type\":\"SIGMAS\",\"links\":null},{\"name\":\"low_sigmas\",\"localized_name\":\"low_sigmas\",\"type\":\"SIGMAS\",\"links\":[3941,3942],\"slot_index\":1}],\"properties\":{\"Node name for S&R\":\"SplitSigmas\"},\"widgets_values\":[15]},{\"id\":1409,\"type\":\"BetaSamplingScheduler\",\"pos\":[18780,2360],\"size\":[210,106],\"flags\":{},\"order\":34,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":3931}],\"outputs\":[{\"name\":\"SIGMAS\",\"localized_name\":\"SIGMAS\",\"type\":\"SIGMAS\",\"links\":[3940],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"BetaSamplingScheduler\"},\"widgets_values\":[30,0.5,0.7]},{\"id\":1407,\"type\":\"KSamplerSelect\",\"pos\":[18720,2210],\"size\":[210,58],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"SAMPLER\",\"localized_name\":\"SAMPLER\",\"type\":\"SAMPLER\",\"links\":[3928,3935]}],\"properties\":{\"Node name for 
S&R\":\"KSamplerSelect\"},\"widgets_values\":[\"euler\"]},{\"id\":1395,\"type\":\"Reroute\",\"pos\":[18750,1110],\"size\":[75,26],\"flags\":{},\"order\":21,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3965}],\"outputs\":[{\"name\":\"\",\"type\":\"CONDITIONING\",\"links\":[3949],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1405,\"type\":\"VAEDecode\",\"pos\":[19650,1810],\"size\":[210,46],\"flags\":{},\"order\":52,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":3992},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":3922}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3923],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[]},{\"id\":1403,\"type\":\"VAEDecode\",\"pos\":[19650,990],\"size\":[210,46],\"flags\":{},\"order\":41,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":3991},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":3919}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3920],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[]},{\"id\":1263,\"type\":\"VAEDecode\",\"pos\":[20410,-500],\"size\":[210,46],\"flags\":{},\"order\":47,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":3989},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":3429}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3430],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[]},{\"id\":490,\"type\":\"Reroute\",\"pos\":[17750,870],\"size\":[75,26],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1552}],\"outputs\":[{\"name\":\"\",\"type\":\"CLIP\",\"links\":[3959,3960],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1385,\"type\":\"Reroute\",\"pos\":[18750,520],\"size\":[75,26],\"flags\":{},\"order\":24,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3964}],\"outputs\":[{\"name\":\"\",\"type\":\"CONDITIONING\",\"links\":[3889,3893,3898,3903],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1415,\"type\":\"CLIPTextEncode\",\"pos\":[17860,1070],\"size\":[261.8798522949219,111.21334838867188],\"flags\":{},\"order\":15,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":3960}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[3961,3964,3966,3968],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"blurry, out of focus, shallow depth of field, low quality, bad quality, low detail, mutated, jpeg artifacts, compression 
artifacts,\"]},{\"id\":1414,\"type\":\"CLIPTextEncode\",\"pos\":[17860,870],\"size\":[271.3465270996094,126.98572540283203],\"flags\":{},\"order\":14,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":3959}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[3962,3963,3965,3967],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"a photo of a doghead cannibal holding a sign that says \\\"the clown jumped the shark\\\" in a landfill at night\"]},{\"id\":1397,\"type\":\"Reroute\",\"pos\":[18750,1060],\"size\":[75,26],\"flags\":{},\"order\":29,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3912}],\"outputs\":[{\"name\":\"\",\"type\":\"MODEL\",\"links\":[3948],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1402,\"type\":\"Reroute\",\"pos\":[18780,1980],\"size\":[75,26],\"flags\":{},\"order\":26,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3968}],\"outputs\":[{\"name\":\"\",\"type\":\"CONDITIONING\",\"links\":[3926,3934],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1417,\"type\":\"LoadImage\",\"pos\":[18263.712890625,1364.093017578125],\"size\":[315,314],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3973]},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"00107-496528661.png\",\"image\"]},{\"id\":1420,\"type\":\"VAEEncode\",\"pos\":[18710,2080],\"size\":[140,46],\"flags\":{},\"order\":18,\"mode\":0,\"inputs\":[{\"name\":\"pixels\",\"localized_name\":\"pixels\",\"type\":\"IMAGE\",\"link\":3977},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":3980}],\"outputs\":[{\"name\":\"LATENT\",\"localized_name\":\"LATENT\",\"type\":\"LATENT\",\"links\":[3979],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEEncode\"},\"widgets_values\":[]},{\"id\":1419,\"type\":\"ImageResize+\",\"pos\":[18460,2080],\"size\":[210,218],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":3976}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3977],\"slot_index\":0},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ImageResize+\"},\"widgets_values\":[1024,1024,\"bicubic\",\"stretch\",\"always\",0]},{\"id\":14,\"type\":\"Reroute\",\"pos\":[17750,910],\"size\":[75,26],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1344}],\"outputs\":[{\"name\":\"\",\"type\":\"VAE\",\"links\":[3429,3907,3919,3922,3969,3980],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1418,\"type\":\"LoadImage\",\"pos\":[18120,2080],\"size\":[315,314],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3976],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"LoadImage\"},\"widgets_values\":[\"00107-496528661.png\",\"image\"]},{\"id\":1398,\"type\":\"Reroute\",\"pos\":[18750,1160],\"size\":[75,26],\"flags\":{},\"order\":25,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3966}],\"outputs\":[{\"name\":\"\",\"type\":\"CONDITIONING\",\"links\":[3950],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1416,\"type\":\"VAEEncodeAdvanced\",\"pos\":[18620,1370],\"size\":[253.78292846679688,278],\"flags\":{},\"order\":17,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":3973},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":3969}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[3975],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":null},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\"},\"widgets_values\":[\"false\",1024,1024,\"red\",false,\"16_channels\"]},{\"id\":1423,\"type\":\"FluxLoader\",\"pos\":[16942.298828125,795.814208984375],\"size\":[315,282],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":null},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":null},{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"links\":null},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"FluxLoader\"},\"widgets_values\":[\"colossusProjectFlux_v42AIO.safetensors\",\"default\",\".use_ckpt_clip\",\".none\",\".use_ckpt_vae\",\".none\",\".none\"]},{\"id\":431,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[17868.26953125,666.623046875],\"size\":[260.3999938964844,126],\"flags\":{},\"order\":16,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1395},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":3987}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[3877,3886,3912,3916],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ModelSamplingAdvancedResolution\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":1422,\"type\":\"EmptyLatentImage\",\"pos\":[17486.916015625,540.6340942382812],\"size\":[315,106],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"LATENT\",\"localized_name\":\"LATENT\",\"type\":\"LATENT\",\"links\":[3985,3986,3987],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"EmptyLatentImage\"},\"widgets_values\":[1024,1024,1]},{\"id\":1380,\"type\":\"Reroute\",\"pos\":[18768.1875,-255.9905242919922],\"size\":[75,26],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3985}],\"outputs\":[{\"name\":\"\",\"type\":\"LATENT\",\"links\":[3882],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1382,\"type\":\"Reroute\",\"pos\":[18769.365234375,-367.63720703125],\"size\":[75,26],\"flags\":{},\"order\":19,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3962}],\"outputs\":[{\"name\":\"\",\"type\":\"CONDITIONING\",\"links\":[3880]}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1386,\"type\":\"Reroute\",\"pos\":[18750.548828125,467.08831787109375],\"size\":[75,26],\"flags\":{},\"order\":20,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3963}],\"outputs\":[{\"name\":\"\",\"type\":\"CONDITIONING\",\"links\":[3888,3892,3897,3902]}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1387,\"type\":\"Reroute\",\"pos\":[18747.00390625,569.2838745117188],\"size\":[75,26],\"flags\":{},\"order\":13,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3986}],\"outputs\":[{\"name\":\"\",\"type\":\"LATENT\",\"links\":[3890]}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1264,\"type\":\"SaveImage\",\"pos\":[20410,-410],\"size\":[457.3382263183594,422.2065124511719],\"flags\":{},\"order\":50,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":3430}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"SaveImage\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"ComfyUI\"]},{\"id\":1392,\"type\":\"VAEDecode\",\"pos\":[20400,360],\"size\":[210,46],\"flags\":{},\"order\":48,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":3990},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":3907}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3908],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[]},{\"id\":1410,\"type\":\"SamplerCustom\",\"pos\":[19300,1900],\"size\":[272.0888977050781,230],\"flags\":{},\"order\":49,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":3932},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"link\":3933},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"link\":3934},{\"name\":\"sampler\",\"localized_name\":\"sampler\",\"type\":\"SAMPLER\",\"link\":3935},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"link\":3942},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":3938}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3992],\"slot_index\":0},{\"name\":\"denoised_output\",\"localized_name\":\"denoised_output\",\"type\":\"LATENT\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"SamplerCustom\"},\"widgets_values\":[false,0,\"fixed\",4]},{\"id\":1261,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[18944.17578125,-390],\"size\":[283.8435974121094,418],\"flags\":{},\"order\":31,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":3879},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":3880},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":3881},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":3882},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3427],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[0.5,\"multistep/res_2m\",\"beta57\",30,5,1,4,0,\"fixed\",\"standard\",true]},{\"id\":1262,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[19310.083984375,-402.36279296875],\"size\":[285.8560485839844,298],\"flags\":{},\"order\":35,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":3427},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3435],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_2m\",5,4,\"resample\",true]},{\"id\":1266,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[19679.115234375,-407.62518310546875],\"size\":[269.3165283203125,298],\"flags\":{},\"order\":39,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":3435},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3436],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_2m\",5,4,\"resample\",true]},{\"id\":1265,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[20054.2421875,-408.6135559082031],\"size\":[271.6801452636719,298],\"flags\":{},\"order\":43,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":3436},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3989],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_2m\",-1,4,\"resample\",true]},{\"id\":1384,\"type\":\"KSamplerAdvanced\",\"pos\":[18936.240234375,444.8757019042969],\"size\":[278.3764343261719,334],\"flags\":{},\"order\":32,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":3887},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"link\":3888},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"link\":3889},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":3890}],\"outputs\":[{\"name\":\"LATENT\",\"localized_name\":\"LATENT\",\"type\":\"LATENT\",\"links\":[3895],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"KSamplerAdvanced\"},\"widgets_values\":[\"enable\",0,\"fixed\",30,4,\"euler\",\"beta57\",0,5,\"enable\"]},{\"id\":1391,\"type\":\"KSamplerAdvanced\",\"pos\":[20044.978515625,449.22869873046875],\"size\":[278.3769226074219,334],\"flags\":{},\"order\":44,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":3901},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"link\":3902},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"link\":3903},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":3905}],\"outputs\":[{\"name\":\"LATENT\",\"localized_name\":\"LATENT\",\"type\":\"LATENT\",\"links\":[3990],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"KSamplerAdvanced\"},\"widgets_values\":[\"disable\",15,\"fixed\",30,4,\"euler\",\"beta57\",15,10000,\"disable\"]},{\"id\":1390,\"type\":\"KSamplerAdvanced\",\"pos\":[19672.99609375,448.818603515625],\"size\":[273.651123046875,334],\"flags\":{},\"order\":40,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":3896},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"link\":3897},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"link\":3898},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":3900}],\"outputs\":[{\"name\":\"LATENT\",\"localized_name\":\"LATENT\",\"type\":\"LATENT\",\"links\":[3905],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"KSamplerAdvanced\"},\"widgets_values\":[\"disable\",10,\"fixed\",30,4,\"euler\",\"beta57\",10,15,\"enable\"]},{\"id\":1389,\"type\":\"KSamplerAdvanced\",\"pos\":[19308.921875,451.14801025390625],\"size\":[273.652099609375,334],\"flags\":{},\"order\":36,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":3891},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"link\":3892},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"link\":3893},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":3895}],\"outputs\":[{\"name\":\"LATENT\",\"localized_name\":\"LATENT\",\"type\":\"LATENT\",\"links\":[3900],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"KSamplerAdvanced\"},\"widgets_values\":[\"disable\",5,\"fixed\",30,4,\"euler\",\"beta57\",5,10,\"enable\"]},{\"id\":1413,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[19294.095703125,1089.451171875],\"size\":[275.2236328125,298],\"flags\":{},\"order\":37,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":3947},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3991],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_2s\",-1,4,\"resample\",true]},{\"id\":1412,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[18922.447265625,1091.1812744140625],\"size\":[281.48095703125,418],\"flags\":{},\"order\":33,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":3948},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":3949},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":3950},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":3975},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3947],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[0.5,\"exponential/res_2s\",\"beta57\",30,15,1,1,0,\"fixed\",\"unsample\",true]},{\"id\":1406,\"type\":\"SaveImage\",\"pos\":[19650,1900],\"size\":[457.3382263183594,422.2065124511719],\"flags\":{},\"order\":53,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":3923}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"SaveImage\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"ComfyUI\"]},{\"id\":1404,\"type\":\"SaveImage\",\"pos\":[19650,1080],\"size\":[457.3382263183594,422.2065124511719],\"flags\":{},\"order\":45,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":3920}],\"outputs\":[],\"properties\":{\"Node 
name for S&R\":\"SaveImage\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"ComfyUI\"]}],\"links\":[[1344,404,0,14,0,\"*\"],[1395,13,0,431,0,\"MODEL\"],[1552,402,0,490,0,\"*\"],[3427,1261,0,1262,4,\"LATENT\"],[3429,14,0,1263,1,\"VAE\"],[3430,1263,0,1264,0,\"IMAGE\"],[3435,1262,0,1266,4,\"LATENT\"],[3436,1266,0,1265,4,\"LATENT\"],[3877,431,0,1383,0,\"*\"],[3879,1383,0,1261,0,\"MODEL\"],[3880,1382,0,1261,1,\"CONDITIONING\"],[3881,1381,0,1261,2,\"CONDITIONING\"],[3882,1380,0,1261,3,\"LATENT\"],[3886,431,0,1388,0,\"*\"],[3887,1388,0,1384,0,\"MODEL\"],[3888,1386,0,1384,1,\"CONDITIONING\"],[3889,1385,0,1384,2,\"CONDITIONING\"],[3890,1387,0,1384,3,\"LATENT\"],[3891,1388,0,1389,0,\"MODEL\"],[3892,1386,0,1389,1,\"CONDITIONING\"],[3893,1385,0,1389,2,\"CONDITIONING\"],[3895,1384,0,1389,3,\"LATENT\"],[3896,1388,0,1390,0,\"MODEL\"],[3897,1386,0,1390,1,\"CONDITIONING\"],[3898,1385,0,1390,2,\"CONDITIONING\"],[3900,1389,0,1390,3,\"LATENT\"],[3901,1388,0,1391,0,\"MODEL\"],[3902,1386,0,1391,1,\"CONDITIONING\"],[3903,1385,0,1391,2,\"CONDITIONING\"],[3905,1390,0,1391,3,\"LATENT\"],[3907,14,0,1392,1,\"VAE\"],[3908,1392,0,1393,0,\"IMAGE\"],[3912,431,0,1397,0,\"*\"],[3916,431,0,1401,0,\"*\"],[3919,14,0,1403,1,\"VAE\"],[3920,1403,0,1404,0,\"IMAGE\"],[3922,14,0,1405,1,\"VAE\"],[3923,1405,0,1406,0,\"IMAGE\"],[3924,1401,0,1394,0,\"MODEL\"],[3925,1399,0,1394,1,\"CONDITIONING\"],[3926,1402,0,1394,2,\"CONDITIONING\"],[3928,1407,0,1394,3,\"SAMPLER\"],[3929,1408,0,1394,4,\"SIGMAS\"],[3931,1401,0,1409,0,\"MODEL\"],[3932,1401,0,1410,0,\"MODEL\"],[3933,1399,0,1410,1,\"CONDITIONING\"],[3934,1402,0,1410,2,\"CONDITIONING\"],[3935,1407,0,1410,3,\"SAMPLER\"],[3938,1394,0,1410,5,\"LATENT\"],[3940,1409,0,1411,0,\"SIGMAS\"],[3941,1411,1,1408,0,\"SIGMAS\"],[3942,1411,1,1410,4,\"SIGMAS\"],[3947,1412,0,1413,4,\"LATENT\"],[3948,1397,0,1412,0,\"MODEL\"],[3949,1395,0,1412,1,\"CONDITIONING\"],[3950,1398,0,1412,2,\"CONDITIONING\"],[3959,490,0,1414,0,\"CLIP\"],[3960,490,0,1415,0,\"CLIP\"],[3961,1415,0,1381,0,\"*\"],[3962,1414,0,1382,0,\"*\"],[3963,1414,0,1386,0,\"*\"],[3964,1415,0,1385,0,\"*\"],[3965,1414,0,1395,0,\"*\"],[3966,1415,0,1398,0,\"*\"],[3967,1414,0,1399,0,\"*\"],[3968,1415,0,1402,0,\"*\"],[3969,14,0,1416,4,\"VAE\"],[3973,1417,0,1416,0,\"IMAGE\"],[3975,1416,0,1412,3,\"LATENT\"],[3976,1418,0,1419,0,\"IMAGE\"],[3977,1419,0,1420,0,\"IMAGE\"],[3979,1420,0,1394,5,\"LATENT\"],[3980,14,0,1420,1,\"VAE\"],[3985,1422,0,1380,0,\"*\"],[3986,1422,0,1387,0,\"*\"],[3987,1422,0,431,1,\"LATENT\"],[3988,403,0,13,0,\"*\"],[3989,1265,0,1263,0,\"LATENT\"],[3990,1391,0,1392,0,\"LATENT\"],[3991,1413,0,1403,0,\"LATENT\"],[3992,1410,0,1405,0,\"LATENT\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":0.9849732675807865,\"offset\":[-14560.618477888858,-446.28944651783576]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"ue_links\":[],\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true},\"version\":0.4}"
  },
  {
    "path": "example_workflows/flux faceswap sync pulid.json",
    "content": "{\"last_node_id\":1741,\"last_link_id\":6622,\"nodes\":[{\"id\":490,\"type\":\"Reroute\",\"pos\":[-1346.8087158203125,-823.3269653320312],\"size\":[75,26],\"flags\":{},\"order\":39,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":6398}],\"outputs\":[{\"name\":\"\",\"type\":\"CLIP\",\"links\":[4157,6103],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1162,\"type\":\"Reroute\",\"pos\":[1930.0975341796875,-817.45556640625],\"size\":[75,26],\"flags\":{},\"order\":78,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":4185}],\"outputs\":[{\"name\":\"\",\"type\":\"IMAGE\",\"links\":[4186],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":744,\"type\":\"SaveImage\",\"pos\":[1276.456787109375,-719.9273681640625],\"size\":[424.53594970703125,455.0760192871094],\"flags\":{},\"order\":72,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2241}],\"outputs\":[],\"title\":\"Save Patch\",\"properties\":{\"Node name for S&R\":\"SaveImage\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"ComfyUI\"],\"color\":\"#332922\",\"bgcolor\":\"#593930\"},{\"id\":1022,\"type\":\"ImageBlend\",\"pos\":[2313.7607421875,-792.44091796875],\"size\":[210,102],\"flags\":{\"collapsed\":true},\"order\":73,\"mode\":0,\"inputs\":[{\"name\":\"image1\",\"localized_name\":\"image1\",\"type\":\"IMAGE\",\"link\":3568},{\"name\":\"image2\",\"localized_name\":\"image2\",\"type\":\"IMAGE\",\"link\":3570}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3569],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ImageBlend\"},\"widgets_values\":[0.5,\"multiply\"]},{\"id\":729,\"type\":\"SetImageSize\",\"pos\":[-812.6932373046875,-86.24114227294922],\"size\":[210,102],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":[2104,2108,4998],\"slot_index\":0},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":[2105,2109,4999],\"slot_index\":1}],\"title\":\"Inpaint Tile Size\",\"properties\":{\"Node name for S&R\":\"SetImageSize\"},\"widgets_values\":[1024,1024]},{\"id\":1161,\"type\":\"Image Save\",\"pos\":[2186.75634765625,-722.2388916015625],\"size\":[351.4677734375,796.8805541992188],\"flags\":{},\"order\":79,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":4186}],\"outputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"links\":null},{\"name\":\"files\",\"localized_name\":\"files\",\"type\":\"STRING\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"Image Save\"},\"widgets_values\":[\"[time(%Y-%m-%d)]\",\"ComfyUI\",\"_\",4,\"false\",\"jpeg\",300,100,\"true\",\"false\",\"false\",\"false\",\"true\",\"true\",\"true\"],\"color\":\"#232\",\"bgcolor\":\"#353\"},{\"id\":1024,\"type\":\"PreviewImage\",\"pos\":[1286.05859375,-198.6599884033203],\"size\":[413.7582092285156,445.8081359863281],\"flags\":{},\"order\":76,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":3569}],\"outputs\":[],\"properties\":{\"Node name for 
S&R\":\"PreviewImage\"},\"widgets_values\":[],\"color\":\"#332922\",\"bgcolor\":\"#593930\"},{\"id\":758,\"type\":\"ImageResize+\",\"pos\":[1468.4384765625,-790.391845703125],\"size\":[210,218],\"flags\":{\"collapsed\":true},\"order\":71,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":2201},{\"name\":\"width\",\"type\":\"INT\",\"pos\":[10,76],\"widget\":{\"name\":\"width\"},\"link\":2204},{\"name\":\"height\",\"type\":\"INT\",\"pos\":[10,100],\"widget\":{\"name\":\"height\"},\"link\":2205}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2198],\"slot_index\":0},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ImageResize+\"},\"widgets_values\":[512,512,\"lanczos\",\"stretch\",\"always\",0]},{\"id\":1369,\"type\":\"ImageResize+\",\"pos\":[2183.37109375,151.09762573242188],\"size\":[210,218],\"flags\":{\"collapsed\":true},\"order\":44,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":4996},{\"name\":\"width\",\"type\":\"INT\",\"pos\":[10,76],\"widget\":{\"name\":\"width\"},\"link\":4998},{\"name\":\"height\",\"type\":\"INT\",\"pos\":[10,100],\"widget\":{\"name\":\"height\"},\"link\":4999}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[5000],\"slot_index\":0},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ImageResize+\"},\"widgets_values\":[512,512,\"lanczos\",\"stretch\",\"always\",0]},{\"id\":1407,\"type\":\"Reroute\",\"pos\":[-914.50390625,-361.0196533203125],\"size\":[75,26],\"flags\":{},\"order\":37,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":6620}],\"outputs\":[{\"name\":\"\",\"type\":\"MASK\",\"links\":[5021],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":725,\"type\":\"Reroute\",\"pos\":[-914.8554077148438,-440.6482238769531],\"size\":[75,26],\"flags\":{},\"order\":36,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":6619}],\"outputs\":[{\"name\":\"\",\"type\":\"IMAGE\",\"links\":[2210,2211,5054],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":14,\"type\":\"Reroute\",\"pos\":[-1346.8087158203125,-783.3269653320312],\"size\":[75,26],\"flags\":{},\"order\":35,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":5447}],\"outputs\":[{\"name\":\"\",\"type\":\"VAE\",\"links\":[2153,3508],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1667,\"type\":\"GrowMask\",\"pos\":[-302.060302734375,-164.22067260742188],\"size\":[210,82],\"flags\":{},\"order\":53,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":6360}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[6361],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"GrowMask\"},\"widgets_values\":[-10,false]},{\"id\":1039,\"type\":\"ImageBlend\",\"pos\":[-769.9498901367188,220.86917114257812],\"size\":[210,102],\"flags\":{\"collapsed\":true},\"order\":50,\"mode\":0,\"inputs\":[{\"name\":\"image1\",\"localized_name\":\"image1\",\"type\":\"IMAGE\",\"link\":3606},{\"name\":\"image2\",\"localized_name\":\"image2\",\"type\":\"IMAGE\",\"link\":3605}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3607],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ImageBlend\"},\"widgets_values\":[0.5,\"multiply\"]},{\"id\":731,\"type\":\"SimpleMath+\",\"pos\":[-776.4415893554688,126.82145690917969],\"size\":[315,98],\"flags\":{\"collapsed\":true},\"order\":33,\"mode\":0,\"inputs\":[{\"name\":\"a\",\"localized_name\":\"a\",\"type\":\"*\",\"shape\":7,\"link\":2108},{\"name\":\"b\",\"localized_name\":\"b\",\"type\":\"*\",\"shape\":7,\"link\":2109},{\"name\":\"c\",\"localized_name\":\"c\",\"type\":\"*\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"INT\",\"localized_name\":\"INT\",\"type\":\"INT\",\"links\":null},{\"name\":\"FLOAT\",\"localized_name\":\"FLOAT\",\"type\":\"FLOAT\",\"links\":[2100],\"slot_index\":1}],\"properties\":{\"Node name for S&R\":\"SimpleMath+\"},\"widgets_values\":[\"a/b\"]},{\"id\":728,\"type\":\"MaskToImage\",\"pos\":[-791.0198364257812,176.82147216796875],\"size\":[176.39999389648438,26],\"flags\":{\"collapsed\":true},\"order\":45,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":2106}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2103,3605],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"MaskToImage\"},\"widgets_values\":[]},{\"id\":765,\"type\":\"MaskToImage\",\"pos\":[2080.868896484375,-792.6943359375],\"size\":[182.28543090820312,26],\"flags\":{\"collapsed\":true},\"order\":46,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":5529}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3570],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"MaskToImage\"},\"widgets_values\":[]},{\"id\":761,\"type\":\"Image Comparer (rgthree)\",\"pos\":[1747.432373046875,-712.1251220703125],\"size\":[410.4466247558594,447.8973388671875],\"flags\":{},\"order\":77,\"mode\":0,\"inputs\":[{\"name\":\"image_a\",\"type\":\"IMAGE\",\"dir\":3,\"link\":2210},{\"name\":\"image_b\",\"type\":\"IMAGE\",\"dir\":3,\"link\":2200}],\"outputs\":[],\"title\":\"Compare 
Output\",\"properties\":{\"comparer_mode\":\"Slide\"},\"widgets_values\":[[{\"name\":\"A\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_lonqd_00061_.png&type=temp&subfolder=&rand=0.1196562401371497\"},{\"name\":\"B\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_lonqd_00062_.png&type=temp&subfolder=&rand=0.958614793318614\"}]],\"color\":\"#232\",\"bgcolor\":\"#353\"},{\"id\":1569,\"type\":\"ClownGuides_Sync_Advanced\",\"pos\":[261.355224609375,-1000.5784912109375],\"size\":[315,1938],\"flags\":{\"collapsed\":true},\"order\":56,\"mode\":0,\"inputs\":[{\"name\":\"guide_masked\",\"localized_name\":\"guide_masked\",\"type\":\"LATENT\",\"shape\":7,\"link\":6201},{\"name\":\"guide_unmasked\",\"localized_name\":\"guide_unmasked\",\"type\":\"LATENT\",\"shape\":7,\"link\":6202},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":6223},{\"name\":\"mask_sync\",\"localized_name\":\"mask_sync\",\"type\":\"MASK\",\"shape\":7,\"link\":6224},{\"name\":\"mask_drift_x\",\"localized_name\":\"mask_drift_x\",\"type\":\"MASK\",\"shape\":7,\"link\":6225},{\"name\":\"mask_drift_y\",\"localized_name\":\"mask_drift_y\",\"type\":\"MASK\",\"shape\":7,\"link\":6226},{\"name\":\"mask_lure_x\",\"localized_name\":\"mask_lure_x\",\"type\":\"MASK\",\"shape\":7,\"link\":6227},{\"name\":\"mask_lure_y\",\"localized_name\":\"mask_lure_y\",\"type\":\"MASK\",\"shape\":7,\"link\":6228},{\"name\":\"weights_masked\",\"localized_name\":\"weights_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"weights_unmasked\",\"localized_name\":\"weights_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"syncs_masked\",\"localized_name\":\"syncs_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"syncs_unmasked\",\"localized_name\":\"syncs_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"drift_xs_masked\",\"localized_name\":\"drift_xs_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"drift_xs_unmasked\",\"localized_name\":\"drift_xs_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"drift_ys_masked\",\"localized_name\":\"drift_ys_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"drift_ys_unmasked\",\"localized_name\":\"drift_ys_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"lure_xs_masked\",\"localized_name\":\"lure_xs_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"lure_xs_unmasked\",\"localized_name\":\"lure_xs_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"lure_ys_masked\",\"localized_name\":\"lure_ys_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"lure_ys_unmasked\",\"localized_name\":\"lure_ys_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"drift_x_data\",\"type\":\"FLOAT\",\"pos\":[10,800],\"widget\":{\"name\":\"drift_x_data\"},\"link\":6239},{\"name\":\"drift_y_guide\",\"type\":\"FLOAT\",\"pos\":[10,1088],\"widget\":{\"name\":\"drift_y_guide\"},\"link\":6240},{\"name\":\"sync_masked\",\"type\":\"FLOAT\",\"pos\":[10,608],\"widget\":{\"name\":\"sync_masked\"},\"link\":6241}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[6411],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ClownGuides_Sync_Advanced\"},\"widgets_values\":[1,1,\"constant\",\"constant\",0,0,-1,-1,0,1,\"constant\",\"constant\",0,0,-1,-1,0.2,0,1,0,\"constant\",\"constant\",0,0,-1,-1,0,0,0.2,1,0,\"constant\",\"constant\",0,0,-1,-1,0,0,\"constant\",\"constant\",0,0,-1,-1,0,0,\"constant\",\"constant\",0,0,-1,-1,0,\"y -> x\",false,false,false,false,false,false]},{\"id\":1571,\"type\":\"Reroute\",\"pos\":[141.35520935058594,-1030.5784912109375],\"size\":[75,26],\"flags\":{},\"order\":52,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":6222}],\"outputs\":[{\"name\":\"\",\"type\":\"MASK\",\"links\":[6223,6224,6225,6226,6227,6228,6342,6584],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1368,\"type\":\"Image Comparer (rgthree)\",\"pos\":[1744.9150390625,-199.16920471191406],\"size\":[410.4466247558594,447.8973388671875],\"flags\":{},\"order\":74,\"mode\":0,\"inputs\":[{\"name\":\"image_a\",\"type\":\"IMAGE\",\"dir\":3,\"link\":4997},{\"name\":\"image_b\",\"type\":\"IMAGE\",\"dir\":3,\"link\":5000}],\"outputs\":[],\"title\":\"Compare Patch\",\"properties\":{\"comparer_mode\":\"Slide\"},\"widgets_values\":[[{\"name\":\"A\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_fyekd_00061_.png&type=temp&subfolder=&rand=0.6117808776963016\"},{\"name\":\"B\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_fyekd_00062_.png&type=temp&subfolder=&rand=0.2735573488508416\"}]],\"color\":\"#232\",\"bgcolor\":\"#353\"},{\"id\":1673,\"type\":\"Note\",\"pos\":[1824.9287109375,-1010.687744140625],\"size\":[322.34954833984375,88],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Preview of first stage output: sometimes it can be worth manually (or automatically, using DINO, etc.) adjusting your mask for the second stage, based on this output.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1539,\"type\":\"GrowMask\",\"pos\":[573.4215698242188,-1145.86767578125],\"size\":[214.5684051513672,82],\"flags\":{},\"order\":57,\"mode\":4,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":6342}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[6343,6344,6345,6346,6347,6348],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"GrowMask\"},\"widgets_values\":[10,false]},{\"id\":1383,\"type\":\"Note\",\"pos\":[216.7359161376953,340.25775146484375],\"size\":[291.67218017578125,232.2296142578125],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"eta > 0.0 means you are using SDE/ancestral sampling. With this guide mode you will generally want to use bongmath = true.\\n\\nSamplers such as res_2s and res_3s will be very accurate. Try res_5s and res_8s if you really want to go crazy with it. They run 2x (2s), 3x (3s), etc slower than Euler.\\n\\nres_2m and 3m will be fast and also good, and run at the same speed as Euler.\\n\\neta_substep will increase the power of bongmath. If it is set to 0.0, you can turn bongmath off without any effect.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1380,\"type\":\"Note\",\"pos\":[544.9375610351562,342.0576477050781],\"size\":[290.1026611328125,231.5842742919922],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Setting denoise to a negative value is equivalent to just scaling it. 
For example:\\n\\nDenoise = -0.90 is the same as multiplying every sigma value in the entire schedule by 0.9.\\n\\nI find this is a lot easier to control than the regular denoise scale. The difference between -0.95 and -0.9 is much more predictable than with 0.95 and 0.9. Most of us have seen how different denoise 0.8 might be with Karras vs. exponential. \\n\\nTry a denoise between -0.95 and -0.85. \"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":759,\"type\":\"ImageCompositeMasked\",\"pos\":[1697.19140625,-790.8740844726562],\"size\":[210,186],\"flags\":{\"collapsed\":true},\"order\":75,\"mode\":0,\"inputs\":[{\"name\":\"destination\",\"localized_name\":\"destination\",\"type\":\"IMAGE\",\"link\":2211},{\"name\":\"source\",\"localized_name\":\"source\",\"type\":\"IMAGE\",\"link\":2198},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":6447},{\"name\":\"x\",\"type\":\"INT\",\"pos\":[10,76],\"widget\":{\"name\":\"x\"},\"link\":2206},{\"name\":\"y\",\"type\":\"INT\",\"pos\":[10,100],\"widget\":{\"name\":\"y\"},\"link\":2207}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2200,4185],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ImageCompositeMasked\"},\"widgets_values\":[712,800,false]},{\"id\":1687,\"type\":\"Note\",\"pos\":[-101.33948516845703,339.7750244140625],\"size\":[286.97723388671875,180.28128051757812],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"The cycles node causes the connected sampler to loop between sampling and unsampling steps. (Unsampling is running the sampler backwards, where it predicts the noise that would lead to a given output).\\n\\nWhen unsample_eta is set to -1, it simply uses the same settings for eta as in the connected node. \"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":745,\"type\":\"VAEDecode\",\"pos\":[1297.53369140625,-791.137939453125],\"size\":[140,46],\"flags\":{\"collapsed\":true},\"order\":70,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":6478},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":2153}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2201,2241,3568,4997],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[]},{\"id\":1678,\"type\":\"Note\",\"pos\":[-422.92510986328125,-333.6911926269531],\"size\":[324.0018005371094,113.63665771484375],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ReduxAdvanced is used to help get things on track. Bypass if you're having problems with it disrupting character likeness.\\n\\nThe SDE Mask ensures SDE noise is used only in the masked area, limiting change in unmasked areas that could lead to seams. 
\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1572,\"type\":\"ClownGuides_Sync_Advanced\",\"pos\":[581.355224609375,-1000.5784912109375],\"size\":[315,1878],\"flags\":{\"collapsed\":true},\"order\":62,\"mode\":0,\"inputs\":[{\"name\":\"guide_masked\",\"localized_name\":\"guide_masked\",\"type\":\"LATENT\",\"shape\":7,\"link\":6229},{\"name\":\"guide_unmasked\",\"localized_name\":\"guide_unmasked\",\"type\":\"LATENT\",\"shape\":7,\"link\":6230},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":6343},{\"name\":\"mask_sync\",\"localized_name\":\"mask_sync\",\"type\":\"MASK\",\"shape\":7,\"link\":6344},{\"name\":\"mask_drift_x\",\"localized_name\":\"mask_drift_x\",\"type\":\"MASK\",\"shape\":7,\"link\":6345},{\"name\":\"mask_drift_y\",\"localized_name\":\"mask_drift_y\",\"type\":\"MASK\",\"shape\":7,\"link\":6346},{\"name\":\"mask_lure_x\",\"localized_name\":\"mask_lure_x\",\"type\":\"MASK\",\"shape\":7,\"link\":6347},{\"name\":\"mask_lure_y\",\"localized_name\":\"mask_lure_y\",\"type\":\"MASK\",\"shape\":7,\"link\":6348},{\"name\":\"weights_masked\",\"localized_name\":\"weights_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"weights_unmasked\",\"localized_name\":\"weights_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"syncs_masked\",\"localized_name\":\"syncs_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"syncs_unmasked\",\"localized_name\":\"syncs_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"drift_xs_masked\",\"localized_name\":\"drift_xs_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"drift_xs_unmasked\",\"localized_name\":\"drift_xs_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"drift_ys_masked\",\"localized_name\":\"drift_ys_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"drift_ys_unmasked\",\"localized_name\":\"drift_ys_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"lure_xs_masked\",\"localized_name\":\"lure_xs_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"lure_xs_unmasked\",\"localized_name\":\"lure_xs_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"lure_ys_masked\",\"localized_name\":\"lure_ys_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"lure_ys_unmasked\",\"localized_name\":\"lure_ys_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[6414],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuides_Sync_Advanced\"},\"widgets_values\":[0,1,\"constant\",\"constant\",0,0,-1,-1,0,1,\"constant\",\"constant\",0,0,-1,-1,0,0,1,0,\"constant\",\"constant\",0,0,-1,-1,0,0,0,1,0,\"constant\",\"constant\",0,0,-1,-1,0,0,\"constant\",\"constant\",0,0,-1,-1,0,0,\"constant\",\"constant\",0,0,-1,-1,0,\"y -> x\",false,false,false,false,false,false]},{\"id\":1693,\"type\":\"Note\",\"pos\":[-1535.57666015625,-641.8590087890625],\"size\":[276.7918701171875,88],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Padding can be very important. Some models/loras/IPadapter embeds etc. are going to respond very differently if the shot is close up vs. 
farther away.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1694,\"type\":\"Note\",\"pos\":[-441.5133056640625,-999.14990234375],\"size\":[291.2616882324219,189.98562622070312],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Increase character likeness by: \\n\\nDecreasing \\\"Similarity\\\"\\nIncreasing \\\"Drift Toward Target\\\"\\nIncreasing cycles\\nIncreasing eta (max 1.0)\\nIncreasing denoise\\n\\nIncrease adherence to the input image by:\\n\\nDoing the opposite of any of the above\\nIncreasing \\\"Drift Toward Guide\\\"\\nEnabling the ReduxAdvanced node\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1277,\"type\":\"SharkOptions_GuideCond_Beta\",\"pos\":[575.9444580078125,221.88970947265625],\"size\":[315,98],\"flags\":{\"collapsed\":true},\"order\":51,\"mode\":0,\"inputs\":[{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":5653},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":4650},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[5493],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SharkOptions_GuideCond_Beta\"},\"widgets_values\":[1]},{\"id\":1040,\"type\":\"PreviewImage\",\"pos\":[-1267.6248779296875,-30.252229690551758],\"size\":[304.98114013671875,265.58380126953125],\"flags\":{},\"order\":55,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":3607}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"PreviewImage\"},\"widgets_values\":[]},{\"id\":1698,\"type\":\"Note\",\"pos\":[-1623.859375,-355.951416015625],\"size\":[276.7918701171875,88],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Draw a mask over the face in the Load Image node. 
Ideally, try stopping precisely at the hairline, and just above or just below the chin.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1477,\"type\":\"LoraLoader\",\"pos\":[-1684.5245361328125,-845.994140625],\"size\":[315,126],\"flags\":{},\"order\":34,\"mode\":4,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":5439},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":5440}],\"outputs\":[{\"name\":\"MODEL\",\"localized_name\":\"MODEL\",\"type\":\"MODEL\",\"links\":[6397],\"slot_index\":0},{\"name\":\"CLIP\",\"localized_name\":\"CLIP\",\"type\":\"CLIP\",\"links\":[6398],\"slot_index\":1}],\"properties\":{\"Node name for S&R\":\"LoraLoader\"},\"widgets_values\":[\"FLUX/Kirsten_Dunst_Flux_V1.safetensors\",1,1]},{\"id\":1279,\"type\":\"TorchCompileModels\",\"pos\":[-2086.55322265625,-1090.6181640625],\"size\":[285.9945068359375,179.0001983642578],\"flags\":{},\"order\":38,\"mode\":4,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":6397}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[6396],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"TorchCompileModels\"},\"widgets_values\":[\"inductor\",false,\"default\",false,64,0]},{\"id\":1478,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[-1773.91259765625,-1030.6773681640625],\"size\":[260.3999938964844,126],\"flags\":{},\"order\":54,\"mode\":4,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":6396},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":5442}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[6383],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ModelSamplingAdvancedResolution\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":1454,\"type\":\"ClownOptions_Cycles_Beta\",\"pos\":[-74.8967514038086,24.043270111083984],\"size\":[261.7955627441406,202],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[6402],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_Cycles_Beta\"},\"widgets_values\":[20,1,-1,\"none\",-1,1,true]},{\"id\":726,\"type\":\"Mask Bounding Box Aspect 
Ratio\",\"pos\":[-828.6614990234375,-412.50946044921875],\"size\":[252,250],\"flags\":{\"collapsed\":false},\"order\":40,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"shape\":7,\"link\":5054},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":5021},{\"name\":\"aspect_ratio\",\"type\":\"FLOAT\",\"pos\":[10,204],\"widget\":{\"name\":\"aspect_ratio\"},\"link\":2100}],\"outputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"links\":[2101,2102,3606,3721,4996,6543],\"slot_index\":0},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[2106,5529],\"slot_index\":1},{\"name\":\"mask_blurred\",\"localized_name\":\"mask_blurred\",\"type\":\"MASK\",\"links\":[6447],\"slot_index\":2},{\"name\":\"x\",\"localized_name\":\"x\",\"type\":\"INT\",\"links\":[2206],\"slot_index\":3},{\"name\":\"y\",\"localized_name\":\"y\",\"type\":\"INT\",\"links\":[2207],\"slot_index\":4},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":[2204],\"slot_index\":5},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":[2205],\"slot_index\":6}],\"properties\":{\"Node name for S&R\":\"Mask Bounding Box Aspect Ratio\"},\"widgets_values\":[100,40,1.75,false]},{\"id\":1702,\"type\":\"PulidFluxInsightFaceLoader\",\"pos\":[-1150,-1080],\"size\":[365.4000244140625,58],\"flags\":{\"collapsed\":true},\"order\":10,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"FACEANALYSIS\",\"localized_name\":\"FACEANALYSIS\",\"type\":\"FACEANALYSIS\",\"shape\":3,\"links\":[6526],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"PulidFluxInsightFaceLoader\"},\"widgets_values\":[\"CPU\"]},{\"id\":1524,\"type\":\"ReFluxPatcher\",\"pos\":[-1486.33251953125,-986.468505859375],\"size\":[210,82],\"flags\":{},\"order\":60,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":6383}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[6547],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ReFluxPatcher\"},\"widgets_values\":[\"float64\",true]},{\"id\":13,\"type\":\"Reroute\",\"pos\":[-1346.8087158203125,-863.3270874023438],\"size\":[75,26],\"flags\":{},\"order\":64,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":6547}],\"outputs\":[{\"name\":\"\",\"type\":\"MODEL\",\"links\":[6548],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1703,\"type\":\"PulidFluxModelLoader\",\"pos\":[-1140,-970],\"size\":[315,58],\"flags\":{\"collapsed\":true},\"order\":11,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"PULIDFLUX\",\"localized_name\":\"PULIDFLUX\",\"type\":\"PULIDFLUX\",\"shape\":3,\"links\":[6524],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"PulidFluxModelLoader\"},\"widgets_values\":[\"pulid_flux_v0.9.0.safetensors\"]},{\"id\":1688,\"type\":\"Note\",\"pos\":[-1527.4205322265625,-1311.8199462890625],\"size\":[274.47601318359375,104.34856414794922],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ReFluxPatcher is required to use the \\\"Style\\\" nodes. 
Different \\\"Re...Patcher\\\" nodes are available for many other models, from SD1.5/SDXL to SD3.5, HiDream, AuraFlow, Chroma, WAN, and LTXV.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1071,\"type\":\"CLIPVisionEncode\",\"pos\":[586.1533203125,119.24115753173828],\"size\":[253.60000610351562,78],\"flags\":{\"collapsed\":true},\"order\":43,\"mode\":0,\"inputs\":[{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"link\":6552},{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":3721}],\"outputs\":[{\"name\":\"CLIP_VISION_OUTPUT\",\"localized_name\":\"CLIP_VISION_OUTPUT\",\"type\":\"CLIP_VISION_OUTPUT\",\"links\":[3720],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPVisionEncode\"},\"widgets_values\":[\"center\"]},{\"id\":1073,\"type\":\"CLIPTextEncode\",\"pos\":[575.77001953125,186.9269256591797],\"size\":[263.280517578125,88.73566436767578],\"flags\":{\"collapsed\":true},\"order\":41,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":4157}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[4650,4980],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"\"]},{\"id\":1476,\"type\":\"FluxLoader\",\"pos\":[-2094.3544921875,-847.2406005859375],\"size\":[385.17449951171875,282],\"flags\":{},\"order\":13,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[5439],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[5440],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[5447],\"slot_index\":2},{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"links\":[6550,6552],\"slot_index\":3},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"links\":[6551,6553],\"slot_index\":4}],\"properties\":{\"Node name for S&R\":\"FluxLoader\"},\"widgets_values\":[\"colossusProjectFlux_v42AIO.safetensors\",\"fp8_e4m3fn_fast\",\".use_ckpt_clip\",\".none\",\".use_ckpt_vae\",\"siglip2-so400m-patch16-512.safetensors\",\"flex1_redux_siglip2_512.safetensors\"]},{\"id\":1716,\"type\":\"Note\",\"pos\":[-2101.239013671875,-463.0836486816406],\"size\":[395.2708740234375,177.91754150390625],\"flags\":{},\"order\":14,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"To use the 512x512 Redux models, download and place in the following paths:\\n\\ncomfy/models/style_models:\\nhttps://huggingface.co/ostris/Flex.1-alpha-Redux/blob/main/flex1_redux_siglip2_512.safetensors\\n\\ncomfy/models/clip_vision:\\nhttps://huggingface.co/google/siglip2-so400m-patch16-512/blob/main/model.safetensors\\n\\nRename the latter as siglip2-so400m-patch16-512.safetensors\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1701,\"type\":\"PulidFluxEvaClipLoader\",\"pos\":[-1145.7685546875,-1024.2314453125],\"size\":[327.5999755859375,26],\"flags\":{\"collapsed\":true},\"order\":15,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"EVA_CLIP\",\"localized_name\":\"EVA_CLIP\",\"type\":\"EVA_CLIP\",\"shape\":3,\"links\":[6525],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"PulidFluxEvaClipLoader\"},\"widgets_values\":[]},{\"id\":1548,\"type\":\"ReduxAdvanced\",\"pos\":[-69.81456756591797,-498.3502502441406],\"size\":[248.6250457763672,234],\"flags\":{},\"order\":47,\"mode\":4,\"inputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"link\":6422},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"link\":6551},{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"link\":6550},{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":6543},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[6421],\"slot_index\":0},{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":null},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ReduxAdvanced\"},\"widgets_values\":[3,\"area\",\"center crop (square)\",1,0.1]},{\"id\":1072,\"type\":\"StyleModelApply\",\"pos\":[596.4773559570312,153.7720947265625],\"size\":[262,122],\"flags\":{\"collapsed\":true},\"order\":48,\"mode\":0,\"inputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"link\":4980},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"link\":6553},{\"name\":\"clip_vision_output\",\"localized_name\":\"clip_vision_output\",\"type\":\"CLIP_VISION_OUTPUT\",\"link\":3720}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[5653],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"StyleModelApply\"},\"widgets_values\":[1,\"multiply\"]},{\"id\":1714,\"type\":\"Note\",\"pos\":[-816.8351440429688,-725.0016479492188],\"size\":[252.3572998046875,162.81890869140625],\"flags\":{},\"order\":16,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"The repo for PuLID Flux is currently broken, but the ReFluxPatcher node will repair the issues and make it usable. You must have ReFluxPatcher enabled to use this. 
Aside from that, install as instructed:\\n\\nhttps://github.com/balazik/ComfyUI-PuLID-Flux\\n\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1575,\"type\":\"PrimitiveFloat\",\"pos\":[11.355203628540039,-940.5784912109375],\"size\":[210,58],\"flags\":{},\"order\":17,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"FLOAT\",\"localized_name\":\"FLOAT\",\"type\":\"FLOAT\",\"links\":[6241],\"slot_index\":0}],\"title\":\"Similarity\",\"properties\":{\"Node name for S&R\":\"PrimitiveFloat\"},\"widgets_values\":[1]},{\"id\":1573,\"type\":\"PrimitiveFloat\",\"pos\":[10.393571853637695,-834.4251708984375],\"size\":[210,58],\"flags\":{},\"order\":18,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"FLOAT\",\"localized_name\":\"FLOAT\",\"type\":\"FLOAT\",\"links\":[6239],\"slot_index\":0}],\"title\":\"Drift Toward Target\",\"properties\":{\"Node name for S&R\":\"PrimitiveFloat\"},\"widgets_values\":[0.2]},{\"id\":1574,\"type\":\"PrimitiveFloat\",\"pos\":[11.355203628540039,-720.5784912109375],\"size\":[210,58],\"flags\":{},\"order\":19,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"FLOAT\",\"localized_name\":\"FLOAT\",\"type\":\"FLOAT\",\"links\":[6240],\"slot_index\":0}],\"title\":\"Drift Toward Guide\",\"properties\":{\"Node name for S&R\":\"PrimitiveFloat\"},\"widgets_values\":[0.2]},{\"id\":727,\"type\":\"VAEEncodeAdvanced\",\"pos\":[-789.0958862304688,67.53204345703125],\"size\":[262.4812927246094,298],\"flags\":{\"collapsed\":true},\"order\":49,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2101},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2102},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2103},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":3508},{\"name\":\"width\",\"type\":\"INT\",\"pos\":[10,160],\"widget\":{\"name\":\"width\"},\"link\":2104},{\"name\":\"height\",\"type\":\"INT\",\"pos\":[10,184],\"widget\":{\"name\":\"height\"},\"link\":2105}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[5373,5715,6201,6202,6229,6230,6412],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":1},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[6222,6360,6569,6570],\"slot_index\":2},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[5442],\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":[],\"slot_index\":4},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":[]}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\"},\"widgets_values\":[\"false\",1344,768,\"red\",false,\"16_channels\"]},{\"id\":1674,\"type\":\"Note\",\"pos\":[170.8737030029297,-1390.4803466796875],\"size\":[322.6287841796875,128.15802001953125],\"flags\":{},\"order\":20,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Activate the style nodes if you are having issues with color, detail, light, blurriness or pixelation drifting too far from your source input.\\n\\nIf end_step is too high, you may get faint halos and an oversharpened 
look.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1689,\"type\":\"Note\",\"pos\":[525.9268798828125,-1349.89794921875],\"size\":[263.00439453125,88],\"flags\":{},\"order\":21,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Expanding the mask for the second pass can sometimes help prevent seams.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1525,\"type\":\"ClownGuide_Style_Beta\",\"pos\":[251.35520935058594,-950.5784912109375],\"size\":[252.0535430908203,286],\"flags\":{},\"order\":61,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":5715},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":6569},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":6411}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[6051],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Style_Beta\"},\"widgets_values\":[\"positive\",\"scattersort\",1,1,\"constant\",0,-1,false]},{\"id\":1672,\"type\":\"ClownGuide_Style_Beta\",\"pos\":[561.355224609375,-950.5784912109375],\"size\":[252.0535430908203,286],\"flags\":{},\"order\":65,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":6412},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":6570},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":6414}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[6415,6476],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Style_Beta\"},\"widgets_values\":[\"positive\",\"scattersort\",1,1,\"constant\",0,-1,false]},{\"id\":1516,\"type\":\"ClownOptions_SDE_Mask_Beta\",\"pos\":[-68.4439468383789,-163.1180877685547],\"size\":[252.8383331298828,126],\"flags\":{},\"order\":59,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":6361},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[5776],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_SDE_Mask_Beta\"},\"widgets_values\":[1,0,false]},{\"id\":1731,\"type\":\"ClownOptions_SDE_Mask_Beta\",\"pos\":[898.4906005859375,-756.2548217773438],\"size\":[252.8383331298828,126],\"flags\":{},\"order\":63,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":6586},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[6585,6587],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ClownOptions_SDE_Mask_Beta\"},\"widgets_values\":[1,0,false]},{\"id\":1730,\"type\":\"MaskEdge\",\"pos\":[903.2994384765625,-949.55322265625],\"size\":[248.64459228515625,130],\"flags\":{},\"order\":58,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":6584}],\"outputs\":[{\"name\":\"edge_mask\",\"localized_name\":\"edge_mask\",\"type\":\"MASK\",\"links\":[6586],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"MaskEdge\"},\"widgets_values\":[10,\"percent\",1,1]},{\"id\":1677,\"type\":\"Note\",\"pos\":[-439.5185241699219,-738.3756713867188],\"size\":[290.3874816894531,88],\"flags\":{},\"order\":22,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Try setting both drift values to 0.0 or 0.2 as a starting point.\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1552,\"type\":\"ClownOptions_SDE_Beta\",\"pos\":[-271.7193603515625,259.6875915527344],\"size\":[315,266],\"flags\":{\"collapsed\":true},\"order\":23,\"mode\":0,\"inputs\":[{\"name\":\"etas\",\"localized_name\":\"etas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"etas_substep\",\"localized_name\":\"etas_substep\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_SDE_Beta\"},\"widgets_values\":[\"gaussian\",\"gaussian\",\"hard\",\"hard\",1,1,-1,\"fixed\"]},{\"id\":1726,\"type\":\"ClownOptions_ImplicitSteps_Beta\",\"pos\":[-493.06549072265625,258.3205871582031],\"size\":[300.7710876464844,130],\"flags\":{\"collapsed\":true},\"order\":24,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownOptions_ImplicitSteps_Beta\"},\"widgets_values\":[\"bongmath\",\"bongmath\",10,0]},{\"id\":1722,\"type\":\"ClownOptions_DetailBoost_Beta\",\"pos\":[-302.6524963378906,-24.413410186767578],\"size\":[210.1761016845703,218],\"flags\":{\"collapsed\":false},\"order\":25,\"mode\":0,\"inputs\":[{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"etas\",\"localized_name\":\"etas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[6589,6590,6591],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_DetailBoost_Beta\"},\"widgets_values\":[1,\"model\",\"hard\",0.5,3,10]},{\"id\":1732,\"type\":\"Note\",\"pos\":[890.6793823242188,-1148.8226318359375],\"size\":[290.3854675292969,122.62060546875],\"flags\":{},\"order\":26,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"The mask below allows the SDE/ancestral noise used in the last two samplers to only hit the seams around the inpainted area.\\n\\nTry bypassing the SDE mask and see if you like the results - it lets the entire face be affected by 
noise.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1727,\"type\":\"Note\",\"pos\":[-453.12371826171875,343.8135681152344],\"size\":[296.5935363769531,187.9747314453125],\"flags\":{},\"order\":27,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ClownOptions Detail gives a boost to detail a lot like the \\\"Detail Daemon\\\" node, though I think with somewhat less risk of mutations and loss of saturation. Change \\\"weight\\\", \\\"eta\\\", or \\\"end_step\\\" to control strength.\\n\\nImplicit steps can be used in place of \\\"Cycles\\\". Try setting steps_to_run to 3 or  4 if you use it.\\n\\nClownOptions SDE contains extra settings for noise, so you can change the type, amount, etc. with more precision.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1733,\"type\":\"Note\",\"pos\":[-819.1915893554688,-1111.3170166015625],\"size\":[251.92019653320312,88],\"flags\":{},\"order\":28,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Try changing the weight or end_at if results look plastic.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1704,\"type\":\"ApplyPulidFlux\",\"pos\":[-805.7684326171875,-986.1819458007812],\"size\":[219.79336547851562,206],\"flags\":{},\"order\":66,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":6548},{\"name\":\"pulid_flux\",\"localized_name\":\"pulid_flux\",\"type\":\"PULIDFLUX\",\"link\":6524},{\"name\":\"eva_clip\",\"localized_name\":\"eva_clip\",\"type\":\"EVA_CLIP\",\"link\":6525},{\"name\":\"face_analysis\",\"localized_name\":\"face_analysis\",\"type\":\"FACEANALYSIS\",\"link\":6526},{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":null},{\"name\":\"attn_mask\",\"localized_name\":\"attn_mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"MODEL\",\"localized_name\":\"MODEL\",\"type\":\"MODEL\",\"shape\":3,\"links\":[6549],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ApplyPulidFlux\"},\"widgets_values\":[1,0,1]},{\"id\":1737,\"type\":\"Note\",\"pos\":[-1184.4395751953125,-1304.4234619140625],\"size\":[251.92019653320312,88],\"flags\":{},\"order\":29,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"The image you choose is very important. 
The face should have its proportions clearly distinguishable.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1717,\"type\":\"LoadImage\",\"pos\":[-603.783203125,-1602.01904296875],\"size\":[315,314],\"flags\":{},\"order\":30,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"pasted/image (812).png\",\"image\"]},{\"id\":1446,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[214.812255859375,-508.00537109375],\"size\":[277.5089111328125,735.1378784179688],\"flags\":{},\"order\":67,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":6549},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":6421},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":5373},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":6051},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":5493},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":5776},{\"name\":\"options 3\",\"type\":\"OPTIONS\",\"link\":6402},{\"name\":\"options 4\",\"type\":\"OPTIONS\",\"link\":6589},{\"name\":\"options 5\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[6380],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":1},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[1,\"exponential/res_2s\",\"bong_tangent\",30,1,0.65,1,100,\"fixed\",\"standard\",true],\"color\":\"#332922\",\"bgcolor\":\"#593930\"},{\"id\":1556,\"type\":\"CLIPTextEncode\",\"pos\":[-392.6881408691406,-498.2940979003906],\"size\":[289.0962829589844,113.79679870605469],\"flags\":{\"collapsed\":false},\"order\":42,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":6103}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[6422],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"\"],\"color\":\"#2a363b\",\"bgcolor\":\"#3f5159\"},{\"id\":1707,\"type\":\"LoadImage\",\"pos\":[-1272.3699951171875,-406.4196472167969],\"size\":[315,314],\"flags\":{},\"order\":31,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[6619],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[6620],\"slot_index\":1}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"clipspace/clipspace-mask-18464655.700000048.png 
[input]\",\"image\"]},{\"id\":1740,\"type\":\"Note\",\"pos\":[-892.4718627929688,-1299.925048828125],\"size\":[251.92019653320312,88],\"flags\":{},\"order\":32,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"PuLID will copy much of the lighting and especially position/angle of the face. Keep this in mind.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1690,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[865.4187622070312,-518.0064086914062],\"size\":[281.7781677246094,571.74853515625],\"flags\":{},\"order\":69,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":6566},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":6476},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":6587},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":6591},{\"name\":\"options 3\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[6478],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0,\"multistep/res_3m\",-1,1,\"resample\",false]},{\"id\":1479,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[536.1533203125,-510.75872802734375],\"size\":[288.1370544433594,571.74853515625],\"flags\":{},\"order\":68,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":6380},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":6415},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":6585},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":6590},{\"name\":\"options 3\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[6566],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0,\"exponential/res_2s\",2,1,\"resample\",true]}],\"links\":[[2100,731,1,726,2,\"FLOAT\"],[2101,726,0,727,0,\"IMAGE\"],[2102,726,0,727,1,\"IMAGE\"],[2103,728,0,727,2,\"IMAGE\"],[2104,729,0,727,5,\"INT\"],[2105,729,1,727,6,\"INT\"],[2106,726,1,728,0,\"MASK\"],[2108,729,0,731,0,\"*\"],[2109,729,1,731,1,\"*\"],[2153,14,0,745,1,\"VAE\"],[2198,758,0,759,1,\"IMAGE\"],[2200,759,0,761,1,\"IMAGE\"],[2201,745,0,758,0,\"IMAGE\"],[2204,726,5,758,1,\"INT\"],[2205,726,6,758,2,\"INT\"],[2206,726,3,759,3,\"INT\"],[2207,726,4,759,4,\"INT\"],[2210,725,0,761,0,\"IMAGE\"],[2211,725,0,759,0,\"IMAGE\"],[2241,745,0,744,0,\"IMAGE\"],[3508,14,0,727,4,\"VAE\"],[3568,745,0,1022,0,\"IMAGE\"],[3569,1022,0,1024,0,\"IMAGE\"],[3570,765,0,1022,1,\"IMAGE\"],[3605,728,0,1039,1,\"IMAGE\"],[3606,726,0,1039,0,\"IMAGE\"],[3607,1039,0,1040,0,\"IMAGE\"],[3720,1071,0,1072,2,\"CLIP_VISION_OUTPUT\"],[3721,726,0,1071,1,\"IMAGE\"],[4157,490,0,1073,0,\"CLIP\"],[4185,759,0,1162,0,\"*\"],[4186,1162,0,1161,0,\"IMAGE\"],[4650,1073,0,1277,1,\"CONDITIONING\"],[4980,1073,0,1072,0,\"CONDITIONING\"],[4996,726,0,1369,0,\"IMAGE\"],[4997,745,0,1368,0,\"IMAGE\"],[4998,729,0,1369,1,\"INT\"],[4999,729,1,1369,2,\"INT\"],[5000,1369,0,1368,1,\"IMAGE\"],[5021,1407,0,726,1,\"MASK\"],[5054,725,0,726,0,\"IMAGE\"],[5373,727,0,1446,3,\"LATENT\"],[5439,1476,0,1477,0,\"MODEL\"],[5440,1476,1,1477,1,\"CLIP\"],[5442,727,3,1478,1,\"LATENT\"],[5447,1476,2,14,0,\"*\"],[5493,1277,0,1446,6,\"OPTIONS\"],[5529,726,1,765,0,\"MASK\"],[5653,1072,0,1277,0,\"CONDITIONING\"],[5715,727,0,1525,0,\"LATENT\"],[5776,1516,0,1446,7,\"OPTIONS\"],[6051,1525,0,1446,5,\"GUIDES\"],[6103,490,0,1556,0,\"CLIP\"],[6201,727,0,1569,0,\"LATENT\"],[6202,727,0,1569,1,\"LATENT\"],[6222,727,2,1571,0,\"*\"],[6223,1571,0,1569,2,\"MASK\"],[6224,1571,0,1569,3,\"MASK\"],[6225,1571,0,1569,4,\"MASK\"],[6226,1571,0,1569,5,\"MASK\"],[6227,1571,0,1569,6,\"MASK\"],[6228,1571,0,1569,7,\"MASK\"],[6229,727,0,1572,0,\"LATENT\"],[6230,727,0,1572,1,\"LATENT\"],[6239,1573,0,1569,20,\"FLOAT\"],[6240,1574,0,1569,21,\"FLOAT\"],[6241,1575,0,1569,22,\"FLOAT\"],[6342,1571,0,1539,0,\"MASK\"],[6343,1539,0,1572,2,\"MASK\"],[6344,1539,0,1572,3,\"MASK\"],[6345,1539,0,1572,4,\"MASK\"],[6346,1539,0,1572,5,\"MASK\"],[6347,1539,0,1572,6,\"MASK\"],[6348,1539,0,1572,7,\"MASK\"],[6360,727,2,1667,0,\"MASK\"],[6361,1667,0,1516,0,\"MASK\"],[6380,1446,0,1479,4,\"LATENT\"],[6383,1478,0,1524,0,\"MODEL\"],[6396,1279,0,1478,0,\"MODEL\"],[6397,1477,0,1279,0,\"MODEL\"],[6398,1477,1,490,0,\"*\"],[6402,1454,0,1446,8,\"OPTIONS\"],[6411,1569,0,1525,3,\"GUIDES\"],[6412,727,0,1672,0,\"LATENT\"],[6414,1572,0,1672,3,\"GUIDES\"],[6415,1672,0,1479,5,\"GUIDES\"],[6421,1548,0,1446,1,\"CONDITIONING\"],[6422,1556,0,1548,0,\"CONDITIONING\"],[6447,726,2,759,2,\"MASK\"],[6476,1672,0,1690,5,\"GUIDES\"],[6478,1690,0,745,0,\"LATENT\"],[6524,1703,0,1704,1,\"PULIDFLUX\"],[6525,1701,0,1704,2,\"EVA_CLIP\"],[6526,1702,0,1704,3,\"FACEANALYSIS\"],[6543,726,0,1548,3,\"IMAGE\"],[6547,1524,0,13,0,\"*\"],[6548,13,0,1704,0,\"MODEL\"],[6549,1704,0,1446,0,\"MODEL\"],[6550,1476,3,1548,2,\"CLIP_VISION\"],[6551,1476,4,1548,1,\"STYLE_MODEL\"],[6552,1476,3,1071,0,\"CLIP_VISION\"],[6553,1476,4,1072,1,\"STYLE_MODEL\"],[6566,1479,0,1690,4,\"LATENT\"],[6569,727,2,1525,1,\"MASK\"],[6570,727,2,1672,1,\"MASK\"],[6584,1571,0,1730,0,\"MASK\"],[6585,1731,0,1479,6,\"OPTIONS\"],[6586,1730,0,1731,0,\"MASK\"],[6587,1731,0,1690,6,\"OPTIONS\"],[6589,1722,0,1446,9,\"OPTIONS\"],[6590,1722,0,1479,7,\"OPTIONS\"],[6591,1722,0,1690,7,\"OPTIONS\"],
[6619,1707,0,725,0,\"*\"],[6620,1707,1,1407,0,\"*\"]],\"groups\":[{\"id\":1,\"title\":\"Prepare Input\",\"bounding\":[-1310.92529296875,-489.52618408203125,755.7755737304688,762.867431640625],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":2,\"title\":\"Patch and Stitch\",\"bounding\":[1250.695068359375,-877.5091552734375,1320.4892578125,1148.6859130859375],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":3,\"title\":\"Loaders\",\"bounding\":[-2115.099853515625,-1180.8953857421875,881.3677368164062,646.2952880859375],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":5,\"title\":\"Sampling\",\"bounding\":[-510.548828125,-602.9613037109375,1686.064208984375,874.1248168945312],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":6,\"title\":\"Guides\",\"bounding\":[-37.0714225769043,-1229.123046875,888.9586791992188,587.7683715820312],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":7,\"title\":\"PuLID\",\"bounding\":[-1191.9031982421875,-1177.2020263671875,649.8841552734375,641.718994140625],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}}],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.3310000000000006,\"offset\":[4741.826990245036,1361.8744550803772]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"ue_links\":[],\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true},\"version\":0.4}"
  },
  {
    "path": "example_workflows/flux faceswap sync.json",
    "content": "{\"last_node_id\":1698,\"last_link_id\":6519,\"nodes\":[{\"id\":490,\"type\":\"Reroute\",\"pos\":[-669.7835083007812,-822.2691040039062],\"size\":[75,26],\"flags\":{},\"order\":28,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":6398}],\"outputs\":[{\"name\":\"\",\"type\":\"CLIP\",\"links\":[4157,6103],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1162,\"type\":\"Reroute\",\"pos\":[1930.0975341796875,-817.45556640625],\"size\":[75,26],\"flags\":{},\"order\":66,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":4185}],\"outputs\":[{\"name\":\"\",\"type\":\"IMAGE\",\"links\":[4186],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":744,\"type\":\"SaveImage\",\"pos\":[1276.456787109375,-719.9273681640625],\"size\":[424.53594970703125,455.0760192871094],\"flags\":{},\"order\":60,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2241}],\"outputs\":[],\"title\":\"Save Patch\",\"properties\":{\"Node name for S&R\":\"SaveImage\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"ComfyUI\"],\"color\":\"#332922\",\"bgcolor\":\"#593930\"},{\"id\":1022,\"type\":\"ImageBlend\",\"pos\":[2313.7607421875,-792.44091796875],\"size\":[210,102],\"flags\":{\"collapsed\":true},\"order\":61,\"mode\":0,\"inputs\":[{\"name\":\"image1\",\"localized_name\":\"image1\",\"type\":\"IMAGE\",\"link\":3568},{\"name\":\"image2\",\"localized_name\":\"image2\",\"type\":\"IMAGE\",\"link\":3570}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3569],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ImageBlend\"},\"widgets_values\":[0.5,\"multiply\"]},{\"id\":729,\"type\":\"SetImageSize\",\"pos\":[-812.6932373046875,-86.24114227294922],\"size\":[210,102],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":[2104,2108,4998],\"slot_index\":0},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":[2105,2109,4999],\"slot_index\":1}],\"title\":\"Inpaint Tile Size\",\"properties\":{\"Node name for S&R\":\"SetImageSize\"},\"widgets_values\":[1024,1024]},{\"id\":1161,\"type\":\"Image Save\",\"pos\":[2186.75634765625,-722.2388916015625],\"size\":[351.4677734375,796.8805541992188],\"flags\":{},\"order\":67,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":4186}],\"outputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"links\":null},{\"name\":\"files\",\"localized_name\":\"files\",\"type\":\"STRING\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"Image Save\"},\"widgets_values\":[\"[time(%Y-%m-%d)]\",\"ComfyUI\",\"_\",4,\"false\",\"jpeg\",300,100,\"true\",\"false\",\"false\",\"false\",\"true\",\"true\",\"true\"],\"color\":\"#232\",\"bgcolor\":\"#353\"},{\"id\":1024,\"type\":\"PreviewImage\",\"pos\":[1286.05859375,-198.6599884033203],\"size\":[413.7582092285156,445.8081359863281],\"flags\":{},\"order\":64,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":3569}],\"outputs\":[],\"properties\":{\"Node name for 
S&R\":\"PreviewImage\"},\"widgets_values\":[],\"color\":\"#332922\",\"bgcolor\":\"#593930\"},{\"id\":758,\"type\":\"ImageResize+\",\"pos\":[1468.4384765625,-790.391845703125],\"size\":[210,218],\"flags\":{\"collapsed\":true},\"order\":59,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":2201},{\"name\":\"width\",\"type\":\"INT\",\"pos\":[10,76],\"widget\":{\"name\":\"width\"},\"link\":2204},{\"name\":\"height\",\"type\":\"INT\",\"pos\":[10,100],\"widget\":{\"name\":\"height\"},\"link\":2205}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2198],\"slot_index\":0},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ImageResize+\"},\"widgets_values\":[512,512,\"lanczos\",\"stretch\",\"always\",0]},{\"id\":1369,\"type\":\"ImageResize+\",\"pos\":[2183.37109375,151.09762573242188],\"size\":[210,218],\"flags\":{\"collapsed\":true},\"order\":33,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":4996},{\"name\":\"width\",\"type\":\"INT\",\"pos\":[10,76],\"widget\":{\"name\":\"width\"},\"link\":4998},{\"name\":\"height\",\"type\":\"INT\",\"pos\":[10,100],\"widget\":{\"name\":\"height\"},\"link\":4999}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[5000],\"slot_index\":0},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ImageResize+\"},\"widgets_values\":[512,512,\"lanczos\",\"stretch\",\"always\",0]},{\"id\":1407,\"type\":\"Reroute\",\"pos\":[-914.50390625,-361.0196533203125],\"size\":[75,26],\"flags\":{},\"order\":26,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":6519}],\"outputs\":[{\"name\":\"\",\"type\":\"MASK\",\"links\":[5021],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":725,\"type\":\"Reroute\",\"pos\":[-914.8554077148438,-440.6482238769531],\"size\":[75,26],\"flags\":{},\"order\":25,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":6518}],\"outputs\":[{\"name\":\"\",\"type\":\"IMAGE\",\"links\":[2210,2211,5054],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1071,\"type\":\"CLIPVisionEncode\",\"pos\":[586.1533203125,119.24115753173828],\"size\":[253.60000610351562,78],\"flags\":{\"collapsed\":true},\"order\":32,\"mode\":0,\"inputs\":[{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"link\":5443},{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":3721}],\"outputs\":[{\"name\":\"CLIP_VISION_OUTPUT\",\"localized_name\":\"CLIP_VISION_OUTPUT\",\"type\":\"CLIP_VISION_OUTPUT\",\"links\":[3720],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPVisionEncode\"},\"widgets_values\":[\"center\"]},{\"id\":1575,\"type\":\"PrimitiveFloat\",\"pos\":[11.355203628540039,-940.5784912109375],\"size\":[210,58],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"FLOAT\",\"localized_name\":\"FLOAT\",\"type\":\"FLOAT\",\"links\":[6241],\"slot_index\":0}],\"title\":\"Similarity\",\"properties\":{\"Node name for 
S&R\":\"PrimitiveFloat\"},\"widgets_values\":[1]},{\"id\":1654,\"type\":\"LoadImage\",\"pos\":[773.8897705078125,1813.0185546875],\"size\":[315,314],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":null},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"7c2a2a772675a224-photo.JPG\",\"image\"]},{\"id\":1478,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[-1096.887451171875,-1029.6195068359375],\"size\":[260.3999938964844,126],\"flags\":{},\"order\":43,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":6396},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":5442}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[6383],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ModelSamplingAdvancedResolution\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":1279,\"type\":\"TorchCompileModels\",\"pos\":[-1409.527587890625,-1089.560302734375],\"size\":[285.9945068359375,179.0001983642578],\"flags\":{},\"order\":27,\"mode\":4,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":6397}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[6396],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"TorchCompileModels\"},\"widgets_values\":[\"inductor\",false,\"default\",false,64,0]},{\"id\":14,\"type\":\"Reroute\",\"pos\":[-669.7835083007812,-782.2691040039062],\"size\":[75,26],\"flags\":{},\"order\":24,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":5447}],\"outputs\":[{\"name\":\"\",\"type\":\"VAE\",\"links\":[2153,3508,6353],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":13,\"type\":\"Reroute\",\"pos\":[-669.7835083007812,-862.2692260742188],\"size\":[75,26],\"flags\":{},\"order\":51,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":5845}],\"outputs\":[{\"name\":\"\",\"type\":\"MODEL\",\"links\":[5846],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1516,\"type\":\"ClownOptions_SDE_Mask_Beta\",\"pos\":[-68.4439468383789,-163.1180877685547],\"size\":[252.8383331298828,126],\"flags\":{},\"order\":47,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":6361},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[5776,6016,6477],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_SDE_Mask_Beta\"},\"widgets_values\":[1,0,false]},{\"id\":1667,\"type\":\"GrowMask\",\"pos\":[-302.060302734375,-164.22067260742188],\"size\":[210,82],\"flags\":{},\"order\":42,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":6360}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[6361],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"GrowMask\"},\"widgets_values\":[-10,false]},{\"id\":1039,\"type\":\"ImageBlend\",\"pos\":[-769.9498901367188,220.86917114257812],\"size\":[210,102],\"flags\":{\"collapsed\":true},\"order\":39,\"mode\":0,\"inputs\":[{\"name\":\"image1\",\"localized_name\":\"image1\",\"type\":\"IMAGE\",\"link\":3606},{\"name\":\"image2\",\"localized_name\":\"image2\",\"type\":\"IMAGE\",\"link\":3605}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3607],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ImageBlend\"},\"widgets_values\":[0.5,\"multiply\"]},{\"id\":727,\"type\":\"VAEEncodeAdvanced\",\"pos\":[-789.0958862304688,67.53204345703125],\"size\":[262.4812927246094,298],\"flags\":{\"collapsed\":true},\"order\":38,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2101},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2102},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2103},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":3508},{\"name\":\"width\",\"type\":\"INT\",\"pos\":[10,160],\"widget\":{\"name\":\"width\"},\"link\":2104},{\"name\":\"height\",\"type\":\"INT\",\"pos\":[10,184],\"widget\":{\"name\":\"height\"},\"link\":2105}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[5373,5715,6201,6202,6229,6230,6412],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":1},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[6222,6360],\"slot_index\":2},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[5442],\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":[],\"slot_index\":4},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":[]}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\"},\"widgets_values\":[\"false\",1344,768,\"red\",false,\"16_channels\"]},{\"id\":731,\"type\":\"SimpleMath+\",\"pos\":[-776.4415893554688,126.82145690917969],\"size\":[315,98],\"flags\":{\"collapsed\":true},\"order\":22,\"mode\":0,\"inputs\":[{\"name\":\"a\",\"localized_name\":\"a\",\"type\":\"*\",\"shape\":7,\"link\":2108},{\"name\":\"b\",\"localized_name\":\"b\",\"type\":\"*\",\"shape\":7,\"link\":2109},{\"name\":\"c\",\"localized_name\":\"c\",\"type\":\"*\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"INT\",\"localized_name\":\"INT\",\"type\":\"INT\",\"links\":null},{\"name\":\"FLOAT\",\"localized_name\":\"FLOAT\",\"type\":\"FLOAT\",\"links\":[2100],\"slot_index\":1}],\"properties\":{\"Node name for S&R\":\"SimpleMath+\"},\"widgets_values\":[\"a/b\"]},{\"id\":728,\"type\":\"MaskToImage\",\"pos\":[-791.0198364257812,176.82147216796875],\"size\":[176.39999389648438,26],\"flags\":{\"collapsed\":true},\"order\":34,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":2106}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2103,3605],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"MaskToImage\"},\"widgets_values\":[]},{\"id\":765,\"type\":\"MaskToImage\",\"pos\":[2080.868896484375,-792.6943359375],\"size\":[182.28543090820312,26],\"flags\":{\"collapsed\":true},\"order\":35,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":5529}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3570],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"MaskToImage\"},\"widgets_values\":[]},{\"id\":761,\"type\":\"Image Comparer (rgthree)\",\"pos\":[1747.432373046875,-712.1251220703125],\"size\":[410.4466247558594,447.8973388671875],\"flags\":{},\"order\":65,\"mode\":0,\"inputs\":[{\"name\":\"image_a\",\"type\":\"IMAGE\",\"dir\":3,\"link\":2210},{\"name\":\"image_b\",\"type\":\"IMAGE\",\"dir\":3,\"link\":2200}],\"outputs\":[],\"title\":\"Compare Output\",\"properties\":{\"comparer_mode\":\"Slide\"},\"widgets_values\":[[{\"name\":\"A\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_udooi_00119_.png&type=temp&subfolder=&rand=0.4602348825653009\"},{\"name\":\"B\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_udooi_00120_.png&type=temp&subfolder=&rand=0.24695456359911838\"}]],\"color\":\"#232\",\"bgcolor\":\"#353\"},{\"id\":1072,\"type\":\"StyleModelApply\",\"pos\":[591.9240112304688,151.93089294433594],\"size\":[262,122],\"flags\":{\"collapsed\":true},\"order\":37,\"mode\":0,\"inputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"link\":4980},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"link\":5444},{\"name\":\"clip_vision_output\",\"localized_name\":\"clip_vision_output\",\"type\":\"CLIP_VISION_OUTPUT\",\"link\":3720}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[5653],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"StyleModelApply\"},\"widgets_values\":[1,\"multiply\"]},{\"id\":1073,\"type\":\"CLIPTextEncode\",\"pos\":[575.77001953125,186.9269256591797],\"size\":[263.280517578125,88.73566436767578],\"flags\":{\"collapsed\":true},\"order\":30,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":4157}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[4650,4980],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"\"]},{\"id\":1569,\"type\":\"ClownGuides_Sync_Advanced\",\"pos\":[261.355224609375,-1000.5784912109375],\"size\":[315,1938],\"flags\":{\"collapsed\":true},\"order\":45,\"mode\":0,\"inputs\":[{\"name\":\"guide_masked\",\"localized_name\":\"guide_masked\",\"type\":\"LATENT\",\"shape\":7,\"link\":6201},{\"name\":\"guide_unmasked\",\"localized_name\":\"guide_unmasked\",\"type\":\"LATENT\",\"shape\":7,\"link\":6202},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":6223},{\"name\":\"mask_sync\",\"localized_name\":\"mask_sync\",\"type\":\"MASK\",\"shape\":7,\"link\":6224},{\"name\":\"mask_drift_x\",\"localized_name\":\"mask_drift_x\",\"type\":\"MASK\",\"shape\":7,\"link\":6225},{\"name\":\"mask_drift_y\",\"localized_name\":\"mask_drift_y\",\"type\":\"MASK\",\"shape\":7,\"link\":6226},{\"name\":\"mask_lure_x\",\"localized_name\":\"mask_lure_x\",\"type\":\"MASK\",\"shape\":7,\"link\":6227},{\"name\":\"mask_lure_y\",\"localized_name\":\"mask_lure_y\",\"type\":\"MASK\",\"shape\":7,\"link\":6228},{\"name\":\"weights_masked\",\"localized_name\":\"weights_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"weights_unmasked\",\"localized_name\":\"weights_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"syncs_masked\",\"localized_name\":\"syncs_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"syncs_unmasked\",\"localized_name\":\"syncs_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"drift_xs_masked\",\"localized_name\":\"drift_xs_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"drift_xs_unmasked\",\"localized_name\":\"drift_xs_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"drift_ys_masked\",\"localized_name\":\"drift_ys_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"drift_ys_unmasked\",\"localized_name\":\"drift_ys_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"lure_xs_masked\",\"localized_name\":\"lure_xs_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"lure_xs_unmasked\",\"localized_name\":\"lure_xs_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"lure_ys_masked\",\"localized_name\":\"lure_ys_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"lure_ys_unmasked\",\"localized_name\":\"lure_ys_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"drift_x_data\",\"type\":\"FLOAT\",\"pos\":[10,800],\"widget\":{\"name\":\"drift_x_data\"},\"link\":6239},{\"name\":\"drift_y_guide\",\"type\":\"FLOAT\",\"pos\":[10,1088],\"widget\":{\"name\":\"drift_y_guide\"},\"link\":6240},{\"name\":\"sync_masked\",\"type\":\"FLOAT\",\"pos\":[10,608],\"widget\":{\"name\":\"sync_masked\"},\"link\":6241}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[6411],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuides_Sync_Advanced\"},\"widgets_values\":[1,1,\"constant\",\"constant\",0,0,-1,-1,0,1,\"constant\",\"constant\",0,0,-1,-1,0.2,0,1,0,\"constant\",\"constant\",0,0,-1,-1,0,0,0.2,1,0,\"constant\",\"constant\",0,0,-1,-1,0,0,\"constant\",\"constant\",0,0,-1,-1,0,0,\"constant\",\"constant\",0,0,-1,-1,0,\"y -> 
x\",false,false,false,false,false,false]},{\"id\":1571,\"type\":\"Reroute\",\"pos\":[141.35520935058594,-1030.5784912109375],\"size\":[75,26],\"flags\":{},\"order\":41,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":6222}],\"outputs\":[{\"name\":\"\",\"type\":\"MASK\",\"links\":[6223,6224,6225,6226,6227,6228,6342],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1664,\"type\":\"VAEDecode\",\"pos\":[1440,-1320],\"size\":[140,46],\"flags\":{\"collapsed\":true},\"order\":55,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":6354},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":6353}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[6355],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[]},{\"id\":1368,\"type\":\"Image Comparer (rgthree)\",\"pos\":[1744.9150390625,-199.16920471191406],\"size\":[410.4466247558594,447.8973388671875],\"flags\":{},\"order\":62,\"mode\":0,\"inputs\":[{\"name\":\"image_a\",\"type\":\"IMAGE\",\"dir\":3,\"link\":4997},{\"name\":\"image_b\",\"type\":\"IMAGE\",\"dir\":3,\"link\":5000}],\"outputs\":[],\"title\":\"Compare Patch\",\"properties\":{\"comparer_mode\":\"Slide\"},\"widgets_values\":[[{\"name\":\"A\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_sgbfj_00119_.png&type=temp&subfolder=&rand=0.4913573783056806\"},{\"name\":\"B\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_sgbfj_00120_.png&type=temp&subfolder=&rand=0.2366457814945162\"}]],\"color\":\"#232\",\"bgcolor\":\"#353\"},{\"id\":1665,\"type\":\"PreviewImage\",\"pos\":[1430,-1270],\"size\":[343.7617492675781,360.52777099609375],\"flags\":{},\"order\":57,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":6355}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"PreviewImage\"},\"widgets_values\":[]},{\"id\":1673,\"type\":\"Note\",\"pos\":[1824.9287109375,-1010.687744140625],\"size\":[322.34954833984375,88],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Preview of first stage output: sometimes it can be worth manually (or automatically, using DINO, etc.) adjusting your mask for the second stage, based on this output.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1539,\"type\":\"GrowMask\",\"pos\":[573.4215698242188,-1145.86767578125],\"size\":[214.5684051513672,82],\"flags\":{},\"order\":46,\"mode\":4,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":6342}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[6343,6344,6345,6346,6347,6348],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"GrowMask\"},\"widgets_values\":[10,false]},{\"id\":1383,\"type\":\"Note\",\"pos\":[216.7359161376953,340.25775146484375],\"size\":[291.67218017578125,232.2296142578125],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"eta > 0.0 means you are using SDE/ancestral sampling. With this guide mode you will generally want to use bongmath = true.\\n\\nSamplers such as res_2s and res_3s will be very accurate. Try res_5s and res_8s if you really want to go crazy with it. 
They run 2x (2s), 3x (3s), etc slower than Euler.\\n\\nres_2m and 3m will be fast and also good, and run at the same speed as Euler.\\n\\neta_substep will increase the power of bongmath. If it is set to 0.0, you can turn bongmath off without any effect.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1380,\"type\":\"Note\",\"pos\":[544.9375610351562,342.0576477050781],\"size\":[290.1026611328125,231.5842742919922],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Setting denoise to a negative value is equivalent to just scaling it. For example:\\n\\nDenoise = -0.90 is the same as multiplying every sigma value in the entire schedule by 0.9.\\n\\nI find this is a lot easier to control than the regular denoise scale. The difference between -0.95 and -0.9 is much more predictable than with 0.95 and 0.9. Most of us have seen how different denoise 0.8 might be with Karras vs. exponential. \\n\\nTry a denoise between -0.95 and -0.85. \"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":759,\"type\":\"ImageCompositeMasked\",\"pos\":[1697.19140625,-790.8740844726562],\"size\":[210,186],\"flags\":{\"collapsed\":true},\"order\":63,\"mode\":0,\"inputs\":[{\"name\":\"destination\",\"localized_name\":\"destination\",\"type\":\"IMAGE\",\"link\":2211},{\"name\":\"source\",\"localized_name\":\"source\",\"type\":\"IMAGE\",\"link\":2198},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":6447},{\"name\":\"x\",\"type\":\"INT\",\"pos\":[10,76],\"widget\":{\"name\":\"x\"},\"link\":2206},{\"name\":\"y\",\"type\":\"INT\",\"pos\":[10,100],\"widget\":{\"name\":\"y\"},\"link\":2207}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2200,4185],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ImageCompositeMasked\"},\"widgets_values\":[712,800,false]},{\"id\":1552,\"type\":\"ClownOptions_SDE_Beta\",\"pos\":[-275.5662841796875,211.60325622558594],\"size\":[315,266],\"flags\":{\"collapsed\":true},\"order\":6,\"mode\":0,\"inputs\":[{\"name\":\"etas\",\"localized_name\":\"etas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"etas_substep\",\"localized_name\":\"etas_substep\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_SDE_Beta\"},\"widgets_values\":[\"gaussian\",\"gaussian\",\"hard\",\"hard\",1,1,-1,\"fixed\"]},{\"id\":1619,\"type\":\"LoadImage\",\"pos\":[79.17283630371094,1820.8131103515625],\"size\":[315,314],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":null},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"LoadImage\"},\"widgets_values\":[\"9319202660b0e794-photo.JPG\",\"image\"]},{\"id\":1476,\"type\":\"FluxLoader\",\"pos\":[-1417.3287353515625,-846.1827392578125],\"size\":[385.17449951171875,282],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[5439],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[5440],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[5447],\"slot_index\":2},{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"links\":[5443,5993],\"slot_index\":3},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"links\":[5444,5994],\"slot_index\":4}],\"properties\":{\"Node name for S&R\":\"FluxLoader\"},\"widgets_values\":[\"flux1-dev.sft\",\"fp8_e4m3fn_fast\",\"clip_l_flux.safetensors\",\"t5xxl_fp8_e4m3fn_scaled.safetensors\",\"ae.sft\",\"sigclip_vision_patch14_384.safetensors\",\"flux1-redux-dev.safetensors\"]},{\"id\":1687,\"type\":\"Note\",\"pos\":[-101.33948516845703,339.7750244140625],\"size\":[286.97723388671875,180.28128051757812],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"The cycles node causes the connected sampler to loop between sampling and unsampling steps. (Unsampling is running the sampler backwards, where it predicts the noise that would lead to a given output).\\n\\nWhen unsample_eta is set to -1, it simply uses the same settings for eta as in the connected node. \"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":745,\"type\":\"VAEDecode\",\"pos\":[1297.53369140625,-791.137939453125],\"size\":[140,46],\"flags\":{\"collapsed\":true},\"order\":58,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":6478},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":2153}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2201,2241,3568,4997],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[]},{\"id\":1689,\"type\":\"Note\",\"pos\":[525.9268798828125,-1349.89794921875],\"size\":[263.00439453125,88],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Expanding the mask for the second pass can sometimes help prevent seams.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1688,\"type\":\"Note\",\"pos\":[-838.7593994140625,-1316.05126953125],\"size\":[274.47601318359375,104.34856414794922],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ReFluxPatcher is required to use the \\\"Style\\\" nodes. Different \\\"Re...Patcher\\\" nodes are available for many other models, from SD1.5/SDXL to SD3.5, HiDream, AuraFlow, Chroma, WAN, and LTXV.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1678,\"type\":\"Note\",\"pos\":[-422.92510986328125,-333.6911926269531],\"size\":[324.0018005371094,113.63665771484375],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ReduxAdvanced is used to help get things on track. Bypass if you're having problems with it disrupting character likeness.\\n\\nThe SDE Mask ensures SDE noise is used only in the masked area, limiting change in unmasked areas that could lead to seams. 
\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1572,\"type\":\"ClownGuides_Sync_Advanced\",\"pos\":[581.355224609375,-1000.5784912109375],\"size\":[315,1878],\"flags\":{\"collapsed\":true},\"order\":50,\"mode\":0,\"inputs\":[{\"name\":\"guide_masked\",\"localized_name\":\"guide_masked\",\"type\":\"LATENT\",\"shape\":7,\"link\":6229},{\"name\":\"guide_unmasked\",\"localized_name\":\"guide_unmasked\",\"type\":\"LATENT\",\"shape\":7,\"link\":6230},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":6343},{\"name\":\"mask_sync\",\"localized_name\":\"mask_sync\",\"type\":\"MASK\",\"shape\":7,\"link\":6344},{\"name\":\"mask_drift_x\",\"localized_name\":\"mask_drift_x\",\"type\":\"MASK\",\"shape\":7,\"link\":6345},{\"name\":\"mask_drift_y\",\"localized_name\":\"mask_drift_y\",\"type\":\"MASK\",\"shape\":7,\"link\":6346},{\"name\":\"mask_lure_x\",\"localized_name\":\"mask_lure_x\",\"type\":\"MASK\",\"shape\":7,\"link\":6347},{\"name\":\"mask_lure_y\",\"localized_name\":\"mask_lure_y\",\"type\":\"MASK\",\"shape\":7,\"link\":6348},{\"name\":\"weights_masked\",\"localized_name\":\"weights_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"weights_unmasked\",\"localized_name\":\"weights_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"syncs_masked\",\"localized_name\":\"syncs_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"syncs_unmasked\",\"localized_name\":\"syncs_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"drift_xs_masked\",\"localized_name\":\"drift_xs_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"drift_xs_unmasked\",\"localized_name\":\"drift_xs_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"drift_ys_masked\",\"localized_name\":\"drift_ys_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"drift_ys_unmasked\",\"localized_name\":\"drift_ys_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"lure_xs_masked\",\"localized_name\":\"lure_xs_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"lure_xs_unmasked\",\"localized_name\":\"lure_xs_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"lure_ys_masked\",\"localized_name\":\"lure_ys_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"lure_ys_unmasked\",\"localized_name\":\"lure_ys_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[6414],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuides_Sync_Advanced\"},\"widgets_values\":[0,1,\"constant\",\"constant\",0,0,-1,-1,0,1,\"constant\",\"constant\",0,0,-1,-1,0,0,1,0,\"constant\",\"constant\",0,0,-1,-1,0,0,0,1,0,\"constant\",\"constant\",0,0,-1,-1,0,0,\"constant\",\"constant\",0,0,-1,-1,0,0,\"constant\",\"constant\",0,0,-1,-1,0,\"y -> x\",false,false,false,false,false,false]},{\"id\":1674,\"type\":\"Note\",\"pos\":[170.8737030029297,-1390.4803466796875],\"size\":[322.6287841796875,128.15802001953125],\"flags\":{},\"order\":13,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Activate the style nodes if you are having issues with color, detail, light, blurriness or pixelation drifting too far from your source input.\\n\\nIf end_step is too high, you may get faint halos and an oversharpened 
look.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1524,\"type\":\"ReFluxPatcher\",\"pos\":[-809.3073120117188,-985.41064453125],\"size\":[210,82],\"flags\":{},\"order\":48,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":6383}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[5845],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ReFluxPatcher\"},\"widgets_values\":[\"float64\",true]},{\"id\":1690,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[865.4187622070312,-518.0064086914062],\"size\":[281.7781677246094,571.74853515625],\"flags\":{},\"order\":56,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":6479},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":6476},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":6477},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[6478],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0,\"multistep/res_3m\",-1,1,\"resample\",false]},{\"id\":1479,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[536.1533203125,-510.75872802734375],\"size\":[288.1370544433594,571.74853515625],\"flags\":{},\"order\":54,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":6380},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":6415},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":6016},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[6479],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0,\"exponential/res_2s\",2,1,\"resample\",true]},{\"id\":1693,\"type\":\"Note\",\"pos\":[-858.5514526367188,-640.8011474609375],\"size\":[276.7918701171875,88],\"flags\":{},\"order\":14,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Padding 
can be very important. Some models/loras/IPadapter embeds etc. are going to respond very differently if the shot is close up vs. farther away.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1525,\"type\":\"ClownGuide_Style_Beta\",\"pos\":[251.35520935058594,-950.5784912109375],\"size\":[252.0535430908203,286],\"flags\":{},\"order\":49,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":5715},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":6411}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[6051],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Style_Beta\"},\"widgets_values\":[\"positive\",\"WCT\",1,1,\"constant\",0,-1,false]},{\"id\":1672,\"type\":\"ClownGuide_Style_Beta\",\"pos\":[561.355224609375,-950.5784912109375],\"size\":[252.0535430908203,286],\"flags\":{},\"order\":52,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":6412},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":6414}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[6415,6476],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Style_Beta\"},\"widgets_values\":[\"positive\",\"WCT\",1,1,\"constant\",0,5,false]},{\"id\":726,\"type\":\"Mask Bounding Box Aspect Ratio\",\"pos\":[-828.6614990234375,-412.50946044921875],\"size\":[252,250],\"flags\":{\"collapsed\":false},\"order\":29,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"shape\":7,\"link\":5054},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":5021},{\"name\":\"aspect_ratio\",\"type\":\"FLOAT\",\"pos\":[10,204],\"widget\":{\"name\":\"aspect_ratio\"},\"link\":2100}],\"outputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"links\":[2101,2102,3606,3721,4996,5995],\"slot_index\":0},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[2106,5529],\"slot_index\":1},{\"name\":\"mask_blurred\",\"localized_name\":\"mask_blurred\",\"type\":\"MASK\",\"links\":[6447],\"slot_index\":2},{\"name\":\"x\",\"localized_name\":\"x\",\"type\":\"INT\",\"links\":[2206],\"slot_index\":3},{\"name\":\"y\",\"localized_name\":\"y\",\"type\":\"INT\",\"links\":[2207],\"slot_index\":4},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":[2204],\"slot_index\":5},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":[2205],\"slot_index\":6}],\"properties\":{\"Node name for S&R\":\"Mask Bounding Box Aspect Ratio\"},\"widgets_values\":[100,40,1.75,false]},{\"id\":1677,\"type\":\"Note\",\"pos\":[-439.5185241699219,-738.3756713867188],\"size\":[290.3874816894531,88],\"flags\":{},\"order\":15,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Try setting both drift values to 0.0 or 0.2 as a starting 
point.\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1694,\"type\":\"Note\",\"pos\":[-441.5133056640625,-999.14990234375],\"size\":[291.2616882324219,189.98562622070312],\"flags\":{},\"order\":16,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Increase character likeness by: \\n\\nDecreasing \\\"Similarity\\\"\\nIncreasing \\\"Drift Toward Target\\\"\\nIncreasing cycles\\nIncreasing eta (max 1.0)\\nIncreasing denoise\\n\\nIncrease adherence to the input image by:\\n\\nDoing the opposite of any of the above\\nIncreasing \\\"Drift Toward Guide\\\"\\nEnabling the ReduxAdvanced node\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1277,\"type\":\"SharkOptions_GuideCond_Beta\",\"pos\":[575.9444580078125,221.88970947265625],\"size\":[315,98],\"flags\":{\"collapsed\":true},\"order\":40,\"mode\":0,\"inputs\":[{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":5653},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":4650},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[5493],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SharkOptions_GuideCond_Beta\"},\"widgets_values\":[1]},{\"id\":1548,\"type\":\"ReduxAdvanced\",\"pos\":[-69.81456756591797,-498.3502502441406],\"size\":[248.6250457763672,234],\"flags\":{},\"order\":36,\"mode\":4,\"inputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"link\":6422},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"link\":5994},{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"link\":5993},{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":5995},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[6421],\"slot_index\":0},{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":null},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ReduxAdvanced\"},\"widgets_values\":[3,\"area\",\"center crop (square)\",1,0.1]},{\"id\":1446,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[214.812255859375,-508.00537109375],\"size\":[277.5089111328125,735.1378784179688],\"flags\":{},\"order\":53,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":5846},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":6421},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":5373},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":6051},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":5493},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":5776},{\"name\":\"options 3\",\"type\":\"OPTIONS\",\"link\":6402},{\"name\":\"options 
4\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[6380],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":[6354],\"slot_index\":1},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[1,\"exponential/res_2s\",\"bong_tangent\",30,1,0.55,1,100,\"fixed\",\"standard\",true],\"color\":\"#332922\",\"bgcolor\":\"#593930\"},{\"id\":1454,\"type\":\"ClownOptions_Cycles_Beta\",\"pos\":[-74.8967514038086,24.043270111083984],\"size\":[261.7955627441406,202],\"flags\":{},\"order\":17,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[6402],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_Cycles_Beta\"},\"widgets_values\":[20,1,-1,\"none\",-1,1,true]},{\"id\":1573,\"type\":\"PrimitiveFloat\",\"pos\":[10.393571853637695,-834.4251708984375],\"size\":[210,58],\"flags\":{},\"order\":18,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"FLOAT\",\"localized_name\":\"FLOAT\",\"type\":\"FLOAT\",\"links\":[6239],\"slot_index\":0}],\"title\":\"Drift Toward Target\",\"properties\":{\"Node name for S&R\":\"PrimitiveFloat\"},\"widgets_values\":[0.2]},{\"id\":1574,\"type\":\"PrimitiveFloat\",\"pos\":[11.355203628540039,-720.5784912109375],\"size\":[210,58],\"flags\":{},\"order\":19,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"FLOAT\",\"localized_name\":\"FLOAT\",\"type\":\"FLOAT\",\"links\":[6240],\"slot_index\":0}],\"title\":\"Drift Toward Guide\",\"properties\":{\"Node name for S&R\":\"PrimitiveFloat\"},\"widgets_values\":[0.2]},{\"id\":1477,\"type\":\"LoraLoader\",\"pos\":[-1007.4993896484375,-844.936279296875],\"size\":[315,126],\"flags\":{},\"order\":23,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":5439},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":5440}],\"outputs\":[{\"name\":\"MODEL\",\"localized_name\":\"MODEL\",\"type\":\"MODEL\",\"links\":[6397],\"slot_index\":0},{\"name\":\"CLIP\",\"localized_name\":\"CLIP\",\"type\":\"CLIP\",\"links\":[6398],\"slot_index\":1}],\"properties\":{\"Node name for S&R\":\"LoraLoader\"},\"widgets_values\":[\"FLUX/Kirsten_Dunst_Flux_V1.safetensors\",1,1]},{\"id\":1556,\"type\":\"CLIPTextEncode\",\"pos\":[-392.6881408691406,-498.2940979003906],\"size\":[289.0962829589844,113.79679870605469],\"flags\":{\"collapsed\":false},\"order\":31,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":6103}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[6422],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"kirsten 
dunst\"],\"color\":\"#2a363b\",\"bgcolor\":\"#3f5159\"},{\"id\":1451,\"type\":\"LoadImage\",\"pos\":[-1267.7357177734375,-412.5631103515625],\"size\":[315,314],\"flags\":{},\"order\":20,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[6518],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[6519],\"slot_index\":1}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"clipspace/clipspace-mask-54212258.30000001.png [input]\",\"image\"]},{\"id\":1040,\"type\":\"PreviewImage\",\"pos\":[-1267.6248779296875,-30.252229690551758],\"size\":[304.98114013671875,265.58380126953125],\"flags\":{},\"order\":44,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":3607}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"PreviewImage\"},\"widgets_values\":[]},{\"id\":1698,\"type\":\"Note\",\"pos\":[-1623.859375,-355.951416015625],\"size\":[276.7918701171875,88],\"flags\":{},\"order\":21,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Draw a mask over the face in the Load Image node. Ideally, try stopping precisely at the hairline, and just above or just below the chin.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"}],\"links\":[[2100,731,1,726,2,\"FLOAT\"],[2101,726,0,727,0,\"IMAGE\"],[2102,726,0,727,1,\"IMAGE\"],[2103,728,0,727,2,\"IMAGE\"],[2104,729,0,727,5,\"INT\"],[2105,729,1,727,6,\"INT\"],[2106,726,1,728,0,\"MASK\"],[2108,729,0,731,0,\"*\"],[2109,729,1,731,1,\"*\"],[2153,14,0,745,1,\"VAE\"],[2198,758,0,759,1,\"IMAGE\"],[2200,759,0,761,1,\"IMAGE\"],[2201,745,0,758,0,\"IMAGE\"],[2204,726,5,758,1,\"INT\"],[2205,726,6,758,2,\"INT\"],[2206,726,3,759,3,\"INT\"],[2207,726,4,759,4,\"INT\"],[2210,725,0,761,0,\"IMAGE\"],[2211,725,0,759,0,\"IMAGE\"],[2241,745,0,744,0,\"IMAGE\"],[3508,14,0,727,4,\"VAE\"],[3568,745,0,1022,0,\"IMAGE\"],[3569,1022,0,1024,0,\"IMAGE\"],[3570,765,0,1022,1,\"IMAGE\"],[3605,728,0,1039,1,\"IMAGE\"],[3606,726,0,1039,0,\"IMAGE\"],[3607,1039,0,1040,0,\"IMAGE\"],[3720,1071,0,1072,2,\"CLIP_VISION_OUTPUT\"],[3721,726,0,1071,1,\"IMAGE\"],[4157,490,0,1073,0,\"CLIP\"],[4185,759,0,1162,0,\"*\"],[4186,1162,0,1161,0,\"IMAGE\"],[4650,1073,0,1277,1,\"CONDITIONING\"],[4980,1073,0,1072,0,\"CONDITIONING\"],[4996,726,0,1369,0,\"IMAGE\"],[4997,745,0,1368,0,\"IMAGE\"],[4998,729,0,1369,1,\"INT\"],[4999,729,1,1369,2,\"INT\"],[5000,1369,0,1368,1,\"IMAGE\"],[5021,1407,0,726,1,\"MASK\"],[5054,725,0,726,0,\"IMAGE\"],[5373,727,0,1446,3,\"LATENT\"],[5439,1476,0,1477,0,\"MODEL\"],[5440,1476,1,1477,1,\"CLIP\"],[5442,727,3,1478,1,\"LATENT\"],[5443,1476,3,1071,0,\"CLIP_VISION\"],[5444,1476,4,1072,1,\"STYLE_MODEL\"],[5447,1476,2,14,0,\"*\"],[5493,1277,0,1446,6,\"OPTIONS\"],[5529,726,1,765,0,\"MASK\"],[5653,1072,0,1277,0,\"CONDITIONING\"],[5715,727,0,1525,0,\"LATENT\"],[5776,1516,0,1446,7,\"OPTIONS\"],[5845,1524,0,13,0,\"*\"],[5846,13,0,1446,0,\"MODEL\"],[5993,1476,3,1548,2,\"CLIP_VISION\"],[5994,1476,4,1548,1,\"STYLE_MODEL\"],[5995,726,0,1548,3,\"IMAGE\"],[6016,1516,0,1479,6,\"OPTIONS\"],[6051,1525,0,1446,5,\"GUIDES\"],[6103,490,0,1556,0,\"CLIP\"],[6201,727,0,1569,0,\"LATENT\"],[6202,727,0,1569,1,\"LATENT\"],[6222,727,2,1571,0,\"*\"],[6223,1571,0,1569,2,\"MASK\"],[6224,1571,0,1569,3,\"MASK\"],[6225,1571,0,1569,4,\"MASK\"],[6226,1571,0,1569,5,\"MASK\"],[6227,1571,0,1569,6,\"MASK\"],[6228,1571,0,1569,7,\"MASK\"],[6229,727,0,1572,0,\"LATENT\"],[6230,727,0,1572,1,\"LATENT\"],[6239,1573,0,1569,20,\"FLOAT\"],[6240,15
74,0,1569,21,\"FLOAT\"],[6241,1575,0,1569,22,\"FLOAT\"],[6342,1571,0,1539,0,\"MASK\"],[6343,1539,0,1572,2,\"MASK\"],[6344,1539,0,1572,3,\"MASK\"],[6345,1539,0,1572,4,\"MASK\"],[6346,1539,0,1572,5,\"MASK\"],[6347,1539,0,1572,6,\"MASK\"],[6348,1539,0,1572,7,\"MASK\"],[6353,14,0,1664,1,\"VAE\"],[6354,1446,1,1664,0,\"LATENT\"],[6355,1664,0,1665,0,\"IMAGE\"],[6360,727,2,1667,0,\"MASK\"],[6361,1667,0,1516,0,\"MASK\"],[6380,1446,0,1479,4,\"LATENT\"],[6383,1478,0,1524,0,\"MODEL\"],[6396,1279,0,1478,0,\"MODEL\"],[6397,1477,0,1279,0,\"MODEL\"],[6398,1477,1,490,0,\"*\"],[6402,1454,0,1446,8,\"OPTIONS\"],[6411,1569,0,1525,3,\"GUIDES\"],[6412,727,0,1672,0,\"LATENT\"],[6414,1572,0,1672,3,\"GUIDES\"],[6415,1672,0,1479,5,\"GUIDES\"],[6421,1548,0,1446,1,\"CONDITIONING\"],[6422,1556,0,1548,0,\"CONDITIONING\"],[6447,726,2,759,2,\"MASK\"],[6476,1672,0,1690,5,\"GUIDES\"],[6477,1516,0,1690,6,\"OPTIONS\"],[6478,1690,0,745,0,\"LATENT\"],[6479,1479,0,1690,4,\"LATENT\"],[6518,1451,0,725,0,\"*\"],[6519,1451,1,1407,0,\"*\"]],\"groups\":[{\"id\":1,\"title\":\"Prepare Input\",\"bounding\":[-1310.92529296875,-489.52618408203125,755.7755737304688,762.867431640625],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":2,\"title\":\"Patch and Stitch\",\"bounding\":[1250.695068359375,-877.5091552734375,1320.4892578125,1148.6859130859375],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":3,\"title\":\"Loaders\",\"bounding\":[-1438.07421875,-1179.8375244140625,881.3677368164062,646.2952880859375],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":5,\"title\":\"Sampling\",\"bounding\":[-510.548828125,-602.9613037109375,1686.064208984375,874.1248168945312],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":6,\"title\":\"Guides\",\"bounding\":[-37.0714225769043,-1229.123046875,888.9586791992188,587.7683715820312],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}}],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.2100000000000002,\"offset\":[4241.572246240033,1450.4856076460571]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"ue_links\":[],\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true},\"version\":0.4}"
  },
  {
    "path": "example_workflows/flux faceswap.json",
    "content": "{\"last_node_id\":1153,\"last_link_id\":4163,\"nodes\":[{\"id\":758,\"type\":\"ImageResize+\",\"pos\":[1987.2191162109375,-351.3092041015625],\"size\":[210,218],\"flags\":{\"collapsed\":true},\"order\":31,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":2201},{\"name\":\"width\",\"type\":\"INT\",\"pos\":[10,76],\"widget\":{\"name\":\"width\"},\"link\":2204},{\"name\":\"height\",\"type\":\"INT\",\"pos\":[10,100],\"widget\":{\"name\":\"height\"},\"link\":2205}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2198],\"slot_index\":0},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ImageResize+\"},\"widgets_values\":[512,512,\"lanczos\",\"stretch\",\"always\",0]},{\"id\":490,\"type\":\"Reroute\",\"pos\":[-693.37158203125,-93.71382904052734],\"size\":[75,26],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":4149}],\"outputs\":[{\"name\":\"\",\"type\":\"CLIP\",\"links\":[4157],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":728,\"type\":\"MaskToImage\",\"pos\":[219.2652130126953,854.9601440429688],\"size\":[176.39999389648438,26],\"flags\":{\"collapsed\":true},\"order\":14,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":2106}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2103,3605],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"MaskToImage\"},\"widgets_values\":[]},{\"id\":765,\"type\":\"MaskToImage\",\"pos\":[2707.509765625,226.7833709716797],\"size\":[182.28543090820312,26],\"flags\":{\"collapsed\":true},\"order\":19,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":2233}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3570],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"MaskToImage\"},\"widgets_values\":[]},{\"id\":1024,\"type\":\"PreviewImage\",\"pos\":[2707.52197265625,-277.8296203613281],\"size\":[413.7582092285156,445.8081359863281],\"flags\":{},\"order\":36,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":3569}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"PreviewImage\"},\"widgets_values\":[],\"color\":\"#332922\",\"bgcolor\":\"#593930\"},{\"id\":744,\"type\":\"SaveImage\",\"pos\":[1807.2188720703125,-291.30926513671875],\"size\":[424.53594970703125,455.0760192871094],\"flags\":{},\"order\":33,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2241}],\"outputs\":[],\"title\":\"Save Patch\",\"properties\":{\"Node name for S&R\":\"SaveImage\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"ComfyUI\"],\"color\":\"#332922\",\"bgcolor\":\"#593930\"},{\"id\":1040,\"type\":\"PreviewImage\",\"pos\":[-195.9951934814453,694.224609375],\"size\":[304.98114013671875,265.58380126953125],\"flags\":{},\"order\":23,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":3607}],\"outputs\":[],\"properties\":{\"Node name for 
S&R\":\"PreviewImage\"},\"widgets_values\":[]},{\"id\":731,\"type\":\"SimpleMath+\",\"pos\":[219.2652130126953,804.9601440429688],\"size\":[315,98],\"flags\":{\"collapsed\":true},\"order\":5,\"mode\":0,\"inputs\":[{\"name\":\"a\",\"localized_name\":\"a\",\"type\":\"*\",\"shape\":7,\"link\":2108},{\"name\":\"b\",\"localized_name\":\"b\",\"type\":\"*\",\"shape\":7,\"link\":2109},{\"name\":\"c\",\"localized_name\":\"c\",\"type\":\"*\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"INT\",\"localized_name\":\"INT\",\"type\":\"INT\",\"links\":null},{\"name\":\"FLOAT\",\"localized_name\":\"FLOAT\",\"type\":\"FLOAT\",\"links\":[2100],\"slot_index\":1}],\"properties\":{\"Node name for S&R\":\"SimpleMath+\"},\"widgets_values\":[\"a/b\"]},{\"id\":14,\"type\":\"Reroute\",\"pos\":[-693.37158203125,-53.713836669921875],\"size\":[75,26],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":4146}],\"outputs\":[{\"name\":\"\",\"type\":\"VAE\",\"links\":[2153,3508],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1039,\"type\":\"ImageBlend\",\"pos\":[219.2652130126953,954.9601440429688],\"size\":[210,102],\"flags\":{\"collapsed\":true},\"order\":17,\"mode\":0,\"inputs\":[{\"name\":\"image1\",\"localized_name\":\"image1\",\"type\":\"IMAGE\",\"link\":3606},{\"name\":\"image2\",\"localized_name\":\"image2\",\"type\":\"IMAGE\",\"link\":3605}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3607],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ImageBlend\"},\"widgets_values\":[0.5,\"multiply\"]},{\"id\":1022,\"type\":\"ImageBlend\",\"pos\":[2710.7275390625,275.91143798828125],\"size\":[210,102],\"flags\":{\"collapsed\":true},\"order\":34,\"mode\":0,\"inputs\":[{\"name\":\"image1\",\"localized_name\":\"image1\",\"type\":\"IMAGE\",\"link\":3568},{\"name\":\"image2\",\"localized_name\":\"image2\",\"type\":\"IMAGE\",\"link\":3570}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3569],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ImageBlend\"},\"widgets_values\":[0.5,\"multiply\"]},{\"id\":726,\"type\":\"Mask Bounding Box Aspect Ratio\",\"pos\":[216.9475860595703,323.4888610839844],\"size\":[252,250],\"flags\":{\"collapsed\":false},\"order\":11,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2338},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":4158},{\"name\":\"aspect_ratio\",\"type\":\"FLOAT\",\"pos\":[10,204],\"widget\":{\"name\":\"aspect_ratio\"},\"link\":2100}],\"outputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"links\":[2101,2102,2209,3606,3721],\"slot_index\":0},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[2106],\"slot_index\":1},{\"name\":\"mask_blurred\",\"localized_name\":\"mask_blurred\",\"type\":\"MASK\",\"links\":[3884],\"slot_index\":2},{\"name\":\"x\",\"localized_name\":\"x\",\"type\":\"INT\",\"links\":[2206],\"slot_index\":3},{\"name\":\"y\",\"localized_name\":\"y\",\"type\":\"INT\",\"links\":[2207],\"slot_index\":4},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":[2204],\"slot_index\":5},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":[2205],\"slot_index\":6}],\"properties\":{\"Node name for S&R\":\"Mask Bounding Box Aspect 
Ratio\"},\"widgets_values\":[100,40,1.75,false]},{\"id\":760,\"type\":\"SaveImage\",\"pos\":[1807.2188720703125,218.6908721923828],\"size\":[418.26055908203125,456.04608154296875],\"flags\":{},\"order\":37,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2199}],\"outputs\":[],\"title\":\"Save Output\",\"properties\":{},\"widgets_values\":[\"ComfyUI\"],\"color\":\"#232\",\"bgcolor\":\"#353\"},{\"id\":761,\"type\":\"Image Comparer (rgthree)\",\"pos\":[2257.2197265625,228.6908416748047],\"size\":[410.4466247558594,447.8973388671875],\"flags\":{},\"order\":38,\"mode\":0,\"inputs\":[{\"name\":\"image_a\",\"type\":\"IMAGE\",\"dir\":3,\"link\":2210},{\"name\":\"image_b\",\"type\":\"IMAGE\",\"dir\":3,\"link\":2200}],\"outputs\":[],\"title\":\"Compare Output\",\"properties\":{\"comparer_mode\":\"Slide\"},\"widgets_values\":[[{\"name\":\"A\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_dluyj_00015_.png&type=temp&subfolder=&rand=0.8734695511873163\"},{\"name\":\"B\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_dluyj_00016_.png&type=temp&subfolder=&rand=0.23774072803641766\"}]],\"color\":\"#232\",\"bgcolor\":\"#353\"},{\"id\":1074,\"type\":\"ClownOptions_SDE_Beta\",\"pos\":[790.0368041992188,-161.93728637695312],\"size\":[315,266],\"flags\":{\"collapsed\":true},\"order\":0,\"mode\":0,\"inputs\":[{\"name\":\"etas\",\"localized_name\":\"etas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"etas_substep\",\"localized_name\":\"etas_substep\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_SDE_Beta\"},\"widgets_values\":[\"gaussian\",\"gaussian\",\"hard\",\"hard\",0.5,0.75,-1,\"fixed\"]},{\"id\":13,\"type\":\"Reroute\",\"pos\":[-693.37158203125,-133.7138214111328],\"size\":[75,26],\"flags\":{},\"order\":26,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":4163}],\"outputs\":[{\"name\":\"\",\"type\":\"MODEL\",\"links\":[3812],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":745,\"type\":\"VAEDecode\",\"pos\":[1818.999755859375,-349.32073974609375],\"size\":[140,46],\"flags\":{\"collapsed\":true},\"order\":30,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":4031},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":2153}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2201,2208,2241,3568],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[]},{\"id\":727,\"type\":\"VAEEncodeAdvanced\",\"pos\":[219.2652130126953,904.9601440429688],\"size\":[262.4812927246094,298],\"flags\":{\"collapsed\":true},\"order\":16,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2101},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2102},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2103},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":3508},{\"name\":\"width\",\"type\":\"INT\",\"pos\":[10,160],\"widget\":{\"name\":\"width\"},\"link\":2104},{\"name\":\"height\",\"type\":\"INT\",\"pos\":[10,184],\"widget\":{\"name\":\"height\"},\"link\":2105}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[3602,3603,3700,3785,3786,4097],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":1},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[2233,3604,3901],\"slot_index\":2},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[2125],\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":[],\"slot_index\":4},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":[]}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\"},\"widgets_values\":[\"false\",1344,768,\"red\",false,\"16_channels\"]},{\"id\":729,\"type\":\"SetImageSize\",\"pos\":[257.9150695800781,633.8616333007812],\"size\":[210,102],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":[2104,2108],\"slot_index\":0},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":[2105,2109],\"slot_index\":1}],\"title\":\"Inpaint Tile Size\",\"properties\":{\"Node name for S&R\":\"SetImageSize\"},\"widgets_values\":[1024,1024]},{\"id\":1072,\"type\":\"StyleModelApply\",\"pos\":[618.7158813476562,-201.9373016357422],\"size\":[262,122],\"flags\":{\"collapsed\":true},\"order\":15,\"mode\":0,\"inputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"link\":3724},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"link\":4151},{\"name\":\"clip_vision_output\",\"localized_name\":\"clip_vision_output\",\"type\":\"CLIP_VISION_OUTPUT\",\"link\":3720}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[4088,4102],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"StyleModelApply\"},\"widgets_values\":[1,\"multiply\"]},{\"id\":1071,\"type\":\"CLIPVisionEncode\",\"pos\":[618.708251953125,-160.76882934570312],\"size\":[253.60000610351562,78],\"flags\":{\"collapsed\":true},\"order\":13,\"mode\":0,\"inputs\":[{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"link\":4152},{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":3721}],\"outputs\":[{\"name\":\"CLIP_VISION_OUTPUT\",\"localized_name\":\"CLIP_VISION_OUTPUT\",\"type\":\"CLIP_VISION_OUTPUT\",\"links\":[3720],\"slot_index\":0}],\"properties\":{\"Node 
name for S&R\":\"CLIPVisionEncode\"},\"widgets_values\":[\"center\"]},{\"id\":1152,\"type\":\"FluxLoader\",\"pos\":[-1424.1221923828125,-136.28652954101562],\"size\":[315,282],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[4144],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[4150],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[4146],\"slot_index\":2},{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"links\":[4152],\"slot_index\":3},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"links\":[4151],\"slot_index\":4}],\"properties\":{\"Node name for S&R\":\"FluxLoader\"},\"widgets_values\":[\"flux1-dev.sft\",\"fp8_e4m3fn_fast\",\"clip_l_flux.safetensors\",\"t5xxl_fp8_e4m3fn_scaled.safetensors\",\"ae.sft\",\"sigclip_vision_patch14_384.safetensors\",\"flux1-redux-dev.safetensors\"]},{\"id\":1145,\"type\":\"SharkOptions_GuideCond_Beta\",\"pos\":[623.8969116210938,-288.85443115234375],\"size\":[315,98],\"flags\":{\"collapsed\":true},\"order\":18,\"mode\":0,\"inputs\":[{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":4088},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":4087},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[4086,4089],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SharkOptions_GuideCond_Beta\"},\"widgets_values\":[1]},{\"id\":762,\"type\":\"Image Comparer (rgthree)\",\"pos\":[2254.142822265625,-285.88934326171875],\"size\":[402.1800842285156,455.1059875488281],\"flags\":{},\"order\":32,\"mode\":0,\"inputs\":[{\"name\":\"image_a\",\"type\":\"IMAGE\",\"dir\":3,\"link\":2208},{\"name\":\"image_b\",\"type\":\"IMAGE\",\"dir\":3,\"link\":2209}],\"outputs\":[],\"title\":\"Compare Inpaint Patch\",\"properties\":{\"comparer_mode\":\"Slide\"},\"widgets_values\":[[{\"name\":\"A\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_glyrv_00015_.png&type=temp&subfolder=&rand=0.6304345035966803\"},{\"name\":\"B\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_glyrv_00016_.png&type=temp&subfolder=&rand=0.03317535764596258\"}]],\"color\":\"#332922\",\"bgcolor\":\"#593930\"},{\"id\":1073,\"type\":\"CLIPTextEncode\",\"pos\":[618.718017578125,-243.58985900878906],\"size\":[263.280517578125,88.73566436767578],\"flags\":{\"collapsed\":true},\"order\":12,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":4157}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[3724,4087],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"\"]},{\"id\":759,\"type\":\"ImageCompositeMasked\",\"pos\":[2182.82080078125,-351.82415771484375],\"size\":[210,186],\"flags\":{\"collapsed\":true},\"order\":35,\"mode\":0,\"inputs\":[{\"name\":\"destination\",\"localized_name\":\"destination\",\"type\":\"IMAGE\",\"link\":2211},{\"name\":\"source\",\"localized_name\":\"source\",\"type\":\"IMAGE\",\"link\":2198},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":3884},{\"name\":\"x\",\"type\":\"INT\",\"pos\":[10,76],\"widget\":{\"name\":\"x\"},\"link\":2206},{\"name\":\"y\",\"type\":\"INT\",\"pos\":[10,100],\"widget\":{\"name\":\"y\"},\"link\":2207}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2199,2200],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ImageCompositeMasked\"},\"widgets_values\":[712,800,false]},{\"id\":1102,\"type\":\"LoadImage\",\"pos\":[-205.95057678222656,316.025390625],\"size\":[315,314],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[4156],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[4158],\"slot_index\":1}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"clipspace/clipspace-mask-67304674.png [input]\",\"image\"]},{\"id\":725,\"type\":\"Reroute\",\"pos\":[126.9476318359375,319.6999206542969],\"size\":[75,26],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":4156}],\"outputs\":[{\"name\":\"\",\"type\":\"IMAGE\",\"links\":[2210,2211,2338],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1038,\"type\":\"ClownGuides_Beta\",\"pos\":[-491.9494934082031,-334.2093505859375],\"size\":[315,450],\"flags\":{},\"order\":20,\"mode\":0,\"inputs\":[{\"name\":\"guide_masked\",\"localized_name\":\"guide_masked\",\"type\":\"LATENT\",\"shape\":7,\"link\":3602},{\"name\":\"guide_unmasked\",\"localized_name\":\"guide_unmasked\",\"type\":\"LATENT\",\"shape\":7,\"link\":3603},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":3604},{\"name\":\"weights_masked\",\"localized_name\":\"weights_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"weights_unmasked\",\"localized_name\":\"weights_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[4095],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ClownGuides_Beta\"},\"widgets_values\":[\"flow\",false,false,1,1,1,1,\"constant\",\"constant\",0,0,8,8,false],\"color\":\"#2a363b\",\"bgcolor\":\"#3f5159\"},{\"id\":1069,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[1011.0429077148438,-95.05850219726562],\"size\":[315,570],\"flags\":{},\"order\":28,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":3711},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":4155},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":4089},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":4159},{\"name\":\"options 3\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[4104],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0,\"exponential/res_3s\",1,1,\"resample\",true],\"color\":\"#2a363b\",\"bgcolor\":\"#3f5159\"},{\"id\":1066,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[620.0368041992188,-101.93733215332031],\"size\":[340.55120849609375,730],\"flags\":{},\"order\":27,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":3812},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":4102},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":3700},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":4096},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":4086},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3711],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":1},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownsharKSampler_Beta\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[0,\"exponential/res_3s\",\"beta57\",30,7,1,1,0,\"fixed\",\"standard\",false],\"color\":\"#2a363b\",\"bgcolor\":\"#3f5159\"},{\"id\":1070,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[1361.5435791015625,-100.98193359375],\"size\":[315,570],\"flags\":{},\"order\":29,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":4104},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":3832},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[4031],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":1},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0,\"exponential/res_3s\",-1,1,\"resample\",true],\"color\":\"#232\",\"bgcolor\":\"#353\"},{\"id\":1143,\"type\":\"ClownOptions_Cycles_Beta\",\"pos\":[1023.6978149414062,-356.53753662109375],\"size\":[282.6300964355469,202],\"flags\":{},\"order\":4,\"mode\":4,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[4159],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_Cycles_Beta\"},\"widgets_values\":[5,1,0,\"none\",1,1,false],\"color\":\"#2a363b\",\"bgcolor\":\"#3f5159\"},{\"id\":1153,\"type\":\"LoraLoader\",\"pos\":[-1079.3297119140625,-135.3394012451172],\"size\":[315,126],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":4144},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":4150}],\"outputs\":[{\"name\":\"MODEL\",\"localized_name\":\"MODEL\",\"type\":\"MODEL\",\"links\":[4160],\"slot_index\":0},{\"name\":\"CLIP\",\"localized_name\":\"CLIP\",\"type\":\"CLIP\",\"links\":[4149],\"slot_index\":1}],\"properties\":{\"Node name for S&R\":\"LoraLoader\"},\"widgets_values\":[\"FLUX/Raura.safetensors\",1,1]},{\"id\":737,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[-1125.156005859375,-356.12274169921875],\"size\":[260.3999938964844,126],\"flags\":{},\"order\":22,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":4161},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":2125}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[4162],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ModelSamplingAdvancedResolution\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":1149,\"type\":\"ReFluxPatcher\",\"pos\":[-828.3265380859375,-352.2313232421875],\"size\":[210,82],\"flags\":{},\"order\":25,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":4162}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[4163],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ReFluxPatcher\"},\"widgets_values\":[\"float64\",true],\"color\":\"#223\",\"bgcolor\":\"#335\"},{\"id\":1142,\"type\":\"TorchCompileModels\",\"pos\":[-1416.9853515625,-362.2281799316406],\"size\":[210,178],\"flags\":{},\"order\":9,\"mode\":4,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":4160}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[4161],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"TorchCompileModels\"},\"widgets_values\":[\"inductor\",false,\"default\",false,64,0]},{\"id\":1150,\"type\":\"ClownGuide_Style_Beta\",\"pos\":[-140.6088409423828,-331.50213623046875],\"size\":[248.69369506835938,286],\"flags\":{\"collapsed\":false},\"order\":24,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":4097},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":4095}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[4096,4155],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Style_Beta\"},\"widgets_values\":[\"positive\",\"WCT\",1,1,\"constant\",0,-1,false],\"color\":\"#223\",\"bgcolor\":\"#335\"},{\"id\":1088,\"type\":\"ClownGuides_Beta\",\"pos\":[145.70831298828125,-329.6731872558594],\"size\":[315,450],\"flags\":{},\"order\":21,\"mode\":0,\"inputs\":[{\"name\":\"guide_masked\",\"localized_name\":\"guide_masked\",\"type\":\"LATENT\",\"shape\":7,\"link\":3785},{\"name\":\"guide_unmasked\",\"localized_name\":\"guide_unmasked\",\"type\":\"LATENT\",\"shape\":7,\"link\":3786},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":3901},{\"name\":\"weights_masked\",\"localized_name\":\"weights_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"weights_unmasked\",\"localized_name\":\"weights_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[3832],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ClownGuides_Beta\"},\"widgets_values\":[\"inversion\",false,false,0,1,1,1,\"constant\",\"constant\",0,0,30,30,false],\"color\":\"#232\",\"bgcolor\":\"#353\"}],\"links\":[[2100,731,1,726,2,\"FLOAT\"],[2101,726,0,727,0,\"IMAGE\"],[2102,726,0,727,1,\"IMAGE\"],[2103,728,0,727,2,\"IMAGE\"],[2104,729,0,727,5,\"INT\"],[2105,729,1,727,6,\"INT\"],[2106,726,1,728,0,\"MASK\"],[2108,729,0,731,0,\"*\"],[2109,729,1,731,1,\"*\"],[2125,727,3,737,1,\"LATENT\"],[2153,14,0,745,1,\"VAE\"],[2198,758,0,759,1,\"IMAGE\"],[2199,759,0,760,0,\"IMAGE\"],[2200,759,0,761,1,\"IMAGE\"],[2201,745,0,758,0,\"IMAGE\"],[2204,726,5,758,1,\"INT\"],[2205,726,6,758,2,\"INT\"],[2206,726,3,759,3,\"INT\"],[2207,726,4,759,4,\"INT\"],[2208,745,0,762,0,\"IMAGE\"],[2209,726,0,762,1,\"IMAGE\"],[2210,725,0,761,0,\"IMAGE\"],[2211,725,0,759,0,\"IMAGE\"],[2233,727,2,765,0,\"MASK\"],[2241,745,0,744,0,\"IMAGE\"],[2338,725,0,726,0,\"IMAGE\"],[3508,14,0,727,4,\"VAE\"],[3568,745,0,1022,0,\"IMAGE\"],[3569,1022,0,1024,0,\"IMAGE\"],[3570,765,0,1022,1,\"IMAGE\"],[3602,727,0,1038,0,\"LATENT\"],[3603,727,0,1038,1,\"LATENT\"],[3604,727,2,1038,2,\"MASK\"],[3605,728,0,1039,1,\"IMAGE\"],[3606,726,0,1039,0,\"IMAGE\"],[3607,1039,0,1040,0,\"IMAGE\"],[3700,727,0,1066,3,\"LATENT\"],[3711,1066,0,1069,4,\"LATENT\"],[3720,1071,0,1072,2,\"CLIP_VISION_OUTPUT\"],[3721,726,0,1071,1,\"IMAGE\"],[3724,1073,0,1072,0,\"CONDITIONING\"],[3785,727,0,1088,0,\"LATENT\"],[3786,727,0,1088,1,\"LATENT\"],[3812,13,0,1066,0,\"MODEL\"],[3832,1088,0,1070,5,\"GUIDES\"],[3884,726,2,759,2,\"MASK\"],[3901,727,2,1088,2,\"MASK\"],[4031,1070,0,745,0,\"LATENT\"],[4086,1145,0,1066,6,\"OPTIONS\"],[4087,1073,0,1145,1,\"CONDITIONING\"],[4088,1072,0,1145,0,\"CONDITIONING\"],[4089,1145,0,1069,6,\"OPTIONS\"],[4095,1038,0,1150,3,\"GUIDES\"],[4096,1150,0,1066,5,\"GUIDES\"],[4097,727,0,1150,0,\"LATENT\"],[4102,1072,0,1066,1,\"CONDITIONING\"],[4104,1069,0,1070,4,\"LATENT\"],[4144,1152,0,1153,0,\"MODEL\"],[4146,1152,2,14,0,\"*\"],[4149,1153,1,490,0,\"*\"],[4150,1152,1,1153,1,\"CLIP\"],[4151,1152,4,1072,1,\"STYLE_MODEL\"],[4152,1152,3,1071,0,\"CLIP_VISION\"],[4155,1150,0,1069,5,\"GUIDES\"],[4156,1102,0,725,0,\"*\"],[4157,490,0,1073,0,\"CLIP\"],[4158,1102,1,726,1,\"MASK\"],[4159,1143,0,1069,7,\"OPTIONS\"],[4160,1153,0,1142,0,\"MODEL\"],[4161,1142,0,737,0,\"MODEL\"],[4162,737,0,1149,0,\"MODEL\"],[4163,1149,0,13,0,\"*\"]],\"groups\":[{\"id\":1,\"title\":\"Prepare Input\",\"bounding\":[-240.3173828125,230.5765838623047,755.7755737304688,762.867431640625],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":2,\"title\":\"Patch and Stitch\",\"bounding\":[1762.0626220703125,-449.59136962890625,1387.1339111328125,1156.21923828125],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":3,\"title\":\"Loaders\",\"bounding\":[-1451.647216796875,-453.5611877441406,862.5447998046875,635.2009887695312],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":5,\"title\":\"Sampling\",\"bounding\":[565.7752685546875,-449.1409606933594,1147.30712890625,1118.83447265625],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":6,\"title\":\"Guides\",\"bounding\":[-538.6279296875,-451.06854248046875,1052.895263671875,634.7589721679688],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}}],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.351305709310398,\"offset\":[2774.203337270875,600.0170992273368]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"ue_links\":[],\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true},\"version\":0.4}"
  },
  {
    "path": "example_workflows/flux inpaint area.json",
    "content": "{\"last_node_id\":698,\"last_link_id\":1968,\"nodes\":[{\"id\":670,\"type\":\"SaveImage\",\"pos\":[5481.20751953125,763.7216186523438],\"size\":[315,270],\"flags\":{},\"order\":21,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":1883}],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ComfyUI\"]},{\"id\":663,\"type\":\"VAEEncodeAdvanced\",\"pos\":[4030,1370],\"size\":[262.4812927246094,278],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":1957},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":1968}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[1885,1886],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":1},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[],\"slot_index\":2},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[1854,1869],\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\"},\"widgets_values\":[\"false\",1024,1024,\"red\",false,\"16_channels\"]},{\"id\":651,\"type\":\"PreviewImage\",\"pos\":[4060,1710],\"size\":[210,246],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":1963}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"PreviewImage\"},\"widgets_values\":[]},{\"id\":624,\"type\":\"CLIPTextEncode\",\"pos\":[4329.92578125,1015.7978515625],\"size\":[306.2455749511719,162.64158630371094],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1966}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[1860],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"a close up shot of a red coffee mug on a wooden table\"]},{\"id\":346,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[4034.77978515625,820.2175903320312],\"size\":[260.3999938964844,126],\"flags\":{},\"order\":14,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1965},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":1869}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1870],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ModelSamplingAdvancedResolution\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":674,\"type\":\"Note\",\"pos\":[4999.462890625,1603.108642578125],\"size\":[378.7174377441406,179.35989379882812],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"eta is the amount of noise added after each step. It allows the model to change things more aggressively. 
Try comparing 0.0 vs 0.75.\\n\\nres_2m and res_3m will be sufficient quality samplers in most cases. Try res_2s and res_3s (which are 2x and 3x slower) if you want an extra quality boost.\\n\\nYou can get away with fewer than 40 steps in most cases, but 40 gives the model more time to correct any errors. Mileage may vary, experiment!\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":677,\"type\":\"Note\",\"pos\":[3783.440185546875,820.546142578125],\"size\":[210.66668701171875,91.33430480957031],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"I have found these values often work quite well for img2img with the beta57 scheduler.\\n\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":678,\"type\":\"Note\",\"pos\":[3748.428466796875,1012.2677612304688],\"size\":[210,107.33900451660156],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"If you wish to inpaint with another model, just replace the model loader and be sure to change CFG to whatever is appropriate for that model.\\n\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":672,\"type\":\"Note\",\"pos\":[3747.097412109375,1187.65576171875],\"size\":[210,104.00474548339844],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Padding will increase or decrease the amount of area included around your mask, which will give the model more context.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":637,\"type\":\"Note\",\"pos\":[3412.000732421875,1202.6614990234375],\"size\":[280.681884765625,88],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Draw your mask on your image over the area you would like to inpaint.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":658,\"type\":\"Image Comparer (rgthree)\",\"pos\":[5007.734375,1021.2513427734375],\"size\":[450.5037841796875,521.7816162109375],\"flags\":{},\"order\":20,\"mode\":0,\"inputs\":[{\"name\":\"image_a\",\"type\":\"IMAGE\",\"dir\":3,\"link\":1829},{\"name\":\"image_b\",\"type\":\"IMAGE\",\"dir\":3,\"link\":1823}],\"outputs\":[],\"properties\":{\"comparer_mode\":\"Slide\"},\"widgets_values\":[[{\"name\":\"A\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_sxifa_00003_.png&type=temp&subfolder=&rand=0.14849022700275727\"},{\"name\":\"B\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_sxifa_00004_.png&type=temp&subfolder=&rand=0.8022985498723256\"}]]},{\"id\":673,\"type\":\"Note\",\"pos\":[4330.9345703125,1766.158203125],\"size\":[488.01611328125,234.97633361816406],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"The parameters for \\\"masked\\\" will affect your inpainting area. \\n\\nTry changing weight_masked and end_step_masked. Lower values will allow the model to inpaint more aggressively. Higher values will use more information from the original image. \\n\\n   ***   You can think of these like a \\\"denoise\\\" slider!   *** \\n\\n(With lower weight, a lower end_step acts like higher denoise).\\n\\nweight_scheduler_masked will change how quickly the value in weight_masked drops to zero. \\\"constant\\\" will never drop. Try linear_quadratic (drops very gradually, then suddenly at the end) or beta57 (drops earlier). 
These can make the inpainting process a bit smoother.\\n\\nHaving some information from the original image helps the model place objects more accurately, if you are replacing something that is already there.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":617,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[4660,1020],\"size\":[315,690],\"flags\":{},\"order\":15,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":1870},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1860},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1854},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":1884},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1936],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",\"bong_tangent\",40,-1,1,1,17,\"fixed\",\"standard\",true]},{\"id\":619,\"type\":\"VAEDecode\",\"pos\":[4830.248046875,919.5529174804688],\"size\":[140,46],\"flags\":{},\"order\":16,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":1936},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":1967}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1882,1902],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\"},\"widgets_values\":[]},{\"id\":638,\"type\":\"LoadImage\",\"pos\":[3390,1370],\"size\":[315,314],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1828,1829,1955],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[1956],\"slot_index\":1}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"clipspace/clipspace-mask-147694527.20000002.png [input]\",\"image\"]},{\"id\":667,\"type\":\"ImageResize+\",\"pos\":[5008.23974609375,755.5714111328125],\"size\":[210,218],\"flags\":{},\"order\":17,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":1882},{\"name\":\"width\",\"type\":\"INT\",\"pos\":[10,76],\"widget\":{\"name\":\"width\"},\"link\":1949},{\"name\":\"height\",\"type\":\"INT\",\"pos\":[10,100],\"widget\":{\"name\":\"height\"},\"link\":1950}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1876],\"slot_index\":0},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ImageResize+\"},\"widgets_values\":[512,512,\"lanczos\",\"stretch\",\"always\",0]},{\"id\":657,\"type\":\"ImageCompositeMasked\",\"pos\":[5242.94482421875,761.7905883789062],\"size\":[210,186],\"flags\":{},\"order\":19,\"mode\":0,\"inputs\":[{\"name\":\"destination\",\"localized_name\":\"destination\",\"type\":\"IMAGE\",\"link\":1828},{\"name\":\"source\",\"localized_name\":\"source\",\"type\":\"IMAGE\",\"link\":1876},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":1953},{\"name\":\"x\",\"type\":\"INT\",\"pos\":[10,76],\"widget\":{\"name\":\"x\"},\"link\":1952},{\"name\":\"y\",\"type\":\"INT\",\"pos\":[10,100],\"widget\":{\"name\":\"y\"},\"link\":1951}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1823,1883],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ImageCompositeMasked\"},\"widgets_values\":[712,800,false]},{\"id\":650,\"type\":\"MaskPreview\",\"pos\":[3778.59765625,1707.707763671875],\"size\":[181.5970001220703,246],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":1962}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"MaskPreview\"},\"widgets_values\":[]},{\"id\":671,\"type\":\"ClownGuides_Beta\",\"pos\":[4331.12109375,1240.1927490234375],\"size\":[303.2622985839844,450],\"flags\":{},\"order\":13,\"mode\":0,\"inputs\":[{\"name\":\"guide_masked\",\"localized_name\":\"guide_masked\",\"type\":\"LATENT\",\"shape\":7,\"link\":1885},{\"name\":\"guide_unmasked\",\"localized_name\":\"guide_unmasked\",\"type\":\"LATENT\",\"shape\":7,\"link\":1886},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":1954},{\"name\":\"weights_masked\",\"localized_name\":\"weights_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"weights_unmasked\",\"localized_name\":\"weights_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[1884],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuides_Beta\"},\"widgets_values\":[\"epsilon\",false,true,0.5,1,1,1,\"beta57\",\"constant\",0,0,10,-1,false]},{\"id\":679,\"type\":\"Image Comparer (rgthree)\",\"pos\":[5488.171875,1085.4603271484375],\"size\":[402.1800842285156,455.1059875488281],\"flags\":{},\"order\":18,\"mode\":0,\"inputs\":[{\"name\":\"image_a\",\"type\":\"IMAGE\",\"dir\":3,\"link\":1964},{\"name\":\"image_b\",\"type\":\"IMAGE\",\"dir\":3,\"link\":1902}],\"outputs\":[],\"properties\":{\"comparer_mode\":\"Slide\"},\"widgets_values\":[[{\"name\":\"A\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_ejvlo_00001_.png&type=temp&subfolder=&rand=0.5455521700112449\"},{\"name\":\"B\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_ejvlo_00002_.png&type=temp&subfolder=&rand=0.8898066636829509\"}]]},{\"id\":676,\"type\":\"Mask Bounding Box Aspect 
Ratio\",\"pos\":[3742.82421875,1383.6278076171875],\"size\":[252,250],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"shape\":7,\"link\":1955},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":1956}],\"outputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"links\":[1957,1963,1964],\"slot_index\":0},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[1954,1962],\"slot_index\":1},{\"name\":\"mask_blurred\",\"localized_name\":\"mask_blurred\",\"type\":\"MASK\",\"links\":[1953],\"slot_index\":2},{\"name\":\"x\",\"localized_name\":\"x\",\"type\":\"INT\",\"links\":[1952],\"slot_index\":3},{\"name\":\"y\",\"localized_name\":\"y\",\"type\":\"INT\",\"links\":[1951],\"slot_index\":4},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":[1949],\"slot_index\":5},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":[1950],\"slot_index\":6}],\"properties\":{\"Node name for S&R\":\"Mask Bounding Box Aspect Ratio\"},\"widgets_values\":[20,20,1,false]},{\"id\":615,\"type\":\"FluxLoader\",\"pos\":[3992.056396484375,1016.4193725585938],\"size\":[315,282],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1965],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[1966],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[1967,1968],\"slot_index\":2},{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"links\":null},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"FluxLoader\"},\"widgets_values\":[\"colossusProjectFlux_v42AIO.safetensors\",\"default\",\".use_ckpt_clip\",\".none\",\".use_ckpt_vae\",\".none\",\".none\"]}],\"links\":[[1823,657,0,658,1,\"IMAGE\"],[1828,638,0,657,0,\"IMAGE\"],[1829,638,0,658,0,\"IMAGE\"],[1854,663,3,617,3,\"LATENT\"],[1860,624,0,617,1,\"CONDITIONING\"],[1869,663,3,346,1,\"LATENT\"],[1870,346,0,617,0,\"MODEL\"],[1876,667,0,657,1,\"IMAGE\"],[1882,619,0,667,0,\"IMAGE\"],[1883,657,0,670,0,\"IMAGE\"],[1884,671,0,617,5,\"GUIDES\"],[1885,663,0,671,0,\"LATENT\"],[1886,663,0,671,1,\"LATENT\"],[1902,619,0,679,1,\"IMAGE\"],[1936,617,0,619,0,\"LATENT\"],[1949,676,5,667,1,\"INT\"],[1950,676,6,667,2,\"INT\"],[1951,676,4,657,4,\"INT\"],[1952,676,3,657,3,\"INT\"],[1953,676,2,657,2,\"MASK\"],[1954,676,1,671,2,\"MASK\"],[1955,638,0,676,0,\"IMAGE\"],[1956,638,1,676,1,\"MASK\"],[1957,676,0,663,0,\"IMAGE\"],[1962,676,1,650,0,\"MASK\"],[1963,676,0,651,0,\"IMAGE\"],[1964,676,0,679,0,\"IMAGE\"],[1965,615,0,346,0,\"MODEL\"],[1966,615,1,624,0,\"CLIP\"],[1967,615,2,619,1,\"VAE\"],[1968,615,2,663,4,\"VAE\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.4864362802414468,\"offset\":[-1333.621998147027,-469.4579733585599]},\"node_versions\":{\"comfy-core\":\"0.3.26\",\"comfyui_controlnet_aux\":\"1e9eac6377c882da8bb360c7544607036904362c\",\"ComfyUI-VideoHelperSuite\":\"c36626c6028faca912eafcedbc71f1d342fb4d2a\"},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true},\"version\":0.4}"
  },
  {
    "path": "example_workflows/flux inpaint bongmath.json",
    "content": "{\"last_node_id\":1057,\"last_link_id\":3666,\"nodes\":[{\"id\":758,\"type\":\"ImageResize+\",\"pos\":[1304.9573974609375,-352.7953796386719],\"size\":[210,218],\"flags\":{\"collapsed\":true},\"order\":24,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":2201},{\"name\":\"width\",\"type\":\"INT\",\"pos\":[10,76],\"widget\":{\"name\":\"width\"},\"link\":2204},{\"name\":\"height\",\"type\":\"INT\",\"pos\":[10,100],\"widget\":{\"name\":\"height\"},\"link\":2205}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2198],\"slot_index\":0},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ImageResize+\"},\"widgets_values\":[512,512,\"lanczos\",\"stretch\",\"always\",0]},{\"id\":759,\"type\":\"ImageCompositeMasked\",\"pos\":[1494.957763671875,-352.7953796386719],\"size\":[210,186],\"flags\":{\"collapsed\":true},\"order\":28,\"mode\":0,\"inputs\":[{\"name\":\"destination\",\"localized_name\":\"destination\",\"type\":\"IMAGE\",\"link\":2211},{\"name\":\"source\",\"localized_name\":\"source\",\"type\":\"IMAGE\",\"link\":2198},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":2301},{\"name\":\"x\",\"type\":\"INT\",\"pos\":[10,76],\"widget\":{\"name\":\"x\"},\"link\":2206},{\"name\":\"y\",\"type\":\"INT\",\"pos\":[10,100],\"widget\":{\"name\":\"y\"},\"link\":2207}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2199,2200],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ImageCompositeMasked\"},\"widgets_values\":[712,800,false]},{\"id\":13,\"type\":\"Reroute\",\"pos\":[-792.117919921875,-60.3060188293457],\"size\":[75,26],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1964}],\"outputs\":[{\"name\":\"\",\"type\":\"MODEL\",\"links\":[2317],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":490,\"type\":\"Reroute\",\"pos\":[-792.117919921875,-20.30602264404297],\"size\":[75,26],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1965}],\"outputs\":[{\"name\":\"\",\"type\":\"CLIP\",\"links\":[3656],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":737,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[-972.117919921875,-330.3060302734375],\"size\":[260.3999938964844,126],\"flags\":{},\"order\":19,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":2318},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":2125}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[3661],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ModelSamplingAdvancedResolution\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":786,\"type\":\"TorchCompileModels\",\"pos\":[-1262.117919921875,-360.3060302734375],\"size\":[256.248779296875,178],\"flags\":{},\"order\":13,\"mode\":4,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":2317}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2318],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"TorchCompileModels\"},\"widgets_values\":[\"inductor\",false,\"default\",false,64,0]},{\"id\":664,\"type\":\"ReFluxPatcher\",\"pos\":[-857.81005859375,-103.69645690917969],\"size\":[210,82],\"flags\":{\"collapsed\":true},\"order\":5,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1963}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1964],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ReFluxPatcher\"},\"widgets_values\":[\"float64\",true]},{\"id\":14,\"type\":\"Reroute\",\"pos\":[-792.117919921875,19.69397735595703],\"size\":[75,26],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1966}],\"outputs\":[{\"name\":\"\",\"type\":\"VAE\",\"links\":[2153,3508],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":745,\"type\":\"VAEDecode\",\"pos\":[1136.7379150390625,-350.8069152832031],\"size\":[140,46],\"flags\":{\"collapsed\":true},\"order\":23,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":3665},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":2153}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2201,2208,2241,3568],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[]},{\"id\":663,\"type\":\"FluxLoader\",\"pos\":[-1262.117919921875,-130.3060302734375],\"size\":[374.41741943359375,282],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1963],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[1965],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[1966],\"slot_index\":2},{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"links\":[],\"slot_index\":3},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"links\":[],\"slot_index\":4}],\"properties\":{\"Node name for S&R\":\"FluxLoader\"},\"widgets_values\":[\"colossusProjectFlux_v42AIO.safetensors\",\"fp8_e4m3fn_fast\",\".use_ckpt_clip\",\".none\",\".use_ckpt_vae\",\"sigclip_vision_patch14_384.safetensors\",\"flux1-redux-dev.safetensors\"]},{\"id\":726,\"type\":\"Mask Bounding Box Aspect 
Ratio\",\"pos\":[-153.93637084960938,317.7193298339844],\"size\":[252,250],\"flags\":{\"collapsed\":false},\"order\":12,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2338},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":3659},{\"name\":\"aspect_ratio\",\"type\":\"FLOAT\",\"pos\":[10,204],\"widget\":{\"name\":\"aspect_ratio\"},\"link\":2100}],\"outputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"links\":[2101,2102,2209,3606],\"slot_index\":0},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[2106],\"slot_index\":1},{\"name\":\"mask_blurred\",\"localized_name\":\"mask_blurred\",\"type\":\"MASK\",\"links\":[2301],\"slot_index\":2},{\"name\":\"x\",\"localized_name\":\"x\",\"type\":\"INT\",\"links\":[2206],\"slot_index\":3},{\"name\":\"y\",\"localized_name\":\"y\",\"type\":\"INT\",\"links\":[2207],\"slot_index\":4},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":[2204],\"slot_index\":5},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":[2205],\"slot_index\":6}],\"properties\":{\"Node name for S&R\":\"Mask Bounding Box Aspect Ratio\"},\"widgets_values\":[300,40,1.75,false]},{\"id\":728,\"type\":\"MaskToImage\",\"pos\":[-151.61874389648438,849.1907348632812],\"size\":[176.39999389648438,26],\"flags\":{\"collapsed\":true},\"order\":14,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":2106}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2103,3605],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"MaskToImage\"},\"widgets_values\":[]},{\"id\":762,\"type\":\"Image Comparer (rgthree)\",\"pos\":[1584.957763671875,-282.7954406738281],\"size\":[402.1800842285156,455.1059875488281],\"flags\":{},\"order\":25,\"mode\":0,\"inputs\":[{\"name\":\"image_a\",\"type\":\"IMAGE\",\"dir\":3,\"link\":2208},{\"name\":\"image_b\",\"type\":\"IMAGE\",\"dir\":3,\"link\":2209}],\"outputs\":[],\"title\":\"Compare Inpaint Patch\",\"properties\":{\"comparer_mode\":\"Slide\"},\"widgets_values\":[[{\"name\":\"A\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_hkrer_00001_.png&type=temp&subfolder=&rand=0.04538135261092524\"},{\"name\":\"B\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_hkrer_00002_.png&type=temp&subfolder=&rand=0.5206493331921973\"}]],\"color\":\"#332922\",\"bgcolor\":\"#593930\"},{\"id\":765,\"type\":\"MaskToImage\",\"pos\":[2025.24755859375,225.29702758789062],\"size\":[182.28543090820312,26],\"flags\":{\"collapsed\":true},\"order\":17,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":2233}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3570],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"MaskToImage\"},\"widgets_values\":[]},{\"id\":1022,\"type\":\"ImageBlend\",\"pos\":[2028.46533203125,274.42523193359375],\"size\":[210,102],\"flags\":{\"collapsed\":true},\"order\":27,\"mode\":0,\"inputs\":[{\"name\":\"image1\",\"localized_name\":\"image1\",\"type\":\"IMAGE\",\"link\":3568},{\"name\":\"image2\",\"localized_name\":\"image2\",\"type\":\"IMAGE\",\"link\":3570}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3569],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ImageBlend\"},\"widgets_values\":[0.5,\"overlay\"]},{\"id\":1024,\"type\":\"PreviewImage\",\"pos\":[2025.259765625,-279.3157958984375],\"size\":[413.7582092285156,445.8081359863281],\"flags\":{},\"order\":29,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":3569}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"PreviewImage\"},\"widgets_values\":[],\"color\":\"#332922\",\"bgcolor\":\"#593930\"},{\"id\":760,\"type\":\"SaveImage\",\"pos\":[1124.9569091796875,217.20452880859375],\"size\":[418.26055908203125,456.04608154296875],\"flags\":{},\"order\":30,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2199}],\"outputs\":[],\"title\":\"Save Output\",\"properties\":{},\"widgets_values\":[\"ComfyUI\"],\"color\":\"#232\",\"bgcolor\":\"#353\"},{\"id\":761,\"type\":\"Image Comparer (rgthree)\",\"pos\":[1574.957763671875,227.20449829101562],\"size\":[410.4466247558594,447.8973388671875],\"flags\":{},\"order\":31,\"mode\":0,\"inputs\":[{\"name\":\"image_a\",\"type\":\"IMAGE\",\"dir\":3,\"link\":2210},{\"name\":\"image_b\",\"type\":\"IMAGE\",\"dir\":3,\"link\":2200}],\"outputs\":[],\"title\":\"Compare Output\",\"properties\":{\"comparer_mode\":\"Slide\"},\"widgets_values\":[[{\"name\":\"A\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_eoplx_00001_.png&type=temp&subfolder=&rand=0.7495673665351654\"},{\"name\":\"B\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_eoplx_00002_.png&type=temp&subfolder=&rand=0.17529967707052396\"}]],\"color\":\"#232\",\"bgcolor\":\"#353\"},{\"id\":744,\"type\":\"SaveImage\",\"pos\":[1124.9569091796875,-292.7954406738281],\"size\":[424.53594970703125,455.0760192871094],\"flags\":{},\"order\":26,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2241}],\"outputs\":[],\"title\":\"Save Patch\",\"properties\":{\"Node name for S&R\":\"SaveImage\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"ComfyUI\"],\"color\":\"#332922\",\"bgcolor\":\"#593930\"},{\"id\":725,\"type\":\"Reroute\",\"pos\":[-243.93637084960938,317.7193298339844],\"size\":[75,26],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3658}],\"outputs\":[{\"name\":\"\",\"type\":\"IMAGE\",\"links\":[2210,2211,2338],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1040,\"type\":\"PreviewImage\",\"pos\":[-566.879150390625,688.4552001953125],\"size\":[304.98114013671875,265.58380126953125],\"flags\":{},\"order\":20,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":3607}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"PreviewImage\"},\"widgets_values\":[]},{\"id\":1039,\"type\":\"ImageBlend\",\"pos\":[-151.61874389648438,949.1907348632812],\"size\":[210,102],\"flags\":{\"collapsed\":true},\"order\":16,\"mode\":0,\"inputs\":[{\"name\":\"image1\",\"localized_name\":\"image1\",\"type\":\"IMAGE\",\"link\":3606},{\"name\":\"image2\",\"localized_name\":\"image2\",\"type\":\"IMAGE\",\"link\":3605}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3607],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ImageBlend\"},\"widgets_values\":[0.5,\"overlay\"]},{\"id\":731,\"type\":\"SimpleMath+\",\"pos\":[-151.61874389648438,799.1907348632812],\"size\":[315,98],\"flags\":{\"collapsed\":true},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"a\",\"localized_name\":\"a\",\"type\":\"*\",\"shape\":7,\"link\":2108},{\"name\":\"b\",\"localized_name\":\"b\",\"type\":\"*\",\"shape\":7,\"link\":2109},{\"name\":\"c\",\"localized_name\":\"c\",\"type\":\"*\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"INT\",\"localized_name\":\"INT\",\"type\":\"INT\",\"links\":null},{\"name\":\"FLOAT\",\"localized_name\":\"FLOAT\",\"type\":\"FLOAT\",\"links\":[2100],\"slot_index\":1}],\"properties\":{\"Node name for S&R\":\"SimpleMath+\"},\"widgets_values\":[\"a/b\"]},{\"id\":729,\"type\":\"SetImageSize\",\"pos\":[-152.42828369140625,628.09228515625],\"size\":[210,102],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":[2104,2108],\"slot_index\":0},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":[2105,2109],\"slot_index\":1}],\"title\":\"Inpaint Tile Size\",\"properties\":{\"Node name for S&R\":\"SetImageSize\"},\"widgets_values\":[1024,1024]},{\"id\":727,\"type\":\"VAEEncodeAdvanced\",\"pos\":[-151.61874389648438,899.1907348632812],\"size\":[262.4812927246094,298],\"flags\":{\"collapsed\":true},\"order\":15,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2101},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2102},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2103},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":3508},{\"name\":\"width\",\"type\":\"INT\",\"pos\":[10,160],\"widget\":{\"name\":\"width\"},\"link\":2104},{\"name\":\"height\",\"type\":\"INT\",\"pos\":[10,184],\"widget\":{\"name\":\"height\"},\"link\":2105}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[3602,3603,3611],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":1},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[2233,3604],\"slot_index\":2},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[2125,3660],\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":[],\"slot_index\":4},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":[]}],\"properties\":{\"Node name for 
S&R\":\"VAEEncodeAdvanced\"},\"widgets_values\":[\"false\",1344,768,\"red\",false,\"16_channels\"]},{\"id\":1038,\"type\":\"ClownGuides_Beta\",\"pos\":[-570,-350],\"size\":[315,450],\"flags\":{},\"order\":18,\"mode\":0,\"inputs\":[{\"name\":\"guide_masked\",\"localized_name\":\"guide_masked\",\"type\":\"LATENT\",\"shape\":7,\"link\":3602},{\"name\":\"guide_unmasked\",\"localized_name\":\"guide_unmasked\",\"type\":\"LATENT\",\"shape\":7,\"link\":3603},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":3604},{\"name\":\"weights_masked\",\"localized_name\":\"weights_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"weights_unmasked\",\"localized_name\":\"weights_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[3609,3641],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuides_Beta\"},\"widgets_values\":[\"inversion\",false,false,0,1,1,1,\"constant\",\"constant\",0,0,1,-1,false]},{\"id\":1055,\"type\":\"LoadImage\",\"pos\":[-588.5657958984375,310.53521728515625],\"size\":[315,314],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3658],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[3659],\"slot_index\":1}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"clipspace/clipspace-mask-264573735.png [input]\",\"image\"]},{\"id\":1041,\"type\":\"ClownGuide_Style_Beta\",\"pos\":[-210,-350],\"size\":[315,286],\"flags\":{},\"order\":21,\"mode\":4,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":3611},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":3609}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Style_Beta\"},\"widgets_values\":[\"positive\",\"WCT\",1,1,\"constant\",0,10,false]},{\"id\":1056,\"type\":\"CLIPTextEncode\",\"pos\":[251.24851989746094,-166.23118591308594],\"size\":[311.10028076171875,154.46998596191406],\"flags\":{\"collapsed\":false},\"order\":11,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":3656}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[3662],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"a soviet T72 tank driving down the middle of a road in a city, crossing over the crosswalk, aiming its gun at the 
camera\"]},{\"id\":1043,\"type\":\"ClownOptions_SDE_Beta\",\"pos\":[249.46791076660156,47.537593841552734],\"size\":[315,266],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[{\"name\":\"etas\",\"localized_name\":\"etas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"etas_substep\",\"localized_name\":\"etas_substep\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[3643],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_SDE_Beta\"},\"widgets_values\":[\"gaussian\",\"gaussian\",\"hard\",\"hard\",0.5,0.75,-1,\"fixed\"]},{\"id\":1018,\"type\":\"ClownOptions_ImplicitSteps_Beta\",\"pos\":[611.24853515625,-371.7803649902344],\"size\":[340.20001220703125,130],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[3664],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_ImplicitSteps_Beta\"},\"widgets_values\":[\"bongmath\",\"bongmath\",2,0]},{\"id\":1053,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[611.24853515625,-181.78033447265625],\"size\":[340.55120849609375,730],\"flags\":{},\"order\":22,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":3661},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":3662},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":3660},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":3641},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":3664},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":3643},{\"name\":\"options 3\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3665],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":1},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownsharKSampler_Beta\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[0.5,\"exponential/res_2s\",\"beta57\",30,-1,1,1,0,\"fixed\",\"standard\",true]}],\"links\":[[1963,663,0,664,0,\"MODEL\"],[1964,664,0,13,0,\"*\"],[1965,663,1,490,0,\"*\"],[1966,663,2,14,0,\"*\"],[2100,731,1,726,2,\"FLOAT\"],[2101,726,0,727,0,\"IMAGE\"],[2102,726,0,727,1,\"IMAGE\"],[2103,728,0,727,2,\"IMAGE\"],[2104,729,0,727,5,\"INT\"],[2105,729,1,727,6,\"INT\"],[2106,726,1,728,0,\"MASK\"],[2108,729,0,731,0,\"*\"],[2109,729,1,731,1,\"*\"],[2125,727,3,737,1,\"LATENT\"],[2153,14,0,745,1,\"VAE\"],[2198,758,0,759,1,\"IMAGE\"],[2199,759,0,760,0,\"IMAGE\"],[2200,759,0,761,1,\"IMAGE\"],[2201,745,0,758,0,\"IMAGE\"],[2204,726,5,758,1,\"INT\"],[2205,726,6,758,2,\"INT\"],[2206,726,3,759,3,\"INT\"],[2207,726,4,759,4,\"INT\"],[2208,745,0,762,0,\"IMAGE\"],[2209,726,0,762,1,\"IMAGE\"],[2210,725,0,761,0,\"IMAGE\"],[2211,725,0,759,0,\"IMAGE\"],[2233,727,2,765,0,\"MASK\"],[2241,745,0,744,0,\"IMAGE\"],[2301,726,2,759,2,\"MASK\"],[2317,13,0,786,0,\"MODEL\"],[2318,786,0,737,0,\"MODEL\"],[2338,725,0,726,0,\"IMAGE\"],[3508,14,0,727,4,\"VAE\"],[3568,745,0,1022,0,\"IMAGE\"],[3569,1022,0,1024,0,\"IMAGE\"],[3570,765,0,1022,1,\"IMAGE\"],[3602,727,0,1038,0,\"LATENT\"],[3603,727,0,1038,1,\"LATENT\"],[3604,727,2,1038,2,\"MASK\"],[3605,728,0,1039,1,\"IMAGE\"],[3606,726,0,1039,0,\"IMAGE\"],[3607,1039,0,1040,0,\"IMAGE\"],[3609,1038,0,1041,3,\"GUIDES\"],[3611,727,0,1041,0,\"LATENT\"],[3641,1038,0,1053,5,\"GUIDES\"],[3643,1043,0,1053,7,\"OPTIONS\"],[3656,490,0,1056,0,\"CLIP\"],[3658,1055,0,725,0,\"*\"],[3659,1055,1,726,1,\"MASK\"],[3660,727,3,1053,3,\"LATENT\"],[3661,737,0,1053,0,\"MODEL\"],[3662,1056,0,1053,1,\"CONDITIONING\"],[3664,1018,0,1053,6,\"OPTIONS\"],[3665,1053,0,745,0,\"LATENT\"]],\"groups\":[{\"id\":1,\"title\":\"Prepare Input\",\"bounding\":[-611.2013549804688,224.80706787109375,755.7755737304688,762.867431640625],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":2,\"title\":\"Patch and Stitch\",\"bounding\":[1079.80078125,-451.0775451660156,1387.1339111328125,1156.21923828125],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":3,\"title\":\"Loaders\",\"bounding\":[-1311.103515625,-459.84735107421875,645.1646118164062,640.0969848632812],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":5,\"title\":\"Sampling\",\"bounding\":[204.55885314941406,-455.63134765625,812.3118896484375,1071.2481689453125],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":6,\"title\":\"Guides\",\"bounding\":[-611.8231811523438,-457.95751953125,755.8380737304688,634.3353271484375],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}}],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.3072020475058177,\"offset\":[3303.9392897394673,741.4045019633804]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"ue_links\":[],\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true},\"version\":0.4}"
  },
  {
    "path": "example_workflows/flux inpainting.json",
    "content": "{\"last_node_id\":637,\"last_link_id\":1778,\"nodes\":[{\"id\":617,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[4647.0654296875,1012.7097778320312],\"size\":[315,690],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":1730},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1754},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1733},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":1744},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1756],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",\"beta57\",40,30,1,1,15,\"fixed\",\"standard\",true]},{\"id\":619,\"type\":\"VAEDecode\",\"pos\":[5354.6103515625,907.4140014648438],\"size\":[210,46],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":1771},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":1740}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1765],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\"},\"widgets_values\":[]},{\"id\":631,\"type\":\"SaveImage\",\"pos\":[5357.8349609375,1012.29443359375],\"size\":[315,270],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":1765}],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ComfyUI\"]},{\"id\":624,\"type\":\"CLIPTextEncode\",\"pos\":[4233.03955078125,1015.2553100585938],\"size\":[380.6268615722656,114.73346710205078],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1753}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[1754],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"a weird alien tripod with a purple woman's head on top \"]},{\"id\":615,\"type\":\"FluxLoader\",\"pos\":[3883.31982421875,1018.0260620117188],\"size\":[315,282],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1766],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[1753],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[1723,1740],\"slot_index\":2},{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"links\":null},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"FluxLoader\"},\"widgets_values\":[\"colossusProjectFlux_v42AIO.safetensors\",\"default\",\".use_ckpt_clip\",\".none\",\".use_ckpt_vae\",\".none\",\".none\"]},{\"id\":346,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[3940.993408203125,831.2357177734375],\"size\":[260.3999938964844,126],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1766},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":1721}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1730],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ModelSamplingAdvancedResolution\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":620,\"type\":\"ClownGuide_Beta\",\"pos\":[4355.02392578125,1383.0733642578125],\"size\":[264.49530029296875,290],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":1767},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":1745},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[1744],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"flow\",false,false,1,1,\"constant\",0,40,false]},{\"id\":626,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[4988.4580078125,1015.6370239257812],\"size\":[340.20001220703125,509.99993896484375],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1756},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":1770},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1771],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":1},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",-1,1,\"resample\",true]},{\"id\":422,\"type\":\"VAEEncodeAdvanced\",\"pos\":[4080.7021484375,1383.7640380859375],\"size\":[240.29074096679688,278],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":1777},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":1723}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[1767,1772],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":1},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[],\"slot_index\":2},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[1721,1733],\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\"},\"widgets_values\":[\"false\",1024,1024,\"red\",false,\"16_channels\"]},{\"id\":627,\"type\":\"ClownGuide_Beta\",\"pos\":[4701.61572265625,1776.4569091796875],\"size\":[264.49530029296875,290],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":1772},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":1778},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[1770],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"flow\",false,false,1,1,\"constant\",0,40,false]},{\"id\":634,\"type\":\"GrowMask\",\"pos\":[4102.16650390625,1794.78857421875],\"size\":[210,82],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":1774}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[1778],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"GrowMask\"},\"widgets_values\":[20,false]},{\"id\":621,\"type\":\"LoadImage\",\"pos\":[3718.762939453125,1384.687255859375],\"size\":[319.33538818359375,313.277587890625],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1777],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[1745,1774],\"slot_index\":1}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"clipspace/clipspace-mask-150185841.8.png [input]\",\"image\"]},{\"id\":637,\"type\":\"Note\",\"pos\":[3731.639892578125,1771.010009765625],\"size\":[282.0154113769531,88],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Draw your mask on your image for the area you would like to 
inpaint.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"}],\"links\":[[1721,422,3,346,1,\"LATENT\"],[1723,615,2,422,4,\"VAE\"],[1730,346,0,617,0,\"MODEL\"],[1733,422,3,617,3,\"LATENT\"],[1740,615,2,619,1,\"VAE\"],[1744,620,0,617,5,\"GUIDES\"],[1745,621,1,620,1,\"MASK\"],[1753,615,1,624,0,\"CLIP\"],[1754,624,0,617,1,\"CONDITIONING\"],[1756,617,0,626,4,\"LATENT\"],[1765,619,0,631,0,\"IMAGE\"],[1766,615,0,346,0,\"MODEL\"],[1767,422,0,620,0,\"LATENT\"],[1770,627,0,626,5,\"GUIDES\"],[1771,626,0,619,0,\"LATENT\"],[1772,422,0,627,0,\"LATENT\"],[1774,621,1,634,0,\"MASK\"],[1777,621,0,422,0,\"IMAGE\"],[1778,634,0,627,1,\"MASK\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.3109994191500227,\"offset\":[-1810.8840558767379,-650.1028379746496]},\"node_versions\":{\"comfy-core\":\"0.3.26\",\"comfyui_controlnet_aux\":\"1e9eac6377c882da8bb360c7544607036904362c\",\"ComfyUI-VideoHelperSuite\":\"c36626c6028faca912eafcedbc71f1d342fb4d2a\"},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true},\"version\":0.4}"
  },
  {
    "path": "example_workflows/flux regional antiblur.json",
    "content": "{\"last_node_id\":723,\"last_link_id\":2096,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[1280,-650],\"size\":[75,26],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1964}],\"outputs\":[{\"name\":\"\",\"type\":\"MODEL\",\"links\":[1967],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":490,\"type\":\"Reroute\",\"pos\":[1280,-610],\"size\":[75,26],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1965}],\"outputs\":[{\"name\":\"\",\"type\":\"CLIP\",\"links\":[1939,2092],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":14,\"type\":\"Reroute\",\"pos\":[1280,-570],\"size\":[75,26],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1966}],\"outputs\":[{\"name\":\"\",\"type\":\"VAE\",\"links\":[18,1328],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":398,\"type\":\"SaveImage\",\"pos\":[1379.9996337890625,-267.2835998535156],\"size\":[341.7508850097656,561.0067749023438],\"flags\":{},\"order\":20,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":1329}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"SaveImage\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"ComfyUI\"]},{\"id\":701,\"type\":\"Note\",\"pos\":[80,-520],\"size\":[342.05950927734375,88],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"I usually just lazily draw masks in Load Image nodes (with some random image loaded), but for the sake of reproducibility, here's another approach.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":712,\"type\":\"Note\",\"pos\":[-210,-520],\"size\":[245.76409912109375,91.6677017211914],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"So long as these masks are all the same size, the regional conditioning nodes will handle resizing to the image size for you.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":676,\"type\":\"InvertMask\",\"pos\":[20,-370],\"size\":[142.42074584960938,26],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":2073}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[2083],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"InvertMask\"},\"widgets_values\":[]},{\"id\":663,\"type\":\"FluxLoader\",\"pos\":[630,-720],\"size\":[374.41741943359375,282],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1963],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[1965],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[1966],\"slot_index\":2},{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"links\":[],\"slot_index\":3},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"links\":[],\"slot_index\":4}],\"properties\":{\"Node name for 
S&R\":\"FluxLoader\"},\"widgets_values\":[\"colossusProjectFlux_v42AIO.safetensors\",\"fp8_e4m3fn_fast\",\".use_ckpt_clip\",\".none\",\".use_ckpt_vae\",\".none\",\".none\"]},{\"id\":662,\"type\":\"CLIPTextEncode\",\"pos\":[460,-370],\"size\":[210,88],\"flags\":{\"collapsed\":false},\"order\":12,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1939}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2094],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"a woman wearing a red flannel shirt and a cute shark plush blue hat\"]},{\"id\":723,\"type\":\"CLIPTextEncode\",\"pos\":[460,-240],\"size\":[210,88],\"flags\":{\"collapsed\":false},\"order\":13,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":2092}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2093],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"a college campus\"]},{\"id\":7,\"type\":\"VAEEncodeAdvanced\",\"pos\":[719.6110229492188,16.752899169921875],\"size\":[261.2217712402344,279.3136901855469],\"flags\":{},\"order\":14,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":18}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":1},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[],\"slot_index\":2},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[1399],\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[\"false\",1024,1024,\"red\",false,\"16_channels\"]},{\"id\":710,\"type\":\"MaskPreview\",\"pos\":[180,-190],\"size\":[210,246],\"flags\":{},\"order\":16,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":2054}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"MaskPreview\"},\"widgets_values\":[]},{\"id\":664,\"type\":\"ReFluxPatcher\",\"pos\":[1040,-720],\"size\":[210,82],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1963}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1964],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ReFluxPatcher\"},\"widgets_values\":[\"float64\",true]},{\"id\":397,\"type\":\"VAEDecode\",\"pos\":[1382.3662109375,-374.17059326171875],\"size\":[210,46],\"flags\":{},\"order\":19,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":2096},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":1328}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1329],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[]},{\"id\":715,\"type\":\"SolidMask\",\"pos\":[-220,-370],\"size\":[210,106],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[2073],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SolidMask\"},\"widgets_values\":[1,1024,1024]},{\"id\":716,\"type\":\"SolidMask\",\"pos\":[-220,-220],\"size\":[210,106],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[2065],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SolidMask\"},\"widgets_values\":[1,384,864]},{\"id\":709,\"type\":\"MaskComposite\",\"pos\":[190,-370],\"size\":[210,126],\"flags\":{},\"order\":15,\"mode\":0,\"inputs\":[{\"name\":\"destination\",\"localized_name\":\"destination\",\"type\":\"MASK\",\"link\":2083},{\"name\":\"source\",\"localized_name\":\"source\",\"type\":\"MASK\",\"link\":2065}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[2054,2091],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"MaskComposite\"},\"widgets_values\":[256,160,\"add\"]},{\"id\":704,\"type\":\"Note\",\"pos\":[101.74818420410156,112.67951965332031],\"size\":[290.7107238769531,155.35317993164062],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ClownRegionalConditionings:\\n\\nTry raising or lowering weight, and changing the weight scheduler from beta57 to Karras (weakens more quickly), or to linear quadratic (stronger late).\\n\\nTry changing region_bleed_start_step (earlier will make the image blend together more), and 
end_step.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":401,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1010,-370],\"size\":[340.55120849609375,666.8208618164062],\"flags\":{},\"order\":18,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":1967},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2095},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1399},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2096],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[0.5,\"multistep/res_2m\",\"bong_tangent\",30,-1,1,1,3,\"fixed\",\"standard\",true]},{\"id\":722,\"type\":\"ClownRegionalConditioning2\",\"pos\":[690,-370],\"size\":[287.75750732421875,330],\"flags\":{},\"order\":17,\"mode\":0,\"inputs\":[{\"name\":\"conditioning_masked\",\"localized_name\":\"conditioning_masked\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2094},{\"name\":\"conditioning_unmasked\",\"localized_name\":\"conditioning_unmasked\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2093},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":2091},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"region_bleeds\",\"localized_name\":\"region_bleeds\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"links\":[2095],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownRegionalConditioning2\"},\"widgets_values\":[1,0,0,\"constant\",0,-1,\"boolean_masked\",32,false]},{\"id\":703,\"type\":\"Note\",\"pos\":[423.10699462890625,-96.14085388183594],\"size\":[241.9689483642578,386.7543640136719],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"edge_width also creates some overlap around the edges of the mask.\\n\\nboolean_masked means that the masked area can \\\"see\\\" the rest of the image, but the unmasked area cannot. \\\"boolean\\\" would mean neither area could see the rest of the image.\\n\\nTry setting to boolean_unmasked and see what happens!\\n\\nIf you still have blur, try reducing edge_width (and if you have seams, try increasing it, or setting end_step to something like 20). \\n\\nAlso verify that you can generate the background prompt alone without blur (if you can't, this won't work). 
And don't get stuck on one seed.\\n\\nVaguely human-shaped masks also tend to work better than the blocky one used here.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"}],\"links\":[[18,14,0,7,4,\"VAE\"],[1328,14,0,397,1,\"VAE\"],[1329,397,0,398,0,\"IMAGE\"],[1399,7,3,401,3,\"LATENT\"],[1939,490,0,662,0,\"CLIP\"],[1963,663,0,664,0,\"MODEL\"],[1964,664,0,13,0,\"*\"],[1965,663,1,490,0,\"*\"],[1966,663,2,14,0,\"*\"],[1967,13,0,401,0,\"MODEL\"],[2054,709,0,710,0,\"MASK\"],[2065,716,0,709,1,\"MASK\"],[2073,715,0,676,0,\"MASK\"],[2083,676,0,709,0,\"MASK\"],[2091,709,0,722,2,\"MASK\"],[2092,490,0,723,0,\"CLIP\"],[2093,723,0,722,1,\"CONDITIONING\"],[2094,662,0,722,0,\"CONDITIONING\"],[2095,722,0,401,1,\"CONDITIONING\"],[2096,401,0,397,0,\"LATENT\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.91943424957756,\"offset\":[1680.6010824178522,841.7668875984083]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"ue_links\":[],\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true},\"version\":0.4}"
  },
  {
    "path": "example_workflows/flux regional redux (2 zone).json",
    "content": "{\"last_node_id\":704,\"last_link_id\":2042,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[1300,-790],\"size\":[75,26],\"flags\":{},\"order\":16,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1964}],\"outputs\":[{\"name\":\"\",\"type\":\"MODEL\",\"links\":[1967],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":490,\"type\":\"Reroute\",\"pos\":[1300,-750],\"size\":[75,26],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1965}],\"outputs\":[{\"name\":\"\",\"type\":\"CLIP\",\"links\":[1706,1939],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":541,\"type\":\"CLIPTextEncode\",\"pos\":[692.1508178710938,183.7528839111328],\"size\":[265.775390625,113.01970672607422],\"flags\":{},\"order\":17,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1706}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[1732],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"blurry, out of focus, shallow depth of field, low quality, bad quality, low detail, mutated, jpeg artifacts, compression artifacts,\"]},{\"id\":14,\"type\":\"Reroute\",\"pos\":[1300,-710],\"size\":[75,26],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1966}],\"outputs\":[{\"name\":\"\",\"type\":\"VAE\",\"links\":[18,1328],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":397,\"type\":\"VAEDecode\",\"pos\":[1403.6392822265625,-371.9699401855469],\"size\":[210,46],\"flags\":{},\"order\":31,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":1988},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":1328}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1329],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[]},{\"id\":680,\"type\":\"Reroute\",\"pos\":[1310,-660],\"size\":[75,26],\"flags\":{},\"order\":13,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":2001}],\"outputs\":[{\"name\":\"\",\"type\":\"CLIP_VISION\",\"links\":[2004,2009]}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":678,\"type\":\"StyleModelApply\",\"pos\":[101.3630142211914,-560.2020874023438],\"size\":[262,122],\"flags\":{},\"order\":24,\"mode\":0,\"inputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"link\":2005},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"link\":1999},{\"name\":\"clip_vision_output\",\"localized_name\":\"clip_vision_output\",\"type\":\"CLIP_VISION_OUTPUT\",\"link\":2003}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2002],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"StyleModelApply\"},\"widgets_values\":[1,\"multiply\"]},{\"id\":683,\"type\":\"CLIPVisionEncode\",\"pos\":[-170,-220],\"size\":[253.60000610351562,78],\"flags\":{},\"order\":21,\"mode\":0,\"inputs\":[{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"link\":2009},{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":2035}],\"outputs\":[{\"name\":\"CLIP_VISION_OUTPUT\",\"localized_name\":\"CLIP_VISION_OUTPUT\",\"type\":\"CLIP_VISION_OUTPUT\",\"links\":[2008]}],\"properties\":{\"Node name for S&R\":\"CLIPVisionEncode\"},\"widgets_values\":[\"center\"]},{\"id\":682,\"type\":\"StyleModelApply\",\"pos\":[100,-250],\"size\":[262,122],\"flags\":{},\"order\":25,\"mode\":0,\"inputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"link\":2006},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"link\":2007},{\"name\":\"clip_vision_output\",\"localized_name\":\"clip_vision_output\",\"type\":\"CLIP_VISION_OUTPUT\",\"link\":2008}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2020],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"StyleModelApply\"},\"widgets_values\":[1,\"multiply\"]},{\"id\":681,\"type\":\"CLIPVisionEncode\",\"pos\":[-173.92124938964844,-524.1537475585938],\"size\":[253.60000610351562,78],\"flags\":{},\"order\":20,\"mode\":0,\"inputs\":[{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"link\":2004},{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":2028}],\"outputs\":[{\"name\":\"CLIP_VISION_OUTPUT\",\"localized_name\":\"CLIP_VISION_OUTPUT\",\"type\":\"CLIP_VISION_OUTPUT\",\"links\":[2003]}],\"properties\":{\"Node name for S&R\":\"CLIPVisionEncode\"},\"widgets_values\":[\"center\"]},{\"id\":694,\"type\":\"LoadImage\",\"pos\":[-536.0714111328125,-640.6544189453125],\"size\":[315,314],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2028],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"ChatGPT Image Apr 29, 2025, 07_47_12 
PM.png\",\"image\"]},{\"id\":7,\"type\":\"VAEEncodeAdvanced\",\"pos\":[696.7778930664062,-164.97328186035156],\"size\":[261.2217712402344,279.3136901855469],\"flags\":{},\"order\":19,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":18}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":1},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[],\"slot_index\":2},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[1399],\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[\"false\",768,1344,\"red\",false,\"16_channels\"]},{\"id\":596,\"type\":\"ClownRegionalConditioning\",\"pos\":[425.9762268066406,-243.12513732910156],\"size\":[211.60000610351562,122],\"flags\":{},\"order\":27,\"mode\":0,\"inputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"shape\":7,\"link\":null},{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2020},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":2042}],\"outputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"links\":[1937],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownRegionalConditioning\"},\"widgets_values\":[false,256]},{\"id\":401,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1010,-370],\"size\":[340.55120849609375,666.8208618164062],\"flags\":{},\"order\":30,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":1967},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1735},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1732},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1399},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1988],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownsharKSampler_Beta\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[0.5,\"exponential/res_2s\",\"bong_tangent\",20,-1,1,1,109,\"fixed\",\"standard\",true]},{\"id\":560,\"type\":\"ClownRegionalConditionings\",\"pos\":[676.1644897460938,-499.31219482421875],\"size\":[278.4758605957031,266],\"flags\":{},\"order\":29,\"mode\":0,\"inputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"shape\":7,\"link\":1938},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"region_bleeds\",\"localized_name\":\"region_bleeds\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"links\":[1735],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownRegionalConditionings\"},\"widgets_values\":[0.5,1,14,\"beta57\",0,20,\"boolean\",false]},{\"id\":690,\"type\":\"LoadImage\",\"pos\":[-531.4011840820312,-234.04151916503906],\"size\":[315,314],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2035],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"ComfyUI_00452_.png\",\"image\"]},{\"id\":676,\"type\":\"InvertMask\",\"pos\":[-1270,-450],\"size\":[140,26],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":1990}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[1991],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"InvertMask\"},\"widgets_values\":[]},{\"id\":666,\"type\":\"SolidMask\",\"pos\":[-1500,-450],\"size\":[210,106],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[1990],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SolidMask\"},\"widgets_values\":[1,1344,768]},{\"id\":667,\"type\":\"MaskPreview\",\"pos\":[-840,-570],\"size\":[210,246],\"flags\":{},\"order\":22,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":1969}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"MaskPreview\"},\"widgets_values\":[]},{\"id\":670,\"type\":\"MaskPreview\",\"pos\":[-840,-280],\"size\":[210,246],\"flags\":{},\"order\":26,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":2041}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"MaskPreview\"},\"widgets_values\":[]},{\"id\":661,\"type\":\"ClownRegionalConditioning\",\"pos\":[411.9298095703125,-539.053955078125],\"size\":[211.60000610351562,122],\"flags\":{},\"order\":28,\"mode\":0,\"inputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"shape\":7,\"link\":1937},{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2002},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":2036}],\"outputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"links\":[1938],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ClownRegionalConditioning\"},\"widgets_values\":[false,256]},{\"id\":665,\"type\":\"MaskComposite\",\"pos\":[-1100,-450],\"size\":[210,126],\"flags\":{},\"order\":15,\"mode\":0,\"inputs\":[{\"name\":\"destination\",\"localized_name\":\"destination\",\"type\":\"MASK\",\"link\":1991},{\"name\":\"source\",\"localized_name\":\"source\",\"type\":\"MASK\",\"link\":1995}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[1969,2036,2038],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"MaskComposite\"},\"widgets_values\":[0,0,\"add\"]},{\"id\":700,\"type\":\"MaskFlip+\",\"pos\":[-1098.6136474609375,-267.628173828125],\"size\":[210,58],\"flags\":{},\"order\":23,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":2038}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[2041,2042],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"MaskFlip+\"},\"widgets_values\":[\"x\"]},{\"id\":668,\"type\":\"SolidMask\",\"pos\":[-1502.6644287109375,-289.3330993652344],\"size\":[210,106],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[1995],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SolidMask\"},\"widgets_values\":[1,768,768]},{\"id\":701,\"type\":\"Note\",\"pos\":[-1378.6959228515625,-637.0702514648438],\"size\":[342.05950927734375,88],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"I usually just lazily draw masks in Load Image nodes (with some random image loaded), but for the sake of reproducibility, here's another approach.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":663,\"type\":\"FluxLoader\",\"pos\":[654.6221923828125,-858.3792724609375],\"size\":[374.41741943359375,282],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1963],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[1965],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[1966],\"slot_index\":2},{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"links\":[2001],\"slot_index\":3},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"links\":[2000],\"slot_index\":4}],\"properties\":{\"Node name for S&R\":\"FluxLoader\"},\"widgets_values\":[\"colossusProjectFlux_v42AIO.safetensors\",\"fp8_e4m3fn_fast\",\".use_ckpt_clip\",\".none\",\".use_ckpt_vae\",\"sigclip_vision_patch14_384.safetensors\",\"flux1-redux-dev.safetensors\"]},{\"id\":664,\"type\":\"ReFluxPatcher\",\"pos\":[1064.7325439453125,-863.0516967773438],\"size\":[210,82],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1963}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1964],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ReFluxPatcher\"},\"widgets_values\":[\"float64\",true]},{\"id\":679,\"type\":\"Reroute\",\"pos\":[1300,-610],\"size\":[75,26],\"flags\":{},\"order\":14,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":2000}],\"outputs\":[{\"name\":\"\",\"type\":\"STYLE_MODEL\",\"links\":[1999,2007]}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":662,\"type\":\"CLIPTextEncode\",\"pos\":[-140.3179168701172,-670.337158203125],\"size\":[210,88],\"flags\":{\"collapsed\":false},\"order\":18,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1939}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2005,2006],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"\"]},{\"id\":702,\"type\":\"Note\",\"pos\":[-1222.3177490234375,-134.59034729003906],\"size\":[278.04071044921875,88],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Note that these masks are overlapping.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":703,\"type\":\"Note\",\"pos\":[358.4803466796875,-41.564422607421875],\"size\":[278.04071044921875,88],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"edge_width also creates some overlap around the edges of the mask.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":704,\"type\":\"Note\",\"pos\":[324.8023986816406,-781.4505004882812],\"size\":[290.7107238769531,155.35317993164062],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ClownRegionalConditionings:\\n\\nTry raising or lowering weight, and changing the weight scheduler from beta57 to Karras (weakens more quickly), or to linear quadratic (stronger late).\\n\\nTry changing region_bleed_start_step, and end_step.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":398,\"type\":\"SaveImage\",\"pos\":[1379.9996337890625,-267.2835998535156],\"size\":[341.7508850097656,561.0067749023438],\"flags\":{},\"order\":32,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":1329}],\"outputs\":[],\"properties\":{\"Node name for 
S&R\":\"SaveImage\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"ComfyUI\"]}],\"links\":[[18,14,0,7,4,\"VAE\"],[1328,14,0,397,1,\"VAE\"],[1329,397,0,398,0,\"IMAGE\"],[1399,7,3,401,3,\"LATENT\"],[1706,490,0,541,0,\"CLIP\"],[1732,541,0,401,2,\"CONDITIONING\"],[1735,560,0,401,1,\"CONDITIONING\"],[1937,596,0,661,0,\"COND_REGIONS\"],[1938,661,0,560,0,\"COND_REGIONS\"],[1939,490,0,662,0,\"CLIP\"],[1963,663,0,664,0,\"MODEL\"],[1964,664,0,13,0,\"*\"],[1965,663,1,490,0,\"*\"],[1966,663,2,14,0,\"*\"],[1967,13,0,401,0,\"MODEL\"],[1969,665,0,667,0,\"MASK\"],[1988,401,0,397,0,\"LATENT\"],[1990,666,0,676,0,\"MASK\"],[1991,676,0,665,0,\"MASK\"],[1995,668,0,665,1,\"MASK\"],[1999,679,0,678,1,\"STYLE_MODEL\"],[2000,663,4,679,0,\"*\"],[2001,663,3,680,0,\"*\"],[2002,678,0,661,1,\"CONDITIONING\"],[2003,681,0,678,2,\"CLIP_VISION_OUTPUT\"],[2004,680,0,681,0,\"CLIP_VISION\"],[2005,662,0,678,0,\"CONDITIONING\"],[2006,662,0,682,0,\"CONDITIONING\"],[2007,679,0,682,1,\"STYLE_MODEL\"],[2008,683,0,682,2,\"CLIP_VISION_OUTPUT\"],[2009,680,0,683,0,\"CLIP_VISION\"],[2020,682,0,596,1,\"CONDITIONING\"],[2028,694,0,681,1,\"IMAGE\"],[2035,690,0,683,1,\"IMAGE\"],[2036,665,0,661,2,\"MASK\"],[2038,665,0,700,0,\"MASK\"],[2041,700,0,670,0,\"MASK\"],[2042,700,0,596,2,\"MASK\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.7449402268886907,\"offset\":[2753.5015634091214,978.5823037629943]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"ue_links\":[],\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true},\"version\":0.4}"
  },
  {
    "path": "example_workflows/flux regional redux (3 zone, nested).json",
    "content": "{\"last_node_id\":720,\"last_link_id\":2082,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[1300,-790],\"size\":[75,26],\"flags\":{},\"order\":18,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1964}],\"outputs\":[{\"name\":\"\",\"type\":\"MODEL\",\"links\":[1967],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":490,\"type\":\"Reroute\",\"pos\":[1300,-750],\"size\":[75,26],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1965}],\"outputs\":[{\"name\":\"\",\"type\":\"CLIP\",\"links\":[1706,1939],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":541,\"type\":\"CLIPTextEncode\",\"pos\":[692.1508178710938,183.7528839111328],\"size\":[265.775390625,113.01970672607422],\"flags\":{},\"order\":19,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1706}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[1732],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"blurry, out of focus, shallow depth of field, low quality, bad quality, low detail, mutated, jpeg artifacts, compression artifacts,\"]},{\"id\":14,\"type\":\"Reroute\",\"pos\":[1300,-710],\"size\":[75,26],\"flags\":{},\"order\":13,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1966}],\"outputs\":[{\"name\":\"\",\"type\":\"VAE\",\"links\":[18,1328],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":680,\"type\":\"Reroute\",\"pos\":[1310,-660],\"size\":[75,26],\"flags\":{},\"order\":14,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":2001}],\"outputs\":[{\"name\":\"\",\"type\":\"CLIP_VISION\",\"links\":[2004,2009,2043]}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":678,\"type\":\"StyleModelApply\",\"pos\":[101.3630142211914,-560.2020874023438],\"size\":[262,122],\"flags\":{},\"order\":28,\"mode\":0,\"inputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"link\":2005},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"link\":1999},{\"name\":\"clip_vision_output\",\"localized_name\":\"clip_vision_output\",\"type\":\"CLIP_VISION_OUTPUT\",\"link\":2003}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2002],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"StyleModelApply\"},\"widgets_values\":[1,\"multiply\"]},{\"id\":681,\"type\":\"CLIPVisionEncode\",\"pos\":[-173.92124938964844,-524.1537475585938],\"size\":[253.60000610351562,78],\"flags\":{},\"order\":22,\"mode\":0,\"inputs\":[{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"link\":2004},{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":2082}],\"outputs\":[{\"name\":\"CLIP_VISION_OUTPUT\",\"localized_name\":\"CLIP_VISION_OUTPUT\",\"type\":\"CLIP_VISION_OUTPUT\",\"links\":[2003]}],\"properties\":{\"Node name for 
S&R\":\"CLIPVisionEncode\"},\"widgets_values\":[\"center\"]},{\"id\":663,\"type\":\"FluxLoader\",\"pos\":[654.6221923828125,-858.3792724609375],\"size\":[374.41741943359375,282],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1963],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[1965],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[1966],\"slot_index\":2},{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"links\":[2001],\"slot_index\":3},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"links\":[2000],\"slot_index\":4}],\"properties\":{\"Node name for S&R\":\"FluxLoader\"},\"widgets_values\":[\"colossusProjectFlux_v42AIO.safetensors\",\"fp8_e4m3fn_fast\",\".use_ckpt_clip\",\".none\",\".use_ckpt_vae\",\"sigclip_vision_patch14_384.safetensors\",\"flux1-redux-dev.safetensors\"]},{\"id\":664,\"type\":\"ReFluxPatcher\",\"pos\":[1064.7325439453125,-863.0516967773438],\"size\":[210,82],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1963}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1964],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ReFluxPatcher\"},\"widgets_values\":[\"float64\",true]},{\"id\":679,\"type\":\"Reroute\",\"pos\":[1300,-610],\"size\":[75,26],\"flags\":{},\"order\":15,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":2000}],\"outputs\":[{\"name\":\"\",\"type\":\"STYLE_MODEL\",\"links\":[1999,2007,2046]}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":662,\"type\":\"CLIPTextEncode\",\"pos\":[-140.3179168701172,-670.337158203125],\"size\":[210,88],\"flags\":{\"collapsed\":false},\"order\":20,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1939}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2005,2006,2045],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"\"]},{\"id\":398,\"type\":\"SaveImage\",\"pos\":[1379.9996337890625,-267.2835998535156],\"size\":[341.7508850097656,561.0067749023438],\"flags\":{},\"order\":40,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":1329}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"SaveImage\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"ComfyUI\"]},{\"id\":683,\"type\":\"CLIPVisionEncode\",\"pos\":[-170,-220],\"size\":[253.60000610351562,78],\"flags\":{},\"order\":23,\"mode\":0,\"inputs\":[{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"link\":2009},{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":2062}],\"outputs\":[{\"name\":\"CLIP_VISION_OUTPUT\",\"localized_name\":\"CLIP_VISION_OUTPUT\",\"type\":\"CLIP_VISION_OUTPUT\",\"links\":[2008]}],\"properties\":{\"Node name for 
S&R\":\"CLIPVisionEncode\"},\"widgets_values\":[\"center\"]},{\"id\":682,\"type\":\"StyleModelApply\",\"pos\":[100,-250],\"size\":[262,122],\"flags\":{},\"order\":29,\"mode\":0,\"inputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"link\":2006},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"link\":2007},{\"name\":\"clip_vision_output\",\"localized_name\":\"clip_vision_output\",\"type\":\"CLIP_VISION_OUTPUT\",\"link\":2008}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2020],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"StyleModelApply\"},\"widgets_values\":[1,\"multiply\"]},{\"id\":706,\"type\":\"CLIPVisionEncode\",\"pos\":[-180,180],\"size\":[253.60000610351562,78],\"flags\":{},\"order\":24,\"mode\":0,\"inputs\":[{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"link\":2043},{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":2081}],\"outputs\":[{\"name\":\"CLIP_VISION_OUTPUT\",\"localized_name\":\"CLIP_VISION_OUTPUT\",\"type\":\"CLIP_VISION_OUTPUT\",\"links\":[2047]}],\"properties\":{\"Node name for S&R\":\"CLIPVisionEncode\"},\"widgets_values\":[\"center\"]},{\"id\":7,\"type\":\"VAEEncodeAdvanced\",\"pos\":[696.7778930664062,-164.97328186035156],\"size\":[261.2217712402344,279.3136901855469],\"flags\":{},\"order\":21,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":18}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":1},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[],\"slot_index\":2},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[1399],\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[\"false\",1344,768,\"red\",false,\"16_channels\"]},{\"id\":690,\"type\":\"LoadImage\",\"pos\":[-549.7396240234375,-227.43971252441406],\"size\":[315,314],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2062],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"ComfyUI_00464_.png\",\"image\"]},{\"id\":704,\"type\":\"Note\",\"pos\":[324.8023986816406,-781.4505004882812],\"size\":[290.7107238769531,155.35317993164062],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ClownRegionalConditionings:\\n\\nTry raising or lowering weight, and changing 
the weight scheduler from beta57 to Karras (weakens more quickly), or to linear quadratic (stronger late).\\n\\nTry changing region_bleed_start_step (earlier will make the image blend together more), and end_step.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":703,\"type\":\"Note\",\"pos\":[384.9622802734375,346.1895751953125],\"size\":[278.04071044921875,88],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"edge_width also creates some overlap around the edges of the mask.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":397,\"type\":\"VAEDecode\",\"pos\":[1403.6392822265625,-371.9699401855469],\"size\":[210,46],\"flags\":{},\"order\":39,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":2077},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":1328}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1329],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[]},{\"id\":710,\"type\":\"MaskPreview\",\"pos\":[-809.6506958007812,-582.2230834960938],\"size\":[210,246],\"flags\":{},\"order\":26,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":2054}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"MaskPreview\"},\"widgets_values\":[]},{\"id\":715,\"type\":\"SolidMask\",\"pos\":[-1501.8455810546875,-483.931884765625],\"size\":[210,106],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[2064,2073],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SolidMask\"},\"widgets_values\":[1,1536,1536]},{\"id\":667,\"type\":\"MaskPreview\",\"pos\":[-800.4617309570312,225.60794067382812],\"size\":[210,246],\"flags\":{},\"order\":31,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":1969}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"MaskPreview\"},\"widgets_values\":[]},{\"id\":676,\"type\":\"InvertMask\",\"pos\":[-1225.793212890625,220.8433380126953],\"size\":[140,26],\"flags\":{},\"order\":16,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":2073}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[1991],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"InvertMask\"},\"widgets_values\":[]},{\"id\":719,\"type\":\"MaskPreview\",\"pos\":[-806.2830810546875,-181.18017578125],\"size\":[210,246],\"flags\":{},\"order\":34,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":2072}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"MaskPreview\"},\"widgets_values\":[]},{\"id\":717,\"type\":\"MaskComposite\",\"pos\":[-1232.8262939453125,-171.98712158203125],\"size\":[210,126],\"flags\":{},\"order\":27,\"mode\":0,\"inputs\":[{\"name\":\"destination\",\"localized_name\":\"destination\",\"type\":\"MASK\",\"link\":2068},{\"name\":\"source\",\"localized_name\":\"source\",\"type\":\"MASK\",\"link\":2069}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[2071],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"MaskComposite\"},\"widgets_values\":[512,512,\"add\"]},{\"id\":718,\"type\":\"SolidMask\",\"pos\":[-1510.0887451171875,-5.13049840927124],\"size\":[210,106],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[2069,2076],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SolidMask\"},\"widgets_values\":[1,512,512]},{\"id\":716,\"type\":\"SolidMask\",\"pos\":[-1504.66015625,-322.68243408203125],\"size\":[210,106],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[2065],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SolidMask\"},\"widgets_values\":[1,1024,1024]},{\"id\":701,\"type\":\"Note\",\"pos\":[-1262.5018310546875,-634.6495971679688],\"size\":[342.05950927734375,88],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"I usually just lazily draw masks in Load Image nodes (with some random image loaded), but for the sake of reproducibility, here's another approach.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":712,\"type\":\"Note\",\"pos\":[-1551.669921875,-639.0407104492188],\"size\":[245.76409912109375,91.6677017211914],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"So long as these masks are all the same size, the regional conditioning nodes will handle resizing to the image size for you.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":720,\"type\":\"InvertMask\",\"pos\":[-989.771240234375,-173.28375244140625],\"size\":[140,26],\"flags\":{},\"order\":32,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":2071}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[2072,2078],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"InvertMask\"},\"widgets_values\":[]},{\"id\":709,\"type\":\"MaskComposite\",\"pos\":[-1250.3681640625,-473.0709228515625],\"size\":[210,126],\"flags\":{},\"order\":17,\"mode\":0,\"inputs\":[{\"name\":\"destination\",\"localized_name\":\"destination\",\"type\":\"MASK\",\"link\":2064},{\"name\":\"source\",\"localized_name\":\"source\",\"type\":\"MASK\",\"link\":2065}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[2054,2068,2079],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"MaskComposite\"},\"widgets_values\":[256,256,\"subtract\"]},{\"id\":665,\"type\":\"MaskComposite\",\"pos\":[-1049.337646484375,223.26406860351562],\"size\":[210,126],\"flags\":{},\"order\":25,\"mode\":0,\"inputs\":[{\"name\":\"destination\",\"localized_name\":\"destination\",\"type\":\"MASK\",\"link\":1991},{\"name\":\"source\",\"localized_name\":\"source\",\"type\":\"MASK\",\"link\":2076}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[1969,2080],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"MaskComposite\"},\"widgets_values\":[512,512,\"add\"]},{\"id\":705,\"type\":\"LoadImage\",\"pos\":[-548.5830688476562,-622.7470092773438],\"size\":[315,314],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2082],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"LoadImage\"},\"widgets_values\":[\"ComfyUI_00479_.png\",\"image\"]},{\"id\":694,\"type\":\"LoadImage\",\"pos\":[-545.7549438476562,175.12576293945312],\"size\":[315,314],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2081],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"ChatGPT Image Apr 29, 2025, 08_07_01 PM.png\",\"image\"]},{\"id\":401,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1010,-370],\"size\":[340.55120849609375,666.8208618164062],\"flags\":{},\"order\":38,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":1967},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1735},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1732},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1399},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2077],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[0.5,\"exponential/res_2s\",\"bong_tangent\",30,-1,1,1,109,\"fixed\",\"standard\",true]},{\"id\":560,\"type\":\"ClownRegionalConditionings\",\"pos\":[676.1644897460938,-499.31219482421875],\"size\":[278.4758605957031,266],\"flags\":{},\"order\":37,\"mode\":0,\"inputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"shape\":7,\"link\":1938},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"region_bleeds\",\"localized_name\":\"region_bleeds\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"links\":[1735],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownRegionalConditionings\"},\"widgets_values\":[0.5,1,15,\"beta57\",0,30,\"boolean\",false]},{\"id\":707,\"type\":\"StyleModelApply\",\"pos\":[95.6487045288086,150],\"size\":[262,122],\"flags\":{},\"order\":30,\"mode\":0,\"inputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"link\":2045},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"link\":2046},{\"name\":\"clip_vision_output\",\"localized_name\":\"clip_vision_output\",\"type\":\"CLIP_VISION_OUTPUT\",\"link\":2047}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2048],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"StyleModelApply\"},\"widgets_values\":[1,\"multiply\"]},{\"id\":708,\"type\":\"ClownRegionalConditioning\",\"pos\":[404.6683044433594,155.1585693359375],\"size\":[211.60000610351562,122],\"flags\":{},\"order\":33,\"mode\":0,\"inputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"shape\":7,\"link\":null},{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2048},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":2080}],\"outputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"links\":[2050],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownRegionalConditioning\"},\"widgets_values\":[false,128]},{\"id\":661,\"type\":\"ClownRegionalConditioning\",\"pos\":[409.5088806152344,-556.8058471679688],\"size\":[211.60000610351562,122],\"flags\":{},\"order\":36,\"mode\":0,\"inputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"shape\":7,\"link\":1937},{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2002},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":2079}],\"outputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"links\":[1938],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownRegionalConditioning\"},\"widgets_values\":[false,128]},{\"id\":596,\"type\":\"ClownRegionalConditioning\",\"pos\":[407.416748046875,-245.54579162597656],\"size\":[211.60000610351562,122],\"flags\":{},\"order\":35,\"mode\":0,\"inputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"shape\":7,\"link\":2050},{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2020},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":2078}],\"outputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"links\":[1937],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ClownRegionalConditioning\"},\"widgets_values\":[false,128]}],\"links\":[[18,14,0,7,4,\"VAE\"],[1328,14,0,397,1,\"VAE\"],[1329,397,0,398,0,\"IMAGE\"],[1399,7,3,401,3,\"LATENT\"],[1706,490,0,541,0,\"CLIP\"],[1732,541,0,401,2,\"CONDITIONING\"],[1735,560,0,401,1,\"CONDITIONING\"],[1937,596,0,661,0,\"COND_REGIONS\"],[1938,661,0,560,0,\"COND_REGIONS\"],[1939,490,0,662,0,\"CLIP\"],[1963,663,0,664,0,\"MODEL\"],[1964,664,0,13,0,\"*\"],[1965,663,1,490,0,\"*\"],[1966,663,2,14,0,\"*\"],[1967,13,0,401,0,\"MODEL\"],[1969,665,0,667,0,\"MASK\"],[1991,676,0,665,0,\"MASK\"],[1999,679,0,678,1,\"STYLE_MODEL\"],[2000,663,4,679,0,\"*\"],[2001,663,3,680,0,\"*\"],[2002,678,0,661,1,\"CONDITIONING\"],[2003,681,0,678,2,\"CLIP_VISION_OUTPUT\"],[2004,680,0,681,0,\"CLIP_VISION\"],[2005,662,0,678,0,\"CONDITIONING\"],[2006,662,0,682,0,\"CONDITIONING\"],[2007,679,0,682,1,\"STYLE_MODEL\"],[2008,683,0,682,2,\"CLIP_VISION_OUTPUT\"],[2009,680,0,683,0,\"CLIP_VISION\"],[2020,682,0,596,1,\"CONDITIONING\"],[2043,680,0,706,0,\"CLIP_VISION\"],[2045,662,0,707,0,\"CONDITIONING\"],[2046,679,0,707,1,\"STYLE_MODEL\"],[2047,706,0,707,2,\"CLIP_VISION_OUTPUT\"],[2048,707,0,708,1,\"CONDITIONING\"],[2050,708,0,596,0,\"COND_REGIONS\"],[2054,709,0,710,0,\"MASK\"],[2062,690,0,683,1,\"IMAGE\"],[2064,715,0,709,0,\"MASK\"],[2065,716,0,709,1,\"MASK\"],[2068,709,0,717,0,\"MASK\"],[2069,718,0,717,1,\"MASK\"],[2071,717,0,720,0,\"MASK\"],[2072,720,0,719,0,\"MASK\"],[2073,715,0,676,0,\"MASK\"],[2076,718,0,665,1,\"MASK\"],[2077,401,0,397,0,\"LATENT\"],[2078,720,0,596,2,\"MASK\"],[2079,709,0,661,2,\"MASK\"],[2080,665,0,708,2,\"MASK\"],[2081,694,0,706,1,\"IMAGE\"],[2082,705,0,681,1,\"IMAGE\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.4420993610650337,\"offset\":[3089.9291694729854,951.347346350063]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"ue_links\":[],\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true},\"version\":0.4}"
  },
  {
    "path": "example_workflows/flux regional redux (3 zone, overlapping).json",
    "content": "{\"last_node_id\":715,\"last_link_id\":2063,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[1300,-790],\"size\":[75,26],\"flags\":{},\"order\":17,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1964}],\"outputs\":[{\"name\":\"\",\"type\":\"MODEL\",\"links\":[1967],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":490,\"type\":\"Reroute\",\"pos\":[1300,-750],\"size\":[75,26],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1965}],\"outputs\":[{\"name\":\"\",\"type\":\"CLIP\",\"links\":[1706,1939],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":541,\"type\":\"CLIPTextEncode\",\"pos\":[692.1508178710938,183.7528839111328],\"size\":[265.775390625,113.01970672607422],\"flags\":{},\"order\":18,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1706}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[1732],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"blurry, out of focus, shallow depth of field, low quality, bad quality, low detail, mutated, jpeg artifacts, compression artifacts,\"]},{\"id\":14,\"type\":\"Reroute\",\"pos\":[1300,-710],\"size\":[75,26],\"flags\":{},\"order\":13,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1966}],\"outputs\":[{\"name\":\"\",\"type\":\"VAE\",\"links\":[18,1328],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":680,\"type\":\"Reroute\",\"pos\":[1310,-660],\"size\":[75,26],\"flags\":{},\"order\":14,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":2001}],\"outputs\":[{\"name\":\"\",\"type\":\"CLIP_VISION\",\"links\":[2004,2009,2043]}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":678,\"type\":\"StyleModelApply\",\"pos\":[101.3630142211914,-560.2020874023438],\"size\":[262,122],\"flags\":{},\"order\":26,\"mode\":0,\"inputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"link\":2005},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"link\":1999},{\"name\":\"clip_vision_output\",\"localized_name\":\"clip_vision_output\",\"type\":\"CLIP_VISION_OUTPUT\",\"link\":2003}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2002],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"StyleModelApply\"},\"widgets_values\":[1,\"multiply\"]},{\"id\":681,\"type\":\"CLIPVisionEncode\",\"pos\":[-173.92124938964844,-524.1537475585938],\"size\":[253.60000610351562,78],\"flags\":{},\"order\":21,\"mode\":0,\"inputs\":[{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"link\":2004},{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":2028}],\"outputs\":[{\"name\":\"CLIP_VISION_OUTPUT\",\"localized_name\":\"CLIP_VISION_OUTPUT\",\"type\":\"CLIP_VISION_OUTPUT\",\"links\":[2003]}],\"properties\":{\"Node name for 
S&R\":\"CLIPVisionEncode\"},\"widgets_values\":[\"center\"]},{\"id\":676,\"type\":\"InvertMask\",\"pos\":[-1270,-450],\"size\":[140,26],\"flags\":{},\"order\":16,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":1990}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[1991,2051],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"InvertMask\"},\"widgets_values\":[]},{\"id\":667,\"type\":\"MaskPreview\",\"pos\":[-840,-570],\"size\":[210,246],\"flags\":{},\"order\":29,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":1969}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"MaskPreview\"},\"widgets_values\":[]},{\"id\":661,\"type\":\"ClownRegionalConditioning\",\"pos\":[411.9298095703125,-539.053955078125],\"size\":[211.60000610351562,122],\"flags\":{},\"order\":35,\"mode\":0,\"inputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"shape\":7,\"link\":1937},{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2002},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":2036}],\"outputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"links\":[1938],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownRegionalConditioning\"},\"widgets_values\":[false,256]},{\"id\":701,\"type\":\"Note\",\"pos\":[-1378.6959228515625,-637.0702514648438],\"size\":[342.05950927734375,88],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"I usually just lazily draw masks in Load Image nodes (with some random image loaded), but for the sake of reproducibility, here's another approach.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":663,\"type\":\"FluxLoader\",\"pos\":[654.6221923828125,-858.3792724609375],\"size\":[374.41741943359375,282],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1963],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[1965],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[1966],\"slot_index\":2},{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"links\":[2001],\"slot_index\":3},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"links\":[2000],\"slot_index\":4}],\"properties\":{\"Node name for S&R\":\"FluxLoader\"},\"widgets_values\":[\"colossusProjectFlux_v42AIO.safetensors\",\"fp8_e4m3fn_fast\",\".use_ckpt_clip\",\".none\",\".use_ckpt_vae\",\"sigclip_vision_patch14_384.safetensors\",\"flux1-redux-dev.safetensors\"]},{\"id\":664,\"type\":\"ReFluxPatcher\",\"pos\":[1064.7325439453125,-863.0516967773438],\"size\":[210,82],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1963}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1964],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ReFluxPatcher\"},\"widgets_values\":[\"float64\",true]},{\"id\":679,\"type\":\"Reroute\",\"pos\":[1300,-610],\"size\":[75,26],\"flags\":{},\"order\":15,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":2000}],\"outputs\":[{\"name\":\"\",\"type\":\"STYLE_MODEL\",\"links\":[1999,2007,2046]}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":662,\"type\":\"CLIPTextEncode\",\"pos\":[-140.3179168701172,-670.337158203125],\"size\":[210,88],\"flags\":{\"collapsed\":false},\"order\":19,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1939}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2005,2006,2045],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"\"]},{\"id\":398,\"type\":\"SaveImage\",\"pos\":[1379.9996337890625,-267.2835998535156],\"size\":[341.7508850097656,561.0067749023438],\"flags\":{},\"order\":39,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":1329}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"SaveImage\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"ComfyUI\"]},{\"id\":683,\"type\":\"CLIPVisionEncode\",\"pos\":[-170,-220],\"size\":[253.60000610351562,78],\"flags\":{},\"order\":22,\"mode\":0,\"inputs\":[{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"link\":2009},{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":2062}],\"outputs\":[{\"name\":\"CLIP_VISION_OUTPUT\",\"localized_name\":\"CLIP_VISION_OUTPUT\",\"type\":\"CLIP_VISION_OUTPUT\",\"links\":[2008]}],\"properties\":{\"Node name for S&R\":\"CLIPVisionEncode\"},\"widgets_values\":[\"center\"]},{\"id\":682,\"type\":\"StyleModelApply\",\"pos\":[100,-250],\"size\":[262,122],\"flags\":{},\"order\":27,\"mode\":0,\"inputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"link\":2006},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"link\":2007},{\"name\":\"clip_vision_output\",\"localized_name\":\"clip_vision_output\",\"type\":\"CLIP_VISION_OUTPUT\",\"link\":2008}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2020],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"StyleModelApply\"},\"widgets_values\":[1,\"multiply\"]},{\"id\":596,\"type\":\"ClownRegionalConditioning\",\"pos\":[425.9762268066406,-243.12513732910156],\"size\":[211.60000610351562,122],\"flags\":{},\"order\":34,\"mode\":0,\"inputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"shape\":7,\"link\":2050},{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2020},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":2042}],\"outputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"links\":[1937],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ClownRegionalConditioning\"},\"widgets_values\":[false,256]},{\"id\":706,\"type\":\"CLIPVisionEncode\",\"pos\":[-180,180],\"size\":[253.60000610351562,78],\"flags\":{},\"order\":23,\"mode\":0,\"inputs\":[{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"link\":2043},{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":2061}],\"outputs\":[{\"name\":\"CLIP_VISION_OUTPUT\",\"localized_name\":\"CLIP_VISION_OUTPUT\",\"type\":\"CLIP_VISION_OUTPUT\",\"links\":[2047]}],\"properties\":{\"Node name for S&R\":\"CLIPVisionEncode\"},\"widgets_values\":[\"center\"]},{\"id\":707,\"type\":\"StyleModelApply\",\"pos\":[90,150],\"size\":[262,122],\"flags\":{},\"order\":28,\"mode\":0,\"inputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"link\":2045},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"link\":2046},{\"name\":\"clip_vision_output\",\"localized_name\":\"clip_vision_output\",\"type\":\"CLIP_VISION_OUTPUT\",\"link\":2047}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2048],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"StyleModelApply\"},\"widgets_values\":[1,\"multiply\"]},{\"id\":708,\"type\":\"ClownRegionalConditioning\",\"pos\":[420,160],\"size\":[211.60000610351562,122],\"flags\":{},\"order\":32,\"mode\":0,\"inputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"shape\":7,\"link\":null},{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2048},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":2057}],\"outputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"links\":[2050],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownRegionalConditioning\"},\"widgets_values\":[false,256]},{\"id\":665,\"type\":\"MaskComposite\",\"pos\":[-1100,-450],\"size\":[210,126],\"flags\":{},\"order\":24,\"mode\":0,\"inputs\":[{\"name\":\"destination\",\"localized_name\":\"destination\",\"type\":\"MASK\",\"link\":1991},{\"name\":\"source\",\"localized_name\":\"source\",\"type\":\"MASK\",\"link\":1995}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[1969,2036,2038],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"MaskComposite\"},\"widgets_values\":[0,0,\"add\"]},{\"id\":670,\"type\":\"MaskPreview\",\"pos\":[-840.8076782226562,-235.62042236328125],\"size\":[210,246],\"flags\":{},\"order\":33,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":2041}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"MaskPreview\"},\"widgets_values\":[]},{\"id\":700,\"type\":\"MaskFlip+\",\"pos\":[-1099.420166015625,-236.15890502929688],\"size\":[210,58],\"flags\":{},\"order\":30,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":2038}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[2041,2042],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"MaskFlip+\"},\"widgets_values\":[\"x\"]},{\"id\":710,\"type\":\"MaskPreview\",\"pos\":[-847.5751953125,166.58413696289062],\"size\":[210,246],\"flags\":{},\"order\":31,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":2054}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"MaskPreview\"},\"widgets_values\":[]},{\"id\":397,\"type\":\"VAEDecode\",\"pos\":[1403.6392822265625,-371.9699401855469],\"size\":[210,46],\"flags\":{},\"order\":38,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":2056},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":1328}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1329],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[]},{\"id\":401,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1010,-370],\"size\":[340.55120849609375,666.8208618164062],\"flags\":{},\"order\":37,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":1967},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1735},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1732},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1399},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2056],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownsharKSampler_Beta\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[0.5,\"exponential/res_2s\",\"bong_tangent\",20,-1,1,1,109,\"fixed\",\"standard\",true]},{\"id\":7,\"type\":\"VAEEncodeAdvanced\",\"pos\":[696.7778930664062,-164.97328186035156],\"size\":[261.2217712402344,279.3136901855469],\"flags\":{},\"order\":20,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":18}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":1},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[],\"slot_index\":2},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[1399],\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[\"false\",1344,768,\"red\",false,\"16_channels\"]},{\"id\":694,\"type\":\"LoadImage\",\"pos\":[-536.0714111328125,-640.6544189453125],\"size\":[315,314],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2028],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"ChatGPT Image Apr 29, 2025, 08_07_01 PM.png\",\"image\"]},{\"id\":666,\"type\":\"SolidMask\",\"pos\":[-1500,-450],\"size\":[210,106],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[1990],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SolidMask\"},\"widgets_values\":[1,1536,512]},{\"id\":712,\"type\":\"Note\",\"pos\":[-1511.985107421875,-66.87181854248047],\"size\":[245.76409912109375,91.6677017211914],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"So long as these masks are all the same size, the regional conditioning nodes will handle resizing to the image size for you.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":668,\"type\":\"SolidMask\",\"pos\":[-1502.6644287109375,-289.3330993652344],\"size\":[210,106],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[1995],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"SolidMask\"},\"widgets_values\":[1,512,512]},{\"id\":690,\"type\":\"LoadImage\",\"pos\":[-549.7396240234375,-227.43971252441406],\"size\":[315,314],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2062],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"ComfyUI_00464_.png\",\"image\"]},{\"id\":705,\"type\":\"LoadImage\",\"pos\":[-551.003662109375,157.5296173095703],\"size\":[315,314],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2061],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"ComfyUI_00479_.png\",\"image\"]},{\"id\":560,\"type\":\"ClownRegionalConditionings\",\"pos\":[676.1644897460938,-499.31219482421875],\"size\":[278.4758605957031,266],\"flags\":{},\"order\":36,\"mode\":0,\"inputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"shape\":7,\"link\":1938},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"region_bleeds\",\"localized_name\":\"region_bleeds\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"links\":[1735],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownRegionalConditionings\"},\"widgets_values\":[0.5,1,10,\"beta57\",0,20,\"boolean\",false]},{\"id\":704,\"type\":\"Note\",\"pos\":[324.8023986816406,-781.4505004882812],\"size\":[290.7107238769531,155.35317993164062],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ClownRegionalConditionings:\\n\\nTry raising or lowering weight, and changing the weight scheduler from beta57 to Karras (weakens more quickly), or to linear quadratic (stronger late).\\n\\nTry changing region_bleed_start_step (earlier will make the image blend together more), and end_step.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":715,\"type\":\"SolidMask\",\"pos\":[-1486.6612548828125,192.47415161132812],\"size\":[210,106],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[2063],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SolidMask\"},\"widgets_values\":[1,1280,512]},{\"id\":709,\"type\":\"MaskComposite\",\"pos\":[-1104.1712646484375,170.6186981201172],\"size\":[210,126],\"flags\":{},\"order\":25,\"mode\":0,\"inputs\":[{\"name\":\"destination\",\"localized_name\":\"destination\",\"type\":\"MASK\",\"link\":2051},{\"name\":\"source\",\"localized_name\":\"source\",\"type\":\"MASK\",\"link\":2063}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[2054,2057],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"MaskComposite\"},\"widgets_values\":[128,0,\"add\"]},{\"id\":703,\"type\":\"Note\",\"pos\":[384.9622802734375,346.1895751953125],\"size\":[278.04071044921875,88],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"edge_width also creates some overlap around the edges of the 
mask.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"}],\"links\":[[18,14,0,7,4,\"VAE\"],[1328,14,0,397,1,\"VAE\"],[1329,397,0,398,0,\"IMAGE\"],[1399,7,3,401,3,\"LATENT\"],[1706,490,0,541,0,\"CLIP\"],[1732,541,0,401,2,\"CONDITIONING\"],[1735,560,0,401,1,\"CONDITIONING\"],[1937,596,0,661,0,\"COND_REGIONS\"],[1938,661,0,560,0,\"COND_REGIONS\"],[1939,490,0,662,0,\"CLIP\"],[1963,663,0,664,0,\"MODEL\"],[1964,664,0,13,0,\"*\"],[1965,663,1,490,0,\"*\"],[1966,663,2,14,0,\"*\"],[1967,13,0,401,0,\"MODEL\"],[1969,665,0,667,0,\"MASK\"],[1990,666,0,676,0,\"MASK\"],[1991,676,0,665,0,\"MASK\"],[1995,668,0,665,1,\"MASK\"],[1999,679,0,678,1,\"STYLE_MODEL\"],[2000,663,4,679,0,\"*\"],[2001,663,3,680,0,\"*\"],[2002,678,0,661,1,\"CONDITIONING\"],[2003,681,0,678,2,\"CLIP_VISION_OUTPUT\"],[2004,680,0,681,0,\"CLIP_VISION\"],[2005,662,0,678,0,\"CONDITIONING\"],[2006,662,0,682,0,\"CONDITIONING\"],[2007,679,0,682,1,\"STYLE_MODEL\"],[2008,683,0,682,2,\"CLIP_VISION_OUTPUT\"],[2009,680,0,683,0,\"CLIP_VISION\"],[2020,682,0,596,1,\"CONDITIONING\"],[2028,694,0,681,1,\"IMAGE\"],[2036,665,0,661,2,\"MASK\"],[2038,665,0,700,0,\"MASK\"],[2041,700,0,670,0,\"MASK\"],[2042,700,0,596,2,\"MASK\"],[2043,680,0,706,0,\"CLIP_VISION\"],[2045,662,0,707,0,\"CONDITIONING\"],[2046,679,0,707,1,\"STYLE_MODEL\"],[2047,706,0,707,2,\"CLIP_VISION_OUTPUT\"],[2048,707,0,708,1,\"CONDITIONING\"],[2050,708,0,596,0,\"COND_REGIONS\"],[2051,676,0,709,0,\"MASK\"],[2054,709,0,710,0,\"MASK\"],[2056,401,0,397,0,\"LATENT\"],[2057,709,0,708,2,\"MASK\"],[2061,705,0,706,1,\"IMAGE\"],[2062,690,0,683,1,\"IMAGE\"],[2063,715,0,709,1,\"MASK\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.5863092971715371,\"offset\":[2841.6279889989714,922.4028503570233]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"ue_links\":[],\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true},\"version\":0.4}"
  },
  {
    "path": "example_workflows/flux regional redux (3 zones).json",
    "content": "{\"last_node_id\":714,\"last_link_id\":2062,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[1300,-790],\"size\":[75,26],\"flags\":{},\"order\":16,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1964}],\"outputs\":[{\"name\":\"\",\"type\":\"MODEL\",\"links\":[1967],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":490,\"type\":\"Reroute\",\"pos\":[1300,-750],\"size\":[75,26],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1965}],\"outputs\":[{\"name\":\"\",\"type\":\"CLIP\",\"links\":[1706,1939],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":541,\"type\":\"CLIPTextEncode\",\"pos\":[692.1508178710938,183.7528839111328],\"size\":[265.775390625,113.01970672607422],\"flags\":{},\"order\":17,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1706}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[1732],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"blurry, out of focus, shallow depth of field, low quality, bad quality, low detail, mutated, jpeg artifacts, compression artifacts,\"]},{\"id\":14,\"type\":\"Reroute\",\"pos\":[1300,-710],\"size\":[75,26],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1966}],\"outputs\":[{\"name\":\"\",\"type\":\"VAE\",\"links\":[18,1328],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":680,\"type\":\"Reroute\",\"pos\":[1310,-660],\"size\":[75,26],\"flags\":{},\"order\":13,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":2001}],\"outputs\":[{\"name\":\"\",\"type\":\"CLIP_VISION\",\"links\":[2004,2009,2043]}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":678,\"type\":\"StyleModelApply\",\"pos\":[101.3630142211914,-560.2020874023438],\"size\":[262,122],\"flags\":{},\"order\":25,\"mode\":0,\"inputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"link\":2005},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"link\":1999},{\"name\":\"clip_vision_output\",\"localized_name\":\"clip_vision_output\",\"type\":\"CLIP_VISION_OUTPUT\",\"link\":2003}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2002],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"StyleModelApply\"},\"widgets_values\":[1,\"multiply\"]},{\"id\":681,\"type\":\"CLIPVisionEncode\",\"pos\":[-173.92124938964844,-524.1537475585938],\"size\":[253.60000610351562,78],\"flags\":{},\"order\":20,\"mode\":0,\"inputs\":[{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"link\":2004},{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":2028}],\"outputs\":[{\"name\":\"CLIP_VISION_OUTPUT\",\"localized_name\":\"CLIP_VISION_OUTPUT\",\"type\":\"CLIP_VISION_OUTPUT\",\"links\":[2003]}],\"properties\":{\"Node name for 
S&R\":\"CLIPVisionEncode\"},\"widgets_values\":[\"center\"]},{\"id\":560,\"type\":\"ClownRegionalConditionings\",\"pos\":[676.1644897460938,-499.31219482421875],\"size\":[278.4758605957031,266],\"flags\":{},\"order\":35,\"mode\":0,\"inputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"shape\":7,\"link\":1938},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"region_bleeds\",\"localized_name\":\"region_bleeds\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"links\":[1735],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownRegionalConditionings\"},\"widgets_values\":[0.5,1,14,\"beta57\",0,20,\"boolean\",false]},{\"id\":676,\"type\":\"InvertMask\",\"pos\":[-1270,-450],\"size\":[140,26],\"flags\":{},\"order\":15,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":1990}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[1991,2051],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"InvertMask\"},\"widgets_values\":[]},{\"id\":667,\"type\":\"MaskPreview\",\"pos\":[-840,-570],\"size\":[210,246],\"flags\":{},\"order\":28,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":1969}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"MaskPreview\"},\"widgets_values\":[]},{\"id\":661,\"type\":\"ClownRegionalConditioning\",\"pos\":[411.9298095703125,-539.053955078125],\"size\":[211.60000610351562,122],\"flags\":{},\"order\":34,\"mode\":0,\"inputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"shape\":7,\"link\":1937},{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2002},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":2036}],\"outputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"links\":[1938],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownRegionalConditioning\"},\"widgets_values\":[false,256]},{\"id\":701,\"type\":\"Note\",\"pos\":[-1378.6959228515625,-637.0702514648438],\"size\":[342.05950927734375,88],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"I usually just lazily draw masks in Load Image nodes (with some random image loaded), but for the sake of reproducibility, here's another approach.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":663,\"type\":\"FluxLoader\",\"pos\":[654.6221923828125,-858.3792724609375],\"size\":[374.41741943359375,282],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1963],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[1965],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[1966],\"slot_index\":2},{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"links\":[2001],\"slot_index\":3},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"links\":[2000],\"slot_index\":4}],\"properties\":{\"Node name for 
S&R\":\"FluxLoader\"},\"widgets_values\":[\"colossusProjectFlux_v42AIO.safetensors\",\"fp8_e4m3fn_fast\",\".use_ckpt_clip\",\".none\",\".use_ckpt_vae\",\"sigclip_vision_patch14_384.safetensors\",\"flux1-redux-dev.safetensors\"]},{\"id\":664,\"type\":\"ReFluxPatcher\",\"pos\":[1064.7325439453125,-863.0516967773438],\"size\":[210,82],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1963}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1964],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ReFluxPatcher\"},\"widgets_values\":[\"float64\",true]},{\"id\":679,\"type\":\"Reroute\",\"pos\":[1300,-610],\"size\":[75,26],\"flags\":{},\"order\":14,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":2000}],\"outputs\":[{\"name\":\"\",\"type\":\"STYLE_MODEL\",\"links\":[1999,2007,2046]}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":662,\"type\":\"CLIPTextEncode\",\"pos\":[-140.3179168701172,-670.337158203125],\"size\":[210,88],\"flags\":{\"collapsed\":false},\"order\":18,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1939}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2005,2006,2045],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"\"]},{\"id\":704,\"type\":\"Note\",\"pos\":[324.8023986816406,-781.4505004882812],\"size\":[290.7107238769531,155.35317993164062],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ClownRegionalConditionings:\\n\\nTry raising or lowering weight, and changing the weight scheduler from beta57 to Karras (weakens more quickly), or to linear quadratic (stronger late).\\n\\nTry changing region_bleed_start_step, and end_step.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":398,\"type\":\"SaveImage\",\"pos\":[1379.9996337890625,-267.2835998535156],\"size\":[341.7508850097656,561.0067749023438],\"flags\":{},\"order\":38,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":1329}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"SaveImage\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"ComfyUI\"]},{\"id\":703,\"type\":\"Note\",\"pos\":[-84.50921630859375,-859.7656860351562],\"size\":[278.04071044921875,88],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"edge_width also creates some overlap around the edges of the mask.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":683,\"type\":\"CLIPVisionEncode\",\"pos\":[-170,-220],\"size\":[253.60000610351562,78],\"flags\":{},\"order\":21,\"mode\":0,\"inputs\":[{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"link\":2009},{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":2062}],\"outputs\":[{\"name\":\"CLIP_VISION_OUTPUT\",\"localized_name\":\"CLIP_VISION_OUTPUT\",\"type\":\"CLIP_VISION_OUTPUT\",\"links\":[2008]}],\"properties\":{\"Node name for 
S&R\":\"CLIPVisionEncode\"},\"widgets_values\":[\"center\"]},{\"id\":682,\"type\":\"StyleModelApply\",\"pos\":[100,-250],\"size\":[262,122],\"flags\":{},\"order\":26,\"mode\":0,\"inputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"link\":2006},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"link\":2007},{\"name\":\"clip_vision_output\",\"localized_name\":\"clip_vision_output\",\"type\":\"CLIP_VISION_OUTPUT\",\"link\":2008}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2020],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"StyleModelApply\"},\"widgets_values\":[1,\"multiply\"]},{\"id\":596,\"type\":\"ClownRegionalConditioning\",\"pos\":[425.9762268066406,-243.12513732910156],\"size\":[211.60000610351562,122],\"flags\":{},\"order\":33,\"mode\":0,\"inputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"shape\":7,\"link\":2050},{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2020},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":2042}],\"outputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"links\":[1937],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownRegionalConditioning\"},\"widgets_values\":[false,256]},{\"id\":706,\"type\":\"CLIPVisionEncode\",\"pos\":[-180,180],\"size\":[253.60000610351562,78],\"flags\":{},\"order\":22,\"mode\":0,\"inputs\":[{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"link\":2043},{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":2061}],\"outputs\":[{\"name\":\"CLIP_VISION_OUTPUT\",\"localized_name\":\"CLIP_VISION_OUTPUT\",\"type\":\"CLIP_VISION_OUTPUT\",\"links\":[2047]}],\"properties\":{\"Node name for S&R\":\"CLIPVisionEncode\"},\"widgets_values\":[\"center\"]},{\"id\":707,\"type\":\"StyleModelApply\",\"pos\":[90,150],\"size\":[262,122],\"flags\":{},\"order\":27,\"mode\":0,\"inputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"link\":2045},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"link\":2046},{\"name\":\"clip_vision_output\",\"localized_name\":\"clip_vision_output\",\"type\":\"CLIP_VISION_OUTPUT\",\"link\":2047}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2048],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"StyleModelApply\"},\"widgets_values\":[1,\"multiply\"]},{\"id\":708,\"type\":\"ClownRegionalConditioning\",\"pos\":[420,160],\"size\":[211.60000610351562,122],\"flags\":{},\"order\":31,\"mode\":0,\"inputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"shape\":7,\"link\":null},{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2048},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":2057}],\"outputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"links\":[2050],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ClownRegionalConditioning\"},\"widgets_values\":[false,256]},{\"id\":665,\"type\":\"MaskComposite\",\"pos\":[-1100,-450],\"size\":[210,126],\"flags\":{},\"order\":23,\"mode\":0,\"inputs\":[{\"name\":\"destination\",\"localized_name\":\"destination\",\"type\":\"MASK\",\"link\":1991},{\"name\":\"source\",\"localized_name\":\"source\",\"type\":\"MASK\",\"link\":1995}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[1969,2036,2038],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"MaskComposite\"},\"widgets_values\":[0,0,\"add\"]},{\"id\":670,\"type\":\"MaskPreview\",\"pos\":[-840.8076782226562,-235.62042236328125],\"size\":[210,246],\"flags\":{},\"order\":32,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":2041}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"MaskPreview\"},\"widgets_values\":[]},{\"id\":700,\"type\":\"MaskFlip+\",\"pos\":[-1099.420166015625,-236.15890502929688],\"size\":[210,58],\"flags\":{},\"order\":29,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":2038}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[2041,2042],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"MaskFlip+\"},\"widgets_values\":[\"x\"]},{\"id\":710,\"type\":\"MaskPreview\",\"pos\":[-847.5751953125,166.58413696289062],\"size\":[210,246],\"flags\":{},\"order\":30,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":2054}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"MaskPreview\"},\"widgets_values\":[]},{\"id\":397,\"type\":\"VAEDecode\",\"pos\":[1403.6392822265625,-371.9699401855469],\"size\":[210,46],\"flags\":{},\"order\":37,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":2056},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":1328}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1329],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[]},{\"id\":401,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1010,-370],\"size\":[340.55120849609375,666.8208618164062],\"flags\":{},\"order\":36,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":1967},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1735},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1732},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1399},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2056],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownsharKSampler_Beta\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[0.5,\"exponential/res_2s\",\"bong_tangent\",20,-1,1,1,109,\"fixed\",\"standard\",true]},{\"id\":7,\"type\":\"VAEEncodeAdvanced\",\"pos\":[696.7778930664062,-164.97328186035156],\"size\":[261.2217712402344,279.3136901855469],\"flags\":{},\"order\":19,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":18}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":1},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[],\"slot_index\":2},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[1399],\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[\"false\",1344,768,\"red\",false,\"16_channels\"]},{\"id\":694,\"type\":\"LoadImage\",\"pos\":[-536.0714111328125,-640.6544189453125],\"size\":[315,314],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2028],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"ChatGPT Image Apr 29, 2025, 08_07_01 PM.png\",\"image\"]},{\"id\":666,\"type\":\"SolidMask\",\"pos\":[-1500,-450],\"size\":[210,106],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[1990],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SolidMask\"},\"widgets_values\":[1,1536,512]},{\"id\":712,\"type\":\"Note\",\"pos\":[-1511.985107421875,-66.87181854248047],\"size\":[245.76409912109375,91.6677017211914],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"So long as these masks are all the same size, the regional conditioning nodes will handle resizing to the image size for you.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":709,\"type\":\"MaskComposite\",\"pos\":[-1104.1712646484375,170.6186981201172],\"size\":[210,126],\"flags\":{},\"order\":24,\"mode\":0,\"inputs\":[{\"name\":\"destination\",\"localized_name\":\"destination\",\"type\":\"MASK\",\"link\":2051},{\"name\":\"source\",\"localized_name\":\"source\",\"type\":\"MASK\",\"link\":2060}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[2054,2057],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"MaskComposite\"},\"widgets_values\":[512,0,\"add\"]},{\"id\":668,\"type\":\"SolidMask\",\"pos\":[-1502.6644287109375,-289.3330993652344],\"size\":[210,106],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[1995,2060],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SolidMask\"},\"widgets_values\":[1,512,512]},{\"id\":690,\"type\":\"LoadImage\",\"pos\":[-549.7396240234375,-227.43971252441406],\"size\":[315,314],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2062],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"ComfyUI_00464_.png\",\"image\"]},{\"id\":705,\"type\":\"LoadImage\",\"pos\":[-551.003662109375,157.5296173095703],\"size\":[315,314],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2061],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"ComfyUI_00479_.png\",\"image\"]}],\"links\":[[18,14,0,7,4,\"VAE\"],[1328,14,0,397,1,\"VAE\"],[1329,397,0,398,0,\"IMAGE\"],[1399,7,3,401,3,\"LATENT\"],[1706,490,0,541,0,\"CLIP\"],[1732,541,0,401,2,\"CONDITIONING\"],[1735,560,0,401,1,\"CONDITIONING\"],[1937,596,0,661,0,\"COND_REGIONS\"],[1938,661,0,560,0,\"COND_REGIONS\"],[1939,490,0,662,0,\"CLIP\"],[1963,663,0,664,0,\"MODEL\"],[1964,664,0,13,0,\"*\"],[1965,663,1,490,0,\"*\"],[1966,663,2,14,0,\"*\"],[1967,13,0,401,0,\"MODEL\"],[1969,665,0,667,0,\"MASK\"],[1990,666,0,676,0,\"MASK\"],[1991,676,0,665,0,\"MASK\"],[1995,668,0,665,1,\"MASK\"],[1999,679,0,678,1,\"STYLE_MODEL\"],[2000,663,4,679,0,\"*\"],[2001,663,3,680,0,\"*\"],[2002,678,0,661,1,\"CONDITIONING\"],[2003,681,0,678,2,\"CLIP_VISION_OUTPUT\"],[2004,680,0,681,0,\"CLIP_VISION\"],[2005,662,0,678,0,\"CONDITIONING\"],[2006,662,0,682,0,\"CONDITIONING\"],[2007,679,0,682,1,\"STYLE_MODEL\"],[2008,683,0,682,2,\"CLIP_VISION_OUTPUT\"],[2009,680,0,683,0,\"CLIP_VISION\"],[2020,682,0,596,1,\"CONDITIONING\"],[2028,694,0,681,1,\"IMAGE\"],[2036,665,0,661,2,\"MASK\"],[2038,665,0,700,0,\"MASK\"],[2041,700,0,670,0,\"MASK\"],[2042,700,0,596,2,\"MASK\"],[2043,680,0,706,0,\"CLIP_VISION\"],[2045,662,0,707,0,\"CONDITIONING\"],[2046,679,0,707,1,\"STYLE_MODEL\"],[2047,706,0,707,2,\"CLIP_VISION_OUTPUT\"],[2048,707,0,708,1,\"CONDITIONING\"],[2050,708,0,596,0,\"COND_REGIONS\"],[2051,676,0,709,0,\"MASK\"],[2054,709,0,710,0,\"MASK\"],[2056,401,0,397,0,\"LATENT\"],[2057,709,0,708,2,\"MASK\"],[2060,668,0,709,1,\"MASK\"],[2061,705,0,706,1,\"IMAGE\"],[2062,690,0,683,1,\"IMAGE\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.586309297171537,\"offset\":[2736.1731738476205,939.9577246808323]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"ue_links\":[],\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true},\"version\":0.4}"
  },
  {
    "path": "example_workflows/flux style antiblur.json",
    "content": "{\"last_node_id\":739,\"last_link_id\":2113,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[1280,-650],\"size\":[75,26],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1964}],\"outputs\":[{\"name\":\"\",\"type\":\"MODEL\",\"links\":[1967],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":490,\"type\":\"Reroute\",\"pos\":[1280,-610],\"size\":[75,26],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1965}],\"outputs\":[{\"name\":\"\",\"type\":\"CLIP\",\"links\":[1939],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":14,\"type\":\"Reroute\",\"pos\":[1280,-570],\"size\":[75,26],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1966}],\"outputs\":[{\"name\":\"\",\"type\":\"VAE\",\"links\":[18,1328],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":398,\"type\":\"SaveImage\",\"pos\":[1379.9996337890625,-267.2835998535156],\"size\":[341.7508850097656,561.0067749023438],\"flags\":{},\"order\":13,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":1329}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"SaveImage\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"ComfyUI\"]},{\"id\":663,\"type\":\"FluxLoader\",\"pos\":[630,-720],\"size\":[374.41741943359375,282],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1963],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[1965],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[1966],\"slot_index\":2},{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"links\":[],\"slot_index\":3},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"links\":[],\"slot_index\":4}],\"properties\":{\"Node name for S&R\":\"FluxLoader\"},\"widgets_values\":[\"colossusProjectFlux_v42AIO.safetensors\",\"fp8_e4m3fn_fast\",\".use_ckpt_clip\",\".none\",\".use_ckpt_vae\",\".none\",\".none\"]},{\"id\":664,\"type\":\"ReFluxPatcher\",\"pos\":[1040,-720],\"size\":[210,82],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1963}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1964],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ReFluxPatcher\"},\"widgets_values\":[\"float64\",true]},{\"id\":397,\"type\":\"VAEDecode\",\"pos\":[1382.3662109375,-374.17059326171875],\"size\":[210,46],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":2096},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":1328}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1329],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[]},{\"id\":7,\"type\":\"VAEEncodeAdvanced\",\"pos\":[412.2475280761719,-199.0681915283203],\"size\":[261.2217712402344,279.3136901855469],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2113},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":18}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[2100],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":1},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[],\"slot_index\":2},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[1399],\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[\"false\",1024,1024,\"red\",false,\"16_channels\"]},{\"id\":662,\"type\":\"CLIPTextEncode\",\"pos\":[761.3005981445312,-357.2689208984375],\"size\":[210,102.54972839355469],\"flags\":{\"collapsed\":false},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1939}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2098],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"a woman wearing a red flannel shirt and a cute shark plush blue hat, a college campus, brick buildings\"]},{\"id\":727,\"type\":\"Note\",\"pos\":[412.8926086425781,-351.8606872558594],\"size\":[272.4425048828125,88],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"This approach can be combined with the regional conditioning anti-blur approach for an even more powerful 
effect.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":401,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1010,-370],\"size\":[340.55120849609375,666.8208618164062],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":1967},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2098},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1399},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":2099},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2096],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[0.5,\"multistep/res_2m\",\"bong_tangent\",30,-1,1,1,7,\"fixed\",\"standard\",true]},{\"id\":724,\"type\":\"ClownGuide_Style_Beta\",\"pos\":[703.7374267578125,-198.63233947753906],\"size\":[262.8634033203125,286],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":2100},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[2099],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Style_Beta\"},\"widgets_values\":[\"positive\",\"WCT\",1,1,\"constant\",0,10,false]},{\"id\":739,\"type\":\"LoadImage\",\"pos\":[70.82455444335938,-201.66342163085938],\"size\":[315,314],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2113],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"pasted/image (655).png\",\"image\"]},{\"id\":726,\"type\":\"Note\",\"pos\":[415.7740478515625,153.59271240234375],\"size\":[364.5906677246094,164.38613891601562],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"The best style guide images will share the lighting and color composition of your desired scene. Some are just inexplicably ineffective at killing blur. Just gather up a bunch of images to try, you'll find some good ones that can be reused for many things. I'm including the one used here in the example_workflows directory, be sure to check for it.\\n\\nAnd don't forget to change seeds. Don't optimize for one seed only. Don't get stuck on one seed! 
Sometimes one is just not going to work out for whatever you're doing.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"}],\"links\":[[18,14,0,7,4,\"VAE\"],[1328,14,0,397,1,\"VAE\"],[1329,397,0,398,0,\"IMAGE\"],[1399,7,3,401,3,\"LATENT\"],[1939,490,0,662,0,\"CLIP\"],[1963,663,0,664,0,\"MODEL\"],[1964,664,0,13,0,\"*\"],[1965,663,1,490,0,\"*\"],[1966,663,2,14,0,\"*\"],[1967,13,0,401,0,\"MODEL\"],[2096,401,0,397,0,\"LATENT\"],[2098,662,0,401,1,\"CONDITIONING\"],[2099,724,0,401,5,\"GUIDES\"],[2100,7,0,724,0,\"LATENT\"],[2113,739,0,7,0,\"IMAGE\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.91943424957756,\"offset\":[1140.4413839969193,798.117449447068]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"ue_links\":[],\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true},\"version\":0.4}"
  },
  {
    "path": "example_workflows/flux style transfer gguf.json",
    "content": "{\"last_node_id\":1392,\"last_link_id\":3739,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[13508.9013671875,-109.2831802368164],\"size\":[75,26],\"flags\":{},\"order\":20,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3737}],\"outputs\":[{\"name\":\"\",\"type\":\"MODEL\",\"links\":[1395],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":14,\"type\":\"Reroute\",\"pos\":[13508.9013671875,-29.283178329467773],\"size\":[75,26],\"flags\":{},\"order\":16,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3739}],\"outputs\":[{\"name\":\"\",\"type\":\"VAE\",\"links\":[18,2696],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":490,\"type\":\"Reroute\",\"pos\":[13508.9013671875,-69.28317260742188],\"size\":[75,26],\"flags\":{},\"order\":17,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3738}],\"outputs\":[{\"name\":\"\",\"type\":\"CLIP\",\"links\":[2881,3581],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1308,\"type\":\"ClownGuide_Style_Beta\",\"pos\":[14108.255859375,675.60693359375],\"size\":[246.31312561035156,286],\"flags\":{},\"order\":29,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":3709},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":3699}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[3604],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Style_Beta\"},\"widgets_values\":[\"positive\",\"WCT\",1,1,\"constant\",0,-1,false]},{\"id\":431,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[13218.9013671875,-309.28314208984375],\"size\":[260.3999938964844,126],\"flags\":{},\"order\":28,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1395},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":1398}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2692],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ModelSamplingAdvancedResolution\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":970,\"type\":\"CLIPTextEncode\",\"pos\":[13688.255859375,165.60690307617188],\"size\":[281.9206848144531,109.87118530273438],\"flags\":{},\"order\":21,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":2881}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2882,3627],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"blurry, out of focus, shallow depth of field, jpeg artifacts, low quality, bad quality, 
unsharp\"]},{\"id\":1378,\"type\":\"Reroute\",\"pos\":[13184.07421875,533.128662109375],\"size\":[75,26],\"flags\":{},\"order\":19,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3721}],\"outputs\":[{\"name\":\"\",\"type\":\"IMAGE\",\"links\":[3724,3729],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1379,\"type\":\"Reroute\",\"pos\":[13185.853515625,168.15780639648438],\"size\":[75,26],\"flags\":{},\"order\":18,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3725}],\"outputs\":[{\"name\":\"\",\"type\":\"IMAGE\",\"links\":[3726],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":909,\"type\":\"SaveImage\",\"pos\":[15220,-259.5838928222656],\"size\":[457.3382263183594,422.2065124511719],\"flags\":{},\"order\":34,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2697}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"SaveImage\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"ComfyUI\"]},{\"id\":7,\"type\":\"VAEEncodeAdvanced\",\"pos\":[13400,560],\"size\":[261.2217712402344,298],\"flags\":{\"collapsed\":true},\"order\":26,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":3688},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":3727},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":18},{\"name\":\"width\",\"type\":\"INT\",\"pos\":[10,160.00003051757812],\"widget\":{\"name\":\"width\"},\"link\":3732},{\"name\":\"height\",\"type\":\"INT\",\"pos\":[10,184.00003051757812],\"widget\":{\"name\":\"height\"},\"link\":3733}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[2983,3710],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":[3709],\"slot_index\":1},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[],\"slot_index\":2},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[1398],\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":[],\"slot_index\":4},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":[],\"slot_index\":5}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[\"false\",1344,768,\"red\",false,\"16_channels\"]},{\"id\":1371,\"type\":\"Image Repeat Tile To Size\",\"pos\":[13390,500],\"size\":[210,146],\"flags\":{\"collapsed\":true},\"order\":23,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":3726},{\"name\":\"width\",\"type\":\"INT\",\"pos\":[10,36],\"widget\":{\"name\":\"width\"},\"link\":3730},{\"name\":\"height\",\"type\":\"INT\",\"pos\":[10,60],\"widget\":{\"name\":\"height\"},\"link\":3731}],\"outputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"links\":[3727,3728],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"Image Repeat Tile To 
Size\"},\"widgets_values\":[1024,1024,true]},{\"id\":1380,\"type\":\"SetImageSize\",\"pos\":[13380,320],\"size\":[210,102],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":[3730,3732],\"slot_index\":0},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":[3731,3733],\"slot_index\":1}],\"properties\":{\"Node name for S&R\":\"SetImageSize\"},\"widgets_values\":[1344,768]},{\"id\":1377,\"type\":\"Image Comparer (rgthree)\",\"pos\":[15742.4619140625,-253.3526153564453],\"size\":[461.9190368652344,413.5953369140625],\"flags\":{},\"order\":35,\"mode\":0,\"inputs\":[{\"name\":\"image_a\",\"type\":\"IMAGE\",\"dir\":3,\"link\":3720},{\"name\":\"image_b\",\"type\":\"IMAGE\",\"dir\":3,\"link\":3729}],\"outputs\":[],\"properties\":{\"comparer_mode\":\"Slide\"},\"widgets_values\":[[{\"name\":\"A\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_clqis_00009_.png&type=temp&subfolder=&rand=0.8606788093916207\"},{\"name\":\"B\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_clqis_00010_.png&type=temp&subfolder=&rand=0.7775594190958295\"}]]},{\"id\":908,\"type\":\"VAEDecode\",\"pos\":[15217.7802734375,-312.1965637207031],\"size\":[210,46],\"flags\":{\"collapsed\":true},\"order\":33,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":3469},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":2696}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2697,3720],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[]},{\"id\":1383,\"type\":\"Note\",\"pos\":[14428.40234375,580.1749877929688],\"size\":[261.9539489746094,88],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Samplers like res_2s in this cycling node will also work and are faster. res_2m and res_3m are even faster, but sometimes the effect takes longer in wall time to fully kick in.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1384,\"type\":\"Note\",\"pos\":[14793.0322265625,518.4120483398438],\"size\":[261.9539489746094,88],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"res_2m or res_3m can be used here instead and are faster, but are less likely to fully clean up lingering artifacts.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1385,\"type\":\"Note\",\"pos\":[14398.345703125,768.2096557617188],\"size\":[261.9539489746094,88],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"method = AdaIN is faster and uses less memory, but is less accurate. 
Some prefer the effect.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1328,\"type\":\"ClownOptions_SDE_Beta\",\"pos\":[14186.4755859375,-132.6126251220703],\"size\":[315,266],\"flags\":{\"collapsed\":true},\"order\":4,\"mode\":0,\"inputs\":[{\"name\":\"etas\",\"localized_name\":\"etas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"etas_substep\",\"localized_name\":\"etas_substep\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[3707],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_SDE_Beta\"},\"widgets_values\":[\"gaussian\",\"gaussian\",\"hard\",\"hard\",0.5,0.75,-1,\"fixed\"]},{\"id\":1381,\"type\":\"Note\",\"pos\":[13881.6279296875,-217.62835693359375],\"size\":[261.9539489746094,88],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Increase or decrease \\\"steps_to_run\\\" in ClownsharKSampler to change the effective denoise level.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1382,\"type\":\"Note\",\"pos\":[14718.0498046875,-295.4144592285156],\"size\":[268.1851806640625,124.49711608886719],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Increasing cycles will increase the amount of change, but take longer.\\n\\nCycles will rerun the same step over and over, forwards and backwards, iteratively refining an image at a controlled noise level.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1387,\"type\":\"ReFluxPatcher\",\"pos\":[13262.294921875,-130.79653930664062],\"size\":[210,82],\"flags\":{},\"order\":15,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":3736}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[3737],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ReFluxPatcher\"},\"widgets_values\":[\"float64\",true]},{\"id\":1386,\"type\":\"UnetLoaderGGUF\",\"pos\":[12817.208984375,-323.9640808105469],\"size\":[315,58],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MODEL\",\"localized_name\":\"MODEL\",\"type\":\"MODEL\",\"links\":[3736],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"UnetLoaderGGUF\"},\"widgets_values\":[\"flux1-dev-Q4_K_S.gguf\"]},{\"id\":1389,\"type\":\"VAELoader\",\"pos\":[12824.330078125,-56.021827697753906],\"size\":[315,58],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"VAE\",\"localized_name\":\"VAE\",\"type\":\"VAE\",\"links\":[3739],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"VAELoader\"},\"widgets_values\":[\"ae.sft\"]},{\"id\":980,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[14378.255859375,-64.39308166503906],\"size\":[340.20001220703125,570],\"flags\":{},\"order\":31,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":3626},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":3627},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":3578},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":3604},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":3533},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":3707},{\"name\":\"options 3\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3698],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_2s\",1,1,\"resample\",true]},{\"id\":981,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[14758.255859375,-64.39308166503906],\"size\":[340.20001220703125,510],\"flags\":{},\"order\":32,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":3698},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3469],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_2s\",-1,1,\"resample\",true]},{\"id\":1388,\"type\":\"DualCLIPLoaderGGUF\",\"pos\":[12819.8798828125,-213.58253479003906],\"size\":[315,106],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"CLIP\",\"localized_name\":\"CLIP\",\"type\":\"CLIP\",\"links\":[3738],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"DualCLIPLoaderGGUF\"},\"widgets_values\":[\"clip_l_flux.safetensors\",\"t5xxl_fp16.safetensors\",\"flux\"]},{\"id\":907,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[14008.255859375,-64.39308166503906],\"size\":[340.55120849609375,666.8208618164062],\"flags\":{},\"order\":30,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":2692},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":3602},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2882},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2983},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":3708},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3578],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[0.5,\"multistep/res_2m\",\"beta57\",20,14,1,1,201,\"fixed\",\"unsample\",true]},{\"id\":1333,\"type\":\"CLIPTextEncode\",\"pos\":[13688.255859375,-44.393089294433594],\"size\":[280.6252746582031,164.06936645507812],\"flags\":{\"collapsed\":false},\"order\":22,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":3581}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[3602,3626],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"black and white anime cartoon of the inside of a car driving down a creepy road\"]},{\"id\":1374,\"type\":\"LoadImage\",\"pos\":[12805.896484375,167.56053161621094],\"size\":[315,314],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3725],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"title\":\"Load Image (Style Guide)\",\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"ComfyUI_14651_.png\",\"image\"]},{\"id\":1373,\"type\":\"LoadImage\",\"pos\":[12810.2314453125,534.0346069335938],\"size\":[315,314],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3721],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"title\":\"Load Image (Composition)\",\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"pasted/image (476).png\",\"image\"]},{\"id\":1362,\"type\":\"PreviewImage\",\"pos\":[13380,620],\"size\":[210,246],\"flags\":{},\"order\":25,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":3682}],\"outputs\":[],\"properties\":{\"Node name for 
S&R\":\"PreviewImage\"},\"widgets_values\":[]},{\"id\":1390,\"type\":\"Note\",\"pos\":[13148.0439453125,257.643310546875],\"size\":[210,88],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Color Match SOMETIMES helps accelerate style transfer.\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1318,\"type\":\"ClownGuide_Beta\",\"pos\":[13828.255859375,675.60693359375],\"size\":[263.102783203125,290],\"flags\":{},\"order\":27,\"mode\":4,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":3710},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[3699,3708],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"inversion\",false,false,0.7,1,\"constant\",0,-1,false]},{\"id\":1376,\"type\":\"Note\",\"pos\":[13710.3271484375,473.56817626953125],\"size\":[265.1909484863281,137.36415100097656],\"flags\":{},\"order\":13,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Increase or decrease weight in ClownGuide to alter adherence to the input image.\\n\\nFor now, set to low weights or bypass if using any model except HiDream. The HiDream code was adapted so that this composition guide doesn't fight the style guide. Others will be added soon.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1317,\"type\":\"ClownOptions_Cycles_Beta\",\"pos\":[14418.0478515625,-325.06365966796875],\"size\":[265.2884826660156,202],\"flags\":{},\"order\":14,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[3533],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_Cycles_Beta\"},\"widgets_values\":[10,1,-1,\"none\",-1,1,false]},{\"id\":1350,\"type\":\"ColorMatch\",\"pos\":[13380,160],\"size\":[210,102],\"flags\":{\"collapsed\":false},\"order\":24,\"mode\":0,\"inputs\":[{\"name\":\"image_ref\",\"localized_name\":\"image_ref\",\"type\":\"IMAGE\",\"link\":3728},{\"name\":\"image_target\",\"localized_name\":\"image_target\",\"type\":\"IMAGE\",\"link\":3724}],\"outputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"links\":[3682,3688],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ColorMatch\"},\"widgets_values\":[\"mkl\",0]}],\"links\":[[18,14,0,7,4,\"VAE\"],[1395,13,0,431,0,\"MODEL\"],[1398,7,3,431,1,\"LATENT\"],[2692,431,0,907,0,\"MODEL\"],[2696,14,0,908,1,\"VAE\"],[2697,908,0,909,0,\"IMAGE\"],[2881,490,0,970,0,\"CLIP\"],[2882,970,0,907,2,\"CONDITIONING\"],[2983,7,0,907,3,\"LATENT\"],[3469,981,0,908,0,\"LATENT\"],[3533,1317,0,980,6,\"OPTIONS\"],[3578,907,0,980,4,\"LATENT\"],[3581,490,0,1333,0,\"CLIP\"],[3602,1333,0,907,1,\"CONDITIONING\"],[3604,1308,0,980,5,\"GUIDES\"],[3626,1333,0,980,1,\"CONDITIONING\"],[3627,970,0,980,2,\"CONDITIONING\"],[3682,1350,0,1362,0,\"IMAGE\"],[3688,1350,0,7,0,\"IMAGE\"],[3698,980,0,981,4,\"LATENT\"],[3699,1318,0,1308,3,\"GUIDES\"],[3707,1328,0,980,7,\"OPTIONS\"],[3708,1318,0,907,5,\"GUIDES\"],[3709,7,1,1308,0,\"LATENT\"],[3710,7,0,1318,0,\"LATENT\"],[3720,908,0,1377,0,\"IMAGE\"],[3721,1373,0,1378,0,\"*\"],[3724,1378,0,1350,1,\"IMAGE\"],[3725,1374,0,1379,0,\"*\"],[3726,1379,0,1371,0,\"IMAGE\"],[3727,1371,0,7,1,\"IMAGE\"],[3728,1371,0,1350,0,\"IMAGE\"],[3729,1378,0,1377,1,\"IMAGE\"],[3730,1380,0,1371,1,\"INT\"],[3731,1380,1,1371,2,\"INT\"],[3732,1380,0,7,5,\"INT\"],[3733,1380,1,7,6,\"INT\"],[3736,1386,0,1387,0,\"MODEL\"],[3737,1387,0,13,0,\"*\"],[3738,1388,0,490,0,\"*\"],[3739,1389,0,14,0,\"*\"]],\"groups\":[{\"id\":1,\"title\":\"Model Loaders\",\"bounding\":[12796.72265625,-401.9004211425781,822.762451171875,436.0693359375],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":2,\"title\":\"Sampling\",\"bounding\":[13652.6533203125,-402.70721435546875,1470.8076171875,1409.0289306640625],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":3,\"title\":\"Input Prep\",\"bounding\":[12797.1396484375,77.69412231445312,817.4218139648438,820.6239624023438],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":4,\"title\":\"Save and Compare\",\"bounding\":[15180.705078125,-399.09112548828125,1050.6468505859375,615.8845825195312],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}}],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.4379222522564015,\"offset\":[-11124.689104031433,546.0824398349012]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"ue_links\":[],\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true},\"version\":0.4}"
  },
  {
    "path": "example_workflows/flux upscale thumbnail large multistage.json",
    "content": "{\"last_node_id\":431,\"last_link_id\":1176,\"nodes\":[{\"id\":361,\"type\":\"CLIPVisionEncode\",\"pos\":[860,820],\"size\":[253.60000610351562,78],\"flags\":{},\"order\":17,\"mode\":0,\"inputs\":[{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"link\":1004},{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":1107}],\"outputs\":[{\"name\":\"CLIP_VISION_OUTPUT\",\"localized_name\":\"CLIP_VISION_OUTPUT\",\"type\":\"CLIP_VISION_OUTPUT\",\"links\":[1006],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPVisionEncode\"},\"widgets_values\":[\"center\"]},{\"id\":364,\"type\":\"CLIPTextEncode\",\"pos\":[899.5093383789062,952.8309936523438],\"size\":[210,88],\"flags\":{},\"order\":14,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1007}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[1008,1055],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"\"]},{\"id\":369,\"type\":\"ClownGuide_Style_Beta\",\"pos\":[1138.06640625,1574.328857421875],\"size\":[231.30213928222656,286],\"flags\":{},\"order\":25,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":1101},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[1099],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Style_Beta\"},\"widgets_values\":[\"positive\",\"WCT\",1,1,\"constant\",0,-1,false]},{\"id\":374,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[2403.98583984375,1081.333740234375],\"size\":[274.9878234863281,528.6721801757812],\"flags\":{},\"order\":30,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1134},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1097},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1088],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",-1,1,\"resample\",true]},{\"id\":372,\"type\":\"SaveImage\",\"pos\":[2740,1080],\"size\":[442.38494873046875,530.0809936523438],\"flags\":{},\"order\":32,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":1030}],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ComfyUI\"]},{\"id\":355,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[1134.0809326171875,1057.9874267578125],\"size\":[260.3999938964844,126],\"flags\":{},\"order\":24,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1047},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":1111}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1024],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ModelSamplingAdvancedResolution\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":368,\"type\":\"ReFluxPatcher\",\"pos\":[897.4150390625,1095.9840087890625],\"size\":[210,82],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1022}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1047],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ReFluxPatcher\"},\"widgets_values\":[\"float32\",true]},{\"id\":349,\"type\":\"FluxLoader\",\"pos\":[554.6767578125,1099.277099609375],\"size\":[315,282],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1022,1144],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[1007,1137],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[1029,1038,1058,1155,1164,1168],\"slot_index\":2},{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"links\":[1004,1135],\"slot_index\":3},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"links\":[1009,1172]}],\"properties\":{\"Node name for S&R\":\"FluxLoader\"},\"widgets_values\":[\"colossusProjectFlux_v42AIO.safetensors\",\"default\",\".use_ckpt_clip\",\".none\",\".use_ckpt_vae\",\"sigclip_vision_patch14_384.safetensors\",\"flux1-redux-dev.safetensors\"]},{\"id\":373,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[1740,1080],\"size\":[272.9876403808594,526.665771484375],\"flags\":{},\"order\":28,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1118},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1031},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":1099},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":1044},{\"name\":\"options 
2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1053],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",1,1,\"resample\",true]},{\"id\":370,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1417.3414306640625,1078.0023193359375],\"size\":[277.65570068359375,627.99951171875],\"flags\":{},\"order\":27,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":1024},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1117},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1102},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1031],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",\"beta57\",30,14,1,1,0,\"fixed\",\"unsample\",true]},{\"id\":380,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[2078.66015625,1080.6669921875],\"size\":[263.6514892578125,527.99951171875],\"flags\":{},\"order\":29,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1053},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":1051},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1097],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",1,1,\"resample\",true]},{\"id\":403,\"type\":\"Note\",\"pos\":[2098.053466796875,680.7237548828125],\"size\":[215.7804412841797,88],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Raise cycles here if you see halos. It doesn't hurt to go as high as 20. 
(About 20 seconds on a 4090 at 1024x1024).\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":402,\"type\":\"Note\",\"pos\":[1755.3779296875,678.1484985351562],\"size\":[241.524658203125,132.7487030029297],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Lower cycles here if you see halos.\\n\\nThese step(s)/cycle(s) (that use the ClownGuide Style node) are needed to prevent blurring when upscaling tiny thumbnail images.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":382,\"type\":\"ControlNetApplyAdvanced\",\"pos\":[1440,830],\"size\":[210,186],\"flags\":{},\"order\":23,\"mode\":0,\"inputs\":[{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"link\":1108},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"link\":1055},{\"name\":\"control_net\",\"localized_name\":\"control_net\",\"type\":\"CONTROL_NET\",\"link\":1056},{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":1112},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":1058}],\"outputs\":[{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"links\":[1118],\"slot_index\":0},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ControlNetApplyAdvanced\"},\"widgets_values\":[1,0,1]},{\"id\":404,\"type\":\"Image Repeat Tile To Size\",\"pos\":[899.620361328125,1259.9044189453125],\"size\":[210,106],\"flags\":{},\"order\":18,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":1123}],\"outputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"links\":[1124],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"Image Repeat Tile To Size\"},\"widgets_values\":[1024,1024,false]},{\"id\":375,\"type\":\"VAEEncodeAdvanced\",\"pos\":[1140,1240],\"size\":[228.90342712402344,278],\"flags\":{},\"order\":21,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":1113},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":1124},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":1038}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[1102,1111],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":[1101],\"slot_index\":1},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":null},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":null,\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"VAEEncodeAdvanced\"},\"widgets_values\":[\"false\",1024,1024,\"red\",false,\"16_channels\"]},{\"id\":401,\"type\":\"LoadImage\",\"pos\":[608.10400390625,1453.0382080078125],\"size\":[315,314],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1122],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"pasted/image (579).png\",\"image\"]},{\"id\":359,\"type\":\"ControlNetLoader\",\"pos\":[596.1650390625,977.5371704101562],\"size\":[270.0880432128906,58],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"CONTROL_NET\",\"localized_name\":\"CONTROL_NET\",\"type\":\"CONTROL_NET\",\"links\":[1056,1162],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ControlNetLoader\"},\"widgets_values\":[\"flux_tile.safetensors\"]},{\"id\":362,\"type\":\"StyleModelApply\",\"pos\":[1138.0474853515625,827.8412475585938],\"size\":[270.06890869140625,122],\"flags\":{},\"order\":20,\"mode\":0,\"inputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"link\":1008},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"link\":1009},{\"name\":\"clip_vision_output\",\"localized_name\":\"clip_vision_output\",\"type\":\"CLIP_VISION_OUTPUT\",\"link\":1006}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[1108,1117,1134],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"StyleModelApply\"},\"widgets_values\":[1,\"multiply\"]},{\"id\":408,\"type\":\"CLIPVisionEncode\",\"pos\":[3300,810],\"size\":[253.60000610351562,78],\"flags\":{},\"order\":19,\"mode\":0,\"inputs\":[{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"link\":1135},{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":1176}],\"outputs\":[{\"name\":\"CLIP_VISION_OUTPUT\",\"localized_name\":\"CLIP_VISION_OUTPUT\",\"type\":\"CLIP_VISION_OUTPUT\",\"links\":[1173],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPVisionEncode\"},\"widgets_values\":[\"center\"]},{\"id\":409,\"type\":\"CLIPTextEncode\",\"pos\":[3340,940],\"size\":[210,88],\"flags\":{},\"order\":15,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1137}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[1161,1171],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"\"]},{\"id\":410,\"type\":\"ClownGuide_Style_Beta\",\"pos\":[3570,1560],\"size\":[231.30213928222656,286],\"flags\":{},\"order\":38,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":1138},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[1147],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ClownGuide_Style_Beta\"},\"widgets_values\":[\"positive\",\"WCT\",1,1,\"constant\",0,-1,false]},{\"id\":411,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[4840,1070],\"size\":[274.9878234863281,528.6721801757812],\"flags\":{},\"order\":42,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1139},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1140},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1154],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",-1,1,\"resample\",true]},{\"id\":412,\"type\":\"SaveImage\",\"pos\":[5180,1070],\"size\":[442.38494873046875,530.0809936523438],\"flags\":{},\"order\":44,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":1141}],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ComfyUI\"]},{\"id\":413,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[3570,1050],\"size\":[260.3999938964844,126],\"flags\":{},\"order\":37,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1142},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":1143}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1149],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ModelSamplingAdvancedResolution\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":414,\"type\":\"ReFluxPatcher\",\"pos\":[3330,1080],\"size\":[210,82],\"flags\":{},\"order\":13,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1144}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1142],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ReFluxPatcher\"},\"widgets_values\":[\"float32\",true]},{\"id\":415,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[4180,1070],\"size\":[272.9876403808594,526.665771484375],\"flags\":{},\"order\":40,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1145},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1146},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":1147},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":1148},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1152],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",1,1,\"resample\",true]},{\"id\":417,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[4510,1070],\"size\":[263.6514892578125,527.99951171875],\"flags\":{},\"order\":41,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1152},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":1153},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1140],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",1,1,\"resample\",true]},{\"id\":418,\"type\":\"Note\",\"pos\":[4530,670],\"size\":[215.7804412841797,88],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Raise cycles here if you see halos. It doesn't hurt to go as high as 20. 
(About 20 seconds on a 4090 at 1024x1024).\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":419,\"type\":\"Note\",\"pos\":[4190,670],\"size\":[241.524658203125,132.7487030029297],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Lower cycles here if you see halos.\\n\\nThese step(s)/cycle(s) (that use the ClownGuide Style node) are needed to prevent blurring when upscaling tiny thumbnail images.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":420,\"type\":\"VAEDecode\",\"pos\":[5180,960],\"size\":[140,46],\"flags\":{},\"order\":43,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":1154},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":1155}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1141,1169],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.26\",\"widget_ue_connectable\":{}},\"widgets_values\":[]},{\"id\":421,\"type\":\"Reroute\",\"pos\":[3470,1450],\"size\":[75,26],\"flags\":{},\"order\":34,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1174}],\"outputs\":[{\"name\":\"\",\"type\":\"IMAGE\",\"links\":[1165,1166,1170]}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":425,\"type\":\"ControlNetApplyAdvanced\",\"pos\":[3880,820],\"size\":[210,186],\"flags\":{},\"order\":26,\"mode\":0,\"inputs\":[{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"link\":1160},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"link\":1161},{\"name\":\"control_net\",\"localized_name\":\"control_net\",\"type\":\"CONTROL_NET\",\"link\":1162},{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":1175},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":1164}],\"outputs\":[{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"links\":[1145],\"slot_index\":0},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ControlNetApplyAdvanced\"},\"widgets_values\":[1,0,1]},{\"id\":429,\"type\":\"Image Comparer 
(rgthree)\",\"pos\":[5170,1650],\"size\":[446.2193603515625,494.8704528808594],\"flags\":{},\"order\":45,\"mode\":0,\"inputs\":[{\"name\":\"image_a\",\"type\":\"IMAGE\",\"dir\":3,\"link\":1169},{\"name\":\"image_b\",\"type\":\"IMAGE\",\"dir\":3,\"link\":1170}],\"outputs\":[],\"properties\":{\"comparer_mode\":\"Slide\"},\"widgets_values\":[[{\"name\":\"A\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_txgkm_00005_.png&type=temp&subfolder=&rand=0.44944358112719196\"},{\"name\":\"B\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_txgkm_00006_.png&type=temp&subfolder=&rand=0.15903319456700227\"}]]},{\"id\":430,\"type\":\"StyleModelApply\",\"pos\":[3570,820],\"size\":[270.06890869140625,122],\"flags\":{},\"order\":22,\"mode\":0,\"inputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"link\":1171},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"link\":1172},{\"name\":\"clip_vision_output\",\"localized_name\":\"clip_vision_output\",\"type\":\"CLIP_VISION_OUTPUT\",\"link\":1173}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[1139,1150,1160],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"StyleModelApply\"},\"widgets_values\":[1,\"multiply\"]},{\"id\":387,\"type\":\"Image Comparer (rgthree)\",\"pos\":[2732.6875,1661.954833984375],\"size\":[446.2193603515625,494.8704528808594],\"flags\":{},\"order\":33,\"mode\":0,\"inputs\":[{\"name\":\"image_a\",\"type\":\"IMAGE\",\"dir\":3,\"link\":1068},{\"name\":\"image_b\",\"type\":\"IMAGE\",\"dir\":3,\"link\":1115}],\"outputs\":[],\"properties\":{\"comparer_mode\":\"Slide\"},\"widgets_values\":[[{\"name\":\"A\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_lvxiv_00017_.png&type=temp&subfolder=&rand=0.23193425033461956\"},{\"name\":\"B\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_lvxiv_00018_.png&type=temp&subfolder=&rand=0.4600603671403143\"}]]},{\"id\":416,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[3850,1070],\"size\":[277.65570068359375,627.99951171875],\"flags\":{},\"order\":39,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":1149},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1150},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1151},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1146],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",\"beta57\",30,14,1,1,0,\"fixed\",\"unsample\",true]},{\"id\":427,\"type\":\"Image Repeat Tile To 
Size\",\"pos\":[3340,1250],\"size\":[210,106],\"flags\":{},\"order\":35,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":1165}],\"outputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"links\":[1167],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"Image Repeat Tile To Size\"},\"widgets_values\":[1536,1536,false]},{\"id\":428,\"type\":\"VAEEncodeAdvanced\",\"pos\":[3580,1230],\"size\":[228.90342712402344,278],\"flags\":{},\"order\":36,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":1166},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":1167},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":1168}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[1143,1151],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":[1138],\"slot_index\":1},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":null},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":null,\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\"},\"widgets_values\":[\"false\",1536,1536,\"red\",false,\"16_channels\"]},{\"id\":371,\"type\":\"VAEDecode\",\"pos\":[2741.197265625,974.4011840820312],\"size\":[140,46],\"flags\":{},\"order\":31,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":1088},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":1029}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1030,1068,1174],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.26\",\"widget_ue_connectable\":{}},\"widgets_values\":[]},{\"id\":378,\"type\":\"ClownOptions_Cycles_Beta\",\"pos\":[1768.675537109375,881.3336791992188],\"size\":[210,130],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[1044]}],\"properties\":{\"Node name for S&R\":\"ClownOptions_Cycles_Beta\"},\"widgets_values\":[5,1,0.5,1]},{\"id\":381,\"type\":\"ClownOptions_Cycles_Beta\",\"pos\":[2103.203857421875,881.467041015625],\"size\":[210,130],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[1051]}],\"properties\":{\"Node name for 
S&R\":\"ClownOptions_Cycles_Beta\"},\"widgets_values\":[5,1,0.5,1]},{\"id\":426,\"type\":\"ClownOptions_Cycles_Beta\",\"pos\":[4200,870],\"size\":[210,130],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[1148]}],\"properties\":{\"Node name for S&R\":\"ClownOptions_Cycles_Beta\"},\"widgets_values\":[5,1,0.5,1]},{\"id\":424,\"type\":\"ClownOptions_Cycles_Beta\",\"pos\":[4540,870],\"size\":[210,130],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[1153]}],\"properties\":{\"Node name for S&R\":\"ClownOptions_Cycles_Beta\"},\"widgets_values\":[5,1,0.5,1]},{\"id\":398,\"type\":\"Reroute\",\"pos\":[1034.667724609375,1458.654541015625],\"size\":[75,26],\"flags\":{},\"order\":16,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1122}],\"outputs\":[{\"name\":\"\",\"type\":\"IMAGE\",\"links\":[1107,1112,1113,1115,1123,1175,1176],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":431,\"type\":\"Note\",\"pos\":[356.2033386230469,1583.169677734375],\"size\":[210,88],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Used a 384x384 image.\\n\\nAny size will work.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"}],\"links\":[[141,151,0,8,1,\"VAE\"],[142,151,0,72,1,\"VAE\"],[143,151,0,35,1,\"VAE\"],[144,151,0,154,7,\"VAE\"],[159,151,0,72,1,\"VAE\"],[160,151,0,157,7,\"VAE\"],[161,151,0,8,1,\"VAE\"],[162,151,0,154,7,\"VAE\"],[163,151,0,72,1,\"VAE\"],[164,151,0,8,1,\"VAE\"],[165,151,0,154,7,\"VAE\"],[171,151,0,8,1,\"VAE\"],[172,151,0,72,1,\"VAE\"],[173,151,0,154,7,\"VAE\"],[174,151,0,157,7,\"VAE\"],[176,151,0,8,1,\"VAE\"],[177,151,0,72,1,\"VAE\"],[178,151,0,154,7,\"VAE\"],[179,151,0,157,7,\"VAE\"],[195,151,0,8,1,\"VAE\"],[196,151,0,72,1,\"VAE\"],[197,151,0,154,7,\"VAE\"],[198,151,0,157,7,\"VAE\"],[199,151,0,160,7,\"VAE\"],[200,151,0,8,1,\"VAE\"],[201,151,0,72,1,\"VAE\"],[202,151,0,154,7,\"VAE\"],[203,151,0,157,7,\"VAE\"],[204,151,0,160,7,\"VAE\"],[217,151,0,8,1,\"VAE\"],[218,151,0,72,1,\"VAE\"],[219,151,0,154,7,\"VAE\"],[220,151,0,157,7,\"VAE\"],[221,151,0,160,7,\"VAE\"],[222,151,0,8,1,\"VAE\"],[223,151,0,72,1,\"VAE\"],[224,151,0,157,7,\"VAE\"],[225,151,0,8,1,\"VAE\"],[226,151,0,72,1,\"VAE\"],[227,151,0,157,7,\"VAE\"],[250,151,0,62,1,\"VAE\"],[251,151,0,157,7,\"VAE\"],[252,151,0,8,1,\"VAE\"],[253,151,0,72,1,\"VAE\"],[254,151,0,62,1,\"VAE\"],[255,151,0,157,7,\"VAE\"],[256,151,0,8,1,\"VAE\"],[257,151,0,72,1,\"VAE\"],[258,151,0,160,7,\"VAE\"],[271,151,0,62,1,\"VAE\"],[272,151,0,157,7,\"VAE\"],[273,151,0,8,1,\"VAE\"],[274,151,0,72,1,\"VAE\"],[275,151,0,160,7,\"VAE\"],[276,151,0,154,7,\"VAE\"],[277,151,0,62,1,\"VAE\"],[278,151,0,157,7,\"VAE\"],[279,151,0,8,1,\"VAE\"],[280,151,0,72,1,\"VAE\"],[281,151,0,160,7,\"VAE\"],[282,151,0,154,7,\"VAE\"],[294,151,0,157,7,\"VAE\"],[295,151,0,72,1,\"VAE\"],[296,151,0,160,7,\"VAE\"],[297,151,0,154,7,\"VAE\"],[298,151,0,8,1,\"VAE\"],[299,151,0,313,1,\"VAE\"],[300,151,0,62,1,\"VAE\"],[301,151,0,157,7,\"VAE\"],[302,151,0,72,1,\"VAE\"],[303,151,0,160,7,\"VAE\"],[304,151,0,8,1,\"VAE\"],[305,151,0,313,1,\"VAE\"],[306,151,0,62,1,\"VAE\"],[307,151,0,154,7,\"VAE\"],[309,151,0,157,7,\"VAE\"],[310,1
51,0,72,1,\"VAE\"],[311,151,0,160,7,\"VAE\"],[312,151,0,8,1,\"VAE\"],[313,151,0,313,1,\"VAE\"],[314,151,0,62,1,\"VAE\"],[315,151,0,154,7,\"VAE\"],[316,151,0,157,7,\"VAE\"],[317,151,0,72,1,\"VAE\"],[318,151,0,160,7,\"VAE\"],[319,151,0,8,1,\"VAE\"],[320,151,0,313,1,\"VAE\"],[321,151,0,62,1,\"VAE\"],[322,151,0,154,7,\"VAE\"],[327,151,0,157,7,\"VAE\"],[328,151,0,72,1,\"VAE\"],[329,151,0,8,1,\"VAE\"],[330,151,0,313,1,\"VAE\"],[331,151,0,62,1,\"VAE\"],[332,151,0,154,7,\"VAE\"],[333,151,0,160,7,\"VAE\"],[343,151,0,157,7,\"VAE\"],[344,151,0,72,1,\"VAE\"],[345,151,0,8,1,\"VAE\"],[346,151,0,313,1,\"VAE\"],[347,151,0,62,1,\"VAE\"],[348,151,0,160,7,\"VAE\"],[349,151,0,154,7,\"VAE\"],[351,151,0,157,7,\"VAE\"],[352,151,0,72,1,\"VAE\"],[353,151,0,8,1,\"VAE\"],[354,151,0,313,1,\"VAE\"],[355,151,0,62,1,\"VAE\"],[356,151,0,160,7,\"VAE\"],[357,151,0,154,7,\"VAE\"],[363,151,0,157,7,\"VAE\"],[364,151,0,72,1,\"VAE\"],[365,151,0,8,1,\"VAE\"],[366,151,0,160,7,\"VAE\"],[367,151,0,154,7,\"VAE\"],[368,151,0,62,1,\"VAE\"],[370,151,0,157,7,\"VAE\"],[371,151,0,72,1,\"VAE\"],[372,151,0,8,1,\"VAE\"],[373,151,0,160,7,\"VAE\"],[374,151,0,154,7,\"VAE\"],[375,151,0,62,1,\"VAE\"],[377,151,0,157,7,\"VAE\"],[378,151,0,72,1,\"VAE\"],[379,151,0,8,1,\"VAE\"],[380,151,0,160,7,\"VAE\"],[381,151,0,154,7,\"VAE\"],[382,151,0,62,1,\"VAE\"],[383,151,0,157,7,\"VAE\"],[384,151,0,72,1,\"VAE\"],[385,151,0,8,1,\"VAE\"],[386,151,0,160,7,\"VAE\"],[387,151,0,154,7,\"VAE\"],[388,151,0,62,1,\"VAE\"],[391,151,0,157,7,\"VAE\"],[392,151,0,72,1,\"VAE\"],[393,151,0,8,1,\"VAE\"],[394,151,0,160,7,\"VAE\"],[395,151,0,154,7,\"VAE\"],[396,151,0,62,1,\"VAE\"],[402,151,0,157,7,\"VAE\"],[403,151,0,72,1,\"VAE\"],[404,151,0,8,1,\"VAE\"],[405,151,0,160,7,\"VAE\"],[406,151,0,154,7,\"VAE\"],[407,151,0,62,1,\"VAE\"],[408,151,0,157,7,\"VAE\"],[409,151,0,72,1,\"VAE\"],[410,151,0,8,1,\"VAE\"],[411,151,0,160,7,\"VAE\"],[412,151,0,154,7,\"VAE\"],[413,151,0,62,1,\"VAE\"],[421,151,0,157,7,\"VAE\"],[422,151,0,72,1,\"VAE\"],[423,151,0,8,1,\"VAE\"],[424,151,0,160,7,\"VAE\"],[425,151,0,154,7,\"VAE\"],[426,151,0,62,1,\"VAE\"],[427,151,0,157,7,\"VAE\"],[428,151,0,72,1,\"VAE\"],[429,151,0,8,1,\"VAE\"],[430,151,0,160,7,\"VAE\"],[431,151,0,154,7,\"VAE\"],[432,151,0,62,1,\"VAE\"],[1004,349,3,361,0,\"CLIP_VISION\"],[1006,361,0,362,2,\"CLIP_VISION_OUTPUT\"],[1007,349,1,364,0,\"CLIP\"],[1008,364,0,362,0,\"CONDITIONING\"],[1009,349,4,362,1,\"STYLE_MODEL\"],[1022,349,0,368,0,\"MODEL\"],[1024,355,0,370,0,\"MODEL\"],[1029,349,2,371,1,\"VAE\"],[1030,371,0,372,0,\"IMAGE\"],[1031,370,0,373,4,\"LATENT\"],[1038,349,2,375,4,\"VAE\"],[1044,378,0,373,6,\"OPTIONS\"],[1047,368,0,355,0,\"MODEL\"],[1051,381,0,380,6,\"OPTIONS\"],[1053,373,0,380,4,\"LATENT\"],[1055,364,0,382,1,\"CONDITIONING\"],[1056,359,0,382,2,\"CONTROL_NET\"],[1058,349,2,382,4,\"VAE\"],[1068,371,0,387,0,\"IMAGE\"],[1088,374,0,371,0,\"LATENT\"],[1097,380,0,374,4,\"LATENT\"],[1099,369,0,373,5,\"GUIDES\"],[1101,375,1,369,0,\"LATENT\"],[1102,375,0,370,3,\"LATENT\"],[1107,398,0,361,1,\"IMAGE\"],[1108,362,0,382,0,\"CONDITIONING\"],[1111,375,0,355,1,\"LATENT\"],[1112,398,0,382,3,\"IMAGE\"],[1113,398,0,375,0,\"IMAGE\"],[1115,398,0,387,1,\"IMAGE\"],[1117,362,0,370,1,\"CONDITIONING\"],[1118,382,0,373,1,\"CONDITIONING\"],[1122,401,0,398,0,\"*\"],[1123,398,0,404,0,\"IMAGE\"],[1124,404,0,375,1,\"IMAGE\"],[1134,362,0,374,1,\"CONDITIONING\"],[1135,349,3,408,0,\"CLIP_VISION\"],[1137,349,1,409,0,\"CLIP\"],[1138,428,1,410,0,\"LATENT\"],[1139,430,0,411,1,\"CONDITIONING\"],[1140,417,0,411,4,\"LATENT\"],[1141,420,0,412,0,\"IMAGE\"],[1142,414,0,413,0,\"MOD
EL\"],[1143,428,0,413,1,\"LATENT\"],[1144,349,0,414,0,\"MODEL\"],[1145,425,0,415,1,\"CONDITIONING\"],[1146,416,0,415,4,\"LATENT\"],[1147,410,0,415,5,\"GUIDES\"],[1148,426,0,415,6,\"OPTIONS\"],[1149,413,0,416,0,\"MODEL\"],[1150,430,0,416,1,\"CONDITIONING\"],[1151,428,0,416,3,\"LATENT\"],[1152,415,0,417,4,\"LATENT\"],[1153,424,0,417,6,\"OPTIONS\"],[1154,411,0,420,0,\"LATENT\"],[1155,349,2,420,1,\"VAE\"],[1160,430,0,425,0,\"CONDITIONING\"],[1161,409,0,425,1,\"CONDITIONING\"],[1162,359,0,425,2,\"CONTROL_NET\"],[1164,349,2,425,4,\"VAE\"],[1165,421,0,427,0,\"IMAGE\"],[1166,421,0,428,0,\"IMAGE\"],[1167,427,0,428,1,\"IMAGE\"],[1168,349,2,428,4,\"VAE\"],[1169,420,0,429,0,\"IMAGE\"],[1170,421,0,429,1,\"IMAGE\"],[1171,409,0,430,0,\"CONDITIONING\"],[1172,349,4,430,1,\"STYLE_MODEL\"],[1173,408,0,430,2,\"CLIP_VISION_OUTPUT\"],[1174,371,0,421,0,\"*\"],[1175,398,0,425,3,\"IMAGE\"],[1176,398,0,408,1,\"IMAGE\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.3109994191500252,\"offset\":[916.9662500305632,-478.4961303433991]},\"ue_links\":[{\"downstream\":157,\"downstream_slot\":7,\"upstream\":\"151\",\"upstream_slot\":0,\"controller\":64,\"type\":\"VAE\"},{\"downstream\":154,\"downstream_slot\":7,\"upstream\":\"151\",\"upstream_slot\":0,\"controller\":64,\"type\":\"VAE\"},{\"downstream\":72,\"downstream_slot\":1,\"upstream\":\"151\",\"upstream_slot\":0,\"controller\":64,\"type\":\"VAE\"},{\"downstream\":62,\"downstream_slot\":1,\"upstream\":\"151\",\"upstream_slot\":0,\"controller\":64,\"type\":\"VAE\"}],\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true,\"links_added_by_ue\":[959,960,961,962],\"frontendVersion\":\"1.18.6\"},\"version\":0.4}"
  },
  {
    "path": "example_workflows/flux upscale thumbnail large.json",
    "content": "{\"last_node_id\":408,\"last_link_id\":1127,\"nodes\":[{\"id\":369,\"type\":\"ClownGuide_Style_Beta\",\"pos\":[1138.06640625,1574.328857421875],\"size\":[231.30213928222656,286],\"flags\":{},\"order\":18,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":1101},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[1099],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Style_Beta\"},\"widgets_values\":[\"positive\",\"WCT\",1,1,\"constant\",0,-1,false]},{\"id\":374,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[2403.98583984375,1081.333740234375],\"size\":[274.9878234863281,528.6721801757812],\"flags\":{},\"order\":22,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1109},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1097},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1088],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",-1,1,\"resample\",true]},{\"id\":372,\"type\":\"SaveImage\",\"pos\":[2740,1080],\"size\":[442.38494873046875,530.0809936523438],\"flags\":{},\"order\":24,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":1030}],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ComfyUI\"]},{\"id\":355,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[1134.0809326171875,1057.9874267578125],\"size\":[260.3999938964844,126],\"flags\":{},\"order\":17,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1047},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":1111}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1024],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ModelSamplingAdvancedResolution\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":368,\"type\":\"ReFluxPatcher\",\"pos\":[897.4150390625,1095.9840087890625],\"size\":[210,82],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1022}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1047],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ReFluxPatcher\"},\"widgets_values\":[\"float32\",true]},{\"id\":349,\"type\":\"FluxLoader\",\"pos\":[554.6767578125,1099.277099609375],\"size\":[315,282],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1022],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[1007],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[1029,1038,1058],\"slot_index\":2},{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"links\":[1004],\"slot_index\":3},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"links\":[1009]}],\"properties\":{\"Node name for S&R\":\"FluxLoader\"},\"widgets_values\":[\"colossusProjectFlux_v42AIO.safetensors\",\"default\",\".use_ckpt_clip\",\".none\",\".use_ckpt_vae\",\"sigclip_vision_patch14_384.safetensors\",\"flux1-redux-dev.safetensors\"]},{\"id\":387,\"type\":\"Image Comparer (rgthree)\",\"pos\":[3228.67529296875,1082.0006103515625],\"size\":[502.8477478027344,526.1139526367188],\"flags\":{},\"order\":25,\"mode\":0,\"inputs\":[{\"name\":\"image_a\",\"type\":\"IMAGE\",\"dir\":3,\"link\":1068},{\"name\":\"image_b\",\"type\":\"IMAGE\",\"dir\":3,\"link\":1115}],\"outputs\":[],\"properties\":{\"comparer_mode\":\"Slide\"},\"widgets_values\":[[{\"name\":\"A\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_lvxiv_00003_.png&type=temp&subfolder=&rand=0.3715711256758052\"},{\"name\":\"B\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_lvxiv_00004_.png&type=temp&subfolder=&rand=0.9911994449338102\"}]]},{\"id\":373,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[1740,1080],\"size\":[272.9876403808594,526.665771484375],\"flags\":{},\"order\":20,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1118},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1031},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":1099},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":1044},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1053],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",1,1,\"resample\",true]},{\"id\":370,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1417.3414306640625,1078.0023193359375],\"size\":[277.65570068359375,627.99951171875],\"flags\":{},\"order\":19,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":1024},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1117},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1102},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1031],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",\"beta57\",30,14,1,1,0,\"fixed\",\"unsample\",true]},{\"id\":382,\"type\":\"ControlNetApplyAdvanced\",\"pos\":[1440,830],\"size\":[210,186],\"flags\":{},\"order\":16,\"mode\":0,\"inputs\":[{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"link\":1108},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"link\":1055},{\"name\":\"control_net\",\"localized_name\":\"control_net\",\"type\":\"CONTROL_NET\",\"link\":1056},{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":1112},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":1058}],\"outputs\":[{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"links\":[1118],\"slot_index\":0},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ControlNetApplyAdvanced\"},\"widgets_values\":[1,0,1]},{\"id\":380,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[2078.66015625,1080.6669921875],\"size\":[263.6514892578125,527.99951171875],\"flags\":{},\"order\":21,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1053},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":1051},{\"name\":\"options 
2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1097],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",1,1,\"resample\",true]},{\"id\":401,\"type\":\"LoadImage\",\"pos\":[660.8270874023438,1457.920166015625],\"size\":[315,314],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1122],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"pasted/image (579).png\",\"image\"]},{\"id\":371,\"type\":\"VAEDecode\",\"pos\":[2741.197265625,974.4011840820312],\"size\":[140,46],\"flags\":{},\"order\":23,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":1088},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":1029}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1030,1068],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.26\",\"widget_ue_connectable\":{}},\"widgets_values\":[]},{\"id\":378,\"type\":\"ClownOptions_Cycles_Beta\",\"pos\":[1768.675537109375,881.3336791992188],\"size\":[210,130],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[1044]}],\"properties\":{\"Node name for S&R\":\"ClownOptions_Cycles_Beta\"},\"widgets_values\":[5,1,0.5,1]},{\"id\":398,\"type\":\"Reroute\",\"pos\":[1034.667724609375,1458.654541015625],\"size\":[75,26],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1122}],\"outputs\":[{\"name\":\"\",\"type\":\"IMAGE\",\"links\":[1107,1112,1113,1115,1123],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":404,\"type\":\"Image Repeat Tile To Size\",\"pos\":[899.620361328125,1259.9044189453125],\"size\":[210,106],\"flags\":{},\"order\":13,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":1123}],\"outputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"links\":[1124],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"Image Repeat Tile To 
Size\"},\"widgets_values\":[1536,1536,false]},{\"id\":375,\"type\":\"VAEEncodeAdvanced\",\"pos\":[1140,1240],\"size\":[228.90342712402344,278],\"flags\":{},\"order\":15,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":1113},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":1124},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":1038}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[1102,1111],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":[1101],\"slot_index\":1},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":null},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":null,\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\"},\"widgets_values\":[\"false\",1536,1536,\"red\",false,\"16_channels\"]},{\"id\":381,\"type\":\"ClownOptions_Cycles_Beta\",\"pos\":[2103.203857421875,881.467041015625],\"size\":[210,130],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[1051]}],\"properties\":{\"Node name for S&R\":\"ClownOptions_Cycles_Beta\"},\"widgets_values\":[20,1,0.5,1]},{\"id\":403,\"type\":\"Note\",\"pos\":[2098.053466796875,680.7237548828125],\"size\":[215.7804412841797,88],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Raise cycles here if you see halos. It doesn't hurt to go as high as 20. Minimum of 5 recommended.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":402,\"type\":\"Note\",\"pos\":[1755.3779296875,678.1484985351562],\"size\":[241.524658203125,132.7487030029297],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Lower cycles here if you see halos. 
Minimum of 1 or 2 recommended.\\n\\nThese step(s)/cycle(s) (that use the ClownGuide Style node) are needed to prevent blurring when upscaling tiny thumbnail images.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":359,\"type\":\"ControlNetLoader\",\"pos\":[597.9067993164062,977.3353881835938],\"size\":[270.0880432128906,58],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"CONTROL_NET\",\"localized_name\":\"CONTROL_NET\",\"type\":\"CONTROL_NET\",\"links\":[1056],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ControlNetLoader\"},\"widgets_values\":[\"flux_tile.safetensors\"]},{\"id\":362,\"type\":\"StyleModelApply\",\"pos\":[1141.4669189453125,829.1477661132812],\"size\":[270.06890869140625,122],\"flags\":{},\"order\":14,\"mode\":0,\"inputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"link\":1008},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"link\":1009},{\"name\":\"clip_vision_output\",\"localized_name\":\"clip_vision_output\",\"type\":\"CLIP_VISION_OUTPUT\",\"link\":1006}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[1108,1109,1117],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"StyleModelApply\"},\"widgets_values\":[1,\"multiply\"]},{\"id\":361,\"type\":\"CLIPVisionEncode\",\"pos\":[862.2003784179688,825.134765625],\"size\":[253.60000610351562,78],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"link\":1004},{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":1107}],\"outputs\":[{\"name\":\"CLIP_VISION_OUTPUT\",\"localized_name\":\"CLIP_VISION_OUTPUT\",\"type\":\"CLIP_VISION_OUTPUT\",\"links\":[1006],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPVisionEncode\"},\"widgets_values\":[\"center\"]},{\"id\":364,\"type\":\"CLIPTextEncode\",\"pos\":[899.5093383789062,952.8309936523438],\"size\":[210,88],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1007}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[1008,1055],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"\"]},{\"id\":408,\"type\":\"Note\",\"pos\":[583.3265380859375,830.6437377929688],\"size\":[248.87789916992188,88],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Jasper's tile controlnet was used.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":407,\"type\":\"Note\",\"pos\":[424.7425537109375,1579.1385498046875],\"size\":[210,88],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Input image was 384x384.\\n\\nAny size can be 
used.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"}],\"links\":[[141,151,0,8,1,\"VAE\"],[142,151,0,72,1,\"VAE\"],[143,151,0,35,1,\"VAE\"],[144,151,0,154,7,\"VAE\"],[159,151,0,72,1,\"VAE\"],[160,151,0,157,7,\"VAE\"],[161,151,0,8,1,\"VAE\"],[162,151,0,154,7,\"VAE\"],[163,151,0,72,1,\"VAE\"],[164,151,0,8,1,\"VAE\"],[165,151,0,154,7,\"VAE\"],[171,151,0,8,1,\"VAE\"],[172,151,0,72,1,\"VAE\"],[173,151,0,154,7,\"VAE\"],[174,151,0,157,7,\"VAE\"],[176,151,0,8,1,\"VAE\"],[177,151,0,72,1,\"VAE\"],[178,151,0,154,7,\"VAE\"],[179,151,0,157,7,\"VAE\"],[195,151,0,8,1,\"VAE\"],[196,151,0,72,1,\"VAE\"],[197,151,0,154,7,\"VAE\"],[198,151,0,157,7,\"VAE\"],[199,151,0,160,7,\"VAE\"],[200,151,0,8,1,\"VAE\"],[201,151,0,72,1,\"VAE\"],[202,151,0,154,7,\"VAE\"],[203,151,0,157,7,\"VAE\"],[204,151,0,160,7,\"VAE\"],[217,151,0,8,1,\"VAE\"],[218,151,0,72,1,\"VAE\"],[219,151,0,154,7,\"VAE\"],[220,151,0,157,7,\"VAE\"],[221,151,0,160,7,\"VAE\"],[222,151,0,8,1,\"VAE\"],[223,151,0,72,1,\"VAE\"],[224,151,0,157,7,\"VAE\"],[225,151,0,8,1,\"VAE\"],[226,151,0,72,1,\"VAE\"],[227,151,0,157,7,\"VAE\"],[250,151,0,62,1,\"VAE\"],[251,151,0,157,7,\"VAE\"],[252,151,0,8,1,\"VAE\"],[253,151,0,72,1,\"VAE\"],[254,151,0,62,1,\"VAE\"],[255,151,0,157,7,\"VAE\"],[256,151,0,8,1,\"VAE\"],[257,151,0,72,1,\"VAE\"],[258,151,0,160,7,\"VAE\"],[271,151,0,62,1,\"VAE\"],[272,151,0,157,7,\"VAE\"],[273,151,0,8,1,\"VAE\"],[274,151,0,72,1,\"VAE\"],[275,151,0,160,7,\"VAE\"],[276,151,0,154,7,\"VAE\"],[277,151,0,62,1,\"VAE\"],[278,151,0,157,7,\"VAE\"],[279,151,0,8,1,\"VAE\"],[280,151,0,72,1,\"VAE\"],[281,151,0,160,7,\"VAE\"],[282,151,0,154,7,\"VAE\"],[294,151,0,157,7,\"VAE\"],[295,151,0,72,1,\"VAE\"],[296,151,0,160,7,\"VAE\"],[297,151,0,154,7,\"VAE\"],[298,151,0,8,1,\"VAE\"],[299,151,0,313,1,\"VAE\"],[300,151,0,62,1,\"VAE\"],[301,151,0,157,7,\"VAE\"],[302,151,0,72,1,\"VAE\"],[303,151,0,160,7,\"VAE\"],[304,151,0,8,1,\"VAE\"],[305,151,0,313,1,\"VAE\"],[306,151,0,62,1,\"VAE\"],[307,151,0,154,7,\"VAE\"],[309,151,0,157,7,\"VAE\"],[310,151,0,72,1,\"VAE\"],[311,151,0,160,7,\"VAE\"],[312,151,0,8,1,\"VAE\"],[313,151,0,313,1,\"VAE\"],[314,151,0,62,1,\"VAE\"],[315,151,0,154,7,\"VAE\"],[316,151,0,157,7,\"VAE\"],[317,151,0,72,1,\"VAE\"],[318,151,0,160,7,\"VAE\"],[319,151,0,8,1,\"VAE\"],[320,151,0,313,1,\"VAE\"],[321,151,0,62,1,\"VAE\"],[322,151,0,154,7,\"VAE\"],[327,151,0,157,7,\"VAE\"],[328,151,0,72,1,\"VAE\"],[329,151,0,8,1,\"VAE\"],[330,151,0,313,1,\"VAE\"],[331,151,0,62,1,\"VAE\"],[332,151,0,154,7,\"VAE\"],[333,151,0,160,7,\"VAE\"],[343,151,0,157,7,\"VAE\"],[344,151,0,72,1,\"VAE\"],[345,151,0,8,1,\"VAE\"],[346,151,0,313,1,\"VAE\"],[347,151,0,62,1,\"VAE\"],[348,151,0,160,7,\"VAE\"],[349,151,0,154,7,\"VAE\"],[351,151,0,157,7,\"VAE\"],[352,151,0,72,1,\"VAE\"],[353,151,0,8,1,\"VAE\"],[354,151,0,313,1,\"VAE\"],[355,151,0,62,1,\"VAE\"],[356,151,0,160,7,\"VAE\"],[357,151,0,154,7,\"VAE\"],[363,151,0,157,7,\"VAE\"],[364,151,0,72,1,\"VAE\"],[365,151,0,8,1,\"VAE\"],[366,151,0,160,7,\"VAE\"],[367,151,0,154,7,\"VAE\"],[368,151,0,62,1,\"VAE\"],[370,151,0,157,7,\"VAE\"],[371,151,0,72,1,\"VAE\"],[372,151,0,8,1,\"VAE\"],[373,151,0,160,7,\"VAE\"],[374,151,0,154,7,\"VAE\"],[375,151,0,62,1,\"VAE\"],[377,151,0,157,7,\"VAE\"],[378,151,0,72,1,\"VAE\"],[379,151,0,8,1,\"VAE\"],[380,151,0,160,7,\"VAE\"],[381,151,0,154,7,\"VAE\"],[382,151,0,62,1,\"VAE\"],[383,151,0,157,7,\"VAE\"],[384,151,0,72,1,\"VAE\"],[385,151,0,8,1,\"VAE\"],[386,151,0,160,7,\"VAE\"],[387,151,0,154,7,\"VAE\"],[388,151,0,62,1,\"VAE\"],[391,151,0,157,7,\"VAE\"],[392,151,0,72,1,\"VAE\"],[393,151,0,8,1,\"VAE\"],[394,151,0,160,7,\"VAE\"]
,[395,151,0,154,7,\"VAE\"],[396,151,0,62,1,\"VAE\"],[402,151,0,157,7,\"VAE\"],[403,151,0,72,1,\"VAE\"],[404,151,0,8,1,\"VAE\"],[405,151,0,160,7,\"VAE\"],[406,151,0,154,7,\"VAE\"],[407,151,0,62,1,\"VAE\"],[408,151,0,157,7,\"VAE\"],[409,151,0,72,1,\"VAE\"],[410,151,0,8,1,\"VAE\"],[411,151,0,160,7,\"VAE\"],[412,151,0,154,7,\"VAE\"],[413,151,0,62,1,\"VAE\"],[421,151,0,157,7,\"VAE\"],[422,151,0,72,1,\"VAE\"],[423,151,0,8,1,\"VAE\"],[424,151,0,160,7,\"VAE\"],[425,151,0,154,7,\"VAE\"],[426,151,0,62,1,\"VAE\"],[427,151,0,157,7,\"VAE\"],[428,151,0,72,1,\"VAE\"],[429,151,0,8,1,\"VAE\"],[430,151,0,160,7,\"VAE\"],[431,151,0,154,7,\"VAE\"],[432,151,0,62,1,\"VAE\"],[1004,349,3,361,0,\"CLIP_VISION\"],[1006,361,0,362,2,\"CLIP_VISION_OUTPUT\"],[1007,349,1,364,0,\"CLIP\"],[1008,364,0,362,0,\"CONDITIONING\"],[1009,349,4,362,1,\"STYLE_MODEL\"],[1022,349,0,368,0,\"MODEL\"],[1024,355,0,370,0,\"MODEL\"],[1029,349,2,371,1,\"VAE\"],[1030,371,0,372,0,\"IMAGE\"],[1031,370,0,373,4,\"LATENT\"],[1038,349,2,375,4,\"VAE\"],[1044,378,0,373,6,\"OPTIONS\"],[1047,368,0,355,0,\"MODEL\"],[1051,381,0,380,6,\"OPTIONS\"],[1053,373,0,380,4,\"LATENT\"],[1055,364,0,382,1,\"CONDITIONING\"],[1056,359,0,382,2,\"CONTROL_NET\"],[1058,349,2,382,4,\"VAE\"],[1068,371,0,387,0,\"IMAGE\"],[1088,374,0,371,0,\"LATENT\"],[1097,380,0,374,4,\"LATENT\"],[1099,369,0,373,5,\"GUIDES\"],[1101,375,1,369,0,\"LATENT\"],[1102,375,0,370,3,\"LATENT\"],[1107,398,0,361,1,\"IMAGE\"],[1108,362,0,382,0,\"CONDITIONING\"],[1109,362,0,374,1,\"CONDITIONING\"],[1111,375,0,355,1,\"LATENT\"],[1112,398,0,382,3,\"IMAGE\"],[1113,398,0,375,0,\"IMAGE\"],[1115,398,0,387,1,\"IMAGE\"],[1117,362,0,370,1,\"CONDITIONING\"],[1118,382,0,373,1,\"CONDITIONING\"],[1122,401,0,398,0,\"*\"],[1123,398,0,404,0,\"IMAGE\"],[1124,404,0,375,1,\"IMAGE\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.3109994191500252,\"offset\":[1512.0539235106066,-356.0468640337415]},\"ue_links\":[{\"downstream\":157,\"downstream_slot\":7,\"upstream\":\"151\",\"upstream_slot\":0,\"controller\":64,\"type\":\"VAE\"},{\"downstream\":154,\"downstream_slot\":7,\"upstream\":\"151\",\"upstream_slot\":0,\"controller\":64,\"type\":\"VAE\"},{\"downstream\":72,\"downstream_slot\":1,\"upstream\":\"151\",\"upstream_slot\":0,\"controller\":64,\"type\":\"VAE\"},{\"downstream\":62,\"downstream_slot\":1,\"upstream\":\"151\",\"upstream_slot\":0,\"controller\":64,\"type\":\"VAE\"}],\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true,\"links_added_by_ue\":[959,960,961,962],\"frontendVersion\":\"1.18.6\"},\"version\":0.4}"
  },
  {
    "path": "example_workflows/flux upscale thumbnail widescreen.json",
    "content": "{\"last_node_id\":411,\"last_link_id\":1130,\"nodes\":[{\"id\":369,\"type\":\"ClownGuide_Style_Beta\",\"pos\":[1138.06640625,1574.328857421875],\"size\":[231.30213928222656,286],\"flags\":{},\"order\":18,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":1101},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[1099],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Style_Beta\"},\"widgets_values\":[\"positive\",\"WCT\",1,1,\"constant\",0,-1,false]},{\"id\":374,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[2403.98583984375,1081.333740234375],\"size\":[274.9878234863281,528.6721801757812],\"flags\":{},\"order\":22,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1109},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1097},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1088],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",-1,1,\"resample\",true]},{\"id\":372,\"type\":\"SaveImage\",\"pos\":[2740,1080],\"size\":[442.38494873046875,530.0809936523438],\"flags\":{},\"order\":24,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":1030}],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ComfyUI\"]},{\"id\":355,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[1134.0809326171875,1057.9874267578125],\"size\":[260.3999938964844,126],\"flags\":{},\"order\":17,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1047},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":1111}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1024],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ModelSamplingAdvancedResolution\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":368,\"type\":\"ReFluxPatcher\",\"pos\":[897.4150390625,1095.9840087890625],\"size\":[210,82],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1022}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1047],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ReFluxPatcher\"},\"widgets_values\":[\"float32\",true]},{\"id\":349,\"type\":\"FluxLoader\",\"pos\":[554.6767578125,1099.277099609375],\"size\":[315,282],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1022],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[1007],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[1029,1038,1058],\"slot_index\":2},{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"links\":[1004],\"slot_index\":3},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"links\":[1009]}],\"properties\":{\"Node name for S&R\":\"FluxLoader\"},\"widgets_values\":[\"colossusProjectFlux_v42AIO.safetensors\",\"default\",\".use_ckpt_clip\",\".none\",\".use_ckpt_vae\",\"sigclip_vision_patch14_384.safetensors\",\"flux1-redux-dev.safetensors\"]},{\"id\":387,\"type\":\"Image Comparer (rgthree)\",\"pos\":[3228.67529296875,1082.0006103515625],\"size\":[502.8477478027344,526.1139526367188],\"flags\":{},\"order\":25,\"mode\":0,\"inputs\":[{\"name\":\"image_a\",\"type\":\"IMAGE\",\"dir\":3,\"link\":1068},{\"name\":\"image_b\",\"type\":\"IMAGE\",\"dir\":3,\"link\":1115}],\"outputs\":[],\"properties\":{\"comparer_mode\":\"Slide\"},\"widgets_values\":[[{\"name\":\"A\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_klodp_00033_.png&type=temp&subfolder=&rand=0.5892199958912905\"},{\"name\":\"B\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_klodp_00034_.png&type=temp&subfolder=&rand=0.10900460801823297\"}]]},{\"id\":373,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[1740,1080],\"size\":[272.9876403808594,526.665771484375],\"flags\":{},\"order\":20,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1118},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1031},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":1099},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":1044},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1053],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",1,1,\"resample\",true]},{\"id\":382,\"type\":\"ControlNetApplyAdvanced\",\"pos\":[1440,830],\"size\":[210,186],\"flags\":{},\"order\":16,\"mode\":0,\"inputs\":[{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"link\":1108},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"link\":1055},{\"name\":\"control_net\",\"localized_name\":\"control_net\",\"type\":\"CONTROL_NET\",\"link\":1056},{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":1112},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":1058}],\"outputs\":[{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"links\":[1118],\"slot_index\":0},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ControlNetApplyAdvanced\"},\"widgets_values\":[1,0,1]},{\"id\":380,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[2078.66015625,1080.6669921875],\"size\":[263.6514892578125,527.99951171875],\"flags\":{},\"order\":21,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1053},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":1051},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1097],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",1,1,\"resample\",true]},{\"id\":371,\"type\":\"VAEDecode\",\"pos\":[2741.197265625,974.4011840820312],\"size\":[140,46],\"flags\":{},\"order\":23,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":1088},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":1029}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1030,1068],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.26\",\"widget_ue_connectable\":{}},\"widgets_values\":[]},{\"id\":378,\"type\":\"ClownOptions_Cycles_Beta\",\"pos\":[1768.675537109375,881.3336791992188],\"size\":[210,130],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[1044]}],\"properties\":{\"Node name for 
S&R\":\"ClownOptions_Cycles_Beta\"},\"widgets_values\":[5,1,0.5,1]},{\"id\":381,\"type\":\"ClownOptions_Cycles_Beta\",\"pos\":[2103.203857421875,881.467041015625],\"size\":[210,130],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[1051]}],\"properties\":{\"Node name for S&R\":\"ClownOptions_Cycles_Beta\"},\"widgets_values\":[20,1,0.5,1]},{\"id\":403,\"type\":\"Note\",\"pos\":[2098.053466796875,680.7237548828125],\"size\":[215.7804412841797,88],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Raise cycles here if you see halos. It doesn't hurt to go as high as 20. Minimum of 5 recommended.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":402,\"type\":\"Note\",\"pos\":[1755.3779296875,678.1484985351562],\"size\":[241.524658203125,132.7487030029297],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Lower cycles here if you see halos. Minimum of 1 or 2 recommended.\\n\\nThese step(s)/cycle(s) (that use the ClownGuide Style node) are needed to prevent blurring when upscaling tiny thumbnail images.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":359,\"type\":\"ControlNetLoader\",\"pos\":[597.9067993164062,977.3353881835938],\"size\":[270.0880432128906,58],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"CONTROL_NET\",\"localized_name\":\"CONTROL_NET\",\"type\":\"CONTROL_NET\",\"links\":[1056],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ControlNetLoader\"},\"widgets_values\":[\"flux_tile.safetensors\"]},{\"id\":362,\"type\":\"StyleModelApply\",\"pos\":[1141.4669189453125,829.1477661132812],\"size\":[270.06890869140625,122],\"flags\":{},\"order\":14,\"mode\":0,\"inputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"link\":1008},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"link\":1009},{\"name\":\"clip_vision_output\",\"localized_name\":\"clip_vision_output\",\"type\":\"CLIP_VISION_OUTPUT\",\"link\":1006}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[1108,1109,1117],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"StyleModelApply\"},\"widgets_values\":[1,\"multiply\"]},{\"id\":361,\"type\":\"CLIPVisionEncode\",\"pos\":[862.2003784179688,825.134765625],\"size\":[253.60000610351562,78],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"link\":1004},{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":1107}],\"outputs\":[{\"name\":\"CLIP_VISION_OUTPUT\",\"localized_name\":\"CLIP_VISION_OUTPUT\",\"type\":\"CLIP_VISION_OUTPUT\",\"links\":[1006],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPVisionEncode\"},\"widgets_values\":[\"center\"]},{\"id\":364,\"type\":\"CLIPTextEncode\",\"pos\":[899.5093383789062,952.8309936523438],\"size\":[210,88],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1007}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[1008,1055],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"\"]},{\"id\":408,\"type\":\"Note\",\"pos\":[549.5983276367188,826.2056884765625],\"size\":[294.1452331542969,99.538818359375],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Jasper's tile controlnet was used.\\n\\nhttps://huggingface.co/jasperai/Flux.1-dev-Controlnet-Upscaler/blob/main/diffusion_pytorch_model.safetensors\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":370,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1417.3414306640625,1078.0023193359375],\"size\":[277.65570068359375,627.99951171875],\"flags\":{},\"order\":19,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":1024},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1117},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1102},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1031],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",\"beta57\",30,14,1,1,0,\"fixed\",\"unsample\",true]},{\"id\":404,\"type\":\"Image Repeat Tile To Size\",\"pos\":[899.620361328125,1259.9044189453125],\"size\":[210,106],\"flags\":{},\"order\":13,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":1123}],\"outputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"links\":[1124],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"Image Repeat Tile To 
Size\"},\"widgets_values\":[1792,1024,true]},{\"id\":375,\"type\":\"VAEEncodeAdvanced\",\"pos\":[1140,1240],\"size\":[228.90342712402344,278],\"flags\":{},\"order\":15,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":1113},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":1124},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":1038}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[1102,1111],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":[1101],\"slot_index\":1},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":null},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":null,\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\"},\"widgets_values\":[\"image_2\",1,1,\"red\",false,\"16_channels\"]},{\"id\":398,\"type\":\"Reroute\",\"pos\":[1034.0006103515625,1404.638671875],\"size\":[75,26],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1130}],\"outputs\":[{\"name\":\"\",\"type\":\"IMAGE\",\"links\":[1107,1112,1113,1115,1123],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":411,\"type\":\"LoadImage\",\"pos\":[791.842041015625,1491.6041259765625],\"size\":[315,314],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1130],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"pasted/image (595).png\",\"image\"]},{\"id\":407,\"type\":\"Note\",\"pos\":[552.9491577148438,1493.21923828125],\"size\":[210.6668243408203,166.69004821777344],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Input image was 672x384.\\n\\nAny size can be used. 
Just be sure to keep the aspect ratio the same, per usual.\\n\\nBest results will be with minimum size = 384 (height and/or width), due to that being what SigCLIP was trained on (which is what Redux uses).\"],\"color\":\"#432\",\"bgcolor\":\"#653\"}],\"links\":[[141,151,0,8,1,\"VAE\"],[142,151,0,72,1,\"VAE\"],[143,151,0,35,1,\"VAE\"],[144,151,0,154,7,\"VAE\"],[159,151,0,72,1,\"VAE\"],[160,151,0,157,7,\"VAE\"],[161,151,0,8,1,\"VAE\"],[162,151,0,154,7,\"VAE\"],[163,151,0,72,1,\"VAE\"],[164,151,0,8,1,\"VAE\"],[165,151,0,154,7,\"VAE\"],[171,151,0,8,1,\"VAE\"],[172,151,0,72,1,\"VAE\"],[173,151,0,154,7,\"VAE\"],[174,151,0,157,7,\"VAE\"],[176,151,0,8,1,\"VAE\"],[177,151,0,72,1,\"VAE\"],[178,151,0,154,7,\"VAE\"],[179,151,0,157,7,\"VAE\"],[195,151,0,8,1,\"VAE\"],[196,151,0,72,1,\"VAE\"],[197,151,0,154,7,\"VAE\"],[198,151,0,157,7,\"VAE\"],[199,151,0,160,7,\"VAE\"],[200,151,0,8,1,\"VAE\"],[201,151,0,72,1,\"VAE\"],[202,151,0,154,7,\"VAE\"],[203,151,0,157,7,\"VAE\"],[204,151,0,160,7,\"VAE\"],[217,151,0,8,1,\"VAE\"],[218,151,0,72,1,\"VAE\"],[219,151,0,154,7,\"VAE\"],[220,151,0,157,7,\"VAE\"],[221,151,0,160,7,\"VAE\"],[222,151,0,8,1,\"VAE\"],[223,151,0,72,1,\"VAE\"],[224,151,0,157,7,\"VAE\"],[225,151,0,8,1,\"VAE\"],[226,151,0,72,1,\"VAE\"],[227,151,0,157,7,\"VAE\"],[250,151,0,62,1,\"VAE\"],[251,151,0,157,7,\"VAE\"],[252,151,0,8,1,\"VAE\"],[253,151,0,72,1,\"VAE\"],[254,151,0,62,1,\"VAE\"],[255,151,0,157,7,\"VAE\"],[256,151,0,8,1,\"VAE\"],[257,151,0,72,1,\"VAE\"],[258,151,0,160,7,\"VAE\"],[271,151,0,62,1,\"VAE\"],[272,151,0,157,7,\"VAE\"],[273,151,0,8,1,\"VAE\"],[274,151,0,72,1,\"VAE\"],[275,151,0,160,7,\"VAE\"],[276,151,0,154,7,\"VAE\"],[277,151,0,62,1,\"VAE\"],[278,151,0,157,7,\"VAE\"],[279,151,0,8,1,\"VAE\"],[280,151,0,72,1,\"VAE\"],[281,151,0,160,7,\"VAE\"],[282,151,0,154,7,\"VAE\"],[294,151,0,157,7,\"VAE\"],[295,151,0,72,1,\"VAE\"],[296,151,0,160,7,\"VAE\"],[297,151,0,154,7,\"VAE\"],[298,151,0,8,1,\"VAE\"],[299,151,0,313,1,\"VAE\"],[300,151,0,62,1,\"VAE\"],[301,151,0,157,7,\"VAE\"],[302,151,0,72,1,\"VAE\"],[303,151,0,160,7,\"VAE\"],[304,151,0,8,1,\"VAE\"],[305,151,0,313,1,\"VAE\"],[306,151,0,62,1,\"VAE\"],[307,151,0,154,7,\"VAE\"],[309,151,0,157,7,\"VAE\"],[310,151,0,72,1,\"VAE\"],[311,151,0,160,7,\"VAE\"],[312,151,0,8,1,\"VAE\"],[313,151,0,313,1,\"VAE\"],[314,151,0,62,1,\"VAE\"],[315,151,0,154,7,\"VAE\"],[316,151,0,157,7,\"VAE\"],[317,151,0,72,1,\"VAE\"],[318,151,0,160,7,\"VAE\"],[319,151,0,8,1,\"VAE\"],[320,151,0,313,1,\"VAE\"],[321,151,0,62,1,\"VAE\"],[322,151,0,154,7,\"VAE\"],[327,151,0,157,7,\"VAE\"],[328,151,0,72,1,\"VAE\"],[329,151,0,8,1,\"VAE\"],[330,151,0,313,1,\"VAE\"],[331,151,0,62,1,\"VAE\"],[332,151,0,154,7,\"VAE\"],[333,151,0,160,7,\"VAE\"],[343,151,0,157,7,\"VAE\"],[344,151,0,72,1,\"VAE\"],[345,151,0,8,1,\"VAE\"],[346,151,0,313,1,\"VAE\"],[347,151,0,62,1,\"VAE\"],[348,151,0,160,7,\"VAE\"],[349,151,0,154,7,\"VAE\"],[351,151,0,157,7,\"VAE\"],[352,151,0,72,1,\"VAE\"],[353,151,0,8,1,\"VAE\"],[354,151,0,313,1,\"VAE\"],[355,151,0,62,1,\"VAE\"],[356,151,0,160,7,\"VAE\"],[357,151,0,154,7,\"VAE\"],[363,151,0,157,7,\"VAE\"],[364,151,0,72,1,\"VAE\"],[365,151,0,8,1,\"VAE\"],[366,151,0,160,7,\"VAE\"],[367,151,0,154,7,\"VAE\"],[368,151,0,62,1,\"VAE\"],[370,151,0,157,7,\"VAE\"],[371,151,0,72,1,\"VAE\"],[372,151,0,8,1,\"VAE\"],[373,151,0,160,7,\"VAE\"],[374,151,0,154,7,\"VAE\"],[375,151,0,62,1,\"VAE\"],[377,151,0,157,7,\"VAE\"],[378,151,0,72,1,\"VAE\"],[379,151,0,8,1,\"VAE\"],[380,151,0,160,7,\"VAE\"],[381,151,0,154,7,\"VAE\"],[382,151,0,62,1,\"VAE\"],[383,151,0,157,7,\"VAE\"],[384,151,0,72,1,\"VAE\"],[
385,151,0,8,1,\"VAE\"],[386,151,0,160,7,\"VAE\"],[387,151,0,154,7,\"VAE\"],[388,151,0,62,1,\"VAE\"],[391,151,0,157,7,\"VAE\"],[392,151,0,72,1,\"VAE\"],[393,151,0,8,1,\"VAE\"],[394,151,0,160,7,\"VAE\"],[395,151,0,154,7,\"VAE\"],[396,151,0,62,1,\"VAE\"],[402,151,0,157,7,\"VAE\"],[403,151,0,72,1,\"VAE\"],[404,151,0,8,1,\"VAE\"],[405,151,0,160,7,\"VAE\"],[406,151,0,154,7,\"VAE\"],[407,151,0,62,1,\"VAE\"],[408,151,0,157,7,\"VAE\"],[409,151,0,72,1,\"VAE\"],[410,151,0,8,1,\"VAE\"],[411,151,0,160,7,\"VAE\"],[412,151,0,154,7,\"VAE\"],[413,151,0,62,1,\"VAE\"],[421,151,0,157,7,\"VAE\"],[422,151,0,72,1,\"VAE\"],[423,151,0,8,1,\"VAE\"],[424,151,0,160,7,\"VAE\"],[425,151,0,154,7,\"VAE\"],[426,151,0,62,1,\"VAE\"],[427,151,0,157,7,\"VAE\"],[428,151,0,72,1,\"VAE\"],[429,151,0,8,1,\"VAE\"],[430,151,0,160,7,\"VAE\"],[431,151,0,154,7,\"VAE\"],[432,151,0,62,1,\"VAE\"],[1004,349,3,361,0,\"CLIP_VISION\"],[1006,361,0,362,2,\"CLIP_VISION_OUTPUT\"],[1007,349,1,364,0,\"CLIP\"],[1008,364,0,362,0,\"CONDITIONING\"],[1009,349,4,362,1,\"STYLE_MODEL\"],[1022,349,0,368,0,\"MODEL\"],[1024,355,0,370,0,\"MODEL\"],[1029,349,2,371,1,\"VAE\"],[1030,371,0,372,0,\"IMAGE\"],[1031,370,0,373,4,\"LATENT\"],[1038,349,2,375,4,\"VAE\"],[1044,378,0,373,6,\"OPTIONS\"],[1047,368,0,355,0,\"MODEL\"],[1051,381,0,380,6,\"OPTIONS\"],[1053,373,0,380,4,\"LATENT\"],[1055,364,0,382,1,\"CONDITIONING\"],[1056,359,0,382,2,\"CONTROL_NET\"],[1058,349,2,382,4,\"VAE\"],[1068,371,0,387,0,\"IMAGE\"],[1088,374,0,371,0,\"LATENT\"],[1097,380,0,374,4,\"LATENT\"],[1099,369,0,373,5,\"GUIDES\"],[1101,375,1,369,0,\"LATENT\"],[1102,375,0,370,3,\"LATENT\"],[1107,398,0,361,1,\"IMAGE\"],[1108,362,0,382,0,\"CONDITIONING\"],[1109,362,0,374,1,\"CONDITIONING\"],[1111,375,0,355,1,\"LATENT\"],[1112,398,0,382,3,\"IMAGE\"],[1113,398,0,375,0,\"IMAGE\"],[1115,398,0,387,1,\"IMAGE\"],[1117,362,0,370,1,\"CONDITIONING\"],[1118,382,0,373,1,\"CONDITIONING\"],[1123,398,0,404,0,\"IMAGE\"],[1124,404,0,375,1,\"IMAGE\"],[1130,411,0,398,0,\"*\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.7449402268886842,\"offset\":[634.5784677482833,-682.7929436822943]},\"ue_links\":[{\"downstream\":157,\"downstream_slot\":7,\"upstream\":\"151\",\"upstream_slot\":0,\"controller\":64,\"type\":\"VAE\"},{\"downstream\":154,\"downstream_slot\":7,\"upstream\":\"151\",\"upstream_slot\":0,\"controller\":64,\"type\":\"VAE\"},{\"downstream\":72,\"downstream_slot\":1,\"upstream\":\"151\",\"upstream_slot\":0,\"controller\":64,\"type\":\"VAE\"},{\"downstream\":62,\"downstream_slot\":1,\"upstream\":\"151\",\"upstream_slot\":0,\"controller\":64,\"type\":\"VAE\"}],\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true,\"links_added_by_ue\":[959,960,961,962],\"frontendVersion\":\"1.18.6\"},\"version\":0.4}"
  },
  {
    "path": "example_workflows/hidream guide data projection.json",
    "content": "{\"last_node_id\":641,\"last_link_id\":2035,\"nodes\":[{\"id\":628,\"type\":\"LoadImage\",\"pos\":[599.166015625,156.38429260253906],\"size\":[315,314],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2017]},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"ComfyUI_14254_.png\",\"image\"]},{\"id\":632,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[962.5586547851562,-316.3705139160156],\"size\":[277.62237548828125,126],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":2025},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":2015}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2016],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ModelSamplingAdvancedResolution\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":636,\"type\":\"ClownModelLoader\",\"pos\":[599.3463745117188,-176.31788635253906],\"size\":[315,266],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2025],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[2024,2028],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[2026,2027],\"slot_index\":2}],\"properties\":{\"Node name for S&R\":\"ClownModelLoader\"},\"widgets_values\":[\"hidream_i1_full_fp8.safetensors\",\"fp8_e4m3fn\",\"clip_l_hidream.safetensors\",\"clip_g_hidream.safetensors\",\"t5xxl_fp8_e4m3fn_scaled.safetensors\",\"llama_3.1_8b_instruct_fp8_scaled.safetensors\",\"hidream\",\"ae.sft\"]},{\"id\":591,\"type\":\"VAEDecode\",\"pos\":[1610,-230],\"size\":[210,46],\"flags\":{\"collapsed\":false},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"label\":\"samples\",\"type\":\"LATENT\",\"link\":2030},{\"name\":\"vae\",\"localized_name\":\"vae\",\"label\":\"vae\",\"type\":\"VAE\",\"link\":2027}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"label\":\"IMAGE\",\"type\":\"IMAGE\",\"shape\":3,\"links\":[2019],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"VAEDecode\"},\"widgets_values\":[]},{\"id\":633,\"type\":\"SaveImage\",\"pos\":[1610,-120],\"size\":[436.4179382324219,508.5302429199219],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2019}],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ComfyUI\"]},{\"id\":629,\"type\":\"VAEEncodeAdvanced\",\"pos\":[961.6965942382812,242.70477294921875],\"size\":[278.0284423828125,280.5834045410156],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2017},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":2026}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[2013,2020],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":null},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[2015]},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\"},\"widgets_values\":[\"false\",1024,1024,\"red\",false,\"16_channels\"]},{\"id\":630,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1271.7001953125,-124.3408432006836],\"size\":[291.7499084472656,650],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":2016},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2018},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2029},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2013},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":2021},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2030],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",\"beta57\",30,-1,1,4,0,\"fixed\",\"standard\",true]},{\"id\":637,\"type\":\"CLIPTextEncode\",\"pos\":[962.297607421875,99.93917846679688],\"size\":[278.4529113769531,88],\"flags\":{\"collapsed\":false},\"order\":3,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"label\":\"clip\",\"type\":\"CLIP\",\"link\":2028}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"label\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"shape\":3,\"links\":[2029],\"slot_index\":0}],\"title\":\"Positive 
Prompt\",\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"low quality, low detail, blurry, shallow depth of field, mutated, symmetrical, generic\"]},{\"id\":107,\"type\":\"CLIPTextEncode\",\"pos\":[959.4713745117188,-123.3353500366211],\"size\":[282.33453369140625,173.58438110351562],\"flags\":{\"collapsed\":false},\"order\":2,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"label\":\"clip\",\"type\":\"CLIP\",\"link\":2024}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"label\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"shape\":3,\"links\":[2018],\"slot_index\":0}],\"title\":\"Positive Prompt\",\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"the mournful lamentations of of a female rock singer on stage with chaos behind her, her face screaming her sorrowful refrains the despairing cries of anguished screams howling agonized moans, her pained whispers mournful sighs distant echoes across the smoky stage, fading memories of lost loves, forgotten dreams, shattered hopes, crushed spirits, broken hearts\"]},{\"id\":634,\"type\":\"ClownGuide_Beta\",\"pos\":[1276.0064697265625,-480.84442138671875],\"size\":[284.860595703125,290.8609924316406],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":2020},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[2021],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"data\",false,true,1,1,\"beta57\",0,2,false]}],\"links\":[[2013,629,0,630,3,\"LATENT\"],[2015,629,3,632,1,\"LATENT\"],[2016,632,0,630,0,\"MODEL\"],[2017,628,0,629,0,\"IMAGE\"],[2018,107,0,630,1,\"CONDITIONING\"],[2019,591,0,633,0,\"IMAGE\"],[2020,629,0,634,0,\"LATENT\"],[2021,634,0,630,5,\"GUIDES\"],[2024,636,1,107,0,\"CLIP\"],[2025,636,0,632,0,\"MODEL\"],[2026,636,2,629,4,\"VAE\"],[2027,636,2,591,1,\"VAE\"],[2028,636,1,637,0,\"CLIP\"],[2029,637,0,630,2,\"CONDITIONING\"],[2030,630,0,591,0,\"LATENT\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.7985878990923265,\"offset\":[1686.8845871920696,637.6012821508443]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0},\"version\":0.4}"
  },
  {
    "path": "example_workflows/hidream guide epsilon projection.json",
    "content": "{\"last_node_id\":641,\"last_link_id\":2035,\"nodes\":[{\"id\":628,\"type\":\"LoadImage\",\"pos\":[599.166015625,156.38429260253906],\"size\":[315,314],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2017]},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"ComfyUI_14254_.png\",\"image\"]},{\"id\":632,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[962.5586547851562,-316.3705139160156],\"size\":[277.62237548828125,126],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":2025},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":2015}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2016],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ModelSamplingAdvancedResolution\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":636,\"type\":\"ClownModelLoader\",\"pos\":[599.3463745117188,-176.31788635253906],\"size\":[315,266],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2025],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[2024,2028],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[2026,2027],\"slot_index\":2}],\"properties\":{\"Node name for S&R\":\"ClownModelLoader\"},\"widgets_values\":[\"hidream_i1_full_fp8.safetensors\",\"fp8_e4m3fn\",\"clip_l_hidream.safetensors\",\"clip_g_hidream.safetensors\",\"t5xxl_fp8_e4m3fn_scaled.safetensors\",\"llama_3.1_8b_instruct_fp8_scaled.safetensors\",\"hidream\",\"ae.sft\"]},{\"id\":591,\"type\":\"VAEDecode\",\"pos\":[1610,-230],\"size\":[210,46],\"flags\":{\"collapsed\":false},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"label\":\"samples\",\"type\":\"LATENT\",\"link\":2030},{\"name\":\"vae\",\"localized_name\":\"vae\",\"label\":\"vae\",\"type\":\"VAE\",\"link\":2027}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"label\":\"IMAGE\",\"type\":\"IMAGE\",\"shape\":3,\"links\":[2019],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"VAEDecode\"},\"widgets_values\":[]},{\"id\":633,\"type\":\"SaveImage\",\"pos\":[1610,-120],\"size\":[436.4179382324219,508.5302429199219],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2019}],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ComfyUI\"]},{\"id\":629,\"type\":\"VAEEncodeAdvanced\",\"pos\":[961.6965942382812,242.70477294921875],\"size\":[278.0284423828125,280.5834045410156],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2017},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":2026}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[2013,2020],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":null},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[2015]},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\"},\"widgets_values\":[\"false\",1024,1024,\"red\",false,\"16_channels\"]},{\"id\":630,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1271.7001953125,-124.3408432006836],\"size\":[291.7499084472656,650],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":2016},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2018},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2029},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2013},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":2021},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2030],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",\"beta57\",30,-1,1,4,0,\"fixed\",\"standard\",true]},{\"id\":637,\"type\":\"CLIPTextEncode\",\"pos\":[962.297607421875,99.93917846679688],\"size\":[278.4529113769531,88],\"flags\":{\"collapsed\":false},\"order\":3,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"label\":\"clip\",\"type\":\"CLIP\",\"link\":2028}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"label\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"shape\":3,\"links\":[2029],\"slot_index\":0}],\"title\":\"Positive 
Prompt\",\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"low quality, low detail, blurry, shallow depth of field, mutated, symmetrical, generic\"]},{\"id\":107,\"type\":\"CLIPTextEncode\",\"pos\":[959.4713745117188,-123.3353500366211],\"size\":[282.33453369140625,173.58438110351562],\"flags\":{\"collapsed\":false},\"order\":2,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"label\":\"clip\",\"type\":\"CLIP\",\"link\":2024}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"label\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"shape\":3,\"links\":[2018],\"slot_index\":0}],\"title\":\"Positive Prompt\",\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"the mournful lamentations of of a female rock singer on stage with chaos behind her, her face screaming her sorrowful refrains the despairing cries of anguished screams howling agonized moans, her pained whispers mournful sighs distant echoes across the smoky stage, fading memories of lost loves, forgotten dreams, shattered hopes, crushed spirits, broken hearts\"]},{\"id\":634,\"type\":\"ClownGuide_Beta\",\"pos\":[1276.0064697265625,-480.84442138671875],\"size\":[284.860595703125,290.8609924316406],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":2020},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[2021],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"epsilon\",false,true,1,1,\"beta57\",0,6,false]}],\"links\":[[2013,629,0,630,3,\"LATENT\"],[2015,629,3,632,1,\"LATENT\"],[2016,632,0,630,0,\"MODEL\"],[2017,628,0,629,0,\"IMAGE\"],[2018,107,0,630,1,\"CONDITIONING\"],[2019,591,0,633,0,\"IMAGE\"],[2020,629,0,634,0,\"LATENT\"],[2021,634,0,630,5,\"GUIDES\"],[2024,636,1,107,0,\"CLIP\"],[2025,636,0,632,0,\"MODEL\"],[2026,636,2,629,4,\"VAE\"],[2027,636,2,591,1,\"VAE\"],[2028,636,1,637,0,\"CLIP\"],[2029,637,0,630,2,\"CONDITIONING\"],[2030,630,0,591,0,\"LATENT\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.7985878990923265,\"offset\":[1138.2513303928165,621.4269926638877]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0},\"version\":0.4}"
  },
  {
    "path": "example_workflows/hidream guide flow.json",
    "content": "{\"last_node_id\":640,\"last_link_id\":2035,\"nodes\":[{\"id\":628,\"type\":\"LoadImage\",\"pos\":[599.166015625,156.38429260253906],\"size\":[315,314],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2017]},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"ComfyUI_14254_.png\",\"image\"]},{\"id\":632,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[962.5586547851562,-316.3705139160156],\"size\":[277.62237548828125,126],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":2025},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":2015}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2016],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ModelSamplingAdvancedResolution\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":636,\"type\":\"ClownModelLoader\",\"pos\":[599.3463745117188,-176.31788635253906],\"size\":[315,266],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2025],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[2024,2028,2034],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[2026,2027],\"slot_index\":2}],\"properties\":{\"Node name for S&R\":\"ClownModelLoader\"},\"widgets_values\":[\"hidream_i1_full_fp8.safetensors\",\"fp8_e4m3fn\",\"clip_l_hidream.safetensors\",\"clip_g_hidream.safetensors\",\"t5xxl_fp8_e4m3fn_scaled.safetensors\",\"llama_3.1_8b_instruct_fp8_scaled.safetensors\",\"hidream\",\"ae.sft\"]},{\"id\":591,\"type\":\"VAEDecode\",\"pos\":[1610,-230],\"size\":[210,46],\"flags\":{\"collapsed\":false},\"order\":11,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"label\":\"samples\",\"type\":\"LATENT\",\"link\":2030},{\"name\":\"vae\",\"localized_name\":\"vae\",\"label\":\"vae\",\"type\":\"VAE\",\"link\":2027}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"label\":\"IMAGE\",\"type\":\"IMAGE\",\"shape\":3,\"links\":[2019],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"VAEDecode\"},\"widgets_values\":[]},{\"id\":633,\"type\":\"SaveImage\",\"pos\":[1610,-120],\"size\":[436.4179382324219,508.5302429199219],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2019}],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ComfyUI\"]},{\"id\":629,\"type\":\"VAEEncodeAdvanced\",\"pos\":[961.6965942382812,242.70477294921875],\"size\":[278.0284423828125,280.5834045410156],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2017},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":2026}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[2013,2020],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":null},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[2015]},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\"},\"widgets_values\":[\"false\",1024,1024,\"red\",false,\"16_channels\"]},{\"id\":634,\"type\":\"ClownGuide_Beta\",\"pos\":[1276.0064697265625,-480.84442138671875],\"size\":[284.860595703125,290.8609924316406],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":2020},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[2021],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"flow\",false,false,1,1,\"beta57\",0,10,false]},{\"id\":630,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1271.7001953125,-124.3408432006836],\"size\":[291.7499084472656,650],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":2016},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2018},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2029},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2013},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":2021},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":2032},{\"name\":\"options 
2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2030],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",\"beta57\",30,-1,1,4,0,\"fixed\",\"standard\",true]},{\"id\":638,\"type\":\"SharkOptions_GuideCond_Beta\",\"pos\":[955.9966430664062,585.7319946289062],\"size\":[284.5923156738281,98],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2035},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2033},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[2032],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SharkOptions_GuideCond_Beta\"},\"widgets_values\":[4]},{\"id\":637,\"type\":\"CLIPTextEncode\",\"pos\":[962.297607421875,99.93917846679688],\"size\":[278.4529113769531,88],\"flags\":{\"collapsed\":false},\"order\":4,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"label\":\"clip\",\"type\":\"CLIP\",\"link\":2028}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"label\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"shape\":3,\"links\":[2029,2033],\"slot_index\":0}],\"title\":\"Positive Prompt\",\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"low quality, low detail, blurry, shallow depth of field, mutated, symmetrical, generic\"]},{\"id\":107,\"type\":\"CLIPTextEncode\",\"pos\":[959.4713745117188,-123.3353500366211],\"size\":[282.33453369140625,173.58438110351562],\"flags\":{\"collapsed\":false},\"order\":3,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"label\":\"clip\",\"type\":\"CLIP\",\"link\":2024}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"label\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"shape\":3,\"links\":[2018],\"slot_index\":0}],\"title\":\"Positive Prompt\",\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"the mournful lamentations of of a female rock singer on stage with chaos behind her, her face screaming her sorrowful refrains the despairing cries of anguished screams howling agonized moans, her pained whispers mournful sighs distant echoes across the smoky stage, fading memories of lost loves, forgotten dreams, shattered hopes, crushed spirits, broken hearts\"]},{\"id\":639,\"type\":\"CLIPTextEncode\",\"pos\":[599.5145263671875,565.6756591796875],\"size\":[315.33026123046875,117.94475555419922],\"flags\":{\"collapsed\":false},\"order\":5,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"label\":\"clip\",\"type\":\"CLIP\",\"link\":2034}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"label\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"shape\":3,\"links\":[2035],\"slot_index\":0}],\"title\":\"Positive Prompt\",\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"illustration of a singing clock with huge teeth in a surreal forest with torquiose mountains and a red and yellow sky, 
ragged trees and a pool of black oil on the ground, dripping paint oozing off the clock\"]},{\"id\":640,\"type\":\"Note\",\"pos\":[246.91494750976562,519.0934448242188],\"size\":[323.0928649902344,167.39759826660156],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"With the \\\"flow\\\" mode it is usually beneficial to use the supplemental GuideCond node, which allows you to set conditionings for the guide itself. With \\\"flow\\\", the guide changes during the sampling process. Without GuideCond in use, it will default to reusing your main prompt, which may result in some loss of adherence to the guide image.\\n\\n\\\"Lure\\\" is the only other mode that will use GuideCond.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"}],\"links\":[[2013,629,0,630,3,\"LATENT\"],[2015,629,3,632,1,\"LATENT\"],[2016,632,0,630,0,\"MODEL\"],[2017,628,0,629,0,\"IMAGE\"],[2018,107,0,630,1,\"CONDITIONING\"],[2019,591,0,633,0,\"IMAGE\"],[2020,629,0,634,0,\"LATENT\"],[2021,634,0,630,5,\"GUIDES\"],[2024,636,1,107,0,\"CLIP\"],[2025,636,0,632,0,\"MODEL\"],[2026,636,2,629,4,\"VAE\"],[2027,636,2,591,1,\"VAE\"],[2028,636,1,637,0,\"CLIP\"],[2029,637,0,630,2,\"CONDITIONING\"],[2030,630,0,591,0,\"LATENT\"],[2032,638,0,630,6,\"OPTIONS\"],[2033,637,0,638,1,\"CONDITIONING\"],[2034,636,1,639,0,\"CLIP\"],[2035,639,0,638,0,\"CONDITIONING\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.7985878990923265,\"offset\":[1119.4904101845082,499.1497204604395]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0},\"version\":0.4}"
  },
  {
    "path": "example_workflows/hidream guide fully_pseudoimplicit.json",
    "content": "{\"last_node_id\":643,\"last_link_id\":2036,\"nodes\":[{\"id\":628,\"type\":\"LoadImage\",\"pos\":[599.166015625,156.38429260253906],\"size\":[315,314],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2017]},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"ComfyUI_14254_.png\",\"image\"]},{\"id\":632,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[962.5586547851562,-316.3705139160156],\"size\":[277.62237548828125,126],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":2025},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":2015}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2016],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ModelSamplingAdvancedResolution\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":636,\"type\":\"ClownModelLoader\",\"pos\":[599.3463745117188,-176.31788635253906],\"size\":[315,266],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2025],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[2024,2028],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[2026,2027],\"slot_index\":2}],\"properties\":{\"Node name for S&R\":\"ClownModelLoader\"},\"widgets_values\":[\"hidream_i1_full_fp8.safetensors\",\"fp8_e4m3fn\",\"clip_l_hidream.safetensors\",\"clip_g_hidream.safetensors\",\"t5xxl_fp8_e4m3fn_scaled.safetensors\",\"llama_3.1_8b_instruct_fp8_scaled.safetensors\",\"hidream\",\"ae.sft\"]},{\"id\":591,\"type\":\"VAEDecode\",\"pos\":[1610,-230],\"size\":[210,46],\"flags\":{\"collapsed\":false},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"label\":\"samples\",\"type\":\"LATENT\",\"link\":2030},{\"name\":\"vae\",\"localized_name\":\"vae\",\"label\":\"vae\",\"type\":\"VAE\",\"link\":2027}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"label\":\"IMAGE\",\"type\":\"IMAGE\",\"shape\":3,\"links\":[2019],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"VAEDecode\"},\"widgets_values\":[]},{\"id\":633,\"type\":\"SaveImage\",\"pos\":[1610,-120],\"size\":[436.4179382324219,508.5302429199219],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2019}],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ComfyUI\"]},{\"id\":629,\"type\":\"VAEEncodeAdvanced\",\"pos\":[961.6965942382812,242.70477294921875],\"size\":[278.0284423828125,280.5834045410156],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2017},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":2026}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[2013,2020],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":null},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[2015]},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\"},\"widgets_values\":[\"false\",1024,1024,\"red\",false,\"16_channels\"]},{\"id\":637,\"type\":\"CLIPTextEncode\",\"pos\":[962.297607421875,99.93917846679688],\"size\":[278.4529113769531,88],\"flags\":{\"collapsed\":false},\"order\":4,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"label\":\"clip\",\"type\":\"CLIP\",\"link\":2028}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"label\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"shape\":3,\"links\":[2029],\"slot_index\":0}],\"title\":\"Positive Prompt\",\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"low quality, low detail, blurry, shallow depth of field, mutated, symmetrical, generic\"]},{\"id\":107,\"type\":\"CLIPTextEncode\",\"pos\":[959.4713745117188,-123.3353500366211],\"size\":[282.33453369140625,173.58438110351562],\"flags\":{\"collapsed\":false},\"order\":3,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"label\":\"clip\",\"type\":\"CLIP\",\"link\":2024}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"label\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"shape\":3,\"links\":[2018],\"slot_index\":0}],\"title\":\"Positive Prompt\",\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"the mournful lamentations of of a female rock singer on stage with chaos behind her, her face screaming her sorrowful refrains the despairing cries of anguished screams howling agonized moans, her pained whispers mournful sighs distant echoes across the smoky stage, fading memories of lost loves, forgotten dreams, shattered hopes, crushed spirits, broken 
hearts\"]},{\"id\":630,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1271.7001953125,-124.3408432006836],\"size\":[291.7499084472656,650],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":2016},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2018},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2029},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2013},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":2021},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2030],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"fully_implicit/gauss-legendre_2s\",\"beta57\",30,-1,1,4,0,\"fixed\",\"standard\",true]},{\"id\":634,\"type\":\"ClownGuide_Beta\",\"pos\":[1276.0064697265625,-480.84442138671875],\"size\":[284.860595703125,290.8609924316406],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":2020},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[2021],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"fully_pseudoimplicit\",false,false,0.75,1,\"linear_quadratic\",0,10,false]},{\"id\":643,\"type\":\"Note\",\"pos\":[1599.7352294921875,-422.8976135253906],\"size\":[258.39599609375,111.11077880859375],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"fully_pseudoimplicit only works with \\\"fully_implicit\\\" sampler types. With all others, it will revert automatically to pseudoimplicit.\\n\\npseudoimplicit may, however, be used with \\\"fully_implicit\\\" samplers.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"}],\"links\":[[2013,629,0,630,3,\"LATENT\"],[2015,629,3,632,1,\"LATENT\"],[2016,632,0,630,0,\"MODEL\"],[2017,628,0,629,0,\"IMAGE\"],[2018,107,0,630,1,\"CONDITIONING\"],[2019,591,0,633,0,\"IMAGE\"],[2020,629,0,634,0,\"LATENT\"],[2021,634,0,630,5,\"GUIDES\"],[2024,636,1,107,0,\"CLIP\"],[2025,636,0,632,0,\"MODEL\"],[2026,636,2,629,4,\"VAE\"],[2027,636,2,591,1,\"VAE\"],[2028,636,1,637,0,\"CLIP\"],[2029,637,0,630,2,\"CONDITIONING\"],[2030,630,0,591,0,\"LATENT\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.7985878990923265,\"offset\":[840.6440644823947,678.3605934631012]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0},\"version\":0.4}"
  },
  {
    "path": "example_workflows/hidream guide lure.json",
    "content": "{\"last_node_id\":640,\"last_link_id\":2035,\"nodes\":[{\"id\":628,\"type\":\"LoadImage\",\"pos\":[599.166015625,156.38429260253906],\"size\":[315,314],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2017]},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"ComfyUI_14254_.png\",\"image\"]},{\"id\":632,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[962.5586547851562,-316.3705139160156],\"size\":[277.62237548828125,126],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":2025},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":2015}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2016],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ModelSamplingAdvancedResolution\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":636,\"type\":\"ClownModelLoader\",\"pos\":[599.3463745117188,-176.31788635253906],\"size\":[315,266],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2025],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[2024,2028,2034],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[2026,2027],\"slot_index\":2}],\"properties\":{\"Node name for S&R\":\"ClownModelLoader\"},\"widgets_values\":[\"hidream_i1_full_fp8.safetensors\",\"fp8_e4m3fn\",\"clip_l_hidream.safetensors\",\"clip_g_hidream.safetensors\",\"t5xxl_fp8_e4m3fn_scaled.safetensors\",\"llama_3.1_8b_instruct_fp8_scaled.safetensors\",\"hidream\",\"ae.sft\"]},{\"id\":591,\"type\":\"VAEDecode\",\"pos\":[1610,-230],\"size\":[210,46],\"flags\":{\"collapsed\":false},\"order\":11,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"label\":\"samples\",\"type\":\"LATENT\",\"link\":2030},{\"name\":\"vae\",\"localized_name\":\"vae\",\"label\":\"vae\",\"type\":\"VAE\",\"link\":2027}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"label\":\"IMAGE\",\"type\":\"IMAGE\",\"shape\":3,\"links\":[2019],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"VAEDecode\"},\"widgets_values\":[]},{\"id\":633,\"type\":\"SaveImage\",\"pos\":[1610,-120],\"size\":[436.4179382324219,508.5302429199219],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2019}],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ComfyUI\"]},{\"id\":629,\"type\":\"VAEEncodeAdvanced\",\"pos\":[961.6965942382812,242.70477294921875],\"size\":[278.0284423828125,280.5834045410156],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2017},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":2026}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[2013,2020],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":null},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[2015]},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\"},\"widgets_values\":[\"false\",1024,1024,\"red\",false,\"16_channels\"]},{\"id\":630,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1271.7001953125,-124.3408432006836],\"size\":[291.7499084472656,650],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":2016},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2018},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2029},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2013},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":2021},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":2032},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2030],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",\"beta57\",30,-1,1,4,0,\"fixed\",\"standard\",true]},{\"id\":638,\"type\":\"SharkOptions_GuideCond_Beta\",\"pos\":[955.9966430664062,585.7319946289062],\"size\":[284.5923156738281,98],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2035},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2033},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[2032],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SharkOptions_GuideCond_Beta\"},\"widgets_values\":[4]},{\"id\":637,\"type\":\"CLIPTextEncode\",\"pos\":[962.297607421875,99.93917846679688],\"size\":[278.4529113769531,88],\"flags\":{\"collapsed\":false},\"order\":4,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"label\":\"clip\",\"type\":\"CLIP\",\"link\":2028}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"label\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"shape\":3,\"links\":[2029,2033],\"slot_index\":0}],\"title\":\"Positive Prompt\",\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"low quality, low detail, blurry, shallow depth of field, mutated, symmetrical, generic\"]},{\"id\":107,\"type\":\"CLIPTextEncode\",\"pos\":[959.4713745117188,-123.3353500366211],\"size\":[282.33453369140625,173.58438110351562],\"flags\":{\"collapsed\":false},\"order\":3,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"label\":\"clip\",\"type\":\"CLIP\",\"link\":2024}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"label\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"shape\":3,\"links\":[2018],\"slot_index\":0}],\"title\":\"Positive Prompt\",\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"the mournful lamentations of of a female rock singer on stage with chaos behind her, her face screaming her sorrowful refrains the despairing cries of anguished screams howling agonized moans, her pained whispers mournful sighs distant echoes across the smoky stage, fading memories of lost loves, forgotten dreams, shattered hopes, crushed spirits, broken hearts\"]},{\"id\":639,\"type\":\"CLIPTextEncode\",\"pos\":[599.5145263671875,565.6756591796875],\"size\":[315.33026123046875,117.94475555419922],\"flags\":{\"collapsed\":false},\"order\":5,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"label\":\"clip\",\"type\":\"CLIP\",\"link\":2034}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"label\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"shape\":3,\"links\":[2035],\"slot_index\":0}],\"title\":\"Positive Prompt\",\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"illustration of a singing clock with huge teeth in a surreal forest with torquiose mountains and a red and yellow sky, ragged trees and a pool of black oil on the ground, dripping paint oozing off the clock\"]},{\"id\":640,\"type\":\"Note\",\"pos\":[245.6206512451172,517.1527709960938],\"size\":[323.0928649902344,167.39759826660156],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"With the \\\"flow\\\" mode it is usually beneficial 
to use the supplemental GuideCond node, which allows you to set conditionings for the guide itself. With \\\"flow\\\", the guide changes during the sampling process. Without GuideCond in use, it will default to reusing your main prompt, which may result in some loss of adherence to the guide image.\\n\\n\\\"Lure\\\" is the only other mode that will use GuideCond.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":634,\"type\":\"ClownGuide_Beta\",\"pos\":[1276.0064697265625,-480.84442138671875],\"size\":[284.860595703125,290.8609924316406],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":2020},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[2021],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"lure\",false,false,1,1,\"linear_quadratic\",0,13,false]}],\"links\":[[2013,629,0,630,3,\"LATENT\"],[2015,629,3,632,1,\"LATENT\"],[2016,632,0,630,0,\"MODEL\"],[2017,628,0,629,0,\"IMAGE\"],[2018,107,0,630,1,\"CONDITIONING\"],[2019,591,0,633,0,\"IMAGE\"],[2020,629,0,634,0,\"LATENT\"],[2021,634,0,630,5,\"GUIDES\"],[2024,636,1,107,0,\"CLIP\"],[2025,636,0,632,0,\"MODEL\"],[2026,636,2,629,4,\"VAE\"],[2027,636,2,591,1,\"VAE\"],[2028,636,1,637,0,\"CLIP\"],[2029,637,0,630,2,\"CONDITIONING\"],[2030,630,0,591,0,\"LATENT\"],[2032,638,0,630,6,\"OPTIONS\"],[2033,637,0,638,1,\"CONDITIONING\"],[2034,636,1,639,0,\"CLIP\"],[2035,639,0,638,0,\"CONDITIONING\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.7985878990923265,\"offset\":[1342.694620988285,531.4979770514516]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0},\"version\":0.4}"
  },
  {
    "path": "example_workflows/hidream guide pseudoimplicit.json",
    "content": "{\"last_node_id\":641,\"last_link_id\":2035,\"nodes\":[{\"id\":628,\"type\":\"LoadImage\",\"pos\":[599.166015625,156.38429260253906],\"size\":[315,314],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2017]},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"ComfyUI_14254_.png\",\"image\"]},{\"id\":632,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[962.5586547851562,-316.3705139160156],\"size\":[277.62237548828125,126],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":2025},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":2015}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2016],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ModelSamplingAdvancedResolution\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":636,\"type\":\"ClownModelLoader\",\"pos\":[599.3463745117188,-176.31788635253906],\"size\":[315,266],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2025],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[2024,2028],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[2026,2027],\"slot_index\":2}],\"properties\":{\"Node name for S&R\":\"ClownModelLoader\"},\"widgets_values\":[\"hidream_i1_full_fp8.safetensors\",\"fp8_e4m3fn\",\"clip_l_hidream.safetensors\",\"clip_g_hidream.safetensors\",\"t5xxl_fp8_e4m3fn_scaled.safetensors\",\"llama_3.1_8b_instruct_fp8_scaled.safetensors\",\"hidream\",\"ae.sft\"]},{\"id\":591,\"type\":\"VAEDecode\",\"pos\":[1610,-230],\"size\":[210,46],\"flags\":{\"collapsed\":false},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"label\":\"samples\",\"type\":\"LATENT\",\"link\":2030},{\"name\":\"vae\",\"localized_name\":\"vae\",\"label\":\"vae\",\"type\":\"VAE\",\"link\":2027}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"label\":\"IMAGE\",\"type\":\"IMAGE\",\"shape\":3,\"links\":[2019],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"VAEDecode\"},\"widgets_values\":[]},{\"id\":633,\"type\":\"SaveImage\",\"pos\":[1610,-120],\"size\":[436.4179382324219,508.5302429199219],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2019}],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ComfyUI\"]},{\"id\":629,\"type\":\"VAEEncodeAdvanced\",\"pos\":[961.6965942382812,242.70477294921875],\"size\":[278.0284423828125,280.5834045410156],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2017},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":2026}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[2013,2020],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":null},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[2015]},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\"},\"widgets_values\":[\"false\",1024,1024,\"red\",false,\"16_channels\"]},{\"id\":630,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1271.7001953125,-124.3408432006836],\"size\":[291.7499084472656,650],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":2016},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2018},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2029},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2013},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":2021},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2030],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",\"beta57\",30,-1,1,4,0,\"fixed\",\"standard\",true]},{\"id\":637,\"type\":\"CLIPTextEncode\",\"pos\":[962.297607421875,99.93917846679688],\"size\":[278.4529113769531,88],\"flags\":{\"collapsed\":false},\"order\":3,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"label\":\"clip\",\"type\":\"CLIP\",\"link\":2028}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"label\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"shape\":3,\"links\":[2029],\"slot_index\":0}],\"title\":\"Positive 
Prompt\",\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"low quality, low detail, blurry, shallow depth of field, mutated, symmetrical, generic\"]},{\"id\":107,\"type\":\"CLIPTextEncode\",\"pos\":[959.4713745117188,-123.3353500366211],\"size\":[282.33453369140625,173.58438110351562],\"flags\":{\"collapsed\":false},\"order\":2,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"label\":\"clip\",\"type\":\"CLIP\",\"link\":2024}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"label\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"shape\":3,\"links\":[2018],\"slot_index\":0}],\"title\":\"Positive Prompt\",\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"the mournful lamentations of of a female rock singer on stage with chaos behind her, her face screaming her sorrowful refrains the despairing cries of anguished screams howling agonized moans, her pained whispers mournful sighs distant echoes across the smoky stage, fading memories of lost loves, forgotten dreams, shattered hopes, crushed spirits, broken hearts\"]},{\"id\":634,\"type\":\"ClownGuide_Beta\",\"pos\":[1276.0064697265625,-480.84442138671875],\"size\":[284.860595703125,290.8609924316406],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":2020},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[2021],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"pseudoimplicit\",false,false,0.1,1,\"beta57\",0,5,false]}],\"links\":[[2013,629,0,630,3,\"LATENT\"],[2015,629,3,632,1,\"LATENT\"],[2016,632,0,630,0,\"MODEL\"],[2017,628,0,629,0,\"IMAGE\"],[2018,107,0,630,1,\"CONDITIONING\"],[2019,591,0,633,0,\"IMAGE\"],[2020,629,0,634,0,\"LATENT\"],[2021,634,0,630,5,\"GUIDES\"],[2024,636,1,107,0,\"CLIP\"],[2025,636,0,632,0,\"MODEL\"],[2026,636,2,629,4,\"VAE\"],[2027,636,2,591,1,\"VAE\"],[2028,636,1,637,0,\"CLIP\"],[2029,637,0,630,2,\"CONDITIONING\"],[2030,630,0,591,0,\"LATENT\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.7985878990923265,\"offset\":[1182.8926069221118,636.9542766363238]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0},\"version\":0.4}"
  },
  {
    "path": "example_workflows/hidream hires fix.json",
    "content": "{\"last_node_id\":1358,\"last_link_id\":3624,\"nodes\":[{\"id\":490,\"type\":\"Reroute\",\"pos\":[13130,-70],\"size\":[75,26],\"flags\":{},\"order\":13,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3534}],\"outputs\":[{\"name\":\"\",\"type\":\"CLIP\",\"links\":[2881,3323],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1317,\"type\":\"ClownModelLoader\",\"pos\":[12770,-90],\"size\":[315,266],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[3539],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[3534],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[3535],\"slot_index\":2}],\"properties\":{\"Node name for S&R\":\"ClownModelLoader\"},\"widgets_values\":[\"hidream_i1_full_fp8.safetensors\",\"fp8_e4m3fn_fast\",\"clip_l_hidream.safetensors\",\"clip_g_hidream.safetensors\",\"t5xxl_fp16.safetensors\",\"llama_3.1_8b_instruct_fp8_scaled.safetensors\",\"hidream\",\"ae.sft\"]},{\"id\":7,\"type\":\"VAEEncodeAdvanced\",\"pos\":[13253.044921875,283.4559020996094],\"size\":[261.2217712402344,279.3136901855469],\"flags\":{},\"order\":18,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":18}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":1},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[],\"slot_index\":2},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[3540],\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[\"false\",1536,768,\"red\",false,\"16_channels\"]},{\"id\":13,\"type\":\"Reroute\",\"pos\":[13130,-110],\"size\":[75,26],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3539}],\"outputs\":[{\"name\":\"\",\"type\":\"MODEL\",\"links\":[3548,3597],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1224,\"type\":\"CLIPTextEncode\",\"pos\":[13250,-90],\"size\":[269.0397644042969,155.65545654296875],\"flags\":{\"collapsed\":false},\"order\":17,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":3323}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[3480,3599],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"a cold war era photograph from 1983 of a group of four friends holding up their hands inside an antique 
living room in a victorian era mansion\"]},{\"id\":970,\"type\":\"CLIPTextEncode\",\"pos\":[13253.0546875,116.28263854980469],\"size\":[261.8798522949219,111.21334838867188],\"flags\":{},\"order\":16,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":2881}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2882,3600],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"blurry, out of focus, shallow depth of field, low quality, bad quality, low detail, mutated, jpeg artifacts, compression artifacts,\"]},{\"id\":14,\"type\":\"Reroute\",\"pos\":[13130,-30],\"size\":[75,26],\"flags\":{},\"order\":14,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3535}],\"outputs\":[{\"name\":\"\",\"type\":\"VAE\",\"links\":[18,2696],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1322,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[14503.9365234375,-99.09358978271484],\"size\":[281.6568603515625,542.124755859375],\"flags\":{},\"order\":24,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":3612},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":3610},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3550],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",-1,4,\"resample\",true]},{\"id\":1350,\"type\":\"ClownOptions_Tile_Beta\",\"pos\":[14700,540],\"size\":[210,82],\"flags\":{},\"order\":19,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":3614}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[3615],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_Tile_Beta\"},\"widgets_values\":[1216,832]},{\"id\":1351,\"type\":\"ClownOptions_Tile_Beta\",\"pos\":[14940,540],\"size\":[210,82],\"flags\":{},\"order\":21,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":3615}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ClownOptions_Tile_Beta\"},\"widgets_values\":[1152,896]},{\"id\":1349,\"type\":\"ClownOptions_Tile_Beta\",\"pos\":[14470,540],\"size\":[210,82],\"flags\":{},\"order\":15,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":3616}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[3614],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_Tile_Beta\"},\"widgets_values\":[1536,768]},{\"id\":1352,\"type\":\"ClownOptions_Tile_Beta\",\"pos\":[14233.716796875,538.3314819335938],\"size\":[210,82],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[3616],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_Tile_Beta\"},\"widgets_values\":[2048,1024]},{\"id\":1353,\"type\":\"ClownOptions_Tile_Beta\",\"pos\":[14232.0498046875,680.947998046875],\"size\":[210,82],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_Tile_Beta\"},\"widgets_values\":[-1,-1]},{\"id\":1354,\"type\":\"Note\",\"pos\":[14476.6044921875,675.5231323242188],\"size\":[258.67279052734375,88],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"As with the rest of RES4LYF nodes, \\\"-1\\\" means \\\"go to the end\\\" or \\\"max value\\\". In this case, that means \\\"use full image sizes\\\". 
So, the node to the left will be equivalent to the one above.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":907,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[13550.5615234375,-92.92960357666016],\"size\":[301.752197265625,657.727294921875],\"flags\":{},\"order\":20,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":3548},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":3480},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2882},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":3540},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3618],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":1},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[0.5,\"multistep/res_3m\",\"bong_tangent\",30,15,1,4,4,\"fixed\",\"standard\",true]},{\"id\":1355,\"type\":\"LatentUpscale\",\"pos\":[13877.537109375,-92.35859680175781],\"size\":[286.32501220703125,130],\"flags\":{},\"order\":22,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":3618}],\"outputs\":[{\"name\":\"LATENT\",\"localized_name\":\"LATENT\",\"type\":\"LATENT\",\"links\":[3619],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"LatentUpscale\"},\"widgets_values\":[\"nearest-exact\",2048,1024,\"disabled\"]},{\"id\":1345,\"type\":\"ClownOptions_Tile_Beta\",\"pos\":[13953.123046875,285.76708984375],\"size\":[210,82],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[3609,3610],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_Tile_Beta\"},\"widgets_values\":[1536,768]},{\"id\":909,\"type\":\"SaveImage\",\"pos\":[14811.001953125,-99.0184555053711],\"size\":[457.3382263183594,422.2065124511719],\"flags\":{},\"order\":26,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2697}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"SaveImage\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"ComfyUI\"]},{\"id\":908,\"type\":\"VAEDecode\",\"pos\":[14808.998046875,-201.5235595703125],\"size\":[140,46],\"flags\":{},\"order\":25,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":3550},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":2696}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2697],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[]},{\"id\":1356,\"type\":\"Note\",\"pos\":[12793.412109375,-250.5360870361328],\"size\":[276.617431640625,88],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Want to use this workflow with another model? Just hook up a different model! You may need to set CFG = 1.0 if you're going to use a distilled model, such as HiDream Dev (or Fast) or Flux Dev.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1321,\"type\":\"Note\",\"pos\":[12769.740234375,239.9431915283203],\"size\":[345.97113037109375,161.35496520996094],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"There are many samplers to try, but res_2m, res_3m, res_2s, and res_3s are very reliable. If you want to push quality a bit higher in exchange for time, you could even try res_5s.\\n\\nres_2m and res_3m begin with higher order steps (one res_2s step, and two res_3s steps, respectively) to initialize the sampling process. Ultimately, the result is faster convergence in terms of wall time, as fewer steps end up being necessary.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1347,\"type\":\"Note\",\"pos\":[13505.927734375,-326.1947937011719],\"size\":[348.3962097167969,172.26731872558594],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Connect \\\"Upscale Latent\\\" directly to the last chainsampler to skip the iterative refinement steps (which is what implicit steps are: they use the output of a step as the input, then re-run it to refine). They help minimize mutations with a \\\"hires fix\\\" workflow like this.\\n\\n\\\"rebound\\\" is the highest quality implicit_type, but is also slightly slower.\\n\\nYou may also use ClownOptions Cycles instead of ClownOptions Implicit Steps.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1346,\"type\":\"Note\",\"pos\":[13898.4658203125,421.6622314453125],\"size\":[261.7038269042969,363.83868408203125],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"If you use tiled sampling, be sure to choose tile sizes that will need to overlap each other, or you might see seams. For example, for 2048x1024, it would be unwise to choose 1024x1024 or 512x512 as your only tile size, as 2048 / 1024 = 1.0, 2048 / 512 = 4.0, etc.\\n\\nThis workflow will upscale to 2048x1024. 2048 is not  divisible by 1536, and 1024 is not divisible by 768, thereofer they will have overlapping areas.\\n\\nIt's best to pick tile sizes that you know the model is trained at, with which you can generate txt2img without hallucination, doubling, mutations, \\\"grid\\\" artifacts, etc.\\n\\nTiled sampling will be slower, but can prevent drifts in luminosity, hue, artifacts around the edge of the image, and mutations, while reducing VRAM use. However, it can also cause parts of the image to look \\\"out of sync\\\". 
You can alternate tile sizes as shown to the right, which can sometimes help.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1324,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[14189.3935546875,-89.69397735595703],\"size\":[285.5440673828125,552.053955078125],\"flags\":{},\"order\":23,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":3597},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":3599},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":3600},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":3619},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":3624},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":3609},{\"name\":\"options 3\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3612],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",1,4,\"resample\",true]},{\"id\":1325,\"type\":\"ClownOptions_ImplicitSteps_Beta\",\"pos\":[13884.9677734375,94.86456298828125],\"size\":[278.0316467285156,130],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_ImplicitSteps_Beta\"},\"widgets_values\":[\"rebound\",\"bongmath\",10,0]},{\"id\":1357,\"type\":\"Note\",\"pos\":[14184.4599609375,-302.5225830078125],\"size\":[305.0502014160156,150.26080322265625],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"The sampling will appear to freeze for a minute at this node, but it is not actually frozen. Reducing implicit_steps or cycles will speed things up.\\n\\nIf you are willing to use a slower sampler to improve quality, the biggest bang for your buck will be with this first chainsampler. 
Try changing the sampler_name to res_3s, or gauss-legendre_2s.\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1358,\"type\":\"ClownOptions_Cycles_Beta\",\"pos\":[13880.7060546875,-310.925537109375],\"size\":[280.4444274902344,154],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[3624],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_Cycles_Beta\"},\"widgets_values\":[6,1,0.5,\"none\",4]}],\"links\":[[18,14,0,7,4,\"VAE\"],[2696,14,0,908,1,\"VAE\"],[2697,908,0,909,0,\"IMAGE\"],[2881,490,0,970,0,\"CLIP\"],[2882,970,0,907,2,\"CONDITIONING\"],[3323,490,0,1224,0,\"CLIP\"],[3480,1224,0,907,1,\"CONDITIONING\"],[3534,1317,1,490,0,\"*\"],[3535,1317,2,14,0,\"*\"],[3539,1317,0,13,0,\"*\"],[3540,7,3,907,3,\"LATENT\"],[3548,13,0,907,0,\"MODEL\"],[3550,1322,0,908,0,\"LATENT\"],[3597,13,0,1324,0,\"MODEL\"],[3599,1224,0,1324,1,\"CONDITIONING\"],[3600,970,0,1324,2,\"CONDITIONING\"],[3609,1345,0,1324,7,\"OPTIONS\"],[3610,1345,0,1322,6,\"OPTIONS\"],[3612,1324,0,1322,4,\"LATENT\"],[3614,1349,0,1350,0,\"OPTIONS\"],[3615,1350,0,1351,0,\"OPTIONS\"],[3616,1352,0,1349,0,\"OPTIONS\"],[3618,907,0,1355,0,\"LATENT\"],[3619,1355,0,1324,4,\"LATENT\"],[3624,1358,0,1324,6,\"OPTIONS\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.9194342495775452,\"offset\":[-11744.076730306608,403.1731222243355]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"ue_links\":[],\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true},\"version\":0.4}"
  },
  {
    "path": "example_workflows/hidream regional 3 zones.json",
    "content": "{\"last_node_id\":612,\"last_link_id\":1834,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[580,-180],\"size\":[75,26],\"flags\":{},\"order\":18,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1611}],\"outputs\":[{\"name\":\"\",\"type\":\"MODEL\",\"links\":[1395],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":404,\"type\":\"VAELoader\",\"pos\":[328.6705627441406,5.664919376373291],\"size\":[210,58],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"VAE\",\"localized_name\":\"VAE\",\"type\":\"VAE\",\"links\":[1344],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAELoader\"},\"widgets_values\":[\"ae.sft\"]},{\"id\":402,\"type\":\"QuadrupleCLIPLoader\",\"pos\":[130,-170],\"size\":[407.7720031738281,130],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"CLIP\",\"localized_name\":\"CLIP\",\"type\":\"CLIP\",\"links\":[1552],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"QuadrupleCLIPLoader\"},\"widgets_values\":[\"clip_l_hidream.safetensors\",\"clip_g_hidream.safetensors\",\"t5xxl_fp8_e4m3fn_scaled.safetensors\",\"llama_3.1_8b_instruct_fp8_scaled.safetensors\"]},{\"id\":403,\"type\":\"UNETLoader\",\"pos\":[216.5030059814453,-297.7170715332031],\"size\":[320.7802429199219,82],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MODEL\",\"localized_name\":\"MODEL\",\"type\":\"MODEL\",\"links\":[1610],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"UNETLoader\"},\"widgets_values\":[\"hidream_i1_full_fp8.safetensors\",\"fp8_e4m3fn\"]},{\"id\":14,\"type\":\"Reroute\",\"pos\":[580,-100],\"size\":[75,26],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1344}],\"outputs\":[{\"name\":\"\",\"type\":\"VAE\",\"links\":[18,1328],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":431,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[695.769287109375,-369.69635009765625],\"size\":[260.3999938964844,126],\"flags\":{},\"order\":20,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1395},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":1398}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1680],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ModelSamplingAdvancedResolution\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":490,\"type\":\"Reroute\",\"pos\":[580.390380859375,-139.51483154296875],\"size\":[75,26],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1552}],\"outputs\":[{\"name\":\"\",\"type\":\"CLIP\",\"links\":[1559,1691,1693,1707],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":394,\"type\":\"CLIPTextEncode\",\"pos\":[694.6102905273438,168.60507202148438],\"size\":[264.9925842285156,127.11075592041016],\"flags\":{},\"order\":14,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1559}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[1355],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"bright light, well-lit, daylight, monotone, desaturated, professional photography, blurry, out of focus, shallow depth of field, low quality, bad quality, low detail, 
mutated, jpeg artifacts, compression artifacts,\"]},{\"id\":398,\"type\":\"SaveImage\",\"pos\":[1387.6151123046875,-268.26824951171875],\"size\":[603.7825927734375,598.39404296875],\"flags\":{},\"order\":23,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":1329}],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ComfyUI\"]},{\"id\":608,\"type\":\"ImageToMask\",\"pos\":[478.4993896484375,-645.0528564453125],\"size\":[210,58],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":1809}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[1807],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ImageToMask\"},\"widgets_values\":[\"red\"]},{\"id\":397,\"type\":\"VAEDecode\",\"pos\":[1388.41064453125,-374.6264953613281],\"size\":[210,46],\"flags\":{},\"order\":22,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":1815},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":1328}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1329],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\"},\"widgets_values\":[]},{\"id\":605,\"type\":\"LoadImage\",\"pos\":[-140,-900],\"size\":[210,314],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1810],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"pasted/image (446).png\",\"image\"]},{\"id\":603,\"type\":\"LoadImage\",\"pos\":[-130,-1280],\"size\":[210,314],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1811],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"pasted/image (444).png\",\"image\"]},{\"id\":7,\"type\":\"VAEEncodeAdvanced\",\"pos\":[696.7778930664062,-164.97328186035156],\"size\":[261.2217712402344,279.3136901855469],\"flags\":{},\"order\":13,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":18}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":1},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[],\"slot_index\":2},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[1398,1399],\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"VAEEncodeAdvanced\"},\"widgets_values\":[\"false\",1024,2048,\"red\",false,\"16_channels\"]},{\"id\":540,\"type\":\"CLIPTextEncode\",\"pos\":[743.9880981445312,-978.6345825195312],\"size\":[275.3782653808594,125.7564697265625],\"flags\":{},\"order\":17,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1707}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[1814],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"a charcoal drawing of the top of a skyscraper\"]},{\"id\":520,\"type\":\"CLIPTextEncode\",\"pos\":[740,-790],\"size\":[275.3782653808594,125.7564697265625],\"flags\":{},\"order\":16,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1693}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[1813],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"a children's messy crayon drawing of the middle floors of a skyscraper\"]},{\"id\":455,\"type\":\"CLIPTextEncode\",\"pos\":[740,-600],\"size\":[285.3899230957031,125.00720977783203],\"flags\":{\"collapsed\":false},\"order\":15,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1691}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[1812],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"a close up high quality cinematic color photograph of the base of an office building in a city park in wisconsin\"]},{\"id\":606,\"type\":\"ImageToMask\",\"pos\":[484.2362976074219,-962.7913818359375],\"size\":[210,58],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":1811}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[1806],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ImageToMask\"},\"widgets_values\":[\"red\"]},{\"id\":607,\"type\":\"ImageToMask\",\"pos\":[478.7450256347656,-798.6764526367188],\"size\":[210,58],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":1810}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[1808],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ImageToMask\"},\"widgets_values\":[\"red\"]},{\"id\":604,\"type\":\"LoadImage\",\"pos\":[-150,-510],\"size\":[210,314],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1809],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"pasted/image 
(445).png\",\"image\"]},{\"id\":401,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1010,-370],\"size\":[340.55120849609375,666.8208618164062],\"flags\":{},\"order\":21,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":1680},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1834},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1355},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1399},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1815],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",\"beta57\",20,-1,1,4,86,\"fixed\",\"standard\",true]},{\"id\":533,\"type\":\"ClownRegionalConditioning_ABC\",\"pos\":[1087.326904296875,-873.5692138671875],\"size\":[243.60000610351562,390],\"flags\":{},\"order\":19,\"mode\":0,\"inputs\":[{\"name\":\"conditioning_A\",\"localized_name\":\"conditioning_A\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1814},{\"name\":\"conditioning_B\",\"localized_name\":\"conditioning_B\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1813},{\"name\":\"conditioning_C\",\"localized_name\":\"conditioning_C\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1812},{\"name\":\"mask_A\",\"localized_name\":\"mask_A\",\"type\":\"MASK\",\"shape\":7,\"link\":1806},{\"name\":\"mask_B\",\"localized_name\":\"mask_B\",\"type\":\"MASK\",\"shape\":7,\"link\":1808},{\"name\":\"mask_C\",\"localized_name\":\"mask_C\",\"type\":\"MASK\",\"shape\":7,\"link\":1807},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"region_bleeds\",\"localized_name\":\"region_bleeds\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"links\":[1834],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownRegionalConditioning_ABC\"},\"widgets_values\":[-0.9,-0.25,0,\"constant\",0,-1,\"boolean\",256,false]},{\"id\":612,\"type\":\"Note\",\"pos\":[159.41253662109375,-707.9190063476562],\"size\":[210,99.94182586669922],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"It is critical that each part of the image is covered by one of these masks.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":512,\"type\":\"ReHiDreamPatcher\",\"pos\":[212.8125762939453,-444.52001953125],\"size\":[320.9115295410156,82],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1610}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1611],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ReHiDreamPatcher\"},\"widgets_values\":[\"float32\",true]}],\"links\":[[18,14,0,7,4,\"VAE\"],[1328,14,0,397,1,\"VAE\"],[1329,397,0,398,0,\"IMAGE\"],[1344,404,0,14,0,\"*\"],[1355,394,0,401,2,\"CONDITIONING\"],[1395,13,0,431,0,\"MODEL\"],[1398,7,3,431,1,\"LATENT\"],[1399,7,3,401,3,\"LATENT\"],[1552,402,0,490,0,\"*\"],[1559,490,0,394,0,\"CLIP\"],[1610,403,0,512,0,\"MODEL\"],[1611,512,0,13,0,\"*\"],[1680,431,0,401,0,\"MODEL\"],[1691,490,0,455,0,\"CLIP\"],[1693,490,0,520,0,\"CLIP\"],[1707,490,0,540,0,\"CLIP\"],[1806,606,0,533,3,\"MASK\"],[1807,608,0,533,5,\"MASK\"],[1808,607,0,533,4,\"MASK\"],[1809,604,0,608,0,\"IMAGE\"],[1810,605,0,607,0,\"IMAGE\"],[1811,603,0,606,0,\"IMAGE\"],[1812,455,0,533,2,\"CONDITIONING\"],[1813,520,0,533,1,\"CONDITIONING\"],[1814,540,0,533,0,\"CONDITIONING\"],[1815,401,0,397,0,\"LATENT\"],[1834,533,0,401,1,\"CONDITIONING\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.3109994191500227,\"offset\":[2330.291089462677,1329.1104989082662]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0},\"version\":0.4}"
  },
  {
    "path": "example_workflows/hidream regional antiblur.json",
    "content": "{\"last_node_id\":727,\"last_link_id\":2103,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[1280,-650],\"size\":[75,26],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":2098}],\"outputs\":[{\"name\":\"\",\"type\":\"MODEL\",\"links\":[1967],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":14,\"type\":\"Reroute\",\"pos\":[1280,-570],\"size\":[75,26],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":2100}],\"outputs\":[{\"name\":\"\",\"type\":\"VAE\",\"links\":[18,1328],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":398,\"type\":\"SaveImage\",\"pos\":[1379.9996337890625,-267.2835998535156],\"size\":[341.7508850097656,561.0067749023438],\"flags\":{},\"order\":21,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":1329}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"SaveImage\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"ComfyUI\"]},{\"id\":701,\"type\":\"Note\",\"pos\":[80,-520],\"size\":[342.05950927734375,88],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"I usually just lazily draw masks in Load Image nodes (with some random image loaded), but for the sake of reproducibility, here's another approach.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":712,\"type\":\"Note\",\"pos\":[-210,-520],\"size\":[245.76409912109375,91.6677017211914],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"So long as these masks are all the same size, the regional conditioning nodes will handle resizing to the image size for you.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":676,\"type\":\"InvertMask\",\"pos\":[20,-370],\"size\":[142.42074584960938,26],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":2073}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[2083],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"InvertMask\"},\"widgets_values\":[]},{\"id\":662,\"type\":\"CLIPTextEncode\",\"pos\":[460,-370],\"size\":[210,88],\"flags\":{\"collapsed\":false},\"order\":13,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1939}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2094],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"a woman wearing a red flannel shirt and a cute shark plush blue 
hat\"]},{\"id\":7,\"type\":\"VAEEncodeAdvanced\",\"pos\":[719.6110229492188,16.752899169921875],\"size\":[261.2217712402344,279.3136901855469],\"flags\":{},\"order\":16,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":18}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":1},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[],\"slot_index\":2},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[1399],\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[\"false\",1024,1024,\"red\",false,\"16_channels\"]},{\"id\":710,\"type\":\"MaskPreview\",\"pos\":[180,-190],\"size\":[210,246],\"flags\":{},\"order\":17,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":2054}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"MaskPreview\"},\"widgets_values\":[]},{\"id\":397,\"type\":\"VAEDecode\",\"pos\":[1382.3662109375,-374.17059326171875],\"size\":[210,46],\"flags\":{},\"order\":20,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":2096},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":1328}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1329],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[]},{\"id\":715,\"type\":\"SolidMask\",\"pos\":[-220,-370],\"size\":[210,106],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[2073],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SolidMask\"},\"widgets_values\":[1,1024,1024]},{\"id\":716,\"type\":\"SolidMask\",\"pos\":[-220,-220],\"size\":[210,106],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[2065],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SolidMask\"},\"widgets_values\":[1,384,864]},{\"id\":709,\"type\":\"MaskComposite\",\"pos\":[190,-370],\"size\":[210,126],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[{\"name\":\"destination\",\"localized_name\":\"destination\",\"type\":\"MASK\",\"link\":2083},{\"name\":\"source\",\"localized_name\":\"source\",\"type\":\"MASK\",\"link\":2065}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[2054,2091],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"MaskComposite\"},\"widgets_values\":[256,160,\"add\"]},{\"id\":704,\"type\":\"Note\",\"pos\":[101.74818420410156,112.67951965332031],\"size\":[290.7107238769531,155.35317993164062],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ClownRegionalConditionings:\\n\\nTry raising or lowering weight, and changing the weight scheduler from beta57 to Karras (weakens more quickly), or to linear quadratic (stronger late).\\n\\nTry changing region_bleed_start_step (earlier will make the image blend together more), and end_step.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":703,\"type\":\"Note\",\"pos\":[423.10699462890625,-96.14085388183594],\"size\":[241.9689483642578,386.7543640136719],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"edge_width also creates some overlap around the edges of the mask.\\n\\nboolean_masked means that the masked area can \\\"see\\\" the rest of the image, but the unmasked area cannot. \\\"boolean\\\" would mean neither area could see the rest of the image.\\n\\nTry setting to boolean_unmasked and see what happens!\\n\\nIf you still have blur, try reducing edge_width (and if you have seams, try increasing it, or setting end_step to something like 20). \\n\\nAlso verify that you can generate the background prompt alone without blur (if you can't, this won't work). And don't get stuck on one seed.\\n\\nVaguely human-shaped masks also tend to work better than the blocky one used here.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":725,\"type\":\"ReHiDreamPatcher\",\"pos\":[1009.8884887695312,-694.5361328125],\"size\":[210,82],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":2097}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2098],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ReHiDreamPatcher\"},\"widgets_values\":[\"float64\",true]},{\"id\":724,\"type\":\"ClownModelLoader\",\"pos\":[660.0880126953125,-695.142333984375],\"size\":[315,266],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2097],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[2099],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[2100],\"slot_index\":2}],\"properties\":{\"Node name for 
S&R\":\"ClownModelLoader\"},\"widgets_values\":[\"hidream_i1_full_fp8.safetensors\",\"fp8_e4m3fn_fast\",\"clip_l_hidream.safetensors\",\"clip_g_hidream.safetensors\",\"t5xxl_fp8_e4m3fn_scaled.safetensors\",\"llama_3.1_8b_instruct_fp8_scaled.safetensors\",\"hidream\",\"ae.sft\"]},{\"id\":722,\"type\":\"ClownRegionalConditioning2\",\"pos\":[690,-370],\"size\":[287.75750732421875,330],\"flags\":{},\"order\":18,\"mode\":0,\"inputs\":[{\"name\":\"conditioning_masked\",\"localized_name\":\"conditioning_masked\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2094},{\"name\":\"conditioning_unmasked\",\"localized_name\":\"conditioning_unmasked\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2093},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":2091},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"region_bleeds\",\"localized_name\":\"region_bleeds\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"links\":[2095],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownRegionalConditioning2\"},\"widgets_values\":[0.9,0.1,0,\"constant\",0,-1,\"boolean_masked\",32,false]},{\"id\":723,\"type\":\"CLIPTextEncode\",\"pos\":[460,-240],\"size\":[210,88],\"flags\":{\"collapsed\":false},\"order\":14,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":2092}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2093],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"a college campus\"]},{\"id\":490,\"type\":\"Reroute\",\"pos\":[1280,-610],\"size\":[75,26],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":2099}],\"outputs\":[{\"name\":\"\",\"type\":\"CLIP\",\"links\":[1939,2092,2102],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":727,\"type\":\"CLIPTextEncode\",\"pos\":[721.318359375,349.4079895019531],\"size\":[261.8798522949219,111.21334838867188],\"flags\":{},\"order\":15,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":2102}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2103],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"blurry, out of focus, shallow depth of field, low quality, bad quality, low detail, mutated, jpeg artifacts, compression 
artifacts,\"]},{\"id\":401,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1010,-370],\"size\":[340.55120849609375,666.8208618164062],\"flags\":{},\"order\":19,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":1967},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2095},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2103},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1399},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2096],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[0.5,\"multistep/res_2m\",\"bong_tangent\",30,-1,1,4,0,\"fixed\",\"standard\",true]}],\"links\":[[18,14,0,7,4,\"VAE\"],[1328,14,0,397,1,\"VAE\"],[1329,397,0,398,0,\"IMAGE\"],[1399,7,3,401,3,\"LATENT\"],[1939,490,0,662,0,\"CLIP\"],[1967,13,0,401,0,\"MODEL\"],[2054,709,0,710,0,\"MASK\"],[2065,716,0,709,1,\"MASK\"],[2073,715,0,676,0,\"MASK\"],[2083,676,0,709,0,\"MASK\"],[2091,709,0,722,2,\"MASK\"],[2092,490,0,723,0,\"CLIP\"],[2093,723,0,722,1,\"CONDITIONING\"],[2094,662,0,722,0,\"CONDITIONING\"],[2095,722,0,401,1,\"CONDITIONING\"],[2096,401,0,397,0,\"LATENT\"],[2097,724,0,725,0,\"MODEL\"],[2098,725,0,13,0,\"*\"],[2099,724,1,490,0,\"*\"],[2100,724,2,14,0,\"*\"],[2102,490,0,727,0,\"CLIP\"],[2103,727,0,401,2,\"CONDITIONING\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.91943424957756,\"offset\":[1345.3511333682184,704.1505917671295]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"ue_links\":[],\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true},\"version\":0.4}"
  },
  {
    "path": "example_workflows/hidream style antiblur.json",
    "content": "{\"last_node_id\":742,\"last_link_id\":2119,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[1280,-650],\"size\":[75,26],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":2115}],\"outputs\":[{\"name\":\"\",\"type\":\"MODEL\",\"links\":[1967],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":490,\"type\":\"Reroute\",\"pos\":[1280,-610],\"size\":[75,26],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":2116}],\"outputs\":[{\"name\":\"\",\"type\":\"CLIP\",\"links\":[1939,2119],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":14,\"type\":\"Reroute\",\"pos\":[1280,-570],\"size\":[75,26],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":2117}],\"outputs\":[{\"name\":\"\",\"type\":\"VAE\",\"links\":[18,1328],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":398,\"type\":\"SaveImage\",\"pos\":[1379.9996337890625,-267.2835998535156],\"size\":[341.7508850097656,561.0067749023438],\"flags\":{},\"order\":14,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":1329}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"SaveImage\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"ComfyUI\"]},{\"id\":397,\"type\":\"VAEDecode\",\"pos\":[1382.3662109375,-374.17059326171875],\"size\":[210,46],\"flags\":{},\"order\":13,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":2096},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":1328}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1329],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[]},{\"id\":7,\"type\":\"VAEEncodeAdvanced\",\"pos\":[412.2475280761719,-199.0681915283203],\"size\":[261.2217712402344,279.3136901855469],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2113},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":18}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[2100],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":1},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[],\"slot_index\":2},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[1399],\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"VAEEncodeAdvanced\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[\"false\",1024,1024,\"red\",false,\"16_channels\"]},{\"id\":662,\"type\":\"CLIPTextEncode\",\"pos\":[761.3005981445312,-357.2689208984375],\"size\":[210,102.54972839355469],\"flags\":{\"collapsed\":false},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1939}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2098],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"a woman wearing a red flannel shirt and a cute shark plush blue hat, a college campus, brick buildings\"]},{\"id\":727,\"type\":\"Note\",\"pos\":[412.8926086425781,-351.8606872558594],\"size\":[272.4425048828125,88],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"This approach can be combined with the regional conditioning anti-blur approach for an even more powerful effect.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":724,\"type\":\"ClownGuide_Style_Beta\",\"pos\":[703.7374267578125,-198.63233947753906],\"size\":[262.8634033203125,286],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":2100},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[2099],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Style_Beta\"},\"widgets_values\":[\"positive\",\"WCT\",1,1,\"constant\",0,10,false]},{\"id\":739,\"type\":\"LoadImage\",\"pos\":[70.82455444335938,-201.66342163085938],\"size\":[315,314],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2113],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"pasted/image (655).png\",\"image\"]},{\"id\":741,\"type\":\"ReHiDreamPatcher\",\"pos\":[1000,-680],\"size\":[210,82],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":2114}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2115],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ReHiDreamPatcher\"},\"widgets_values\":[\"float64\",true]},{\"id\":740,\"type\":\"ClownModelLoader\",\"pos\":[650,-680],\"size\":[315,266],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2114],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[2116],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[2117],\"slot_index\":2}],\"properties\":{\"Node name for 
S&R\":\"ClownModelLoader\"},\"widgets_values\":[\"hidream_i1_full_fp8.safetensors\",\"fp8_e4m3fn_fast\",\"clip_l_hidream.safetensors\",\"clip_g_hidream.safetensors\",\"t5xxl_fp8_e4m3fn_scaled.safetensors\",\"llama_3.1_8b_instruct_fp8_scaled.safetensors\",\"hidream\",\"ae.sft\"]},{\"id\":401,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1010,-370],\"size\":[340.55120849609375,666.8208618164062],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":1967},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2098},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2118},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1399},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":2099},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2096],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[0.5,\"multistep/res_2m\",\"bong_tangent\",30,-1,1,4,7,\"fixed\",\"standard\",true]},{\"id\":742,\"type\":\"CLIPTextEncode\",\"pos\":[703.5707397460938,144.26979064941406],\"size\":[261.8798522949219,111.21334838867188],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":2119}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2118],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"blurry, out of focus, shallow depth of field, low quality, bad quality, low detail, mutated, jpeg artifacts, compression artifacts,\"]},{\"id\":726,\"type\":\"Note\",\"pos\":[305.74163818359375,169.59754943847656],\"size\":[364.5906677246094,164.38613891601562],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"The best style guide images will share the lighting and color composition of your desired scene. Some are just inexplicably ineffective at killing blur. Just gather up a bunch of images to try; you'll find some good ones that can be reused for many things. I'm including the one used here in the example_workflows directory; be sure to check for it.\\n\\nAnd don't forget to change seeds. Don't optimize for one seed only. Don't get stuck on one seed! 
Sometimes one is just not going to work out for whatever you're doing.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"}],\"links\":[[18,14,0,7,4,\"VAE\"],[1328,14,0,397,1,\"VAE\"],[1329,397,0,398,0,\"IMAGE\"],[1399,7,3,401,3,\"LATENT\"],[1939,490,0,662,0,\"CLIP\"],[1967,13,0,401,0,\"MODEL\"],[2096,401,0,397,0,\"LATENT\"],[2098,662,0,401,1,\"CONDITIONING\"],[2099,724,0,401,5,\"GUIDES\"],[2100,7,0,724,0,\"LATENT\"],[2113,739,0,7,0,\"IMAGE\"],[2114,740,0,741,0,\"MODEL\"],[2115,741,0,13,0,\"*\"],[2116,740,1,490,0,\"*\"],[2117,740,2,14,0,\"*\"],[2118,742,0,401,2,\"CONDITIONING\"],[2119,490,0,742,0,\"CLIP\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.7449402268886909,\"offset\":[1731.8135682982838,807.2501654184575]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"ue_links\":[],\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true},\"version\":0.4}"
  },
  {
    "path": "example_workflows/hidream style transfer txt2img.json",
    "content": "{\"last_node_id\":1385,\"last_link_id\":3733,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[13508.9013671875,-109.2831802368164],\"size\":[75,26],\"flags\":{},\"order\":17,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3686}],\"outputs\":[{\"name\":\"\",\"type\":\"MODEL\",\"links\":[1395],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":14,\"type\":\"Reroute\",\"pos\":[13508.9013671875,-29.283178329467773],\"size\":[75,26],\"flags\":{},\"order\":14,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3671}],\"outputs\":[{\"name\":\"\",\"type\":\"VAE\",\"links\":[18,2696],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":490,\"type\":\"Reroute\",\"pos\":[13508.9013671875,-69.28317260742188],\"size\":[75,26],\"flags\":{},\"order\":13,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3670}],\"outputs\":[{\"name\":\"\",\"type\":\"CLIP\",\"links\":[2881,3581],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1363,\"type\":\"ReHiDreamPatcher\",\"pos\":[13268.9013671875,-109.2831802368164],\"size\":[210,82],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":3685}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[3686],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ReHiDreamPatcher\"},\"widgets_values\":[\"float64\",true]},{\"id\":981,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[14758.255859375,-64.39308166503906],\"size\":[340.20001220703125,510],\"flags\":{},\"order\":29,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":3698},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3469],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_2s\",-1,4,\"resample\",true]},{\"id\":1318,\"type\":\"ClownGuide_Beta\",\"pos\":[13828.255859375,675.60693359375],\"size\":[263.102783203125,290],\"flags\":{},\"order\":24,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":3710},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[3699,3708],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"inversion\",false,false,0.7,1,\"constant\",0,-1,false]},{\"id\":1333,\"type\":\"CLIPTextEncode\",\"pos\":[13688.255859375,-44.393089294433594],\"size\":[280.6252746582031,164.06936645507812],\"flags\":{\"collapsed\":false},\"order\":19,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":3581}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[3602,3626],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"messy blackboard chalk drawing of the inside of a car driving down a creepy road. colorful chalk with shading that shows the chalk textures from drawing with the side of the chalk\\n\"]},{\"id\":1358,\"type\":\"ClownModelLoader\",\"pos\":[12828.9013671875,-299.2831726074219],\"size\":[341.7054443359375,266],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[3685],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[3670],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[3671],\"slot_index\":2}],\"properties\":{\"Node name for S&R\":\"ClownModelLoader\"},\"widgets_values\":[\"hidream_i1_full_fp8.safetensors\",\"fp8_e4m3fn_fast\",\"t5xxl_fp8_e4m3fn_scaled.safetensors\",\"llama_3.1_8b_instruct_fp8_scaled.safetensors\",\"clip_g_hidream.safetensors\",\"clip_l_hidream.safetensors\",\"hidream\",\"ae.sft\"]},{\"id\":431,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[13218.9013671875,-309.28314208984375],\"size\":[260.3999938964844,126],\"flags\":{},\"order\":25,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1395},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":1398}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2692],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ModelSamplingAdvancedResolution\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":970,\"type\":\"CLIPTextEncode\",\"pos\":[13688.255859375,165.60690307617188],\"size\":[281.9206848144531,109.87118530273438],\"flags\":{},\"order\":18,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":2881}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2882,3627],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"blurry, out of focus, shallow depth of field, jpeg artifacts, low quality, bad quality, 
unsharp\"]},{\"id\":907,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[14008.255859375,-64.39308166503906],\"size\":[340.55120849609375,666.8208618164062],\"flags\":{},\"order\":27,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":2692},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":3602},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2882},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2983},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":3708},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3578],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[0.5,\"multistep/res_2m\",\"beta57\",20,11,1,1,201,\"fixed\",\"unsample\",true]},{\"id\":980,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[14378.255859375,-64.39308166503906],\"size\":[340.20001220703125,570],\"flags\":{},\"order\":28,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":3626},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":3627},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":3578},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":3604},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":3533},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":3707},{\"name\":\"options 3\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3698],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_3s_non-monotonic\",1,4,\"resample\",true]},{\"id\":1317,\"type\":\"ClownOptions_Cycles_Beta\",\"pos\":[14408.255859375,-294.3930969238281],\"size\":[265.2884826660156,178],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[3533],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ClownOptions_Cycles_Beta\"},\"widgets_values\":[10,1,0.5,\"none\",-1,4]},{\"id\":1373,\"type\":\"LoadImage\",\"pos\":[12848.2666015625,531.6068115234375],\"size\":[315,314],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3721],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"title\":\"Load Image (Composition)\",\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"pasted/image (476).png\",\"image\"]},{\"id\":1374,\"type\":\"LoadImage\",\"pos\":[12838.2666015625,171.6068115234375],\"size\":[315,314],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3725],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"title\":\"Load Image (Style Guide)\",\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"ComfyUI_14627_.png\",\"image\"]},{\"id\":1378,\"type\":\"Reroute\",\"pos\":[13184.07421875,533.128662109375],\"size\":[75,26],\"flags\":{},\"order\":15,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3721}],\"outputs\":[{\"name\":\"\",\"type\":\"IMAGE\",\"links\":[3724,3729],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1379,\"type\":\"Reroute\",\"pos\":[13185.853515625,168.15780639648438],\"size\":[75,26],\"flags\":{},\"order\":16,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3725}],\"outputs\":[{\"name\":\"\",\"type\":\"IMAGE\",\"links\":[3726],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":909,\"type\":\"SaveImage\",\"pos\":[15220,-259.5838928222656],\"size\":[457.3382263183594,422.2065124511719],\"flags\":{},\"order\":31,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2697}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"SaveImage\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"ComfyUI\"]},{\"id\":1362,\"type\":\"PreviewImage\",\"pos\":[13317.849609375,617.1558837890625],\"size\":[210,246],\"flags\":{},\"order\":22,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":3682}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"PreviewImage\"},\"widgets_values\":[]},{\"id\":1350,\"type\":\"ColorMatch\",\"pos\":[13709.701171875,316.05731201171875],\"size\":[210,102],\"flags\":{\"collapsed\":false},\"order\":21,\"mode\":0,\"inputs\":[{\"name\":\"image_ref\",\"localized_name\":\"image_ref\",\"type\":\"IMAGE\",\"link\":3728},{\"name\":\"image_target\",\"localized_name\":\"image_target\",\"type\":\"IMAGE\",\"link\":3724}],\"outputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"links\":[3682,3688],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ColorMatch\"},\"widgets_values\":[\"mkl\",1]},{\"id\":7,\"type\":\"VAEEncodeAdvanced\",\"pos\":[13343.19140625,556.8784790039062],\"size\":[261.2217712402344,298],\"flags\":{\"collapsed\":true},\"order\":23,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":3688},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":3727},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":18},{\"name\":\"width\",\"type\":\"INT\",\"pos\":[10,160.00003051757812],\"widget\":{\"name\":\"width\"},\"link\":3732},{\"name\":\"height\",\"type\":\"INT\",\"pos\":[10,184.00003051757812],\"widget\":{\"name\":\"height\"},\"link\":3733}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[2983,3710],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":[3709],\"slot_index\":1},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[],\"slot_index\":2},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[1398],\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":[],\"slot_index\":4},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":[],\"slot_index\":5}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[\"false\",1344,768,\"red\",false,\"16_channels\"]},{\"id\":1371,\"type\":\"Image Repeat Tile To Size\",\"pos\":[13329.5947265625,497.8262939453125],\"size\":[210,146],\"flags\":{\"collapsed\":true},\"order\":20,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":3726},{\"name\":\"width\",\"type\":\"INT\",\"pos\":[10,36],\"widget\":{\"name\":\"width\"},\"link\":3730},{\"name\":\"height\",\"type\":\"INT\",\"pos\":[10,60],\"widget\":{\"name\":\"height\"},\"link\":3731}],\"outputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"links\":[3727,3728],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"Image Repeat Tile To Size\"},\"widgets_values\":[1024,1024,true]},{\"id\":1380,\"type\":\"SetImageSize\",\"pos\":[13324.7197265625,323.0480041503906],\"size\":[210,102],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":[3730,3732],\"slot_index\":0},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":[3731,3733],\"slot_index\":1}],\"properties\":{\"Node name for S&R\":\"SetImageSize\"},\"widgets_values\":[1344,768]},{\"id\":1377,\"type\":\"Image Comparer 
(rgthree)\",\"pos\":[15742.4619140625,-253.3526153564453],\"size\":[461.9190368652344,413.5953369140625],\"flags\":{},\"order\":32,\"mode\":0,\"inputs\":[{\"name\":\"image_a\",\"type\":\"IMAGE\",\"dir\":3,\"link\":3720},{\"name\":\"image_b\",\"type\":\"IMAGE\",\"dir\":3,\"link\":3729}],\"outputs\":[],\"properties\":{\"comparer_mode\":\"Slide\"},\"widgets_values\":[[{\"name\":\"A\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_pzczy_00003_.png&type=temp&subfolder=&rand=0.543351218901418\"},{\"name\":\"B\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_pzczy_00004_.png&type=temp&subfolder=&rand=0.38178761627111313\"}]]},{\"id\":908,\"type\":\"VAEDecode\",\"pos\":[15217.7802734375,-312.1965637207031],\"size\":[210,46],\"flags\":{\"collapsed\":true},\"order\":30,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":3469},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":2696}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2697,3720],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[]},{\"id\":1376,\"type\":\"Note\",\"pos\":[13703.0439453125,536.6895751953125],\"size\":[261.9539489746094,88],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Increase or decrease weight in ClownGuide to alter adherence to the input image.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1383,\"type\":\"Note\",\"pos\":[14428.40234375,580.1749877929688],\"size\":[261.9539489746094,88],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Samplers like res_2s in this cycling node will also work and are faster. 
res_2m and res_3m are even faster, but sometimes the effect takes longer in wall time to fully kick in.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1384,\"type\":\"Note\",\"pos\":[14793.0322265625,518.4120483398438],\"size\":[261.9539489746094,88],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"res_2m or res_3m can be used here instead and are faster, but are less likely to fully clean up lingering artifacts.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1328,\"type\":\"ClownOptions_SDE_Beta\",\"pos\":[14186.4755859375,-132.6126251220703],\"size\":[315,266],\"flags\":{\"collapsed\":true},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"etas\",\"localized_name\":\"etas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"etas_substep\",\"localized_name\":\"etas_substep\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[3707],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_SDE_Beta\"},\"widgets_values\":[\"gaussian\",\"gaussian\",\"hard\",\"hard\",0.5,0.75,-1,\"fixed\"]},{\"id\":1381,\"type\":\"Note\",\"pos\":[13881.6279296875,-217.62835693359375],\"size\":[261.9539489746094,88],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Increase or decrease \\\"steps_to_run\\\" in ClownsharKSampler to change the effective denoise level.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1382,\"type\":\"Note\",\"pos\":[14718.0498046875,-295.4144592285156],\"size\":[268.1851806640625,124.49711608886719],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Increasing cycles will increase the amount of change, but take longer.\\n\\nCycles will rerun the same step over and over, forwards and backwards, iteratively refining an image at a controlled noise level.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1308,\"type\":\"ClownGuide_Style_Beta\",\"pos\":[14108.255859375,675.60693359375],\"size\":[246.31312561035156,286],\"flags\":{},\"order\":26,\"mode\":4,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":3709},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":3699}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[3604],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Style_Beta\"},\"widgets_values\":[\"positive\",\"WCT\",1,1,\"constant\",0,-1,false]},{\"id\":1385,\"type\":\"Note\",\"pos\":[14396.5634765625,742.3948364257812],\"size\":[261.9539489746094,88],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"method = AdaIN is faster and uses less memory, but is less accurate. 
Some prefer the effect.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"}],\"links\":[[18,14,0,7,4,\"VAE\"],[1395,13,0,431,0,\"MODEL\"],[1398,7,3,431,1,\"LATENT\"],[2692,431,0,907,0,\"MODEL\"],[2696,14,0,908,1,\"VAE\"],[2697,908,0,909,0,\"IMAGE\"],[2881,490,0,970,0,\"CLIP\"],[2882,970,0,907,2,\"CONDITIONING\"],[2983,7,0,907,3,\"LATENT\"],[3469,981,0,908,0,\"LATENT\"],[3533,1317,0,980,6,\"OPTIONS\"],[3578,907,0,980,4,\"LATENT\"],[3581,490,0,1333,0,\"CLIP\"],[3602,1333,0,907,1,\"CONDITIONING\"],[3604,1308,0,980,5,\"GUIDES\"],[3626,1333,0,980,1,\"CONDITIONING\"],[3627,970,0,980,2,\"CONDITIONING\"],[3670,1358,1,490,0,\"*\"],[3671,1358,2,14,0,\"*\"],[3682,1350,0,1362,0,\"IMAGE\"],[3685,1358,0,1363,0,\"MODEL\"],[3686,1363,0,13,0,\"*\"],[3688,1350,0,7,0,\"IMAGE\"],[3698,980,0,981,4,\"LATENT\"],[3699,1318,0,1308,3,\"GUIDES\"],[3707,1328,0,980,7,\"OPTIONS\"],[3708,1318,0,907,5,\"GUIDES\"],[3709,7,1,1308,0,\"LATENT\"],[3710,7,0,1318,0,\"LATENT\"],[3720,908,0,1377,0,\"IMAGE\"],[3721,1373,0,1378,0,\"*\"],[3724,1378,0,1350,1,\"IMAGE\"],[3725,1374,0,1379,0,\"*\"],[3726,1379,0,1371,0,\"IMAGE\"],[3727,1371,0,7,1,\"IMAGE\"],[3728,1371,0,1350,0,\"IMAGE\"],[3729,1378,0,1377,1,\"IMAGE\"],[3730,1380,0,1371,1,\"INT\"],[3731,1380,1,1371,2,\"INT\"],[3732,1380,0,7,5,\"INT\"],[3733,1380,1,7,6,\"INT\"]],\"groups\":[{\"id\":1,\"title\":\"Model Loaders\",\"bounding\":[12796.72265625,-401.9004211425781,822.762451171875,436.0693359375],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":2,\"title\":\"Sampling\",\"bounding\":[13652.6533203125,-402.70721435546875,1470.8076171875,1409.0289306640625],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":3,\"title\":\"Input Prep\",\"bounding\":[12797.1396484375,77.69412231445312,817.4218139648438,820.6239624023438],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":4,\"title\":\"Save and Compare\",\"bounding\":[15180.705078125,-399.09112548828125,1050.6468505859375,615.8845825195312],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}}],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.3072020475058237,\"offset\":[-11012.049075449982,623.0809311059861]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"ue_links\":[],\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true},\"version\":0.4}"
  },
  {
    "path": "example_workflows/hidream style transfer v2.json",
    "content": "{\"last_node_id\":1385,\"last_link_id\":3733,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[13508.9013671875,-109.2831802368164],\"size\":[75,26],\"flags\":{},\"order\":17,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3686}],\"outputs\":[{\"name\":\"\",\"type\":\"MODEL\",\"links\":[1395],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":14,\"type\":\"Reroute\",\"pos\":[13508.9013671875,-29.283178329467773],\"size\":[75,26],\"flags\":{},\"order\":14,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3671}],\"outputs\":[{\"name\":\"\",\"type\":\"VAE\",\"links\":[18,2696],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":490,\"type\":\"Reroute\",\"pos\":[13508.9013671875,-69.28317260742188],\"size\":[75,26],\"flags\":{},\"order\":13,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3670}],\"outputs\":[{\"name\":\"\",\"type\":\"CLIP\",\"links\":[2881,3581],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1363,\"type\":\"ReHiDreamPatcher\",\"pos\":[13268.9013671875,-109.2831802368164],\"size\":[210,82],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":3685}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[3686],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ReHiDreamPatcher\"},\"widgets_values\":[\"float64\",true]},{\"id\":981,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[14758.255859375,-64.39308166503906],\"size\":[340.20001220703125,510],\"flags\":{},\"order\":29,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":3698},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3469],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_2s\",-1,4,\"resample\",true]},{\"id\":1308,\"type\":\"ClownGuide_Style_Beta\",\"pos\":[14108.255859375,675.60693359375],\"size\":[246.31312561035156,286],\"flags\":{},\"order\":26,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":3709},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":3699}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[3604],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Style_Beta\"},\"widgets_values\":[\"positive\",\"WCT\",1,1,\"constant\",0,-1,false]},{\"id\":1318,\"type\":\"ClownGuide_Beta\",\"pos\":[13828.255859375,675.60693359375],\"size\":[263.102783203125,290],\"flags\":{},\"order\":24,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":3710},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[3699,3708],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"inversion\",false,false,0.7,1,\"constant\",0,-1,false]},{\"id\":1333,\"type\":\"CLIPTextEncode\",\"pos\":[13688.255859375,-44.393089294433594],\"size\":[280.6252746582031,164.06936645507812],\"flags\":{\"collapsed\":false},\"order\":19,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":3581}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[3602,3626],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"messy blackboard chalk drawing of the inside of a car driving down a creepy road. 
colorful chalk with shading that shows the chalk textures from drawing with the side of the chalk\\n\"]},{\"id\":1358,\"type\":\"ClownModelLoader\",\"pos\":[12828.9013671875,-299.2831726074219],\"size\":[341.7054443359375,266],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[3685],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[3670],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[3671],\"slot_index\":2}],\"properties\":{\"Node name for S&R\":\"ClownModelLoader\"},\"widgets_values\":[\"hidream_i1_full_fp8.safetensors\",\"fp8_e4m3fn_fast\",\"t5xxl_fp8_e4m3fn_scaled.safetensors\",\"llama_3.1_8b_instruct_fp8_scaled.safetensors\",\"clip_g_hidream.safetensors\",\"clip_l_hidream.safetensors\",\"hidream\",\"ae.sft\"]},{\"id\":431,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[13218.9013671875,-309.28314208984375],\"size\":[260.3999938964844,126],\"flags\":{},\"order\":25,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1395},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":1398}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2692],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ModelSamplingAdvancedResolution\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":970,\"type\":\"CLIPTextEncode\",\"pos\":[13688.255859375,165.60690307617188],\"size\":[281.9206848144531,109.87118530273438],\"flags\":{},\"order\":18,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":2881}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2882,3627],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"blurry, out of focus, shallow depth of field, jpeg artifacts, low quality, bad quality, unsharp\"]},{\"id\":907,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[14008.255859375,-64.39308166503906],\"size\":[340.55120849609375,666.8208618164062],\"flags\":{},\"order\":27,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":2692},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":3602},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2882},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2983},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":3708},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3578],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownsharKSampler_Beta\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[0.5,\"multistep/res_2m\",\"beta57\",20,11,1,1,201,\"fixed\",\"unsample\",true]},{\"id\":980,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[14378.255859375,-64.39308166503906],\"size\":[340.20001220703125,570],\"flags\":{},\"order\":28,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":3626},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":3627},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":3578},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":3604},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":3533},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":3707},{\"name\":\"options 3\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3698],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_3s_non-monotonic\",1,4,\"resample\",true]},{\"id\":1317,\"type\":\"ClownOptions_Cycles_Beta\",\"pos\":[14408.255859375,-294.3930969238281],\"size\":[265.2884826660156,178],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[3533],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_Cycles_Beta\"},\"widgets_values\":[10,1,0.5,\"none\",-1,4]},{\"id\":1373,\"type\":\"LoadImage\",\"pos\":[12848.2666015625,531.6068115234375],\"size\":[315,314],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3721],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"title\":\"Load Image (Composition)\",\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"pasted/image (476).png\",\"image\"]},{\"id\":1374,\"type\":\"LoadImage\",\"pos\":[12838.2666015625,171.6068115234375],\"size\":[315,314],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3725],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"title\":\"Load Image (Style Guide)\",\"properties\":{\"Node name for 
S&R\":\"LoadImage\"},\"widgets_values\":[\"ComfyUI_14627_.png\",\"image\"]},{\"id\":1378,\"type\":\"Reroute\",\"pos\":[13184.07421875,533.128662109375],\"size\":[75,26],\"flags\":{},\"order\":15,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3721}],\"outputs\":[{\"name\":\"\",\"type\":\"IMAGE\",\"links\":[3724,3729],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1379,\"type\":\"Reroute\",\"pos\":[13185.853515625,168.15780639648438],\"size\":[75,26],\"flags\":{},\"order\":16,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3725}],\"outputs\":[{\"name\":\"\",\"type\":\"IMAGE\",\"links\":[3726],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":909,\"type\":\"SaveImage\",\"pos\":[15220,-259.5838928222656],\"size\":[457.3382263183594,422.2065124511719],\"flags\":{},\"order\":31,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2697}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"SaveImage\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"ComfyUI\"]},{\"id\":1362,\"type\":\"PreviewImage\",\"pos\":[13317.849609375,617.1558837890625],\"size\":[210,246],\"flags\":{},\"order\":22,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":3682}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"PreviewImage\"},\"widgets_values\":[]},{\"id\":1350,\"type\":\"ColorMatch\",\"pos\":[13709.701171875,316.05731201171875],\"size\":[210,102],\"flags\":{\"collapsed\":false},\"order\":21,\"mode\":0,\"inputs\":[{\"name\":\"image_ref\",\"localized_name\":\"image_ref\",\"type\":\"IMAGE\",\"link\":3728},{\"name\":\"image_target\",\"localized_name\":\"image_target\",\"type\":\"IMAGE\",\"link\":3724}],\"outputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"links\":[3682,3688],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ColorMatch\"},\"widgets_values\":[\"mkl\",1]},{\"id\":7,\"type\":\"VAEEncodeAdvanced\",\"pos\":[13343.19140625,556.8784790039062],\"size\":[261.2217712402344,298],\"flags\":{\"collapsed\":true},\"order\":23,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":3688},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":3727},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":18},{\"name\":\"width\",\"type\":\"INT\",\"pos\":[10,160.00003051757812],\"widget\":{\"name\":\"width\"},\"link\":3732},{\"name\":\"height\",\"type\":\"INT\",\"pos\":[10,184.00003051757812],\"widget\":{\"name\":\"height\"},\"link\":3733}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[2983,3710],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":[3709],\"slot_index\":1},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[],\"slot_index\":2},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[1398],\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":[],\"slot_index\":4},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":[],\"slot_index\":5}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[\"false\",1344,768,\"red\",false,\"16_channels\"]},{\"id\":1371,\"type\":\"Image Repeat Tile To Size\",\"pos\":[13329.5947265625,497.8262939453125],\"size\":[210,146],\"flags\":{\"collapsed\":true},\"order\":20,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":3726},{\"name\":\"width\",\"type\":\"INT\",\"pos\":[10,36],\"widget\":{\"name\":\"width\"},\"link\":3730},{\"name\":\"height\",\"type\":\"INT\",\"pos\":[10,60],\"widget\":{\"name\":\"height\"},\"link\":3731}],\"outputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"links\":[3727,3728],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"Image Repeat Tile To Size\"},\"widgets_values\":[1024,1024,true]},{\"id\":1380,\"type\":\"SetImageSize\",\"pos\":[13324.7197265625,323.0480041503906],\"size\":[210,102],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":[3730,3732],\"slot_index\":0},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":[3731,3733],\"slot_index\":1}],\"properties\":{\"Node name for S&R\":\"SetImageSize\"},\"widgets_values\":[1344,768]},{\"id\":1377,\"type\":\"Image Comparer 
(rgthree)\",\"pos\":[15742.4619140625,-253.3526153564453],\"size\":[461.9190368652344,413.5953369140625],\"flags\":{},\"order\":32,\"mode\":0,\"inputs\":[{\"name\":\"image_a\",\"type\":\"IMAGE\",\"dir\":3,\"link\":3720},{\"name\":\"image_b\",\"type\":\"IMAGE\",\"dir\":3,\"link\":3729}],\"outputs\":[],\"properties\":{\"comparer_mode\":\"Slide\"},\"widgets_values\":[[{\"name\":\"A\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_pzczy_00001_.png&type=temp&subfolder=&rand=0.2568823425587843\"},{\"name\":\"B\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_pzczy_00002_.png&type=temp&subfolder=&rand=0.9444625525852213\"}]]},{\"id\":908,\"type\":\"VAEDecode\",\"pos\":[15217.7802734375,-312.1965637207031],\"size\":[210,46],\"flags\":{\"collapsed\":true},\"order\":30,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":3469},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":2696}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2697,3720],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[]},{\"id\":1376,\"type\":\"Note\",\"pos\":[13703.0439453125,536.6895751953125],\"size\":[261.9539489746094,88],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Increase or decrease weight in ClownGuide to alter adherence to the input image.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1383,\"type\":\"Note\",\"pos\":[14428.40234375,580.1749877929688],\"size\":[261.9539489746094,88],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Samplers like res_2s in this cycling node will also work and are faster. res_2m and res_3m are even faster, but sometimes the effect takes longer in wall time to fully kick in.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1384,\"type\":\"Note\",\"pos\":[14793.0322265625,518.4120483398438],\"size\":[261.9539489746094,88],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"res_2m or res_3m can be used here instead and are faster, but are less likely to fully clean up lingering artifacts.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1385,\"type\":\"Note\",\"pos\":[14398.345703125,768.2096557617188],\"size\":[261.9539489746094,88],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"method = AdaIN is faster and uses less memory, but is less accurate. 
Some prefer the effect.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1328,\"type\":\"ClownOptions_SDE_Beta\",\"pos\":[14186.4755859375,-132.6126251220703],\"size\":[315,266],\"flags\":{\"collapsed\":true},\"order\":1,\"mode\":0,\"inputs\":[{\"name\":\"etas\",\"localized_name\":\"etas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"etas_substep\",\"localized_name\":\"etas_substep\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[3707],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_SDE_Beta\"},\"widgets_values\":[\"gaussian\",\"gaussian\",\"hard\",\"hard\",0.5,0.75,-1,\"fixed\"]},{\"id\":1381,\"type\":\"Note\",\"pos\":[13881.6279296875,-217.62835693359375],\"size\":[261.9539489746094,88],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Increase or decrease \\\"steps_to_run\\\" in ClownsharKSampler to change the effective denoise level.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1382,\"type\":\"Note\",\"pos\":[14718.0498046875,-295.4144592285156],\"size\":[268.1851806640625,124.49711608886719],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Increasing cycles will increase the amount of change, but take longer.\\n\\nCycles will rerun the same step over and over, forwards and backwards, iteratively refining an image at a controlled noise level.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"}],\"links\":[[18,14,0,7,4,\"VAE\"],[1395,13,0,431,0,\"MODEL\"],[1398,7,3,431,1,\"LATENT\"],[2692,431,0,907,0,\"MODEL\"],[2696,14,0,908,1,\"VAE\"],[2697,908,0,909,0,\"IMAGE\"],[2881,490,0,970,0,\"CLIP\"],[2882,970,0,907,2,\"CONDITIONING\"],[2983,7,0,907,3,\"LATENT\"],[3469,981,0,908,0,\"LATENT\"],[3533,1317,0,980,6,\"OPTIONS\"],[3578,907,0,980,4,\"LATENT\"],[3581,490,0,1333,0,\"CLIP\"],[3602,1333,0,907,1,\"CONDITIONING\"],[3604,1308,0,980,5,\"GUIDES\"],[3626,1333,0,980,1,\"CONDITIONING\"],[3627,970,0,980,2,\"CONDITIONING\"],[3670,1358,1,490,0,\"*\"],[3671,1358,2,14,0,\"*\"],[3682,1350,0,1362,0,\"IMAGE\"],[3685,1358,0,1363,0,\"MODEL\"],[3686,1363,0,13,0,\"*\"],[3688,1350,0,7,0,\"IMAGE\"],[3698,980,0,981,4,\"LATENT\"],[3699,1318,0,1308,3,\"GUIDES\"],[3707,1328,0,980,7,\"OPTIONS\"],[3708,1318,0,907,5,\"GUIDES\"],[3709,7,1,1308,0,\"LATENT\"],[3710,7,0,1318,0,\"LATENT\"],[3720,908,0,1377,0,\"IMAGE\"],[3721,1373,0,1378,0,\"*\"],[3724,1378,0,1350,1,\"IMAGE\"],[3725,1374,0,1379,0,\"*\"],[3726,1379,0,1371,0,\"IMAGE\"],[3727,1371,0,7,1,\"IMAGE\"],[3728,1371,0,1350,0,\"IMAGE\"],[3729,1378,0,1377,1,\"IMAGE\"],[3730,1380,0,1371,1,\"INT\"],[3731,1380,1,1371,2,\"INT\"],[3732,1380,0,7,5,\"INT\"],[3733,1380,1,7,6,\"INT\"]],\"groups\":[{\"id\":1,\"title\":\"Model Loaders\",\"bounding\":[12796.72265625,-401.9004211425781,822.762451171875,436.0693359375],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":2,\"title\":\"Sampling\",\"bounding\":[13652.6533203125,-402.70721435546875,1470.8076171875,1409.0289306640625],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":3,\"title\":\"Input Prep\",\"bounding\":[12797.1396484375,77.69412231445312,817.4218139648438,820.6239624023438],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":4,\"title\":\"Save and 
Compare\",\"bounding\":[15180.705078125,-399.09112548828125,1050.6468505859375,615.8845825195312],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}}],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.3072020475058237,\"offset\":[-10982.673431174471,526.9422127403179]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"ue_links\":[],\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true},\"version\":0.4}"
  },
  {
    "path": "example_workflows/hidream style transfer.json",
    "content": "{\"last_node_id\":1317,\"last_link_id\":3533,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[13140,110],\"size\":[75,26],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3509}],\"outputs\":[{\"name\":\"\",\"type\":\"MODEL\",\"links\":[1395],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":402,\"type\":\"QuadrupleCLIPLoader\",\"pos\":[12690,150],\"size\":[407.7720031738281,130],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"CLIP\",\"localized_name\":\"CLIP\",\"type\":\"CLIP\",\"links\":[1552],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"QuadrupleCLIPLoader\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"clip_l_hidream.safetensors\",\"clip_g_hidream.safetensors\",\"t5xxl_fp8_e4m3fn_scaled.safetensors\",\"llama_3.1_8b_instruct_fp8_scaled.safetensors\"]},{\"id\":490,\"type\":\"Reroute\",\"pos\":[13140,150],\"size\":[75,26],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1552}],\"outputs\":[{\"name\":\"\",\"type\":\"CLIP\",\"links\":[2881,3323],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":981,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[14277.9453125,-92.8893051147461],\"size\":[340.20001220703125,510],\"flags\":{},\"order\":17,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":3250},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3469],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_2s\",-1,4,\"resample\",true]},{\"id\":908,\"type\":\"VAEDecode\",\"pos\":[14640.490234375,-94.68604278564453],\"size\":[210,46],\"flags\":{},\"order\":18,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":3469},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":2696}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2697],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[]},{\"id\":909,\"type\":\"SaveImage\",\"pos\":[14635.966796875,4.407815933227539],\"size\":[457.3382263183594,422.2065124511719],\"flags\":{},\"order\":19,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2697}],\"outputs\":[],\"properties\":{\"Node name for 
S&R\":\"SaveImage\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"ComfyUI\"]},{\"id\":431,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[13253.2275390625,-90.14451599121094],\"size\":[260.3999938964844,126],\"flags\":{},\"order\":14,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1395},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":1398}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2692],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ModelSamplingAdvancedResolution\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":14,\"type\":\"Reroute\",\"pos\":[13140,190],\"size\":[75,26],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":1344}],\"outputs\":[{\"name\":\"\",\"type\":\"VAE\",\"links\":[18,2696],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":403,\"type\":\"UNETLoader\",\"pos\":[12780,20],\"size\":[320.7802429199219,82],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MODEL\",\"localized_name\":\"MODEL\",\"type\":\"MODEL\",\"links\":[3508],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"UNETLoader\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"hidream_i1_full_fp8.safetensors\",\"fp8_e4m3fn\"]},{\"id\":404,\"type\":\"VAELoader\",\"pos\":[12887.7998046875,328.069091796875],\"size\":[210,58],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"VAE\",\"localized_name\":\"VAE\",\"type\":\"VAE\",\"links\":[1344],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAELoader\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"ae.sft\"]},{\"id\":1308,\"type\":\"ClownGuide_Style_Beta\",\"pos\":[13637.08984375,660.7327270507812],\"size\":[246.31312561035156,286],\"flags\":{},\"order\":13,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":3531},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[3530],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ClownGuide_Style_Beta\"},\"widgets_values\":[\"positive\",\"WCT\",1,1,\"constant\",0,-1,false]},{\"id\":980,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[13918.0234375,-98.65141296386719],\"size\":[340.20001220703125,570],\"flags\":{},\"order\":16,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2971},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":3530},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":3533},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3250],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_2s\",1,4,\"resample\",true]},{\"id\":7,\"type\":\"VAEEncodeAdvanced\",\"pos\":[13250.6240234375,672.3837890625],\"size\":[261.2217712402344,279.3136901855469],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":3515},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":3532},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":18}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[2983],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":[3531],\"slot_index\":1},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[],\"slot_index\":2},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[1398],\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[\"false\",896,1152,\"red\",false,\"16_channels\"]},{\"id\":1285,\"type\":\"LoadImage\",\"pos\":[12887.7626953125,444.2932434082031],\"size\":[315,314],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3515],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"pasted/image 
(544).png\",\"image\"]},{\"id\":907,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[13550.5615234375,-92.92960357666016],\"size\":[340.55120849609375,666.8208618164062],\"flags\":{},\"order\":15,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":2692},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":3480},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2882},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2983},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2971],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[0.5,\"exponential/res_2s\",\"beta57\",20,14,1,4,201,\"fixed\",\"unsample\",true]},{\"id\":1309,\"type\":\"LoadImage\",\"pos\":[12889.3486328125,815.3554077148438],\"size\":[315,314],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3532],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"ChatGPT Image Apr 29, 2025, 09_18_46 PM.png\",\"image\"]},{\"id\":1297,\"type\":\"ReHiDreamPatcher\",\"pos\":[12779.865234375,-110.67424774169922],\"size\":[321.6453552246094,82],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":3508}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[3509],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ReHiDreamPatcher\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[\"float32\",true]},{\"id\":1224,\"type\":\"CLIPTextEncode\",\"pos\":[13247.2734375,95.37741088867188],\"size\":[269.0397644042969,155.65545654296875],\"flags\":{\"collapsed\":false},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":3323}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[3480],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"a gritty illustration of a japanese woman with traditional hair in traditional 
clothes\"]},{\"id\":970,\"type\":\"CLIPTextEncode\",\"pos\":[13257.970703125,316.4944152832031],\"size\":[261.8798522949219,111.21334838867188],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":2881}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2882],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"blurry, out of focus, shallow depth of field, low quality, bad quality, low detail, mutated, jpeg artifacts, compression artifacts,\"]},{\"id\":1317,\"type\":\"ClownOptions_Cycles_Beta\",\"pos\":[13959.880859375,541.2625122070312],\"size\":[265.2884826660156,178],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[3533],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_Cycles_Beta\"},\"widgets_values\":[20,1,0.5,\"none\",-1,4]}],\"links\":[[18,14,0,7,4,\"VAE\"],[1344,404,0,14,0,\"*\"],[1395,13,0,431,0,\"MODEL\"],[1398,7,3,431,1,\"LATENT\"],[1552,402,0,490,0,\"*\"],[2692,431,0,907,0,\"MODEL\"],[2696,14,0,908,1,\"VAE\"],[2697,908,0,909,0,\"IMAGE\"],[2881,490,0,970,0,\"CLIP\"],[2882,970,0,907,2,\"CONDITIONING\"],[2971,907,0,980,4,\"LATENT\"],[2983,7,0,907,3,\"LATENT\"],[3250,980,0,981,4,\"LATENT\"],[3323,490,0,1224,0,\"CLIP\"],[3469,981,0,908,0,\"LATENT\"],[3480,1224,0,907,1,\"CONDITIONING\"],[3508,403,0,1297,0,\"MODEL\"],[3509,1297,0,13,0,\"*\"],[3515,1285,0,7,0,\"IMAGE\"],[3530,1308,0,980,5,\"GUIDES\"],[3531,7,1,1308,0,\"LATENT\"],[3532,1309,0,7,1,\"IMAGE\"],[3533,1317,0,980,6,\"OPTIONS\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.7398859252302459,\"offset\":[-10583.206320408986,234.77974623579652]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"ue_links\":[],\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true},\"version\":0.4}"
  },
  {
    "path": "example_workflows/hidream txt2img.json",
    "content": "{\"last_node_id\":1321,\"last_link_id\":3548,\"nodes\":[{\"id\":490,\"type\":\"Reroute\",\"pos\":[13130,-70],\"size\":[75,26],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3534}],\"outputs\":[{\"name\":\"\",\"type\":\"CLIP\",\"links\":[2881,3323],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1317,\"type\":\"ClownModelLoader\",\"pos\":[12770,-90],\"size\":[315,266],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[3539],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[3534],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[3535],\"slot_index\":2}],\"properties\":{\"Node name for S&R\":\"ClownModelLoader\"},\"widgets_values\":[\"hidream_i1_full_fp8.safetensors\",\"fp8_e4m3fn_fast\",\"clip_l_hidream.safetensors\",\"clip_g_hidream.safetensors\",\"t5xxl_fp16.safetensors\",\"llama_3.1_8b_instruct_fp8_scaled.safetensors\",\"hidream\",\"ae.sft\"]},{\"id\":14,\"type\":\"Reroute\",\"pos\":[13130,-30],\"size\":[75,26],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3535}],\"outputs\":[{\"name\":\"\",\"type\":\"VAE\",\"links\":[18,2696],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":970,\"type\":\"CLIPTextEncode\",\"pos\":[13253.0546875,116.28263854980469],\"size\":[261.8798522949219,111.21334838867188],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":2881}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2882],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"blurry, out of focus, shallow depth of field, low quality, bad quality, low detail, mutated, jpeg artifacts, compression artifacts,\"]},{\"id\":7,\"type\":\"VAEEncodeAdvanced\",\"pos\":[13253.044921875,283.4559020996094],\"size\":[261.2217712402344,279.3136901855469],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":18}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":1},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[],\"slot_index\":2},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[3540],\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"VAEEncodeAdvanced\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[\"false\",1344,768,\"red\",false,\"16_channels\"]},{\"id\":1224,\"type\":\"CLIPTextEncode\",\"pos\":[13250,-90],\"size\":[269.0397644042969,155.65545654296875],\"flags\":{\"collapsed\":false},\"order\":6,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":3323}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[3480],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"a cold war era photograph from 1983 of a group of four friends holding up their hands inside an antique living room in a victorian era mansion\"]},{\"id\":13,\"type\":\"Reroute\",\"pos\":[13130,-110],\"size\":[75,26],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3539}],\"outputs\":[{\"name\":\"\",\"type\":\"MODEL\",\"links\":[3548],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":909,\"type\":\"SaveImage\",\"pos\":[13936.2919921875,12.050485610961914],\"size\":[457.3382263183594,422.2065124511719],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2697}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"SaveImage\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"ComfyUI\"]},{\"id\":908,\"type\":\"VAEDecode\",\"pos\":[13934.587890625,-92.61396026611328],\"size\":[210,46],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":3537},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":2696}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2697],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[]},{\"id\":907,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[13550.5615234375,-92.92960357666016],\"size\":[340.55120849609375,666.8208618164062],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":3548},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":3480},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2882},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":3540},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3537],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownsharKSampler_Beta\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[0.5,\"multistep/res_3m\",\"bong_tangent\",20,-1,1,4,0,\"fixed\",\"standard\",true]},{\"id\":1321,\"type\":\"Note\",\"pos\":[12769.740234375,239.9431915283203],\"size\":[345.97113037109375,161.35496520996094],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"There are many samplers to try, but res_2m, res_3m, res_2s, and res_3s are very reliable. If you want to push quality a bit higher in exchange for time, you could even try res_5s.\\n\\nres_2m and res_3m begin with higher order steps (one res_2s step, and two res_3s steps, respectively) to initialize the sampling process. Ultimately, the result is faster convergence in terms of wall time, as fewer steps end up being necessary.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"}],\"links\":[[18,14,0,7,4,\"VAE\"],[2696,14,0,908,1,\"VAE\"],[2697,908,0,909,0,\"IMAGE\"],[2881,490,0,970,0,\"CLIP\"],[2882,970,0,907,2,\"CONDITIONING\"],[3323,490,0,1224,0,\"CLIP\"],[3480,1224,0,907,1,\"CONDITIONING\"],[3534,1317,1,490,0,\"*\"],[3535,1317,2,14,0,\"*\"],[3537,907,0,908,0,\"LATENT\"],[3539,1317,0,13,0,\"*\"],[3540,7,3,907,3,\"LATENT\"],[3548,13,0,907,0,\"MODEL\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.9194342495775452,\"offset\":[-11336.810477400342,443.2870544682993]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"ue_links\":[],\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true},\"version\":0.4}"
  },
  {
    "path": "example_workflows/hidream unsampling data WF.json",
    "content": "{\"last_node_id\":637,\"last_link_id\":2029,\"nodes\":[{\"id\":628,\"type\":\"LoadImage\",\"pos\":[599.166015625,156.38429260253906],\"size\":[315,314],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2017]},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"ComfyUI_14254_.png\",\"image\"]},{\"id\":629,\"type\":\"VAEEncodeAdvanced\",\"pos\":[961.6968994140625,123.66181182861328],\"size\":[278.0284423828125,280.5834045410156],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2017},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":2026}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[2013,2020,2022],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":null},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[2015]},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\"},\"widgets_values\":[\"false\",1024,1024,\"red\",false,\"16_channels\"]},{\"id\":632,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[962.5586547851562,-316.3705139160156],\"size\":[277.62237548828125,126],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":2025},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":2015}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2016],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ModelSamplingAdvancedResolution\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":633,\"type\":\"SaveImage\",\"pos\":[1921.8458251953125,-123.4797134399414],\"size\":[436.4179382324219,508.5302429199219],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2019}],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ComfyUI\"]},{\"id\":631,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[1605.8143310546875,-124.34080505371094],\"size\":[280.55523681640625,510],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2005},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":2023},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2008],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",-1,5.5,\"resample\",true]},{\"id\":634,\"type\":\"ClownGuide_Beta\",\"pos\":[1276.0064697265625,-480.84442138671875],\"size\":[284.860595703125,290.8609924316406],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":2020},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[2021],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"data\",false,false,0.5,1,\"constant\",0,-1,false]},{\"id\":636,\"type\":\"ClownModelLoader\",\"pos\":[599.3463745117188,-176.31788635253906],\"size\":[315,266],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2025],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[2024,2028],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[2026,2027],\"slot_index\":2}],\"properties\":{\"Node name for 
S&R\":\"ClownModelLoader\"},\"widgets_values\":[\"hidream_i1_full_fp8.safetensors\",\"fp8_e4m3fn\",\"clip_l_hidream.safetensors\",\"clip_g_hidream.safetensors\",\"t5xxl_fp8_e4m3fn_scaled.safetensors\",\"llama_3.1_8b_instruct_fp8_scaled.safetensors\",\"hidream\",\"ae.sft\"]},{\"id\":591,\"type\":\"VAEDecode\",\"pos\":[1924.08251953125,-233.2501983642578],\"size\":[210,46],\"flags\":{\"collapsed\":false},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"label\":\"samples\",\"type\":\"LATENT\",\"link\":2008},{\"name\":\"vae\",\"localized_name\":\"vae\",\"label\":\"vae\",\"type\":\"VAE\",\"link\":2027}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"label\":\"IMAGE\",\"type\":\"IMAGE\",\"shape\":3,\"links\":[2019],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\"},\"widgets_values\":[]},{\"id\":630,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1271.7001953125,-124.3408432006836],\"size\":[291.7499084472656,630],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":2016},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2018},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2029},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2013},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":2021},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2005]},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",\"beta57\",60,-1,1,1,0,\"fixed\",\"unsample\",true]},{\"id\":107,\"type\":\"CLIPTextEncode\",\"pos\":[959.4713745117188,-123.3353500366211],\"size\":[282.33453369140625,173.58438110351562],\"flags\":{\"collapsed\":false},\"order\":2,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"label\":\"clip\",\"type\":\"CLIP\",\"link\":2024}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"label\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"shape\":3,\"links\":[2018],\"slot_index\":0}],\"title\":\"Positive Prompt\",\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"the mournful lamentations of of a female rock singer on stage with chaos behind her, her face screaming her sorrowful refrains the despairing cries of anguished screams howling agonized moans, her pained whispers mournful sighs distant echoes across the smoky stage, fading memories of lost loves, forgotten dreams, shattered hopes, crushed spirits, broken 
hearts\"]},{\"id\":637,\"type\":\"CLIPTextEncode\",\"pos\":[963.5917358398438,453.83306884765625],\"size\":[278.4529113769531,88],\"flags\":{\"collapsed\":false},\"order\":3,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"label\":\"clip\",\"type\":\"CLIP\",\"link\":2028}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"label\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"shape\":3,\"links\":[2029],\"slot_index\":0}],\"title\":\"Positive Prompt\",\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"\"]},{\"id\":635,\"type\":\"ClownGuide_Beta\",\"pos\":[1604.09326171875,-479.9832763671875],\"size\":[284.860595703125,290.8609924316406],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":2022},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[2023],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"data\",false,true,0.5,1,\"beta57\",0,10,false]}],\"links\":[[2005,630,0,631,4,\"LATENT\"],[2008,631,0,591,0,\"LATENT\"],[2013,629,0,630,3,\"LATENT\"],[2015,629,3,632,1,\"LATENT\"],[2016,632,0,630,0,\"MODEL\"],[2017,628,0,629,0,\"IMAGE\"],[2018,107,0,630,1,\"CONDITIONING\"],[2019,591,0,633,0,\"IMAGE\"],[2020,629,0,634,0,\"LATENT\"],[2021,634,0,630,5,\"GUIDES\"],[2022,629,0,635,0,\"LATENT\"],[2023,635,0,631,5,\"GUIDES\"],[2024,636,1,107,0,\"CLIP\"],[2025,636,0,632,0,\"MODEL\"],[2026,636,2,629,4,\"VAE\"],[2027,636,2,591,1,\"VAE\"],[2028,636,1,637,0,\"CLIP\"],[2029,637,0,630,2,\"CONDITIONING\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":2.1762913579017154,\"offset\":[427.0670817937978,488.9238245904811]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0},\"version\":0.4}"
  },
  {
    "path": "example_workflows/hidream unsampling data.json",
    "content": "{\"last_node_id\":637,\"last_link_id\":2029,\"nodes\":[{\"id\":628,\"type\":\"LoadImage\",\"pos\":[599.166015625,156.38429260253906],\"size\":[315,314],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2017]},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"ComfyUI_14254_.png\",\"image\"]},{\"id\":629,\"type\":\"VAEEncodeAdvanced\",\"pos\":[961.6968994140625,123.66181182861328],\"size\":[278.0284423828125,280.5834045410156],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2017},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":2026}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[2013,2020,2022],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":null},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[2015]},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\"},\"widgets_values\":[\"false\",1024,1024,\"red\",false,\"16_channels\"]},{\"id\":632,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[962.5586547851562,-316.3705139160156],\"size\":[277.62237548828125,126],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":2025},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":2015}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2016],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ModelSamplingAdvancedResolution\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":633,\"type\":\"SaveImage\",\"pos\":[1921.8458251953125,-123.4797134399414],\"size\":[436.4179382324219,508.5302429199219],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2019}],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ComfyUI\"]},{\"id\":631,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[1605.8143310546875,-124.34080505371094],\"size\":[280.55523681640625,510],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2005},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":2023},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2008],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",-1,5.5,\"resample\",true]},{\"id\":634,\"type\":\"ClownGuide_Beta\",\"pos\":[1276.0064697265625,-480.84442138671875],\"size\":[284.860595703125,290.8609924316406],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":2020},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[2021],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"data\",false,false,0.5,1,\"constant\",0,-1,false]},{\"id\":636,\"type\":\"ClownModelLoader\",\"pos\":[599.3463745117188,-176.31788635253906],\"size\":[315,266],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2025],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[2024,2028],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[2026,2027],\"slot_index\":2}],\"properties\":{\"Node name for 
S&R\":\"ClownModelLoader\"},\"widgets_values\":[\"hidream_i1_full_fp8.safetensors\",\"fp8_e4m3fn\",\"clip_l_hidream.safetensors\",\"clip_g_hidream.safetensors\",\"t5xxl_fp8_e4m3fn_scaled.safetensors\",\"llama_3.1_8b_instruct_fp8_scaled.safetensors\",\"hidream\",\"ae.sft\"]},{\"id\":591,\"type\":\"VAEDecode\",\"pos\":[1924.08251953125,-233.2501983642578],\"size\":[210,46],\"flags\":{\"collapsed\":false},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"label\":\"samples\",\"type\":\"LATENT\",\"link\":2008},{\"name\":\"vae\",\"localized_name\":\"vae\",\"label\":\"vae\",\"type\":\"VAE\",\"link\":2027}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"label\":\"IMAGE\",\"type\":\"IMAGE\",\"shape\":3,\"links\":[2019],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\"},\"widgets_values\":[]},{\"id\":630,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1271.7001953125,-124.3408432006836],\"size\":[291.7499084472656,630],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":2016},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2018},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2029},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2013},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":2021},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2005]},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",\"beta57\",60,-1,1,1,0,\"fixed\",\"unsample\",true]},{\"id\":107,\"type\":\"CLIPTextEncode\",\"pos\":[959.4713745117188,-123.3353500366211],\"size\":[282.33453369140625,173.58438110351562],\"flags\":{\"collapsed\":false},\"order\":2,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"label\":\"clip\",\"type\":\"CLIP\",\"link\":2024}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"label\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"shape\":3,\"links\":[2018],\"slot_index\":0}],\"title\":\"Positive Prompt\",\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"the mournful lamentations of of a female rock singer on stage with chaos behind her, her face screaming her sorrowful refrains the despairing cries of anguished screams howling agonized moans, her pained whispers mournful sighs distant echoes across the smoky stage, fading memories of lost loves, forgotten dreams, shattered hopes, crushed spirits, broken 
hearts\"]},{\"id\":637,\"type\":\"CLIPTextEncode\",\"pos\":[963.5917358398438,453.83306884765625],\"size\":[278.4529113769531,88],\"flags\":{\"collapsed\":false},\"order\":3,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"label\":\"clip\",\"type\":\"CLIP\",\"link\":2028}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"label\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"shape\":3,\"links\":[2029],\"slot_index\":0}],\"title\":\"Positive Prompt\",\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"\"]},{\"id\":635,\"type\":\"ClownGuide_Beta\",\"pos\":[1604.09326171875,-479.9832763671875],\"size\":[284.860595703125,290.8609924316406],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":2022},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[2023],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"data\",false,true,0.5,1,\"beta57\",0,10,false]}],\"links\":[[2005,630,0,631,4,\"LATENT\"],[2008,631,0,591,0,\"LATENT\"],[2013,629,0,630,3,\"LATENT\"],[2015,629,3,632,1,\"LATENT\"],[2016,632,0,630,0,\"MODEL\"],[2017,628,0,629,0,\"IMAGE\"],[2018,107,0,630,1,\"CONDITIONING\"],[2019,591,0,633,0,\"IMAGE\"],[2020,629,0,634,0,\"LATENT\"],[2021,634,0,630,5,\"GUIDES\"],[2022,629,0,635,0,\"LATENT\"],[2023,635,0,631,5,\"GUIDES\"],[2024,636,1,107,0,\"CLIP\"],[2025,636,0,632,0,\"MODEL\"],[2026,636,2,629,4,\"VAE\"],[2027,636,2,591,1,\"VAE\"],[2028,636,1,637,0,\"CLIP\"],[2029,637,0,630,2,\"CONDITIONING\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":2.1762913579017154,\"offset\":[427.0670817937978,488.9238245904811]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0},\"version\":0.4}"
  },
  {
    "path": "example_workflows/hidream unsampling pseudoimplicit.json",
    "content": "{\"last_node_id\":637,\"last_link_id\":2029,\"nodes\":[{\"id\":628,\"type\":\"LoadImage\",\"pos\":[599.166015625,156.38429260253906],\"size\":[315,314],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2017]},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"ComfyUI_14254_.png\",\"image\"]},{\"id\":629,\"type\":\"VAEEncodeAdvanced\",\"pos\":[961.6968994140625,123.66181182861328],\"size\":[278.0284423828125,280.5834045410156],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2017},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":2026}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[2013,2020,2022],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":null},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[2015]},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\"},\"widgets_values\":[\"false\",1024,1024,\"red\",false,\"16_channels\"]},{\"id\":632,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[962.5586547851562,-316.3705139160156],\"size\":[277.62237548828125,126],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":2025},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":2015}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2016],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ModelSamplingAdvancedResolution\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":633,\"type\":\"SaveImage\",\"pos\":[1921.8458251953125,-123.4797134399414],\"size\":[436.4179382324219,508.5302429199219],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2019}],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ComfyUI\"]},{\"id\":591,\"type\":\"VAEDecode\",\"pos\":[1924.08251953125,-233.2501983642578],\"size\":[210,46],\"flags\":{\"collapsed\":false},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"label\":\"samples\",\"type\":\"LATENT\",\"link\":2008},{\"name\":\"vae\",\"localized_name\":\"vae\",\"label\":\"vae\",\"type\":\"VAE\",\"link\":2027}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"label\":\"IMAGE\",\"type\":\"IMAGE\",\"shape\":3,\"links\":[2019],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"VAEDecode\"},\"widgets_values\":[]},{\"id\":107,\"type\":\"CLIPTextEncode\",\"pos\":[959.4713745117188,-123.3353500366211],\"size\":[282.33453369140625,173.58438110351562],\"flags\":{\"collapsed\":false},\"order\":2,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"label\":\"clip\",\"type\":\"CLIP\",\"link\":2024}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"label\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"shape\":3,\"links\":[2018],\"slot_index\":0}],\"title\":\"Positive Prompt\",\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"the mournful lamentations of of a female rock singer on stage with chaos behind her, her face screaming her sorrowful refrains the despairing cries of anguished screams howling agonized moans, her pained whispers mournful sighs distant echoes across the smoky stage, fading memories of lost loves, forgotten dreams, shattered hopes, crushed spirits, broken hearts\"]},{\"id\":637,\"type\":\"CLIPTextEncode\",\"pos\":[963.5917358398438,453.83306884765625],\"size\":[278.4529113769531,88],\"flags\":{\"collapsed\":false},\"order\":3,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"label\":\"clip\",\"type\":\"CLIP\",\"link\":2028}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"label\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"shape\":3,\"links\":[2029],\"slot_index\":0}],\"title\":\"Positive Prompt\",\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"\"]},{\"id\":636,\"type\":\"ClownModelLoader\",\"pos\":[599.3463745117188,-176.31788635253906],\"size\":[315,266],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2025],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[2024,2028],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[2026,2027],\"slot_index\":2}],\"properties\":{\"Node name for 
S&R\":\"ClownModelLoader\"},\"widgets_values\":[\"hidream_i1_full_fp8.safetensors\",\"fp8_e4m3fn_fast\",\"clip_l_hidream.safetensors\",\"clip_g_hidream.safetensors\",\"t5xxl_fp8_e4m3fn_scaled.safetensors\",\"llama_3.1_8b_instruct_fp8_scaled.safetensors\",\"hidream\",\"ae.sft\"]},{\"id\":630,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1271.7001953125,-124.3408432006836],\"size\":[291.7499084472656,630],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":2016},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2018},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2029},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2013},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":2021},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2005]},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",\"beta57\",30,-1,1,1,0,\"fixed\",\"unsample\",true]},{\"id\":631,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[1605.8143310546875,-124.34080505371094],\"size\":[280.55523681640625,510],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2005},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":2023},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2008],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",-1,4,\"resample\",true]},{\"id\":634,\"type\":\"ClownGuide_Beta\",\"pos\":[1276.0064697265625,-480.84442138671875],\"size\":[284.860595703125,290.8609924316406],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":2020},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[2021],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"pseudoimplicit\",false,false,0.5,1,\"beta57\",0,30,false]},{\"id\":635,\"type\":\"ClownGuide_Beta\",\"pos\":[1604.09326171875,-479.9832763671875],\"size\":[284.860595703125,290.8609924316406],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":2022},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[2023],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"pseudoimplicit\",false,false,0.5,1,\"beta57\",0,4,false]}],\"links\":[[2005,630,0,631,4,\"LATENT\"],[2008,631,0,591,0,\"LATENT\"],[2013,629,0,630,3,\"LATENT\"],[2015,629,3,632,1,\"LATENT\"],[2016,632,0,630,0,\"MODEL\"],[2017,628,0,629,0,\"IMAGE\"],[2018,107,0,630,1,\"CONDITIONING\"],[2019,591,0,633,0,\"IMAGE\"],[2020,629,0,634,0,\"LATENT\"],[2021,634,0,630,5,\"GUIDES\"],[2022,629,0,635,0,\"LATENT\"],[2023,635,0,631,5,\"GUIDES\"],[2024,636,1,107,0,\"CLIP\"],[2025,636,0,632,0,\"MODEL\"],[2026,636,2,629,4,\"VAE\"],[2027,636,2,591,1,\"VAE\"],[2028,636,1,637,0,\"CLIP\"],[2029,637,0,630,2,\"CONDITIONING\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.7449402268886909,\"offset\":[544.7968662691544,737.2296697550046]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0},\"version\":0.4}"
  },
  {
    "path": "example_workflows/hidream unsampling.json",
    "content": "{\"last_node_id\":637,\"last_link_id\":2029,\"nodes\":[{\"id\":628,\"type\":\"LoadImage\",\"pos\":[599.166015625,156.38429260253906],\"size\":[315,314],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2017]},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"ComfyUI_14254_.png\",\"image\"]},{\"id\":629,\"type\":\"VAEEncodeAdvanced\",\"pos\":[961.6968994140625,123.66181182861328],\"size\":[278.0284423828125,280.5834045410156],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2017},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":2026}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[2013,2020,2022],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":null},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[2015]},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\"},\"widgets_values\":[\"false\",1024,1024,\"red\",false,\"16_channels\"]},{\"id\":632,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[962.5586547851562,-316.3705139160156],\"size\":[277.62237548828125,126],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":2025},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":2015}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2016],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ModelSamplingAdvancedResolution\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":633,\"type\":\"SaveImage\",\"pos\":[1921.8458251953125,-123.4797134399414],\"size\":[436.4179382324219,508.5302429199219],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2019}],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ComfyUI\"]},{\"id\":631,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[1605.8143310546875,-124.34080505371094],\"size\":[280.55523681640625,510],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2005},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":2023},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2008],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",-1,5.5,\"resample\",true]},{\"id\":591,\"type\":\"VAEDecode\",\"pos\":[1924.08251953125,-233.2501983642578],\"size\":[210,46],\"flags\":{\"collapsed\":false},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"label\":\"samples\",\"type\":\"LATENT\",\"link\":2008},{\"name\":\"vae\",\"localized_name\":\"vae\",\"label\":\"vae\",\"type\":\"VAE\",\"link\":2027}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"label\":\"IMAGE\",\"type\":\"IMAGE\",\"shape\":3,\"links\":[2019],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"VAEDecode\"},\"widgets_values\":[]},{\"id\":630,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1271.7001953125,-124.3408432006836],\"size\":[291.7499084472656,630],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":2016},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2018},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2029},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2013},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":2021},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2005]},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",\"beta57\",60,-1,1,1,0,\"fixed\",\"unsample\",true]},{\"id\":107,\"type\":\"CLIPTextEncode\",\"pos\":[959.4713745117188,-123.3353500366211],\"size\":[282.33453369140625,173.58438110351562],\"flags\":{\"collapsed\":false},\"order\":2,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"label\":\"clip\",\"type\":\"CLIP\",\"link\":2024}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"label\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"shape\":3,\"links\":[2018],\"slot_index\":0}],\"title\":\"Positive Prompt\",\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"the mournful lamentations of of a female rock singer on stage with chaos behind her, her face screaming her sorrowful refrains the despairing cries of anguished screams howling agonized moans, her pained whispers mournful sighs distant echoes across the smoky stage, fading memories of lost loves, forgotten dreams, shattered hopes, crushed spirits, broken hearts\"]},{\"id\":637,\"type\":\"CLIPTextEncode\",\"pos\":[963.5917358398438,453.83306884765625],\"size\":[278.4529113769531,88],\"flags\":{\"collapsed\":false},\"order\":3,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"label\":\"clip\",\"type\":\"CLIP\",\"link\":2028}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"label\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"shape\":3,\"links\":[2029],\"slot_index\":0}],\"title\":\"Positive Prompt\",\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"\"]},{\"id\":636,\"type\":\"ClownModelLoader\",\"pos\":[599.3463745117188,-176.31788635253906],\"size\":[315,266],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2025],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[2024,2028],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[2026,2027],\"slot_index\":2}],\"properties\":{\"Node name for 
S&R\":\"ClownModelLoader\"},\"widgets_values\":[\"hidream_i1_full_fp8.safetensors\",\"fp8_e4m3fn_fast\",\"clip_l_hidream.safetensors\",\"clip_g_hidream.safetensors\",\"t5xxl_fp8_e4m3fn_scaled.safetensors\",\"llama_3.1_8b_instruct_fp8_scaled.safetensors\",\"hidream\",\"ae.sft\"]},{\"id\":634,\"type\":\"ClownGuide_Beta\",\"pos\":[1276.0064697265625,-480.84442138671875],\"size\":[284.860595703125,290.8609924316406],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":2020},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[2021],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"inversion\",false,false,0.5,1,\"constant\",0,-1,false]},{\"id\":635,\"type\":\"ClownGuide_Beta\",\"pos\":[1604.09326171875,-479.9832763671875],\"size\":[284.860595703125,290.8609924316406],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":2022},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[2023],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"inversion\",false,false,1,1,\"beta57\",0,15,false]}],\"links\":[[2005,630,0,631,4,\"LATENT\"],[2008,631,0,591,0,\"LATENT\"],[2013,629,0,630,3,\"LATENT\"],[2015,629,3,632,1,\"LATENT\"],[2016,632,0,630,0,\"MODEL\"],[2017,628,0,629,0,\"IMAGE\"],[2018,107,0,630,1,\"CONDITIONING\"],[2019,591,0,633,0,\"IMAGE\"],[2020,629,0,634,0,\"LATENT\"],[2021,634,0,630,5,\"GUIDES\"],[2022,629,0,635,0,\"LATENT\"],[2023,635,0,631,5,\"GUIDES\"],[2024,636,1,107,0,\"CLIP\"],[2025,636,0,632,0,\"MODEL\"],[2026,636,2,629,4,\"VAE\"],[2027,636,2,591,1,\"VAE\"],[2028,636,1,637,0,\"CLIP\"],[2029,637,0,630,2,\"CONDITIONING\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.7449402268886909,\"offset\":[802.8733998149229,690.5491177830577]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0},\"version\":0.4}"
  },
  {
    "path": "example_workflows/intro to clownsampling.json",
    "content": "{\"last_node_id\":876,\"last_link_id\":2046,\"nodes\":[{\"id\":453,\"type\":\"VAEDecode\",\"pos\":[-303.0476379394531,3073.681640625],\"size\":[210,46],\"flags\":{\"collapsed\":false},\"order\":228,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":1923},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":1940,\"slot_index\":1}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"shape\":3,\"links\":[1365],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\"},\"widgets_values\":[]},{\"id\":606,\"type\":\"LoraLoader\",\"pos\":[-2194.87353515625,3180.94482421875],\"size\":[359.7619323730469,126],\"flags\":{},\"order\":177,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1890},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1939}],\"outputs\":[{\"name\":\"MODEL\",\"localized_name\":\"MODEL\",\"type\":\"MODEL\",\"links\":[1904],\"slot_index\":0},{\"name\":\"CLIP\",\"localized_name\":\"CLIP\",\"type\":\"CLIP\",\"links\":[1937,1938],\"slot_index\":1}],\"properties\":{\"Node name for S&R\":\"LoraLoader\"},\"widgets_values\":[\"csbw_cascade_dark_ema.safetensors\",1,1]},{\"id\":454,\"type\":\"SaveImage\",\"pos\":[-303.3555603027344,3184.454345703125],\"size\":[753.4503784179688,734.7869262695312],\"flags\":{},\"order\":229,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":1365}],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ComfyUI\"]},{\"id\":625,\"type\":\"SharkOptions_UltraCascade_Latent_Beta\",\"pos\":[-648.0692138671875,3944.1982421875],\"size\":[310.79998779296875,82],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[1948],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SharkOptions_UltraCascade_Latent_Beta\"},\"widgets_values\":[1536,1536]},{\"id\":626,\"type\":\"SharkOptions_UltraCascade_Latent_Beta\",\"pos\":[-1372.7569580078125,3947.591064453125],\"size\":[310.79998779296875,82],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[1951],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SharkOptions_UltraCascade_Latent_Beta\"},\"widgets_values\":[24,24]},{\"id\":624,\"type\":\"SharkOptions_UltraCascade_Latent_Beta\",\"pos\":[-1013.2625732421875,3947.5908203125],\"size\":[310.79998779296875,82],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[1947],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SharkOptions_UltraCascade_Latent_Beta\"},\"widgets_values\":[36,36]},{\"id\":609,\"type\":\"UNETLoader\",\"pos\":[-1020.5138549804688,3045.097412109375],\"size\":[356.544677734375,82],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MODEL\",\"localized_name\":\"MODEL\",\"type\":\"MODEL\",\"links\":[1926],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"UNETLoader\"},\"widgets_values\":[\"stage_b_lite_CSBW_v1.1.safetensors\",\"default\"]},{\"id\":621,\"type\":\"VAELoader\",\"pos\":[-637.3134765625,3068.5341796875],\"size\":[294.6280212402344,58],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"VAE\",\"localized_name\":\"VAE\",\"type\":\"VAE\",\"links\":[1940],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAELoader\"},\"widgets_values\":[\"stage_a_ft_hq.safetensors\"]},{\"id\":620,\"type\":\"CLIPLoader\",\"pos\":[-2564.87353515625,3272.8349609375],\"size\":[344.635498046875,98],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"CLIP\",\"localized_name\":\"CLIP\",\"type\":\"CLIP\",\"links\":[1939],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPLoader\"},\"widgets_values\":[\"cascade_text_encoder.safetensors\",\"stable_cascade\",\"default\"]},{\"id\":627,\"type\":\"Note\",\"pos\":[-1381.849365234375,4086.07421875],\"size\":[331.63720703125,415.29815673828125],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Stage C: the original Stable Cascade version. \\n\\nStable Cascade latents are actually quite small: typically, a 1024x1024 image will be generated from a stage C latent that is only 24x24 (for comparison, with SDXL or SD1.5, the dimensions are 128x128). \\n\\n\\\"Compression\\\" is just a shorthand method of determining these dimensions, such as 24x24 (1024 / 42 = 24.38, which means a \\\"compression\\\" of 42).\\n\\nThis poses a problem though: Cascade was only trained on a handful of resolutions. The difference between 24x24 and 25x25 is a significant drop in quality and coherence. Therefore, it is best to just set these dimensions directly.\\n\\nThe best trained resolutions are:\\n\\n24x24 > 32x32\\n30x16 > 40x24 \\n\\n48x24 also works, but seems to result in more doubling problems than the others.\\n\\n\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":628,\"type\":\"Note\",\"pos\":[-1012.45947265625,4084.7783203125],\"size\":[331.63720703125,415.29815673828125],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Stage UP: a patched version of Stable Cascade stage C (\\\"UltraPixel\\\"). \\n\\nThe key with these dimensions is to keep the aspect ratio the same as the stage C latent. Typically, best results are with a 1.5x upscale. 2.0x works, but will result in somewhat more issues with doubling, and can be a lot slower. However, the detail level will also be very high.\\n\\nSome viable resolutions are listed below. 
Asterisks signify ones that have been verified to work particularly well.\\n\\n32x32\\n36x36 **\\n40x40\\n42x42\\n48x48 *\\n\\n40x24\\n50x30\\n60x36 **\\n70x42\\n80x48 *\\n\\n72x36 \\n80x40 *\\n96x48 (very slow!)\\n\\n\\n\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":632,\"type\":\"CheckpointLoaderSimple\",\"pos\":[-1073.474609375,2726.673583984375],\"size\":[452.7829895019531,102.89583587646484],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MODEL\",\"localized_name\":\"MODEL\",\"type\":\"MODEL\",\"links\":null},{\"name\":\"CLIP\",\"localized_name\":\"CLIP\",\"type\":\"CLIP\",\"links\":null},{\"name\":\"VAE\",\"localized_name\":\"VAE\",\"type\":\"VAE\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"CheckpointLoaderSimple\"},\"widgets_values\":[\"cascade_B-lite_refined_CSBW_v1.1.safetensors\"]},{\"id\":633,\"type\":\"Note\",\"pos\":[-1075.468994140625,2892.701416015625],\"size\":[457.5304870605469,94.27093505859375],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"This is the stage B lite CSBW finetune (model only).\\n\\nhttps://huggingface.co/ClownsharkBatwing/Cascade_Stage_B_CSBW_Refined/blob/main/stage_b_lite_CSBW_v1.1.safetensors\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":634,\"type\":\"Note\",\"pos\":[-575.989501953125,2895.603271484375],\"size\":[547.0546875,91.47331237792969],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"This is a finetune of stage A. You will get a sharper image, but in images with large white areas, small circular grey halos are sometimes visible.\\n\\nhttps://huggingface.co/madebyollin/stage-a-ft-hq/blob/main/stage_a_ft_hq.safetensors\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":630,\"type\":\"Note\",\"pos\":[-3309.3076171875,3048.958984375],\"size\":[717.709228515625,165.61032104492188],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"I recommend the BF16 version of stage C. There is no visible difference vs. the full precision weights, and it halves the disk space requirements.\\n\\nhttps://huggingface.co/stabilityai/stable-cascade/blob/main/stage_c_bf16.safetensors\\n\\nIMPORTANT: The original UltraPixel \\\"safetensors\\\" is not a safetensors at all - it is a PICKLE, where they lazily (at best) changed the file extension to \\\".safetensors\\\"!\\n\\nI converted it to a real safetensors file, and it's available below:\\n\\nhttps://huggingface.co/ClownsharkBatwing/ultrapixel_convert/blob/main/ultrapixel_t2i.safetensors\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":584,\"type\":\"UltraCascade_Loader\",\"pos\":[-2564.4580078125,3133.043212890625],\"size\":[345.5117492675781,82.95540618896484],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MODEL\",\"localized_name\":\"MODEL\",\"type\":\"MODEL\",\"shape\":3,\"links\":[1890],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"UltraCascade_Loader\"},\"widgets_values\":[\"stage_c_bf16.safetensors\",\"ultrapixel_t2i.safetensors\"]},{\"id\":635,\"type\":\"Note\",\"pos\":[-3307.105712890625,3272.173095703125],\"size\":[715.61083984375,89.37511444091797],\"flags\":{},\"order\":13,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Any clip G will do. 
The Cascade version is available at:\\n\\nhttps://huggingface.co/stabilityai/stable-cascade/blob/main/text_encoder/model.bf16.safetensors\\n\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":636,\"type\":\"Note\",\"pos\":[-3306.760009765625,3418.6708984375],\"size\":[715.61083984375,113.57872772216797],\"flags\":{},\"order\":14,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"The LORA was trained with OneTrainer (https://github.com/Nerogar/OneTrainer) on some of my own SDXL generations. It has deep colors and is strong with wacky paint, illustration, and vector art styles. \\n\\nCascade learns extremely quickly and is very adept with artistic styles (it knows many artist names).\\n\\nhttps://huggingface.co/ClownsharkBatwing/CSBW_Style/blob/main/csbw_cascade_dark_ema.safetensors\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":629,\"type\":\"Note\",\"pos\":[-647.965087890625,4084.8818359375],\"size\":[331.63720703125,415.29815673828125],\"flags\":{},\"order\":15,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Stage B: the Stable Cascade superresolution model.\\n\\nAs with stage UP, the key with these dimensions is to keep the aspect ratio the same as the prior latents. Theoretically, any resolution may be used, though some odd distortions can occur when the ideal upscale ratio is not used. It's not entirely clear what those ratios are, so some experimentation may be necessary. \\n\\nSome resolutions that work particularly well are:\\n\\n1536x1536 *\\n2048x2048 *\\n\\n1600x960\\n2560x1536 **\\n2880x1792 *\\n3200x1920\\n\\nIf you use stage B lite, you can hit 4k resolutions without even using more than 12GB of VRAM.\\n\\nIt's highly recommended to use the CSBW finetune of stage B, as it fixes many of the severe artifact problems the original release had.\\n\\nNote: CFG is not needed for this stage!\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":637,\"type\":\"Note\",\"pos\":[-1838.5732421875,2922.63671875],\"size\":[457.5304870605469,94.27093505859375],\"flags\":{},\"order\":16,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Perturbed attention guidance (PAG) makes an enormous difference with Stable Cascade stages C and UP. 
Like CFG, it will double the runtime.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":598,\"type\":\"CLIPTextEncode\",\"pos\":[-1811.0350341796875,3205.474853515625],\"size\":[351.592529296875,173.00360107421875],\"flags\":{},\"order\":201,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1937}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[1907,1911,1914],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"impasto oil painting by Yayoi Kusama and Lisa Frank, thick paint textures, tunning contrasts at  night with stylish roughly drawn thick black lines, a nuclear explosion destroying a city, its towering wide glowing nuclear mushroom cloud enveloping the entire skyline, the nuclear fireball lighting up the dark sky\"]},{\"id\":601,\"type\":\"UltraCascade_PerturbedAttentionGuidance\",\"pos\":[-1808.5911865234375,3084.306884765625],\"size\":[344.3999938964844,58],\"flags\":{},\"order\":200,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1904}],\"outputs\":[{\"name\":\"MODEL\",\"localized_name\":\"MODEL\",\"type\":\"MODEL\",\"links\":[1909,1910],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"UltraCascade_PerturbedAttentionGuidance\"},\"widgets_values\":[3]},{\"id\":599,\"type\":\"CLIPTextEncode\",\"pos\":[-1814.4205322265625,3435.57763671875],\"size\":[356.2470703125,110.6326904296875],\"flags\":{},\"order\":202,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1938}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[1908,1912,1915],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"low quality, bad quality, low detail, blurry, unsharp\"]},{\"id\":631,\"type\":\"Note\",\"pos\":[-1557.671142578125,2725.4599609375],\"size\":[457.5304870605469,94.27093505859375],\"flags\":{},\"order\":17,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"This is a checkpoint that, for convenience, includes the stage B lite CSBW finetune, clip G, and stage A (the FT_HQ finetune).\\n\\nhttps://huggingface.co/ClownsharkBatwing/CSBW_Style/blob/main/cascade_B-lite_refined_CSBW_v1.1.safetensors\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":649,\"type\":\"Note\",\"pos\":[2011.257080078125,3860],\"size\":[282.2704772949219,88],\"flags\":{},\"order\":18,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Since \\\"steps_to_run\\\" is set to -1,\\nthis will run all remaining steps.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":648,\"type\":\"Note\",\"pos\":[1661.257080078125,3860],\"size\":[283.8087463378906,88],\"flags\":{},\"order\":19,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Runs the next 10 steps (out of 
30).\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":657,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1710,3140],\"size\":[296.93646240234375,418],\"flags\":{},\"order\":20,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_3s\",\"beta57\",30,-1,1,5.5,0,\"fixed\",\"standard\",true]},{\"id\":680,\"type\":\"ClownSampler_Beta\",\"pos\":[1050,3140],\"size\":[283.6876220703125,174],\"flags\":{},\"order\":21,\"mode\":0,\"inputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"sampler\",\"localized_name\":\"sampler\",\"type\":\"SAMPLER\",\"links\":[1973]}],\"properties\":{\"Node name for S&R\":\"ClownSampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_3s\",-1,\"fixed\",true]},{\"id\":685,\"type\":\"Note\",\"pos\":[3440,5450],\"size\":[280.6243896484375,109.73818969726562],\"flags\":{},\"order\":22,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"NOTE: \\\"epsilon_scales\\\" is currently unused, but exists as a placeholder. \\n\\n\\\"frame_weights\\\" is for video models such as Hunyuan. 
This is for use with guides.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":713,\"type\":\"Note\",\"pos\":[4574.66552734375,4613.29833984375],\"size\":[280.0735168457031,88],\"flags\":{},\"order\":23,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"INPAINTING TIP: Try using the settings to the right with a feathered mask.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":670,\"type\":\"SigmasSchedulePreview\",\"pos\":[3850,5410],\"size\":[315,270],\"flags\":{},\"order\":24,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"SigmasSchedulePreview\"},\"widgets_values\":[\"hard\",0.25,1,1,1,\"beta57\",30,2.1,0]},{\"id\":654,\"type\":\"BetaSamplingScheduler\",\"pos\":[1420,2780],\"size\":[210,106],\"flags\":{},\"order\":25,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":null}],\"outputs\":[{\"name\":\"SIGMAS\",\"localized_name\":\"SIGMAS\",\"type\":\"SIGMAS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"BetaSamplingScheduler\"},\"widgets_values\":[20,0.5,0.7]},{\"id\":653,\"type\":\"Note\",\"pos\":[1390,2940],\"size\":[252.12789916992188,117.73304748535156],\"flags\":{},\"order\":26,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"\\\"beta57\\\" is equivalent to the BetaSamplingScheduler node above. I have found the results to be generally superior to the default \\\"beta\\\" (where the values are both set to 0.60).\\n\\n\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":643,\"type\":\"Note\",\"pos\":[751.2572021484375,4100],\"size\":[507.688720703125,165.58355712890625],\"flags\":{},\"order\":27,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"\\\"steps_to_run\\\": When set to -1, it will run all steps per usual. \\n\\nIf set to a positive value, it will run that number of steps, and then stop and pass the latent off to the next sampler node.\\n\\nIf the next sampler node's \\\"sampler_mode\\\" is set to \\\"resample\\\", it will then continue where the first one left off. \\n\\nThis even works with multistep samplers, as it carries its \\\"momentum\\\" from node to the next. 
This is not the case for \\\"KSampler (Advanced)\\\", or any other sampler nodes that I'm aware of.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":724,\"type\":\"CLIPTextEncode\",\"pos\":[990,4960],\"size\":[210,88],\"flags\":{},\"order\":28,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":null}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[1982,1983],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"\"]},{\"id\":722,\"type\":\"ClownGuide_Beta\",\"pos\":[1250,5430],\"size\":[315,290],\"flags\":{},\"order\":29,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[1984],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"epsilon\",false,false,0.5,1,\"constant\",0,1000,false]},{\"id\":720,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1260,4940],\"size\":[315,418],\"flags\":{},\"order\":178,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1982},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1983},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":1984},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0,\"multistep/res_2m\",\"beta57\",30,-1,1,1,0,\"fixed\",\"unsample\",true]},{\"id\":721,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1620,4940],\"size\":[315,418],\"flags\":{},\"order\":183,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":1985},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0,\"multistep/res_2m\",\"beta57\",30,-1,1,5.5,-1,\"fixed\",\"resample\",true]},{\"id\":727,\"type\":\"Note\",\"pos\":[890,5170],\"size\":[333.3896179199219,108.9758071899414],\"flags\":{},\"order\":30,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"UNSAMPLER SETTINGS: \\n\\nEta should usually be 0.0. \\nCFG should be 1.0, and used with an empty prompt.\\n\\nDenoise < 1.0 can help with adherence to the unsampled image.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":731,\"type\":\"Note\",\"pos\":[1980,5160],\"size\":[364.70263671875,103.89823150634766],\"flags\":{},\"order\":31,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Ensure resampler denoise matches the unsampler denoise.\\n\\nLow eta values can be used here (try 0.1 to 0.25). Sometimes they can actually improve adherence to the unsampled image.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":733,\"type\":\"Note\",\"pos\":[880,5530],\"size\":[339.3138122558594,133.51815795898438],\"flags\":{},\"order\":32,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Typical guide settings for unsampling/resampling with a rectified flow model (AuraFlow, SD3.5, Flux) are to the right.\\n\\nThis will generally NOT work well with UNSAMPLING SD1.5, SDXL, Cascade, etc.! 
(These guide nodes however work great as regular guides with these models!)\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":659,\"type\":\"Note\",\"pos\":[1427.1387939453125,3646.506591796875],\"size\":[352.2813415527344,88],\"flags\":{},\"order\":33,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"All of the configurations above will have the same output (and runtime) as the chained samplers below.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":610,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[-1373.449462890625,3188.063232421875],\"size\":[311.41375732421875,693.9824829101562],\"flags\":{},\"order\":213,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":1909},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1907},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1908},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":1951},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1949],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_3s\",\"beta57\",30,-1,1,5.5,1,\"fixed\",\"standard\",true]},{\"id\":612,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[-1014.779296875,3187.209228515625],\"size\":[314.421142578125,693.9824829101562],\"flags\":{},\"order\":221,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":1910},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1911},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1912},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1949},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":1947},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1950],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_3s\",\"beta57\",30,-1,1,5.5,-1,\"fixed\",\"standard\",true]},{\"id\":613,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[-648.0813598632812,3185.39013671875],\"size\":[309.2452087402344,691.814208984375],\"flags\":{},\"order\":226,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":1926},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1914},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1915},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1950},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":1948},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1923],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_3s\",\"beta57\",30,-1,1,1,-1,\"fixed\",\"standard\",true]},{\"id\":716,\"type\":\"Note\",\"pos\":[4574.66552734375,4963.29833984375],\"size\":[270.65277099609375,108.61186218261719],\"flags\":{},\"order\":34,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"INPAINTING TIP: Try using the settings to the right with a feathered mask, and \\\"end_step\\\" set to the number of sampling steps (or less). 
This will allow the entire image to change slightly to help heal any seams that may appear.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":714,\"type\":\"ClownGuide_Beta\",\"pos\":[4874.66552734375,4503.29833984375],\"size\":[257.2991638183594,290],\"flags\":{},\"order\":35,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"epsilon\",false,false,1,1,\"constant\",0,1000,false]},{\"id\":715,\"type\":\"ClownGuide_Beta\",\"pos\":[4874.66552734375,4863.29833984375],\"size\":[254.67617797851562,290],\"flags\":{},\"order\":36,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"epsilon\",false,true,1,1,\"beta57\",0,40,false]},{\"id\":709,\"type\":\"Note\",\"pos\":[5570,4480],\"size\":[280.0735168457031,88],\"flags\":{},\"order\":37,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Note: The \\\"guide_masked\\\" latent image will control the region that is \\\"masked out\\\"! 
And vice versa with \\\"guide_unmasked\\\".\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":688,\"type\":\"ClownGuides_Beta\",\"pos\":[5542.884765625,3927.678955078125],\"size\":[315,450],\"flags\":{},\"order\":38,\"mode\":0,\"inputs\":[{\"name\":\"guide_masked\",\"localized_name\":\"guide_masked\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"guide_unmasked\",\"localized_name\":\"guide_unmasked\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights_masked\",\"localized_name\":\"weights_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"weights_unmasked\",\"localized_name\":\"weights_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[1977],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuides_Beta\"},\"widgets_values\":[\"epsilon\",false,true,0.75,0.75,1,1,\"beta57\",\"constant\",0,0,15,15,false]},{\"id\":707,\"type\":\"ClownGuide_Beta\",\"pos\":[5206.15283203125,3929.6015625],\"size\":[315,290],\"flags\":{},\"order\":39,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"pseudoimplicit\",false,false,0.15,1,\"beta57\",5,15,false]},{\"id\":706,\"type\":\"Note\",\"pos\":[5228.6552734375,4270.94677734375],\"size\":[280.0735168457031,88],\"flags\":{},\"order\":40,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Tip: Try a delayed start (start_step > 0), like shown above with pseudoimplicit, for wacky results!\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":711,\"type\":\"Note\",\"pos\":[5570,4610],\"size\":[280.0735168457031,88],\"flags\":{},\"order\":41,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Tip: I recommend drawing your masks on random load image nodes, for convenience.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":712,\"type\":\"Note\",\"pos\":[5546.97216796875,3772.719970703125],\"size\":[308.80828857421875,88],\"flags\":{},\"order\":42,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"INPAINTING TIP: Use ClownRegionalConditioning and ClownGuides together with the same mask!\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":744,\"type\":\"Note\",\"pos\":[7538.74755859375,3817.72216796875],\"size\":[337.9170227050781,389.18304443359375],\"flags\":{},\"order\":43,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"It can be confusing at first, trying to understand which area is affected by which conditioning or mask. 
I suggest starting with prompts like \\\"blue ice\\\" and \\\"red fire\\\" with region_bleed = 0.0 to clear things up.\\n\\nTO THE LEFT:\\n\\nThe two nodes to the left will automatically create an unmasked area, based on what areas are not masked by mask, mask_A, or mask_B.\\n\\nAs an example:\\n\\nPositive_A will affect the area masked by \\\"mask_A\\\".\\n\\nPositive_B will affect the area masked by \\\"mask_B\\\".\\n\\nPositive_unmasked will affect the area that is not masked by \\\"mask_A\\\" or \\\"mask_B\\\".\\n\\nTO THE RIGHT:\\n\\nThese two nodes give you manual control over the area for each prompt. This is especially useful for temporal attention with video models like WAN. The risk is that if you fail to ensure every part of the image (or frame) is masked by one of the masks, you'll end up with an unconditioned area that will look like pure noise.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":756,\"type\":\"TemporalCrossAttnMask\",\"pos\":[7017.90087890625,4790.6005859375],\"size\":[210,82],\"flags\":{},\"order\":44,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"temporal_mask\",\"localized_name\":\"temporal_mask\",\"type\":\"MASK\",\"links\":[1988]}],\"properties\":{\"Node name for S&R\":\"TemporalCrossAttnMask\"},\"widgets_values\":[1,65]},{\"id\":757,\"type\":\"TemporalCrossAttnMask\",\"pos\":[7017.12060546875,4954.5556640625],\"size\":[210,82],\"flags\":{},\"order\":45,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"temporal_mask\",\"localized_name\":\"temporal_mask\",\"type\":\"MASK\",\"links\":[1989],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"TemporalCrossAttnMask\"},\"widgets_values\":[65,133]},{\"id\":758,\"type\":\"Note\",\"pos\":[7644.2734375,4659.04248046875],\"size\":[275.73828125,88],\"flags\":{},\"order\":46,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Sometimes it is beneficial to allow self-attention masks to overlap slightly. This is similar to the \\\"edge_width\\\" parameter above, except it overlaps frames, not spatial components (areas of the image).\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":751,\"type\":\"TemporalSplitAttnMask\",\"pos\":[7668.654296875,4808.83642578125],\"size\":[210,130],\"flags\":{},\"order\":47,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"temporal_mask\",\"localized_name\":\"temporal_mask\",\"type\":\"MASK\",\"links\":[1986],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"TemporalSplitAttnMask\"},\"widgets_values\":[1,69,1,65]},{\"id\":753,\"type\":\"TemporalSplitAttnMask\",\"pos\":[7668.654296875,4998.83642578125],\"size\":[210,130],\"flags\":{},\"order\":48,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"temporal_mask\",\"localized_name\":\"temporal_mask\",\"type\":\"MASK\",\"links\":[1987],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"TemporalSplitAttnMask\"},\"widgets_values\":[61,133,65,133]},{\"id\":749,\"type\":\"Note\",\"pos\":[6907.34326171875,4623.6328125],\"size\":[280.0735168457031,88],\"flags\":{},\"order\":49,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"The advanced version of the WAN patcher can set a sliding self-attention window. 
The \\\"size\\\" is the number of latent frames (which is 1/4th the number of frames in the final output).\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":689,\"type\":\"Note\",\"pos\":[4562.91796875,3811.044189453125],\"size\":[494.1324462890625,535.6380004882812],\"flags\":{},\"order\":50,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Guides are a way of controlling the image generation process without denoising an image directly, but by steering the denoising process itself. This can mimic many of the benefits of unsampling, without the need to spend extra time unsampling the image.\\n\\nThere are two main guide modes:\\n\\n\\\"Epsilon\\\" can be used in conjunction with unsampling/resampling workflows to dramatically improve results with rectified flow models (AuraFlow, SD3.5, Flux). It can also be used directly. It works by modifying the noise prediction made by the model to align with the guide image.\\n\\n\\\"Pseudoimplicit\\\" works by lying to the model about the state of the denoising process, so that it generates a noise prediction that strongly aligns with the guide image. \\\"Fully_pseudoimplicit\\\" is only supported with \\\"fully_implicit\\\" and \\\"diag_implicit\\\" samplers (all others will default back to pseudoimplicit).\\n\\nChannelwise and projection modes can have a dramatic effect. I especially recommend trying epsilon with these modes, though they are quite interesting with pseudoimplicit as well. \\\"projection_mode\\\" can result in some issues with image details if used for the entire sampling process.\\n\\nCUTOFF:\\n\\nFlux has extremely strong self-attention, and has issues with getting \\\"stuck\\\" if the guide strength is too high (or used for too many steps), which results in an output that looks nearly identical to the guide. \\\"cutoff\\\" does a crude check for how similar the image is to the guide - if it exceeds that value, it will turn off the guide for that step. Try setting to 0.5 or 0.6 when using Flux.\\n\\nWEIGHT SCHEDULERS:\\n\\nThese control the weight at each step. For example, with the settings shown:\\n\\n * the \\\"unmasked\\\" region will have a weight of 0.75 for the first 15 steps, then 0.0 for every step after that\\n\\n * the \\\"masked\\\" region will start with a weight of 0.75 for the first step, gradually declining until reaching 0.0 after 15 steps (and remaining at 0.0)\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":726,\"type\":\"Note\",\"pos\":[5630,5410],\"size\":[257.97479248046875,159.16941833496094],\"flags\":{},\"order\":51,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"VAEEncodeAdvanced is a quality of life node for convenience when using multiple guides.\\n\\nNote: the mask input is for a black and white image. 
It is there for convenience with converting any masks you may have saved as black and white images into masks you can use in a workflow.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":691,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[5894.66552734375,3923.29833984375],\"size\":[274.2724609375,418],\"flags\":{},\"order\":179,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":1977},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_2m\",\"beta57\",30,-1,1,5.5,0,\"fixed\",\"standard\",true]},{\"id\":695,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[8110,3210],\"size\":[260.3999938964844,126],\"flags\":{},\"order\":215,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1980},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":null}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ModelSamplingAdvancedResolution\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":701,\"type\":\"Note\",\"pos\":[7080,3060],\"size\":[327.4920959472656,88],\"flags\":{},\"order\":52,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Loader nodes are provided for convenience with Flux and SD3.5. The Flux loader can also load the Redux (and ClipVision) models for you.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":694,\"type\":\"FluxGuidanceDisable\",\"pos\":[7790,3210],\"size\":[210,82],\"flags\":{},\"order\":208,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1979}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1980],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"FluxGuidanceDisable\"},\"widgets_values\":[true,true]},{\"id\":699,\"type\":\"Note\",\"pos\":[7760,3030],\"size\":[253.01846313476562,112.91952514648438],\"flags\":{},\"order\":53,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"This disables \\\"Flux Guidance\\\" (which is actually NOT disabled by setting to 1.0 or 0.0). 
It can be helpful in many cases where you wish to banish the \\\"Flux look\\\" to the bottom of a creepy old well in Transylvania.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":660,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[3907.121826171875,3512.491943359375],\"size\":[293.78173828125,618],\"flags\":{},\"order\":205,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":1962},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":1963},{\"name\":\"options 3\",\"type\":\"OPTIONS\",\"link\":1991},{\"name\":\"options 4\",\"type\":\"OPTIONS\",\"link\":1968},{\"name\":\"options 5\",\"type\":\"OPTIONS\",\"link\":1971},{\"name\":\"options 6\",\"type\":\"OPTIONS\",\"link\":1972},{\"name\":\"options 7\",\"type\":\"OPTIONS\",\"link\":1974},{\"name\":\"options 8\",\"type\":\"OPTIONS\",\"link\":1990},{\"name\":\"options 9\",\"type\":\"OPTIONS\",\"link\":2003},{\"name\":\"options 10\",\"type\":\"OPTIONS\",\"link\":2007},{\"name\":\"options 11\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_3s\",\"beta57\",30,-1,1,5.5,-1,\"fixed\",\"standard\",true]},{\"id\":647,\"type\":\"Note\",\"pos\":[1321.257080078125,3860],\"size\":[288.0400390625,88],\"flags\":{},\"order\":54,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Runs the first 7 steps (out of 
30).\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":640,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1321.257080078125,4010],\"size\":[296.93646240234375,418],\"flags\":{},\"order\":55,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1952],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_3s\",\"beta57\",30,7,1,5.5,0,\"fixed\",\"standard\",true]},{\"id\":641,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1671.257080078125,4010],\"size\":[288.4732666015625,418],\"flags\":{},\"order\":182,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1952},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1953],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_3s\",\"beta57\",30,10,1,5.5,-1,\"fixed\",\"resample\",true]},{\"id\":642,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[2021.257080078125,4010],\"size\":[291.5506286621094,422.6160888671875],\"flags\":{},\"order\":203,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1953},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_3s\",\"beta57\",30,-1,1,5.5,-1,\"fixed\",\"resample\",true]},{\"id\":729,\"type\":\"Note\",\"pos\":[1666.1065673828125,4786.38134765625],\"size\":[210,88],\"flags\":{},\"order\":56,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"        RESAMPLER NODE\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":650,\"type\":\"Note\",\"pos\":[1771.257080078125,4480],\"size\":[453.94183349609375,144.25192260742188],\"flags\":{},\"order\":57,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"IMPORTANT: sampler_mode is set to \\\"resample\\\"!\\n\\nALSO: seed is set to -1!\\n\\nThis means \\\"continue where the last sampler left off\\\", as in, use the next available unused seed.\\n\\nIf you set it to another value, the noise sampler that is used at every step might reuse a seed, which can cause the image to burn.\\n\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":738,\"type\":\"ReHiDreamPatcher\",\"pos\":[6490,4280],\"size\":[210,82],\"flags\":{},\"order\":58,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":null}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ReHiDreamPatcher\"},\"widgets_values\":[\"float32\",true]},{\"id\":739,\"type\":\"ReSD35Patcher\",\"pos\":[6490,4420],\"size\":[210,82],\"flags\":{},\"order\":59,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":null}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ReSD35Patcher\"},\"widgets_values\":[\"float32\",true]},{\"id\":740,\"type\":\"ReAuraPatcher\",\"pos\":[6490,4560],\"size\":[210,82],\"flags\":{},\"order\":60,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":null}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":null,\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ReAuraPatcher\"},\"widgets_values\":[true,true]},{\"id\":741,\"type\":\"ReWanPatcher\",\"pos\":[6490,4680],\"size\":[210,58],\"flags\":{},\"order\":61,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":null}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ReWanPatcher\"},\"widgets_values\":[true]},{\"id\":658,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[2070,3140],\"size\":[296.93646240234375,418],\"flags\":{},\"order\":62,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_3s\",\"beta57\",30,10000,1,5.5,0,\"fixed\",\"standard\",true]},{\"id\":734,\"type\":\"Note\",\"pos\":[1830,2950],\"size\":[433.063232421875,101.85264587402344],\"flags\":{},\"order\":63,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"NOTE: steps_to_run = -1 means to run all steps (the usual default approach for any sampler).\\n\\nOn the right, steps_to_run > steps, so it will run all the way till the end, just like on the left. 
This is the approach traditionally used in KSampler (Advanced).\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":765,\"type\":\"Note\",\"pos\":[-3299.93603515625,2699.88427734375],\"size\":[389.86285400390625,98.29244232177734],\"flags\":{},\"order\":64,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"THESE NODES ARE NOT REQUIRED TO USE RES4LYF!!!\\n\\nThese descriptions are included only out of a desire to consolidate all CSBW node documentation into one location.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":728,\"type\":\"Note\",\"pos\":[1311.5924072265625,4784.7666015625],\"size\":[213.4912109375,88],\"flags\":{},\"order\":65,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"        UNSAMPLER NODE\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":723,\"type\":\"ClownGuide_Beta\",\"pos\":[1620,5430],\"size\":[315,290],\"flags\":{},\"order\":66,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[1985],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"epsilon\",false,true,0.75,1,\"beta57\",0,10,false]},{\"id\":766,\"type\":\"Note\",\"pos\":[1980.62939453125,5502.46337890625],\"size\":[366.45068359375,97.77838134765625],\"flags\":{},\"order\":67,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Tip: \\\"projection\\\" and \\\"channelwise\\\" modes can increase the intensity of the effect with epsilon and data guide modes. Sometimes, this effect is very desirable. 
It's worth experimenting with.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":732,\"type\":\"Note\",\"pos\":[1981.25732421875,5003.00537109375],\"size\":[361.62445068359375,90.62290954589844],\"flags\":{},\"order\":68,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Tip: multistep samplers usually adhere to unsampled images more effectively than others.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":768,\"type\":\"ClownGuides_Beta\",\"pos\":[4877.3896484375,5235.57373046875],\"size\":[333.3587951660156,450],\"flags\":{},\"order\":69,\"mode\":0,\"inputs\":[{\"name\":\"guide_masked\",\"localized_name\":\"guide_masked\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"guide_unmasked\",\"localized_name\":\"guide_unmasked\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights_masked\",\"localized_name\":\"weights_masked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"weights_unmasked\",\"localized_name\":\"weights_unmasked\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownGuides_Beta\"},\"widgets_values\":[\"lure\",false,false,1,1,1,1,\"linear_quadratic\",\"constant\",0,0,8,-1,false]},{\"id\":769,\"type\":\"Note\",\"pos\":[4576.1689453125,5281.28271484375],\"size\":[266.2802734375,135.71385192871094],\"flags\":{},\"order\":70,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"INPAINTING TIP: Try using the settings to the right with your input image connected to both the guide_masked and guide_unmasked inputs. Adjust \\\"end_step_masked\\\" to change the strength of the inpainting effect (or weight_masked, or weight_scheduler_masked).\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":725,\"type\":\"VAEEncodeAdvanced\",\"pos\":[5930,5410],\"size\":[255.3756103515625,278],\"flags\":{},\"order\":71,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":null},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"VAEEncodeAdvanced\"},\"widgets_values\":[\"false\",1024,1024,\"red\",false,\"16_channels\"]},{\"id\":735,\"type\":\"ClownGuide_Beta\",\"pos\":[5147.18896484375,4861.30419921875],\"size\":[254.67617797851562,290],\"flags\":{},\"order\":72,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[1992],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"flow\",false,false,1,1,\"constant\",0,40,false]},{\"id\":770,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[5451.8759765625,4763.7705078125],\"size\":[315,438],\"flags\":{},\"order\":184,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":1992},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":1994},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_2m\",\"beta57\",30,-1,1,5.5,0,\"randomize\",\"standard\",true]},{\"id\":772,\"type\":\"SharkOptions_GuideCond_Beta\",\"pos\":[5230.13525390625,5239.04736328125],\"size\":[210,98],\"flags\":{},\"order\":73,\"mode\":0,\"inputs\":[{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[1994]}],\"properties\":{\"Node name for S&R\":\"SharkOptions_GuideCond_Beta\"},\"widgets_values\":[5.5]},{\"id\":773,\"type\":\"Note\",\"pos\":[5229.7958984375,5385.55810546875],\"size\":[272.242919921875,112.58575439453125],\"flags\":{},\"order\":74,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"When using \\\"flow\\\" mode a second set of conditionings can be added that will be used to evolve the guide itself to sync up better with your image during sampling. 
Try describing the guide with some creative liberties to bend things in the desired stylistic direction.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":774,\"type\":\"Note\",\"pos\":[4576.4013671875,5501.66796875],\"size\":[266.2802734375,135.71385192871094],\"flags\":{},\"order\":75,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"TIP: ClownGuides allows you to use multiple input images, each with their own separate schedule and strength settings. There's a lot of creative possibilities here, especially when combined with regional conditioning sharing the same mask!\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":746,\"type\":\"ClownRegionalConditioning_AB\",\"pos\":[7918.16943359375,3816.63427734375],\"size\":[248.7556610107422,350],\"flags\":{},\"order\":76,\"mode\":0,\"inputs\":[{\"name\":\"conditioning_A\",\"localized_name\":\"conditioning_A\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"conditioning_B\",\"localized_name\":\"conditioning_B\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"mask_A\",\"localized_name\":\"mask_A\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"mask_B\",\"localized_name\":\"mask_B\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"region_bleeds\",\"localized_name\":\"region_bleeds\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownRegionalConditioning_AB\"},\"widgets_values\":[1,0,0,\"constant\",0,-1,\"boolean\",128,false]},{\"id\":747,\"type\":\"ClownRegionalConditioning_ABC\",\"pos\":[7916.001953125,4221.97314453125],\"size\":[250.51895141601562,390],\"flags\":{},\"order\":77,\"mode\":0,\"inputs\":[{\"name\":\"conditioning_A\",\"localized_name\":\"conditioning_A\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"conditioning_B\",\"localized_name\":\"conditioning_B\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"conditioning_C\",\"localized_name\":\"conditioning_C\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"mask_A\",\"localized_name\":\"mask_A\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"mask_B\",\"localized_name\":\"mask_B\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"mask_C\",\"localized_name\":\"mask_C\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"region_bleeds\",\"localized_name\":\"region_bleeds\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownRegionalConditioning_ABC\"},\"widgets_values\":[1,0,0,\"constant\",0,100,\"boolean\",128,false]},{\"id\":743,\"type\":\"ClownRegionalConditioning3\",\"pos\":[7224.5439453125,4216.19189453125],\"size\":[287.20001220703125,370],\"flags\":{},\"order\":78,\"mode\":0,\"inputs\":[{\"name\":\"conditioning_A\",\"localized_name\":\"conditioning_A\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"conditioning_B\",\"localized_name\":\"conditioning_B\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"conditioning_unmasked\",\"localized_name\":\"conditioning_unmasked\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"mask_A\",\"localized_name\":\"mask_A\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"mask_B\",\"localized_name\":\"mask_B\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"region_bleeds\",\"localized_name\":\"region_bleeds\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownRegionalConditioning3\"},\"widgets_values\":[1,0,0,\"constant\",0,100,\"boolean\",128,false]},{\"id\":754,\"type\":\"ClownRegionalConditioning_AB\",\"pos\":[7261.39306640625,4816.611328125],\"size\":[248.7556610107422,350],\"flags\":{},\"order\":180,\"mode\":0,\"inputs\":[{\"name\":\"conditioning_A\",\"localized_name\":\"conditioning_A\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"conditioning_B\",\"localized_name\":\"conditioning_B\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"mask_A\",\"localized_name\":\"mask_A\",\"type\":\"MASK\",\"shape\":7,\"link\":1988},{\"name\":\"mask_B\",\"localized_name\":\"mask_B\",\"type\":\"MASK\",\"shape\":7,\"link\":1989},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"region_bleeds\",\"localized_name\":\"region_bleeds\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownRegionalConditioning_AB\"},\"widgets_values\":[1,0,0,\"constant\",0,-1,\"boolean\",128,false]},{\"id\":752,\"type\":\"ClownRegionalConditioning_AB\",\"pos\":[7918.654296875,4798.83642578125],\"size\":[248.7556610107422,350],\"flags\":{},\"order\":181,\"mode\":0,\"inputs\":[{\"name\":\"conditioning_A\",\"localized_name\":\"conditioning_A\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"conditioning_B\",\"localized_name\":\"conditioning_B\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"mask_A\",\"localized_name\":\"mask_A\",\"type\":\"MASK\",\"shape\":7,\"link\":1986},{\"name\":\"mask_B\",\"localized_name\":\"mask_B\",\"type\":\"MASK\",\"shape\":7,\"link\":1987},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"region_bleeds\",\"localized_name\":\"region_bleeds\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownRegionalConditioning_AB\"},\"widgets_values\":[1,0,0,\"constant\",0,-1,\"boolean\",128,false]},{\"id\":777,\"type\":\"ClownRegionalConditioning2\",\"pos\":[7226.02978515625,3817.949462890625],\"size\":[287.20001220703125,330],\"flags\":{},\"order\":79,\"mode\":0,\"inputs\":[{\"name\":\"conditioning_masked\",\"localized_name\":\"conditioning_masked\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"conditioning_unmasked\",\"localized_name\":\"conditioning_unmasked\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"region_bleeds\",\"localized_name\":\"region_bleeds\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownRegionalConditioning2\"},\"widgets_values\":[1,0,0,\"constant\",0,-1,\"boolean\",128,false]},{\"id\":705,\"type\":\"Note\",\"pos\":[6811.22021484375,3808.38671875],\"size\":[379.8222351074219,549.5839233398438],\"flags\":{},\"order\":80,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"\\\"weight\\\" affects how strongly the attention mask is applied, which controls how well the masked and unmasked regions are separated. \\n\\n\\\"region_bleed\\\" affects how much the regions can \\\"talk\\\" with each other (via self-attention). region_bleed=0.0 will ensure the strongest possible separation, but higher values can help prevent visible seams from forming along the edges of the masked areas.\\n\\n\\nWEIGHT SCHEDULER:\\n\\nThis controls the weight (strength of separation of the regions) at each step. For example, with the settings shown, the weight will begin at 1.70, and gradually decline before reaching 0.0 after 10 steps (and remaining at 0.0).\\n\\n\\\"mask_type\\\" currently only has the \\\"gradient\\\" option, but others may be added later. \\n\\nYes, this does mean you can use masks with gradients (so you can feather and blur them if you wish)!\\n\\nMASK_TYPE:\\n\\nThere are options here that are a bit like causal attention in LLMs. For example, \\\"boolean_masked\\\" means the masked area can \\\"see\\\" the entire image (via self-attention), while the unmasked area cannot \\\"see\\\" the masked area. This is very useful with Flux if you wish to generate a character close to the camera but have an unblurred background. Place the character in the masked area, describe only the background in the unmasked area, select \\\"boolean_masked\\\" and set region_bleed = 0.0. \\n\\nEDGE_WIDTH:\\n\\nThis creates overlapping self-attention at the boundaries between masked and unmasked areas. This helps to conceal seams. 
Try values like 50 or 150 to start, and watch the preview.\\n\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":776,\"type\":\"ClownRegionalConditionings\",\"pos\":[9220.0224609375,3819.96826171875],\"size\":[238.2400665283203,266],\"flags\":{},\"order\":222,\"mode\":0,\"inputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"shape\":7,\"link\":2000},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"region_bleeds\",\"localized_name\":\"region_bleeds\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownRegionalConditionings\"},\"widgets_values\":[0.9,0.25,0,\"constant\",0,-1,\"boolean\",false]},{\"id\":782,\"type\":\"Note\",\"pos\":[8261.9765625,4016.659423828125],\"size\":[272.1261291503906,131.35166931152344],\"flags\":{},\"order\":81,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"\\\"Spineless\\\" mode means that the region \\\"has no spine\\\" and is susceptible to influence by other regions (via self-attention). This is comparable to the \\\"boolean_masked\\\" etc. modes in the nodes to the left. For example, \\\"boolean_masked\\\" sets the masked area to \\\"spineless\\\".\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":785,\"type\":\"Note\",\"pos\":[8574.7734375,4022.382080078125],\"size\":[210,97.39286804199219],\"flags\":{},\"order\":82,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Unlimited regions may be set using these nodes. Up to 12 regions have been successfully tested in a single workflow.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":786,\"type\":\"Note\",\"pos\":[8979.33203125,4020.87939453125],\"size\":[212.89056396484375,176.87088012695312],\"flags\":{},\"order\":83,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"The risk of manual control over all masks is that you miss an area (some part ends up not being covered by any mask) which means it then has no conditioning. \\n\\nThis is easily avoided by simply not hooking up a mask to the final node. 
It will use any remaining unmasked area as the final mask.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":755,\"type\":\"Note\",\"pos\":[7241.16015625,4666.3251953125],\"size\":[275.73828125,88],\"flags\":{},\"order\":84,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"If sliding self-attention is used, only cross-attention needs to be masked.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":748,\"type\":\"ReWanPatcherAdvanced\",\"pos\":[6702.76953125,4816.7041015625],\"size\":[279.3623352050781,214],\"flags\":{},\"order\":85,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":null}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ReWanPatcherAdvanced\"},\"widgets_values\":[\"all\",\"all\",true,\"standard\",60]},{\"id\":750,\"type\":\"Note\",\"pos\":[6444.103515625,4815.1484375],\"size\":[225.1619873046875,212.99703979492188],\"flags\":{},\"order\":86,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Sliding self-attention is useful for generating sequences where the conditioning changes from one frame to another, or for reducing VRAM requirements and reducing inference time when generating long sequences. At least 601 frames can be generated in one shot on an RTX 4090 with the above settings.\\n\\nThere are two modes: standard and circular. Circular allows the first frame to \\\"see\\\" the last frame, whereas standard does not.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":742,\"type\":\"Note\",\"pos\":[6420.9990234375,3806.16064453125],\"size\":[345.86224365234375,263.46356201171875],\"flags\":{},\"order\":87,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Regional conditioning requires you to patch the model with a \\\"Re\\\" (for Regional) patcher (shown below) and to use the beta versions of either ClownSampler + SharkSampler, or ClownSharkSampler.\\n\\nFully compatible with Flux Redux, CFG, etc.\\n\\nHiDream notes:\\nRegional negative conditioning is currently supported with HiDream and is useful for controlling styles (e.g., \\\"photo\\\" in a region that should be a painting, and vice versa). \\n\\nWith HiDream, weight and region_bleed may also be set to negative values. The effect in terms of strength is the same for -0.9 vs 0.9, but it will change whether it operates on initial or final blocks in the model. 
The effect can be quite different.\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":737,\"type\":\"ReFluxPatcher\",\"pos\":[6490,4140],\"size\":[210,82],\"flags\":{},\"order\":88,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":null}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ReFluxPatcher\"},\"widgets_values\":[\"float32\",true]},{\"id\":778,\"type\":\"ClownRegionalConditioning\",\"pos\":[8500,3820],\"size\":[211.60000610351562,122],\"flags\":{},\"order\":185,\"mode\":0,\"inputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"shape\":7,\"link\":1995},{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"links\":[1996],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownRegionalConditioning\"},\"widgets_values\":[false,128]},{\"id\":775,\"type\":\"ClownRegionalConditioning\",\"pos\":[8260,3820],\"size\":[211.60000610351562,122],\"flags\":{},\"order\":89,\"mode\":0,\"inputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"shape\":7,\"link\":null},{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"links\":[1995],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownRegionalConditioning\"},\"widgets_values\":[true,128]},{\"id\":779,\"type\":\"ClownRegionalConditioning\",\"pos\":[8740.115234375,3820.114990234375],\"size\":[211.60000610351562,122],\"flags\":{},\"order\":204,\"mode\":0,\"inputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"shape\":7,\"link\":1996},{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"links\":[1999],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownRegionalConditioning\"},\"widgets_values\":[false,128]},{\"id\":783,\"type\":\"ClownRegionalConditioning\",\"pos\":[8975.990234375,3820.1171875],\"size\":[211.60000610351562,122],\"flags\":{},\"order\":214,\"mode\":0,\"inputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"shape\":7,\"link\":1999},{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"cond_regions\",\"localized_name\":\"cond_regions\",\"type\":\"COND_REGIONS\",\"links\":[2000],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ClownRegionalConditioning\"},\"widgets_values\":[false,128]},{\"id\":651,\"type\":\"Note\",\"pos\":[1041.0577392578125,2843.751953125],\"size\":[304.6747741699219,235.28672790527344],\"flags\":{},\"order\":90,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"SDE NOISE:\\n\\n\\\"eta\\\" represents how much noise the sampler adds after each step. If set to 0.0, the samplers will be \\\"ODEs\\\". If set to > 0.0, they will be \\\"SDEs\\\" and/or \\\"ancestral\\\". \\n\\nThe math has been carefully designed for both variance preserving and exploding models: results are particularly good with SD1.5, SDXL, Stable Cascade, Auraflow, SD3.5 Medium, and Flux. \\n\\nIn most cases, using eta will result in gains in quality and coherence when using at least 20 sampling steps. Best results are with 30 or more. \\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":638,\"type\":\"Note\",\"pos\":[690,2842.6943359375],\"size\":[321.5638427734375,270.1020202636719],\"flags\":{},\"order\":91,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"SPEED:\\n\\nAll multistep samplers, like Euler, use one model call per step. Therefore, they run at the same speed.\\n\\nAll others have the number of model calls per step listed at the end of the name in terms of \\\"stages\\\" (abbreviated \\\"s\\\").\\n\\nTherefore, \\\"res_2s\\\" has 2 stages, and uses 2 model calls per step. Each step will take 2x as long as a Euler step. \\\"ralston_3s\\\" will take 3x as long.\\n\\nImplicit samplers benefit enormously from an extra model call to initialize each step. Therefore, \\\"gauss-legendre_2s\\\" will actually use 3 model calls per step.\\n\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":681,\"type\":\"Note\",\"pos\":[691.0578002929688,3171.1572265625],\"size\":[320.96875,168.8627166748047],\"flags\":{},\"order\":92,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"IMPORTANT: the seed here is set to -1! \\n\\nThis means \\\"use the next available seed\\\" (which will be the most recently used seed + 1).\\n\\nThis setting is irrelevant if eta = 0.0. It is only used for SDE sampling (where noise is added after each step, the amount of which is controlled by \\\"eta\\\").\\n\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":801,\"type\":\"Note\",\"pos\":[631.0161743164062,2693.894775390625],\"size\":[602.4559326171875,93.21308135986328],\"flags\":{},\"order\":93,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"MANY COMPLEX WORKFLOWS BECOME MUCH SIMPLER WHEN USING RES4LYF NODES.\\n\\nA great emphasis has been placed, during the design of these nodes, on usability - ensuring they are not just more powerful than the default KSampler nodes, and don't just provide superior results, but are also ultimately easier to use, encouraging experimentation. \"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":655,\"type\":\"Note\",\"pos\":[2114.199951171875,2702.9755859375],\"size\":[321.8917236328125,108.77723693847656],\"flags\":{},\"order\":94,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"BONGMath is an algorithm unique to RES4LYF that vastly improves sampling quality and coherence in most cases, with little to no effect on sampling speed.\\n\\nIt has no effect when eta is set to 0. 
\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":667,\"type\":\"Note\",\"pos\":[2630,2720],\"size\":[242.25900268554688,198.10833740234375],\"flags\":{},\"order\":96,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"OPTIONS NODES:\\n\\nThese can be connected directly to ClownSampler, ClownSharkSampler, and SharkSampler, to control a variety of advanced parameters.\\n\\nSHARKoptions may be connected to SHARKsampler or clownSHARKsampler.\\n\\nCLOWNoptions may be connected to CLOWNsampler or CLOWNsharksampler.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":669,\"type\":\"Note\",\"pos\":[2894.58544921875,3200.53369140625],\"size\":[478.7455139160156,399.50189208984375],\"flags\":{},\"order\":97,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ClownOptions_SDE controls the noise that is added after each step (or substep).\\n\\nNOISE TYPES:\\n\\nBrownian can give very good results, and is the \\\"correct\\\" noise type to use from a mathematical perspective. It does, however, result in a burned image with BONGMath when using many of the higher order samplers (the issue is with \\\"non-monotonic\\\" ones, which are typically those \\n whose names end with \\\"5s\\\" or greater).\\n\\nNOISE MODES:\\n\\nThe \\\"noise mode\\\" controls how much noise is actually used after each step. The list is roughly sorted in order of strength (hard at the top being the strongest, hard_var at the bottom being the weakest - and the only one that uses \\\"mathematically correct\\\" scaling). \\n\\n\\\"Sinusoidal\\\" begins very weak, then becomes strong in the middle of the sampling process before losing strength again.\\n\\nThe \\\"soft\\\" noise types begin very strong, and drop off extremely rapidly.\\n\\nSUBSTEPS:\\n\\nAny sampler that is not euler or ddim uses information from multiple model calls (\\\"stages\\\") to predict the step. Multistep samplers reuse previous steps as \\\"stages\\\", whereas the rest make new model calls. \\n\\nThe settings for \\\"substep\\\" control these intermediate \\\"substeps\\\". If eta_substep is set to 0, BONGMath will have no effect.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":661,\"type\":\"ClownOptions_SDE_Beta\",\"pos\":[3414.8017578125,3262.863037109375],\"size\":[315,266],\"flags\":{},\"order\":98,\"mode\":0,\"inputs\":[{\"name\":\"etas\",\"localized_name\":\"etas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"etas_substep\",\"localized_name\":\"etas_substep\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[1963],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_SDE_Beta\"},\"widgets_values\":[\"gaussian\",\"gaussian\",\"hard\",\"hard\",0.5,0.5,-1,\"randomize\"]},{\"id\":677,\"type\":\"Note\",\"pos\":[2900,3650],\"size\":[471.3785095214844,160.20542907714844],\"flags\":{},\"order\":99,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Overshoot > 0 causes the sampler to \\\"overshoot\\\" the step size, and then scale backwards to where it was really supposed to end. 
This is what all other SDE and ancestral sampler implementations do, though I have found it to adversely affect accuracy, especially with high eta values (> 0.7), resulting in softened, simplified images with little detail.\\n\\nHowever, careful use can soften images and deepen colors with pleasant results.\\n\\nTo mimic the behavior of the typical SDE and ancestral sampler implementations, set these settings to match those in ClownOptions_SDE.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":683,\"type\":\"Note\",\"pos\":[2900,5230],\"size\":[481.8527526855469,325.1487731933594],\"flags\":{},\"order\":100,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"RES4LYF heavily emphasizes giving you control over the sampling process!\\n\\nYou will often see these green \\\"sigmas\\\" inputs that aren't really for sigmas. These are used to control parameters on a step-by-step basis. \\n\\nIMPORTANT: The values used in the input here are multiplied by the value in ClownSampler/SharkSampler/ClownsharKSampler!\\n\\nFor example, the KarrasSchedule connected below creates a list of numbers:\\n\\n1.0, 1.0, 1.0, 1.0, 1.0\\n\\n(The rest is automatically filled in with 0.0.)\\n\\nThese are then multiplied by the value for \\\"eta\\\" (0.5) in the connected ClownsharKSampler node:\\n\\n0.5, 0.5, 0.5, 0.5, 0.5\\n\\nThe result is the sampler sets \\\"eta\\\" to 0.5 for the first 5 steps, and then 0.0 for every step after that. \\n\\nTry connecting something like the BetaScheduler while using \\\"beta57\\\" as your sampling scheduler!\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":676,\"type\":\"ClownOptions_StepSize_Beta\",\"pos\":[3420,3660],\"size\":[316.0789794921875,130],\"flags\":{},\"order\":101,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[1968],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_StepSize_Beta\"},\"widgets_values\":[\"hard\",\"hard\",0,0]},{\"id\":668,\"type\":\"Note\",\"pos\":[2900,2720],\"size\":[476.7748718261719,425.3497314453125],\"flags\":{},\"order\":102,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"SharkOptions controls the initial noise generated before starting the sampling process. \\n\\nNOISE TYPES:\\n\\nPerlin is great with Stable Cascade, and with Flux will frequently result in a less blurred image (and a somewhat less saturated look, which can be helpful for realism).\\n\\nThe \\\"color\\\" noise modes have more low frequency information (structure), brown being greater than pink. White is neutral, while blue and especially violet have extra high frequency information (details).\\n\\nhires-pyramid-bicubic can generate exceptionally sharp images in many cases. The other pyramid noise types, and studentt, are often great for painterly styles.\\n\\nOTHER OPTIONS:\\n\\n\\\"noise_stdev\\\" increases the \\\"size\\\" of the noise. Values around 1.05 to 1.1 can have a wonderful effect with some painterly styles, with a boost in saturation.\\n\\n\\\"denoise_alt\\\" overrides the denoise setting. It has a very different effect that can often be easier to control when doing img2img generations with rectified flow models. (It scales the sigmas schedule, instead of slicing it).\\n\\n\\\"channelwise_cfg\\\" changes the cfg method used to one that can be very good with models that use a 16 channel VAE (SD3.5, Flux). 
Setting a negative value in the \\\"cfg\\\" box in any ClownsharKSampler or SharkSampler node is equivalent to using this toggle (for example, cfg = -2.0 is the same as setting cfg = 2.0, and channelwise_cfg = true).\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":666,\"type\":\"SharkOptions_Beta\",\"pos\":[3413.490478515625,2880],\"size\":[257.98193359375,130],\"flags\":{},\"order\":103,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[1962]}],\"properties\":{\"Node name for S&R\":\"SharkOptions_Beta\"},\"widgets_values\":[\"gaussian\",1,1,false]},{\"id\":684,\"type\":\"KarrasScheduler\",\"pos\":[3190,5610],\"size\":[210,130],\"flags\":{},\"order\":104,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"SIGMAS\",\"localized_name\":\"SIGMAS\",\"type\":\"SIGMAS\",\"links\":[1975,1976],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"KarrasScheduler\"},\"widgets_values\":[5,1,1,1]},{\"id\":687,\"type\":\"Note\",\"pos\":[2910,5610],\"size\":[257.2243957519531,88],\"flags\":{},\"order\":105,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Tip: use SigmasPreview very heavily so that you can *see* what's going on!\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":671,\"type\":\"Note\",\"pos\":[3820,5080],\"size\":[363.5062255859375,260.6607971191406],\"flags\":{},\"order\":106,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"This node can be used to visualize the effect of different noise parameters on how much noise is actually added (or removed) during the sampling process.\\n\\nDelta (Δ) signifies change. So for example, Δt = step size.\\n\\nThe most important thing to look at is the original sigma (σ) schedule vs σup. \\n\\nThe difference between σ (white line) and σup (red line above) is how much noise is added by the sampler after each step. If the two overlap, you aren't adding noise, and it's in ODE mode (eta = 0.0).\\n\\nThe most important thing to try here is higher or lower eta values, and different noise_modes. 
Try comparing \\\"hard\\\" vs \\\"soft\\\" vs \\\"hard_var\\\" with eta = 0.5.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":686,\"type\":\"SigmasPreview\",\"pos\":[3430,5610],\"size\":[289.7076110839844,128.47837829589844],\"flags\":{},\"order\":187,\"mode\":0,\"inputs\":[{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"link\":1976}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"SigmasPreview\"},\"widgets_values\":[false]},{\"id\":682,\"type\":\"ClownOptions_Automation_Beta\",\"pos\":[3430,5250],\"size\":[284.9833984375,146],\"flags\":{},\"order\":186,\"mode\":0,\"inputs\":[{\"name\":\"etas\",\"localized_name\":\"etas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":1975},{\"name\":\"etas_substep\",\"localized_name\":\"etas_substep\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"s_noises\",\"localized_name\":\"s_noises\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"s_noises_substep\",\"localized_name\":\"s_noises_substep\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"epsilon_scales\",\"localized_name\":\"epsilon_scales\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"frame_weights\",\"localized_name\":\"frame_weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[1974]}],\"properties\":{\"Node name for S&R\":\"ClownOptions_Automation_Beta\"},\"widgets_values\":[]},{\"id\":673,\"type\":\"Note\",\"pos\":[2900.961181640625,4977.04736328125],\"size\":[480.85333251953125,190.63368225097656],\"flags\":{},\"order\":107,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"This option node can be very useful for SAVING TIME! \\n\\n\\\"swap_below_error\\\" is a tolerance threshold where, if the total error at each step falls below the value in the box, it will switch to the sampler specified here.\\n\\n\\\"log_err_to_console\\\" will print these values at each step to the terminal/console/cmd.exe window where ComfyUI is running. This is essential if you wish to choose a reasonable value for \\\"swap_below_err\\\".\\n\\n\\\"swap_at_step\\\" will switch after the step specified, no matter what. 
This is equivalent to chaining two samplers together as shown to the left - it's just more convenient and compact.\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":665,\"type\":\"ClownOptions_SwapSampler_Beta\",\"pos\":[3430.416015625,5008.54541015625],\"size\":[287.92266845703125,130],\"flags\":{},\"order\":108,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[1972],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_SwapSampler_Beta\"},\"widgets_values\":[\"multistep/res_3m\",0,30,false]},{\"id\":798,\"type\":\"ClownOptions_Momentum_Beta\",\"pos\":[3433.12158203125,4837.14990234375],\"size\":[286.6007995605469,58],\"flags\":{},\"order\":109,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[2003],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_Momentum_Beta\"},\"widgets_values\":[0.5]},{\"id\":803,\"type\":\"Note\",\"pos\":[2904.318359375,4827.84912109375],\"size\":[481.74639892578125,88],\"flags\":{},\"order\":110,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Momentum can be used to accelerate convergence in some cases. Use carefully.\\n\\nMay be best used with chained workflows, with momentum applied only to some portion of early steps.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":672,\"type\":\"Note\",\"pos\":[2904.900634765625,4490.91796875],\"size\":[483.2145690917969,270.3226623535156],\"flags\":{},\"order\":111,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Implicit steps refine the sampling process by feeding the output of each step back into its input and rerunning it. This means setting either to \\\"1\\\" will increase the runtime 2x, as you're doubling the number of steps. \\n\\nThey can drastically increase quality. In some cases, results can actually be improved by cutting the step count in half, and running with implicit_steps=1 or implicit_substeps=1 (which results in an equivalent runtime).\\n\\nWith the other samplers, rebound will add one extra model call per step. \\n\\nBongmath and predictor-corrector can have significantly different effects. Rebound can as well (but also adds 1 model call per implicit step or substep).\\n\\nTRUE IMPLICIT SAMPLERS:\\n\\nIt is recommended to use \\\"implicit_steps\\\" with the \\\"fully_implicit\\\" samplers, and \\\"implicit_substeps\\\" with the \\\"diag_implicit\\\" samplers. Both of these sampler types will ignore the \\\"implicit_type\\\" settings.\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":675,\"type\":\"Note\",\"pos\":[2906.080322265625,4199.57763671875],\"size\":[474.05108642578125,230.29006958007812],\"flags\":{},\"order\":112,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ClownOptions_DetailBoost aims to BREAK the noise scaling math that was so carefully prepared by ClownOptions_SDE. 
Most users see this as a replacement for the \\\"Detail Daemon\\\" node.\\n\\nTry experimenting with different methods: ones that end with \\\"normal\\\" will attempt to preserve luminosity in the image after the adjustments to the noise are made.\\n\\nIt is worth trying \\\"sinusoidal\\\" mode as well, as this is designed to be strongest at middle steps.\\n\\nEta will increase the intensity of the effect. \\n\\nIt seems to be best to not have this start on the first step (step 0), and to have it end no more than halfway (end_step = 1/2 of total steps or less). With method = \\\"model\\\", this seems to add a lot of detail without losing saturation, increasing luminosity, or triggering mutations.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":763,\"type\":\"ClownOptions_DetailBoost_Beta\",\"pos\":[3436.449462890625,4207.4345703125],\"size\":[282.9725646972656,218],\"flags\":{},\"order\":113,\"mode\":0,\"inputs\":[{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"etas\",\"localized_name\":\"etas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[1991],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_DetailBoost_Beta\"},\"widgets_values\":[1,\"model\",\"hard\",0.5,3,10]},{\"id\":761,\"type\":\"Note\",\"pos\":[2905.32470703125,3869.700439453125],\"size\":[471.6525573730469,266.9491882324219],\"flags\":{},\"order\":114,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ClownOptions_SigmaScaling aims to BREAK the noise scaling math that was so carefully prepared by ClownOptions_SDE.\\n\\n\\\"noise_anchor_sde\\\" can make the image look much dirtier with lower values. Try 0.5 for starters, especially with any non-multistep sampler.\\n\\n\\\"s_noise\\\" increases the \\\"size\\\" of the noise added with each step. Values around 1.05-1.1 can considerably boost saturation in painterly images. BONGMath is particularly good when this is set to values > 1.0.\\n\\n\\\"s_noise_substep\\\" is not compatible with BONGMath. You will get a terrible image if you use them together.\\n\\n\\\"lying\\\" is equivalent to the popular \\\"lying sigmas\\\" approach. Like \\\"noise_anchor\\\", values < 1.0 will increase the \\\"dirty\\\" look. Try starting with 0.95.\\n\\n\\\"lying_inv\\\" will do the opposite of \\\"lying\\\". 
If you find your images look \\\"dried out\\\" or desaturated when using lying, try setting this to a similar value, and have it start at a later step, as shown below.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":760,\"type\":\"ClownOptions_SigmaScaling_Beta\",\"pos\":[3436.542724609375,3886.986328125],\"size\":[272.21429443359375,454],\"flags\":{},\"order\":115,\"mode\":0,\"inputs\":[{\"name\":\"s_noises\",\"localized_name\":\"s_noises\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"s_noises_substep\",\"localized_name\":\"s_noises_substep\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[1990],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_SigmaScaling_Beta\"},\"widgets_values\":[1,1,1,0.9500000000000001,0.9500000000000001,2,8]},{\"id\":678,\"type\":\"Note\",\"pos\":[3783.1923828125,4263.46484375],\"size\":[363.2837219238281,88],\"flags\":{},\"order\":116,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Tip: if your results are too noisy,\\ntry setting \\\"overshoot\\\" in \\\"ClownOptions StepSize\\\" to the same value as \\\"eta\\\" used in \\\"ClownOptions SDE\\\"! \\n\\n(Default eta is 0.50 if \\\"ClownOptions SDE\\\" is not used).\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":806,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[5163.4833984375,2735.41357421875],\"size\":[262.0870056152344,298],\"flags\":{},\"order\":206,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2005},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_2m\",-1,5.5,\"resample\",true]},{\"id\":804,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[4552.6552734375,2734.24609375],\"size\":[268.7583312988281,418],\"flags\":{},\"order\":117,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2004],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_2m\",\"beta57\",20,14,1,5.5,0,\"fixed\",\"unsample\",true]},{\"id\":805,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[4861.5244140625,2737.638671875],\"size\":[262.0870056152344,318],\"flags\":{},\"order\":188,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2004},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":2006},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2005],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_2m\",1,5.5,\"resample\",true]},{\"id\":808,\"type\":\"Note\",\"pos\":[4806.57275390625,3320.429443359375],\"size\":[384.6367492675781,194.1151580810547],\"flags\":{},\"order\":118,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ClownOptions Cycles causes the sampler node to rerun after completion, while reversing the sampling mode (resample becomes unsample, unsample becomes resample). \\n\\n1.0 cycles implies it returns to where it began. 
\\n\\n1.5 cycles implies it returns to where it began, then reverses direction and reruns one last time - so it would end at the end of the step.\\n\\nThis often has VERY good results with unsampling workflows, various img2img workflows, style transfer, etc. With 1 steps_to_run, it's a lot like \\\"ClownOptions Implicit\\\", though results are often better.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":717,\"type\":\"Note\",\"pos\":[2635.989501953125,4579.111328125],\"size\":[237.44444274902344,91.61251831054688],\"flags\":{},\"order\":119,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"INPAINTING TIP: Implicit steps can really help, especially with seams! They also can have a significant impact on unsampling, and guides in general.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":662,\"type\":\"ClownOptions_ImplicitSteps_Beta\",\"pos\":[3440.08251953125,4551.392578125],\"size\":[286.5861511230469,130],\"flags\":{},\"order\":120,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[1971],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_ImplicitSteps_Beta\"},\"widgets_values\":[\"bongmath\",\"bongmath\",0,0]},{\"id\":811,\"type\":\"ClownOptions_Cycles_Beta\",\"pos\":[3860.137451171875,4546.62158203125],\"size\":[261.53253173828125,202],\"flags\":{},\"order\":121,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[2007],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_Cycles_Beta\"},\"widgets_values\":[5,1,0.5,1,-1,1,false]},{\"id\":812,\"type\":\"Note\",\"pos\":[3846.717041015625,4748.94482421875],\"size\":[308.37188720703125,88],\"flags\":{},\"order\":122,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"This node is closely related to ImplicitSteps! It is explained in more detail in the \\\"Cyclosampling\\\" group above and to the right (northeast).\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":807,\"type\":\"ClownOptions_Cycles_Beta\",\"pos\":[4863.31494140625,3128.078125],\"size\":[261.53253173828125,202],\"flags\":{},\"order\":123,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[2006]}],\"properties\":{\"Node name for S&R\":\"ClownOptions_Cycles_Beta\"},\"widgets_values\":[5,1,0.5,5.5,-1,1,false]},{\"id\":818,\"type\":\"Note\",\"pos\":[5294.46533203125,3122.492431640625],\"size\":[309.0342712402344,192.40728759765625],\"flags\":{},\"order\":124,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"After each cycle, the eta is multiplied by eta_decay_scale. This can help the cyclosampling process converge on an output. 
\\n\\nFor example, if you started with an eta of 0.5, and eta_decay_scale is set to 0.9, the following etas will be used for successive cycles:\\n\\n0.5\\n0.45   (0.5 * 0.9)\\n0.405  (0.5 * 0.9 * 0.9)\\n0.3645 (0.5 * 0.9 * 0.9 * 0.9)\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":809,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[5704.94384765625,2739.097900390625],\"size\":[262.0870056152344,318],\"flags\":{},\"order\":190,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":2009},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_2m\",5,5.5,\"resample\",true],\"color\":\"#2a363b\",\"bgcolor\":\"#3f5159\"},{\"id\":814,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[6042.30615234375,2736.216064453125],\"size\":[262.0870056152344,318],\"flags\":{},\"order\":125,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2010],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_2m\",5,0.5,\"resample\",true],\"color\":\"#2a363b\",\"bgcolor\":\"#3f5159\"},{\"id\":815,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[6327.12060546875,2732.67724609375],\"size\":[262.0870056152344,318],\"flags\":{},\"order\":189,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2010},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2011],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.25,\"multistep/res_2m\",5,4,\"unsample\",true],\"color\":\"#233\",\"bgcolor\":\"#355\"},{\"id\":817,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[6619.41552734375,2730.05029296875],\"size\":[262.0870056152344,318],\"flags\":{},\"order\":207,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2011},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_2m\",5,0.5,\"resample\",true],\"color\":\"#2a363b\",\"bgcolor\":\"#3f5159\"},{\"id\":810,\"type\":\"Note\",\"pos\":[5840.63916015625,3313.710693359375],\"size\":[333.3376770019531,103.0768051147461],\"flags\":{},\"order\":126,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"The two setups above are equivalent. \\n\\nI suggest running \\\"ClownOptions Cycles\\\" with 10 steps_to_run and just watching the progress bar. 
It's easier to understand visually.\"],\"color\":\"#2a363b\",\"bgcolor\":\"#3f5159\"},{\"id\":813,\"type\":\"ClownOptions_Cycles_Beta\",\"pos\":[5706.35546875,3124.623291015625],\"size\":[261.53253173828125,202],\"flags\":{},\"order\":127,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[2009],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_Cycles_Beta\"},\"widgets_values\":[1,1,0.25,4,-1,1,false],\"color\":\"#233\",\"bgcolor\":\"#355\"},{\"id\":816,\"type\":\"Note\",\"pos\":[6323.08056640625,3110.669189453125],\"size\":[274.8790588378906,88],\"flags\":{},\"order\":128,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Compare these settings to the \\\"ClownOptions Cycles\\\" node to the left (eta 0.25, cfg 4.0).\\n\"],\"color\":\"#233\",\"bgcolor\":\"#355\"},{\"id\":692,\"type\":\"ReFluxPatcher\",\"pos\":[7490,3210],\"size\":[210,82],\"flags\":{},\"order\":191,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1978}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1979],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ReFluxPatcher\"},\"widgets_values\":[\"float32\",true]},{\"id\":820,\"type\":\"Note\",\"pos\":[7770,3350],\"size\":[251.27003479003906,88],\"flags\":{},\"order\":129,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"THIS NODE IS NOT REQUIRED!\\n\\nEXPERIMENTAL!\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":693,\"type\":\"FluxLoader\",\"pos\":[7090,3210],\"size\":[315,282],\"flags\":{},\"order\":130,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1978],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":null},{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"links\":null},{\"name\":\"style_model\",\"localized_name\":\"style_model\",\"type\":\"STYLE_MODEL\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"FluxLoader\"},\"widgets_values\":[\"consolidated_s6700.safetensors\",\"default\",\".use_ckpt_clip\",\".none\",\".use_ckpt_vae\",\"sigclip_vision_patch14_384.safetensors\",\"flux1-redux-dev.safetensors\"]},{\"id\":819,\"type\":\"ClownModelLoader\",\"pos\":[7090,2740],\"size\":[315,266],\"flags\":{},\"order\":131,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":null},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownModelLoader\"},\"widgets_values\":[\"hidream_i1_full_fp8.safetensors\",\"fp8_e4m3fn\",\"clip_g_hidream.safetensors\",\"clip_l_hidream.safetensors\",\"t5xxl_fp8_e4m3fn_scaled.safetensors\",\"llama_3.1_8b_instruct_fp8_scaled.safetensors\",\"hidream\",\"ae.sft\"]},{\"id\":698,\"type\":\"Note\",\"pos\":[8098.36279296875,3386.5087890625],\"size\":[282.65814208984375,92.654541015625],\"flags\":{},\"order\":132,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"This node is similar to ModelSamplingAdvanced, except it uses the dimensions of 
the latent image to set the shift value.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":795,\"type\":\"Image Sharpen FS\",\"pos\":[8812.0927734375,2744.748046875],\"size\":[210,106],\"flags\":{},\"order\":133,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":null}],\"outputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"Image Sharpen FS\"},\"widgets_values\":[\"hard\",\"median\",6]},{\"id\":821,\"type\":\"Note\",\"pos\":[8531.4560546875,2746.428466796875],\"size\":[211.12799072265625,95.03887939453125],\"flags\":{},\"order\":134,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Unique method for sharpening images. Can add a lot of \\\"pop\\\" to SDXL and AuraFlow outputs that otherwise look soft due to the 4 channel VAE.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":796,\"type\":\"Image Grain Add\",\"pos\":[8819.529296875,2974.022216796875],\"size\":[210,58],\"flags\":{},\"order\":135,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":null}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"Image Grain Add\"},\"widgets_values\":[0.5]},{\"id\":822,\"type\":\"Note\",\"pos\":[8537.251953125,2954.097412109375],\"size\":[210.33291625976562,88],\"flags\":{},\"order\":136,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Very useful with ClipVision, IPadapter, etc. for avoiding blurry or blown out/oversaturated outputs.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":797,\"type\":\"Image Repeat Tile To Size\",\"pos\":[8819.0595703125,3152.188720703125],\"size\":[210,106],\"flags\":{},\"order\":137,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":null}],\"outputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"Image Repeat Tile To Size\"},\"widgets_values\":[1024,1024,true]},{\"id\":823,\"type\":\"Note\",\"pos\":[8541.458984375,3162.56103515625],\"size\":[229.4075927734375,156.3510284423828],\"flags\":{},\"order\":138,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Use in conjunction with \\\"ClownGuide Style\\\" when upscaling to prevent blurry outputs. 
\\n\\nWhen used wisely (not applied to all steps), this can improve results dramatically.\\n\\nConnect your original (unresized) input image to this node.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":828,\"type\":\"PreviewImage\",\"pos\":[9950,2830],\"size\":[210,26],\"flags\":{},\"order\":216,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2015}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"PreviewImage\"},\"widgets_values\":[]},{\"id\":829,\"type\":\"PreviewImage\",\"pos\":[9950,3220],\"size\":[210,26],\"flags\":{},\"order\":218,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2018}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"PreviewImage\"},\"widgets_values\":[]},{\"id\":825,\"type\":\"Frequency Separation Hard Light\",\"pos\":[9990,3070],\"size\":[260.3999938964844,66],\"flags\":{},\"order\":217,\"mode\":0,\"inputs\":[{\"name\":\"high_pass\",\"localized_name\":\"high_pass\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2016},{\"name\":\"original\",\"localized_name\":\"original\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"low_pass\",\"localized_name\":\"low_pass\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2017}],\"outputs\":[{\"name\":\"high_pass\",\"localized_name\":\"high_pass\",\"type\":\"IMAGE\",\"links\":null},{\"name\":\"original\",\"localized_name\":\"original\",\"type\":\"IMAGE\",\"links\":[2019],\"slot_index\":1},{\"name\":\"low_pass\",\"localized_name\":\"low_pass\",\"type\":\"IMAGE\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"Frequency Separation Hard Light\"},\"widgets_values\":[]},{\"id\":830,\"type\":\"PreviewImage\",\"pos\":[10300,3090],\"size\":[210,26],\"flags\":{},\"order\":223,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2019}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"PreviewImage\"},\"widgets_values\":[]},{\"id\":831,\"type\":\"Note\",\"pos\":[9205.6611328125,2745.06982421875],\"size\":[293.7847900390625,149.87860107421875],\"flags\":{},\"order\":139,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Unique frequency separation method. Try combining the low pass layer from one image, and the high pass layer from another, such as with faces with carefully matched overlapping alignment, or other scenes. 
Better at transferring compositional information such as lighting and hue than the frequency separation method in Photoshop.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":827,\"type\":\"Image Median Blur\",\"pos\":[9443.4658203125,3153.701416015625],\"size\":[210,58],\"flags\":{},\"order\":192,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2013}],\"outputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"links\":[2014],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"Image Median Blur\"},\"widgets_values\":[40]},{\"id\":832,\"type\":\"Image Gaussian Blur\",\"pos\":[9442.099609375,3277.42578125],\"size\":[210,58],\"flags\":{},\"order\":140,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":null}],\"outputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"Image Gaussian Blur\"},\"widgets_values\":[40]},{\"id\":824,\"type\":\"Frequency Separation Hard Light\",\"pos\":[9680,3070],\"size\":[260.3999938964844,66],\"flags\":{},\"order\":209,\"mode\":0,\"inputs\":[{\"name\":\"high_pass\",\"localized_name\":\"high_pass\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"original\",\"localized_name\":\"original\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2012},{\"name\":\"low_pass\",\"localized_name\":\"low_pass\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2014}],\"outputs\":[{\"name\":\"high_pass\",\"localized_name\":\"high_pass\",\"type\":\"IMAGE\",\"links\":[2015,2016],\"slot_index\":0},{\"name\":\"original\",\"localized_name\":\"original\",\"type\":\"IMAGE\",\"links\":null},{\"name\":\"low_pass\",\"localized_name\":\"low_pass\",\"type\":\"IMAGE\",\"links\":[2017,2018],\"slot_index\":2}],\"properties\":{\"Node name for S&R\":\"Frequency Separation Hard Light\"},\"widgets_values\":[]},{\"id\":833,\"type\":\"Note\",\"pos\":[9539.69921875,2749.275146484375],\"size\":[255.63558959960938,88],\"flags\":{},\"order\":141,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Median blur is edge-aware, and usually gives better results than gaussian blur, if images are carefully aligned first.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":840,\"type\":\"Frequency Separation Hard Light\",\"pos\":[11160,3190],\"size\":[260.3999938964844,66],\"flags\":{},\"order\":211,\"mode\":0,\"inputs\":[{\"name\":\"high_pass\",\"localized_name\":\"high_pass\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"original\",\"localized_name\":\"original\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2025},{\"name\":\"low_pass\",\"localized_name\":\"low_pass\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2026}],\"outputs\":[{\"name\":\"high_pass\",\"localized_name\":\"high_pass\",\"type\":\"IMAGE\",\"links\":[],\"slot_index\":0},{\"name\":\"original\",\"localized_name\":\"original\",\"type\":\"IMAGE\",\"links\":null},{\"name\":\"low_pass\",\"localized_name\":\"low_pass\",\"type\":\"IMAGE\",\"links\":[2022],\"slot_index\":2}],\"properties\":{\"Node name for S&R\":\"Frequency Separation Hard Light\"},\"widgets_values\":[]},{\"id\":838,\"type\":\"Image Median Blur\",\"pos\":[10920,3270],\"size\":[210,58],\"flags\":{},\"order\":194,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2024}],\"outputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"links\":[2026],\"slot_index\":0}],\"properties\":{\"Node 
name for S&R\":\"Image Median Blur\"},\"widgets_values\":[40]},{\"id\":842,\"type\":\"Image Median Blur\",\"pos\":[10930,3010],\"size\":[210,58],\"flags\":{},\"order\":193,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2031}],\"outputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"links\":[2032],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"Image Median Blur\"},\"widgets_values\":[40]},{\"id\":826,\"type\":\"LoadImage\",\"pos\":[9212.66796875,3090.664306640625],\"size\":[210,314],\"flags\":{},\"order\":142,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2012,2013],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"00107-496528661.png\",\"image\"]},{\"id\":843,\"type\":\"LoadImage\",\"pos\":[10690,2920],\"size\":[210,314],\"flags\":{},\"order\":143,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2030,2031],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"00109-3396456281.png\",\"image\"]},{\"id\":835,\"type\":\"Frequency Separation Hard Light\",\"pos\":[11480,3030],\"size\":[260.3999938964844,66],\"flags\":{},\"order\":219,\"mode\":0,\"inputs\":[{\"name\":\"high_pass\",\"localized_name\":\"high_pass\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2033},{\"name\":\"original\",\"localized_name\":\"original\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"low_pass\",\"localized_name\":\"low_pass\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2022}],\"outputs\":[{\"name\":\"high_pass\",\"localized_name\":\"high_pass\",\"type\":\"IMAGE\",\"links\":null},{\"name\":\"original\",\"localized_name\":\"original\",\"type\":\"IMAGE\",\"links\":[2023],\"slot_index\":1},{\"name\":\"low_pass\",\"localized_name\":\"low_pass\",\"type\":\"IMAGE\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"Frequency Separation Hard Light\"},\"widgets_values\":[]},{\"id\":837,\"type\":\"PreviewImage\",\"pos\":[11800,3050],\"size\":[210,26],\"flags\":{},\"order\":224,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2023}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"PreviewImage\"},\"widgets_values\":[]},{\"id\":841,\"type\":\"Frequency Separation Hard Light\",\"pos\":[11160,2910],\"size\":[260.3999938964844,66],\"flags\":{},\"order\":210,\"mode\":0,\"inputs\":[{\"name\":\"high_pass\",\"localized_name\":\"high_pass\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"original\",\"localized_name\":\"original\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2030},{\"name\":\"low_pass\",\"localized_name\":\"low_pass\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2032}],\"outputs\":[{\"name\":\"high_pass\",\"localized_name\":\"high_pass\",\"type\":\"IMAGE\",\"links\":[2033],\"slot_index\":0},{\"name\":\"original\",\"localized_name\":\"original\",\"type\":\"IMAGE\",\"links\":null},{\"name\":\"low_pass\",\"localized_name\":\"low_pass\",\"type\":\"IMAGE\",\"links\":[],\"slot_index\":2}],\"properties\":{\"Node name for S&R\":\"Frequency Separation Hard 
Light\"},\"widgets_values\":[]},{\"id\":836,\"type\":\"LoadImage\",\"pos\":[10680,3210],\"size\":[213.1792755126953,314],\"flags\":{},\"order\":144,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2024,2025],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"00107-496528661.png\",\"image\"]},{\"id\":844,\"type\":\"Note\",\"pos\":[10287.0224609375,3208.888916015625],\"size\":[255.63558959960938,88],\"flags\":{},\"order\":145,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"This will output the original image.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":845,\"type\":\"Note\",\"pos\":[11723.419921875,2862.596923828125],\"size\":[285.8372497558594,88.79490661621094],\"flags\":{},\"order\":146,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"This will combine high frequency (detail) information from the first image with the low frequency (color, hue, lighting) information from the second image.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":794,\"type\":\"Image Get Color Swatches\",\"pos\":[8580,5060],\"size\":[295.6000061035156,26],\"flags\":{},\"order\":147,\"mode\":0,\"inputs\":[{\"name\":\"image_color_swatches\",\"localized_name\":\"image_color_swatches\",\"type\":\"IMAGE\",\"link\":null}],\"outputs\":[{\"name\":\"color_swatches\",\"localized_name\":\"color_swatches\",\"type\":\"COLOR_SWATCHES\",\"links\":[2034],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"Image Get Color Swatches\"},\"widgets_values\":[]},{\"id\":848,\"type\":\"Note\",\"pos\":[8750,4900],\"size\":[328.192138671875,88],\"flags\":{},\"order\":148,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"This configuration is equivalent to \\\"Masks From Colors\\\".\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":792,\"type\":\"Masks From Color Swatches\",\"pos\":[8900,5040],\"size\":[315,46],\"flags\":{},\"order\":195,\"mode\":0,\"inputs\":[{\"name\":\"image_color_mask\",\"localized_name\":\"image_color_mask\",\"type\":\"IMAGE\",\"link\":null},{\"name\":\"color_swatches\",\"localized_name\":\"color_swatches\",\"type\":\"COLOR_SWATCHES\",\"link\":2034}],\"outputs\":[{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"Masks From Color Swatches\"},\"widgets_values\":[]},{\"id\":851,\"type\":\"Masks Unpack 8\",\"pos\":[9280,4590],\"size\":[140,166],\"flags\":{},\"order\":149,\"mode\":0,\"inputs\":[{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"link\":null}],\"outputs\":[{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":null},{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":null},{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":null},{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":null},{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":null},{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":null},{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":null},{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"Masks Unpack 8\"},\"widgets_values\":[]},{\"id\":852,\"type\":\"Masks 
Unpack 4\",\"pos\":[9280,4430],\"size\":[140,86],\"flags\":{},\"order\":150,\"mode\":0,\"inputs\":[{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"link\":null}],\"outputs\":[{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":null},{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":null},{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":null},{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"Masks Unpack 4\"},\"widgets_values\":[]},{\"id\":849,\"type\":\"Note\",\"pos\":[8590,4370],\"size\":[296.4569396972656,149.35540771484375],\"flags\":{},\"order\":151,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"image_color_swatches:\\n\\nThis is an image with colors drawn one at a time, top to bottom. It will set the order of the masks outputted in the connected \\\"unpack\\\" node to be the same as the order they appear in the \\\"swatches\\\" image.\\n\\nNote: white is the background and is ignored!\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":793,\"type\":\"Masks From Colors\",\"pos\":[8910,4430],\"size\":[330,46],\"flags\":{},\"order\":152,\"mode\":0,\"inputs\":[{\"name\":\"image_color_swatches\",\"localized_name\":\"image_color_swatches\",\"type\":\"IMAGE\",\"link\":null},{\"name\":\"image_color_mask\",\"localized_name\":\"image_color_mask\",\"type\":\"IMAGE\",\"link\":null}],\"outputs\":[{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":[2036,2037],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"Masks From Colors\"},\"widgets_values\":[]},{\"id\":855,\"type\":\"MaskPreview+\",\"pos\":[8970,4590],\"size\":[210,26],\"flags\":{},\"order\":197,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":2037}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"MaskPreview+\"},\"widgets_values\":[]},{\"id\":854,\"type\":\"Note\",\"pos\":[8600,4580],\"size\":[284.8203125,146.1818084716797],\"flags\":{},\"order\":153,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"image_color_mask:\\n\\nDraw a mask using the same colors used in the swatches. 
\\n\\nI *strongly* suggest using Mask Preview as shown to the right to get a feel for this.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":847,\"type\":\"MaskFromRGBCMYBW+\",\"pos\":[8300,4640],\"size\":[224.02476501464844,246],\"flags\":{},\"order\":154,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":null}],\"outputs\":[{\"name\":\"red\",\"localized_name\":\"red\",\"type\":\"MASK\",\"links\":null},{\"name\":\"green\",\"localized_name\":\"green\",\"type\":\"MASK\",\"links\":null},{\"name\":\"blue\",\"localized_name\":\"blue\",\"type\":\"MASK\",\"links\":null},{\"name\":\"cyan\",\"localized_name\":\"cyan\",\"type\":\"MASK\",\"links\":null},{\"name\":\"magenta\",\"localized_name\":\"magenta\",\"type\":\"MASK\",\"links\":null},{\"name\":\"yellow\",\"localized_name\":\"yellow\",\"type\":\"MASK\",\"links\":null},{\"name\":\"black\",\"localized_name\":\"black\",\"type\":\"MASK\",\"links\":null},{\"name\":\"white\",\"localized_name\":\"white\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"MaskFromRGBCMYBW+\"},\"widgets_values\":[0.15,0.15,0.15]},{\"id\":846,\"type\":\"Note\",\"pos\":[8270,4370],\"size\":[286.9356994628906,206.45907592773438],\"flags\":{},\"order\":155,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"These nodes can be useful in situations where you want to composite complex masks with many regions, without overlap, and without missing areas. They allow these to be made easily in an editor such as MSPaint.\\n\\nThey are somewhat similar in function to the node shown below from ComfyUI Essentials. They will all get the job done. The only advantage of the \\\"Masks From Colors\\\" nodes is that any color may be used, theoretically allowing dozens of zones to be drawn. 
For 8 or fewer zones (most cases) either may be used.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":850,\"type\":\"Masks Unpack 16\",\"pos\":[9280,4840],\"size\":[140,326],\"flags\":{},\"order\":196,\"mode\":0,\"inputs\":[{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"link\":2036}],\"outputs\":[{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":null},{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":null},{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":null},{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":null},{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":null},{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":null},{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":null},{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":null},{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":null},{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":null},{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":null},{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":null},{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":null},{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":null},{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":null},{\"name\":\"masks\",\"localized_name\":\"masks\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"Masks Unpack 16\"},\"widgets_values\":[]},{\"id\":858,\"type\":\"VAEEncode\",\"pos\":[12693.298828125,3026.815673828125],\"size\":[140,46],\"flags\":{},\"order\":212,\"mode\":0,\"inputs\":[{\"name\":\"pixels\",\"localized_name\":\"pixels\",\"type\":\"IMAGE\",\"link\":2039},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":null}],\"outputs\":[{\"name\":\"LATENT\",\"localized_name\":\"LATENT\",\"type\":\"LATENT\",\"links\":[2040],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEEncode\"},\"widgets_values\":[]},{\"id\":862,\"type\":\"Note\",\"pos\":[12226.9638671875,2733.324951171875],\"size\":[307.74560546875,219.57456970214844],\"flags\":{},\"order\":156,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ConditioningBatch4 (and 8) apply conditioning to each tile in the order received by the sampler. This is very useful for ensuring coherent results. This WF avoids the creation of seams, and is efficient, only requiring 4 tiles for upscales. \\n\\nConditioningBatch4 currently is not supported by the negative conditioning input. 
Use a standard negative prompt (or nothing).\\n\\nIf you separate the tiles individually, you should be able to use Flux Redux conditioning for each of the ConditioningBatch4 inputs.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":860,\"type\":\"VAEDecode\",\"pos\":[13233.298828125,2776.815673828125],\"size\":[140,46],\"flags\":{},\"order\":225,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":2041},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":null}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2042],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\"},\"widgets_values\":[]},{\"id\":853,\"type\":\"ConditioningBatch4\",\"pos\":[12613.298828125,2746.815673828125],\"size\":[228.39999389648438,86],\"flags\":{},\"order\":157,\"mode\":0,\"inputs\":[{\"name\":\"conditioning_0\",\"localized_name\":\"conditioning_0\",\"type\":\"CONDITIONING\",\"link\":null},{\"name\":\"conditioning_1\",\"localized_name\":\"conditioning_1\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"conditioning_2\",\"localized_name\":\"conditioning_2\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"conditioning_3\",\"localized_name\":\"conditioning_3\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"links\":[2038],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ConditioningBatch4\"},\"widgets_values\":[]},{\"id\":856,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[12883.298828125,2776.815673828125],\"size\":[315,418],\"flags\":{},\"order\":220,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2038},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2043},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2040},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2041],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_2m\",\"beta57\",30,-1,1,5.5,0,\"randomize\",\"standard\",true]},{\"id\":863,\"type\":\"CLIPTextEncode\",\"pos\":[12610.970703125,2882.634033203125],\"size\":[229.78173828125,88],\"flags\":{},\"order\":158,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":null}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2043]}],\"properties\":{\"Node name for 
S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"\"]},{\"id\":864,\"type\":\"ImageResize+\",\"pos\":[12208.77734375,3038.4794921875],\"size\":[210,218],\"flags\":{},\"order\":159,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":null}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2044]},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ImageResize+\"},\"widgets_values\":[1792,1792,\"nearest\",\"stretch\",\"always\",0]},{\"id\":857,\"type\":\"ImageTile+\",\"pos\":[12453.298828125,3036.815673828125],\"size\":[210,234],\"flags\":{},\"order\":198,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":2044}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2039],\"slot_index\":0},{\"name\":\"tile_width\",\"localized_name\":\"tile_width\",\"type\":\"INT\",\"links\":null},{\"name\":\"tile_height\",\"localized_name\":\"tile_height\",\"type\":\"INT\",\"links\":null},{\"name\":\"overlap_x\",\"localized_name\":\"overlap_x\",\"type\":\"INT\",\"links\":null},{\"name\":\"overlap_y\",\"localized_name\":\"overlap_y\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ImageTile+\"},\"widgets_values\":[2,2,0,128,128]},{\"id\":859,\"type\":\"ImageUntile+\",\"pos\":[13413.298828125,2776.815673828125],\"size\":[210,130],\"flags\":{},\"order\":227,\"mode\":0,\"inputs\":[{\"name\":\"tiles\",\"localized_name\":\"tiles\",\"type\":\"IMAGE\",\"link\":2042}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ImageUntile+\"},\"widgets_values\":[128,128,2,2]},{\"id\":868,\"type\":\"Note\",\"pos\":[9664.4189453125,3771.01416015625],\"size\":[284.8223571777344,176.87057495117188],\"flags\":{},\"order\":160,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Styles are supported for:\\n\\nHiDream (outstanding results)\\n\\nFlux (best results are with style loras, as the base model is severely lacking understanding of styles)\\n\\nWAN \\n\\nUse of the \\\"Re...\\\" patcher nodes is required, as custom model code is used.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":869,\"type\":\"TorchCompileModels\",\"pos\":[7795.904296875,2736.478515625],\"size\":[247.29759216308594,178],\"flags\":{},\"order\":161,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":null}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"TorchCompileModels\"},\"widgets_values\":[\"inductor\",false,\"default\",false,64,0]},{\"id\":703,\"type\":\"SD35Loader\",\"pos\":[7438.462890625,2740.95849609375],\"size\":[315,218],\"flags\":{},\"order\":162,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":null},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"SD35Loader\"},\"widgets_values\":[\"sd3.5_medium.safetensors\",\"default\",\".use_ckpt_clip\",\".none\",\".none\",\".use_ckpt_vae\"]},{\"id\":700,\"type\":\"Note\",\"pos\":[7480,3060],\"size\":[225.09121704101562,88],\"flags\":{},\"order\":163,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"This node must be used when using regional conditioning with Flux. \"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":870,\"type\":\"Note\",\"pos\":[8087.9755859375,2736.00830078125],\"size\":[291.73583984375,88],\"flags\":{},\"order\":164,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Generic compile node for many models.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":696,\"type\":\"ModelSamplingAdvanced\",\"pos\":[8136.62109375,3062.419677734375],\"size\":[210,82],\"flags\":{},\"order\":165,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":null}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ModelSamplingAdvanced\"},\"widgets_values\":[\"exponential\",3]},{\"id\":697,\"type\":\"Note\",\"pos\":[8088.63037109375,2876.072998046875],\"size\":[299.7002868652344,122.62284851074219],\"flags\":{},\"order\":166,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ModelSamplingAdvanced many different models, including AuraFlow, SD3.5, Flux, and more, including video models.\\n\\nWhen \\\"scaling\\\" is set to \\\"exponential\\\" it uses the method employed by Flux, which is actually quite good with SD3.5. \\\"linear\\\" is the default method used by SD3.5.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":788,\"type\":\"ClownGuide_Style_Beta\",\"pos\":[9993.591796875,4034.93017578125],\"size\":[243.85076904296875,286],\"flags\":{},\"order\":167,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Style_Beta\"},\"widgets_values\":[\"positive\",\"WCT\",1,1,\"constant\",0,15,false]},{\"id\":865,\"type\":\"Note\",\"pos\":[9667.3056640625,4012.013427734375],\"size\":[283.76544189453125,448.7384338378906],\"flags\":{},\"order\":168,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ClownGuide style: the settings shown are the ones you will generally use. WCT is the more accurate of the two methods. If you have issues, you can fall back to AdaIN. \\n\\nIt is best to use this on the first 1/2 of steps or so. 
Be sure to provide some information about the style in the prompt for best results: \\\"cel-shaded anime illustration of...\\\", \\\"gritty illustration of...\\\", \\\"analog photo of...\\\".\\n\\nThe mask currently has no effect, but is there as a placeholder, as regional style methods are under development.\\n\\n\\nIf you are using CFG = 1.0 (typical with distilled models such as Flux or HiDream Dev), synweight has no effect and can be ignored.\\n\\nSynweight simply applies the same style to the opposite conditioning (so if apply_to = positive, and synweight is at 0.5, it will use it at 0.5 strength on the negative). In the vast majority of cases, it's best to leave synweight at the default. Occasionally, setting it to 0.5 or 0.0 can be helpful, but it can result in burning the image due to issues with CFG. \\n\\nStandard guides may be inputted into this node, if you wish to use them together.\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":871,\"type\":\"Note\",\"pos\":[9669.4208984375,4527.666015625],\"size\":[277.0335998535156,102.53260040283203],\"flags\":{},\"order\":169,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ClownGuide Mean: somewhat similar effect, but does not require the \\\"Re...\\\" patcher nodes and works with all models. Effect is typically considerably less precise.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":866,\"type\":\"Note\",\"pos\":[10292.3017578125,4024.706787109375],\"size\":[284.8223571777344,176.87057495117188],\"flags\":{},\"order\":170,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"BLUR KILLER TIP:\\n\\nWhen generating photography with Flux or HiDream (where blur can be frustratingly difficult to avoid), try using a style guide for the first 1/3rd of steps that is a sharp photograph with similar lighting/hues to what you are aiming for. 
You might need to try a handful of photos before landing on a \\\"hit\\\", but the right one will eliminate blur 100%, even with a close up portrait photograph.\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":872,\"type\":\"ClownGuide_AdaIN_MMDiT_Beta\",\"pos\":[10680,3810],\"size\":[246.13087463378906,430],\"flags\":{},\"order\":171,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownGuide_AdaIN_MMDiT_Beta\"},\"widgets_values\":[1,\"constant\",\"\",\"\",\"20\",\"0.5\",0,15,false]},{\"id\":874,\"type\":\"ClownGuide_AttnInj_MMDiT_Beta\",\"pos\":[10990,3810],\"size\":[272.0969543457031,718],\"flags\":{},\"order\":172,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownGuide_AttnInj_MMDiT_Beta\"},\"widgets_values\":[1,\"constant\",\"0,1,3\",\"1.0\",\"20\",\"0.5\",0,0,1,0,0,0,0,0,0,0,0,0,0,15,false]},{\"id\":873,\"type\":\"Note\",\"pos\":[10590,4300],\"size\":[348.2928771972656,313.42919921875],\"flags\":{},\"order\":173,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ClownGuide AdaIN and AttnInj:\\n\\nAdvanced experimental nodes for HiDream. Very strong effect and can be used together with all other guide nodes.\\n\\nBest used like a monkey in a missile silo. Start pushing buttons and you'll win eventually!\\n\\nList the blocks you wish the effect to be applied to, and the weight of the effect on that block, in the same order. \\\"all\\\" will use all blocks of that type, and if only one weight is listed, it will use that for all blocks listed.\\n\\nThere are 16 double blocks, and 32 single blocks. Each is numbered beginning at 0. For example, the following block numberings are equivalent for double_blocks:\\n\\nall\\n0-15\\n0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":875,\"type\":\"Note\",\"pos\":[10980,4590],\"size\":[301.1705017089844,233.60943603515625],\"flags\":{},\"order\":174,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Injects calculated attention from the guide into the main sampling process. This will carry over some compositional information, as well as lighting. 
It can be very interesting in combination with ClownGuide Style or ClownGuide AdaIN (MMDiT).\\n\\nimg_v will have the most color/style information with the least effect on composition.\\n\\nimg_k will increase the amount of compositional information.\\n\\nimg_q will increase the compositional information to the point where it can begin looking more like a traditional guide mode.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":867,\"type\":\"ClownGuide_Mean_Beta\",\"pos\":[9997.337890625,4526.90869140625],\"size\":[241.34442138671875,238],\"flags\":{},\"order\":175,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Mean_Beta\"},\"widgets_values\":[1,1,\"constant\",0,15,false]},{\"id\":679,\"type\":\"SharkSampler_Beta\",\"pos\":[1370,3140],\"size\":[285.713623046875,386],\"flags\":{},\"order\":199,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sampler\",\"localized_name\":\"sampler\",\"type\":\"SAMPLER\",\"shape\":7,\"link\":1973},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":2046},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"SharkSampler_Beta\"},\"widgets_values\":[\"beta57\",30,-1,1,5.5,0,\"fixed\",\"standard\"]},{\"id\":876,\"type\":\"SharkOptions_GuiderInput\",\"pos\":[1051.8299560546875,3379.638427734375],\"size\":[282.30291748046875,46],\"flags\":{},\"order\":176,\"mode\":0,\"inputs\":[{\"name\":\"guider\",\"localized_name\":\"guider\",\"type\":\"GUIDER\",\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[2046],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SharkOptions_GuiderInput\"}},{\"id\":802,\"type\":\"Note\",\"pos\":[690.13916015625,3410.965576171875],\"size\":[321.8917236328125,108.77723693847656],\"flags\":{},\"order\":95,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Typically, SharkSampler slots into workflows where SamplerCustom would have been used.\\n\\nSharkOptions GuiderInput allows it to be used like SamplerCustomAdvanced, with any guider input of your choosing. 
It may also be used with ClownSharkSampler.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"}],\"links\":[[1365,453,0,454,0,\"IMAGE\"],[1890,584,0,606,0,\"MODEL\"],[1904,606,0,601,0,\"MODEL\"],[1907,598,0,610,1,\"CONDITIONING\"],[1908,599,0,610,2,\"CONDITIONING\"],[1909,601,0,610,0,\"MODEL\"],[1910,601,0,612,0,\"MODEL\"],[1911,598,0,612,1,\"CONDITIONING\"],[1912,599,0,612,2,\"CONDITIONING\"],[1914,598,0,613,1,\"CONDITIONING\"],[1915,599,0,613,2,\"CONDITIONING\"],[1923,613,0,453,0,\"LATENT\"],[1926,609,0,613,0,\"MODEL\"],[1937,606,1,598,0,\"CLIP\"],[1938,606,1,599,0,\"CLIP\"],[1939,620,0,606,1,\"CLIP\"],[1940,621,0,453,1,\"VAE\"],[1947,624,0,612,6,\"OPTIONS\"],[1948,625,0,613,6,\"OPTIONS\"],[1949,610,0,612,3,\"LATENT\"],[1950,612,0,613,3,\"LATENT\"],[1951,626,0,610,6,\"OPTIONS\"],[1952,640,0,641,3,\"LATENT\"],[1953,641,0,642,3,\"LATENT\"],[1962,666,0,660,6,\"OPTIONS\"],[1963,661,0,660,7,\"OPTIONS\"],[1968,676,0,660,9,\"OPTIONS\"],[1971,662,0,660,10,\"OPTIONS\"],[1972,665,0,660,11,\"OPTIONS\"],[1973,680,0,679,3,\"SAMPLER\"],[1974,682,0,660,12,\"OPTIONS\"],[1975,684,0,682,0,\"SIGMAS\"],[1976,684,0,686,0,\"SIGMAS\"],[1977,688,0,691,5,\"GUIDES\"],[1978,693,0,692,0,\"MODEL\"],[1979,692,0,694,0,\"MODEL\"],[1980,694,0,695,0,\"MODEL\"],[1982,724,0,720,1,\"CONDITIONING\"],[1983,724,0,720,2,\"CONDITIONING\"],[1984,722,0,720,5,\"GUIDES\"],[1985,723,0,721,5,\"GUIDES\"],[1986,751,0,752,2,\"MASK\"],[1987,753,0,752,3,\"MASK\"],[1988,756,0,754,2,\"MASK\"],[1989,757,0,754,3,\"MASK\"],[1990,760,0,660,13,\"OPTIONS\"],[1991,763,0,660,8,\"OPTIONS\"],[1992,735,0,770,5,\"GUIDES\"],[1994,772,0,770,6,\"OPTIONS\"],[1995,775,0,778,0,\"COND_REGIONS\"],[1996,778,0,779,0,\"COND_REGIONS\"],[1999,779,0,783,0,\"COND_REGIONS\"],[2000,783,0,776,0,\"COND_REGIONS\"],[2003,798,0,660,14,\"OPTIONS\"],[2004,804,0,805,4,\"LATENT\"],[2005,805,0,806,4,\"LATENT\"],[2006,807,0,805,6,\"OPTIONS\"],[2007,811,0,660,15,\"OPTIONS\"],[2009,813,0,809,6,\"OPTIONS\"],[2010,814,0,815,4,\"LATENT\"],[2011,815,0,817,4,\"LATENT\"],[2012,826,0,824,1,\"IMAGE\"],[2013,826,0,827,0,\"IMAGE\"],[2014,827,0,824,2,\"IMAGE\"],[2015,824,0,828,0,\"IMAGE\"],[2016,824,0,825,0,\"IMAGE\"],[2017,824,2,825,2,\"IMAGE\"],[2018,824,2,829,0,\"IMAGE\"],[2019,825,1,830,0,\"IMAGE\"],[2022,840,2,835,2,\"IMAGE\"],[2023,835,1,837,0,\"IMAGE\"],[2024,836,0,838,0,\"IMAGE\"],[2025,836,0,840,1,\"IMAGE\"],[2026,838,0,840,2,\"IMAGE\"],[2030,843,0,841,1,\"IMAGE\"],[2031,843,0,842,0,\"IMAGE\"],[2032,842,0,841,2,\"IMAGE\"],[2033,841,0,835,0,\"IMAGE\"],[2034,794,0,792,1,\"COLOR_SWATCHES\"],[2036,793,0,850,0,\"MASK\"],[2037,793,0,855,0,\"MASK\"],[2038,853,0,856,1,\"CONDITIONING\"],[2039,857,0,858,0,\"IMAGE\"],[2040,858,0,856,3,\"LATENT\"],[2041,856,0,860,0,\"LATENT\"],[2042,860,0,859,0,\"IMAGE\"],[2043,863,0,856,2,\"CONDITIONING\"],[2044,864,0,857,0,\"IMAGE\"],[2046,876,0,679,6,\"OPTIONS\"]],\"groups\":[{\"id\":1,\"title\":\"UNSAMPLING SETUP\",\"bounding\":[727.6610717773438,4702.3486328125,1679.59423828125,1066.484619140625],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":2,\"title\":\"CHAINED SAMPLER SETUP\",\"bounding\":[726.7158203125,3763.15576171875,1680.4798583984375,894.4533081054688],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":3,\"title\":\"INTRODUCTION TO CLOWNSAMPLING\",\"bounding\":[603.1690063476562,2607.28857421875,1866.77099609375,983.0913696289062],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":5,\"title\":\"OPTIONS AND 
AUTOMATION\",\"bounding\":[2599.417236328125,2632.92578125,1724.455078125,3136.83154296875],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":6,\"title\":\"GUIDES\",\"bounding\":[4494.4521484375,3692.791259765625,1757.123291015625,2078.85498046875],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":7,\"title\":\"LOADERS AND PATCHERS\",\"bounding\":[7042.78515625,2636.466552734375,1379.5494384765625,916.3328247070312],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":8,\"title\":\"ULTRACASCADE\",\"bounding\":[-3341.39892578125,2604.916748046875,3831.125244140625,1936.0570068359375],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":9,\"title\":\"REGIONAL CONDITIONING\",\"bounding\":[6335.28125,3695.2412109375,3155.973388671875,1510.357421875],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":10,\"title\":\"Cyclosampling (looping a sampler node)\",\"bounding\":[4505.0283203125,2635.9287109375,2438.49853515625,916.0137939453125],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":11,\"title\":\"Miscellaneous Image Nodes\",\"bounding\":[8475.5224609375,2640.341552734375,610.4679565429688,912.4259033203125],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":12,\"title\":\"Frequency Separation\",\"bounding\":[9150.265625,2642.074462890625,2909.011474609375,911.339111328125],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":13,\"title\":\"Tiled Upscales with Tiled Conditioning\",\"bounding\":[12136.822265625,2643.802490234375,1541.95556640625,664.3409423828125],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":14,\"title\":\"Style Transfer\",\"bounding\":[9615.3837890625,3690.478515625,2453.906494140625,1514.1885986328125],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}}],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.6105100000000008,\"offset\":[1291.1169756467461,-2415.779581669771]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0},\"version\":0.4}"
  },
  {
    "path": "example_workflows/sd35 medium unsampling data.json",
    "content": "{\"last_node_id\":635,\"last_link_id\":2023,\"nodes\":[{\"id\":627,\"type\":\"SD35Loader\",\"pos\":[602.6103515625,-123.47957611083984],\"size\":[315,218],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2014],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[2010],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[2011,2012],\"slot_index\":2}],\"properties\":{\"Node name for S&R\":\"SD35Loader\"},\"widgets_values\":[\"sd3.5_medium.safetensors\",\"default\",\"clip_l_sd35.safetensors\",\"clip_g_sd35.safetensors\",\"t5xxl_fp16.safetensors\",\"sd35_vae.safetensors\"]},{\"id\":628,\"type\":\"LoadImage\",\"pos\":[599.166015625,156.38429260253906],\"size\":[315,314],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2017]},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"ComfyUI_14254_.png\",\"image\"]},{\"id\":107,\"type\":\"CLIPTextEncode\",\"pos\":[959.4713745117188,-123.3353500366211],\"size\":[282.33453369140625,173.58438110351562],\"flags\":{\"collapsed\":false},\"order\":2,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"label\":\"clip\",\"type\":\"CLIP\",\"link\":2010}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"label\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"shape\":3,\"links\":[2018],\"slot_index\":0}],\"title\":\"Positive Prompt\",\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"the mournful lamentations of of a female rock singer on stage with chaos behind her, her face screaming her sorrowful refrains the despairing cries of anguished screams howling agonized moans, her pained whispers mournful sighs distant echoes across the smoky stage, fading memories of lost loves, forgotten dreams, shattered hopes, crushed spirits, broken hearts\"]},{\"id\":629,\"type\":\"VAEEncodeAdvanced\",\"pos\":[961.6968994140625,123.66181182861328],\"size\":[278.0284423828125,280.5834045410156],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2017},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":2012}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[2013,2020,2022],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":null},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[2015]},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"VAEEncodeAdvanced\"},\"widgets_values\":[\"false\",1024,1024,\"red\",false,\"16_channels\"]},{\"id\":632,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[962.5586547851562,-316.3705139160156],\"size\":[277.62237548828125,126],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":2014},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":2015}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2016],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ModelSamplingAdvancedResolution\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":591,\"type\":\"VAEDecode\",\"pos\":[1924.08251953125,-233.2501983642578],\"size\":[210,46],\"flags\":{\"collapsed\":false},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"label\":\"samples\",\"type\":\"LATENT\",\"link\":2008},{\"name\":\"vae\",\"localized_name\":\"vae\",\"label\":\"vae\",\"type\":\"VAE\",\"link\":2011}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"label\":\"IMAGE\",\"type\":\"IMAGE\",\"shape\":3,\"links\":[2019],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\"},\"widgets_values\":[]},{\"id\":633,\"type\":\"SaveImage\",\"pos\":[1921.8458251953125,-123.4797134399414],\"size\":[436.4179382324219,508.5302429199219],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2019}],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ComfyUI\"]},{\"id\":631,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[1605.8143310546875,-124.34080505371094],\"size\":[280.55523681640625,510],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2005},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":2023},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2008],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",-1,5.5,\"resample\",true]},{\"id\":630,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1271.7001953125,-124.3408432006836],\"size\":[291.7499084472656,630],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":2016},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2018},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2013},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":2021},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2005]},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",\"beta57\",60,-1,1,1,0,\"fixed\",\"unsample\",true]},{\"id\":634,\"type\":\"ClownGuide_Beta\",\"pos\":[1276.0064697265625,-480.84442138671875],\"size\":[284.860595703125,290.8609924316406],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":2020},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[2021],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"data\",false,false,0.5,1,\"constant\",0,-1,false]},{\"id\":635,\"type\":\"ClownGuide_Beta\",\"pos\":[1604.09326171875,-479.9832763671875],\"size\":[284.860595703125,290.8609924316406],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":2022},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[2023],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"data\",false,true,0.35,0.35,\"beta57\",0,12,false]}],\"links\":[[2005,630,0,631,4,\"LATENT\"],[2008,631,0,591,0,\"LATENT\"],[2010,627,1,107,0,\"CLIP\"],[2011,627,2,591,1,\"VAE\"],[2012,627,2,629,4,\"VAE\"],[2013,629,0,630,3,\"LATENT\"],[2014,627,0,632,0,\"MODEL\"],[2015,629,3,632,1,\"LATENT\"],[2016,632,0,630,0,\"MODEL\"],[2017,628,0,629,0,\"IMAGE\"],[2018,107,0,630,1,\"CONDITIONING\"],[2019,591,0,633,0,\"IMAGE\"],[2020,629,0,634,0,\"LATENT\"],[2021,634,0,630,5,\"GUIDES\"],[2022,629,0,635,0,\"LATENT\"],[2023,635,0,631,5,\"GUIDES\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.7985878990923265,\"offset\":[672.6014509912476,552.1175843760627]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0},\"version\":0.4}"
  },
  {
    "path": "example_workflows/sd35 medium unsampling.json",
    "content": "{\"last_node_id\":635,\"last_link_id\":2023,\"nodes\":[{\"id\":627,\"type\":\"SD35Loader\",\"pos\":[602.6103515625,-123.47957611083984],\"size\":[315,218],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2014],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[2010],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[2011,2012],\"slot_index\":2}],\"properties\":{\"Node name for S&R\":\"SD35Loader\"},\"widgets_values\":[\"sd3.5_medium.safetensors\",\"default\",\"clip_l_sd35.safetensors\",\"clip_g_sd35.safetensors\",\"t5xxl_fp16.safetensors\",\"sd35_vae.safetensors\"]},{\"id\":628,\"type\":\"LoadImage\",\"pos\":[599.166015625,156.38429260253906],\"size\":[315,314],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2017]},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"ComfyUI_14254_.png\",\"image\"]},{\"id\":107,\"type\":\"CLIPTextEncode\",\"pos\":[959.4713745117188,-123.3353500366211],\"size\":[282.33453369140625,173.58438110351562],\"flags\":{\"collapsed\":false},\"order\":2,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"label\":\"clip\",\"type\":\"CLIP\",\"link\":2010}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"label\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"shape\":3,\"links\":[2018],\"slot_index\":0}],\"title\":\"Positive Prompt\",\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"the mournful lamentations of of a female rock singer on stage with chaos behind her, her face screaming her sorrowful refrains the despairing cries of anguished screams howling agonized moans, her pained whispers mournful sighs distant echoes across the smoky stage, fading memories of lost loves, forgotten dreams, shattered hopes, crushed spirits, broken hearts\"]},{\"id\":629,\"type\":\"VAEEncodeAdvanced\",\"pos\":[961.6968994140625,123.66181182861328],\"size\":[278.0284423828125,280.5834045410156],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":2017},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":2012}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[2013,2020,2022],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":null},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[2015]},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"VAEEncodeAdvanced\"},\"widgets_values\":[\"false\",1024,1024,\"red\",false,\"16_channels\"]},{\"id\":632,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[962.5586547851562,-316.3705139160156],\"size\":[277.62237548828125,126],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":2014},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":2015}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2016],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ModelSamplingAdvancedResolution\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":634,\"type\":\"ClownGuide_Beta\",\"pos\":[1276.0064697265625,-480.84442138671875],\"size\":[284.860595703125,290.8609924316406],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":2020},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[2021],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"epsilon\",false,false,0.5,1,\"constant\",0,-1,false]},{\"id\":633,\"type\":\"SaveImage\",\"pos\":[1921.8458251953125,-123.4797134399414],\"size\":[436.4179382324219,508.5302429199219],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2019}],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ComfyUI\"]},{\"id\":631,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[1605.8143310546875,-124.34080505371094],\"size\":[280.55523681640625,510],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2005},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":2023},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2008],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",-1,5.5,\"resample\",true]},{\"id\":630,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1271.7001953125,-124.3408432006836],\"size\":[291.7499084472656,630],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":2016},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2018},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2013},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":2021},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2005]},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",\"beta57\",60,-1,1,1,0,\"fixed\",\"unsample\",true]},{\"id\":635,\"type\":\"ClownGuide_Beta\",\"pos\":[1604.09326171875,-479.9832763671875],\"size\":[284.860595703125,290.8609924316406],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":2022},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[2023],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"epsilon\",false,true,0.5,1,\"beta57\",0,25,false]},{\"id\":591,\"type\":\"VAEDecode\",\"pos\":[1924.08251953125,-233.2501983642578],\"size\":[140,46],\"flags\":{\"collapsed\":false},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"label\":\"samples\",\"type\":\"LATENT\",\"link\":2008},{\"name\":\"vae\",\"localized_name\":\"vae\",\"label\":\"vae\",\"type\":\"VAE\",\"link\":2011}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"label\":\"IMAGE\",\"type\":\"IMAGE\",\"shape\":3,\"links\":[2019],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\"},\"widgets_values\":[]}],\"links\":[[2005,630,0,631,4,\"LATENT\"],[2008,631,0,591,0,\"LATENT\"],[2010,627,1,107,0,\"CLIP\"],[2011,627,2,591,1,\"VAE\"],[2012,627,2,629,4,\"VAE\"],[2013,629,0,630,3,\"LATENT\"],[2014,627,0,632,0,\"MODEL\"],[2015,629,3,632,1,\"LATENT\"],[2016,632,0,630,0,\"MODEL\"],[2017,628,0,629,0,\"IMAGE\"],[2018,107,0,630,1,\"CONDITIONING\"],[2019,591,0,633,0,\"IMAGE\"],[2020,629,0,634,0,\"LATENT\"],[2021,634,0,630,5,\"GUIDES\"],[2022,629,0,635,0,\"LATENT\"],[2023,635,0,631,5,\"GUIDES\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.635079908265751,\"offset\":[1291.723098320105,628.7383473687522]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0},\"version\":0.4}"
  },
  {
    "path": "example_workflows/sdxl regional antiblur.json",
    "content": "{\"last_node_id\":730,\"last_link_id\":2113,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[1280,-650],\"size\":[75,26],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":2098}],\"outputs\":[{\"name\":\"\",\"type\":\"MODEL\",\"links\":[1967],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":490,\"type\":\"Reroute\",\"pos\":[1280,-610],\"size\":[75,26],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":2099}],\"outputs\":[{\"name\":\"\",\"type\":\"CLIP\",\"links\":[1939,2092,2112],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":14,\"type\":\"Reroute\",\"pos\":[1280,-570],\"size\":[75,26],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":2100}],\"outputs\":[{\"name\":\"\",\"type\":\"VAE\",\"links\":[18,1328],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":398,\"type\":\"SaveImage\",\"pos\":[1379.9996337890625,-267.2835998535156],\"size\":[341.7508850097656,561.0067749023438],\"flags\":{},\"order\":21,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":1329}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"SaveImage\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"ComfyUI\"]},{\"id\":701,\"type\":\"Note\",\"pos\":[80,-520],\"size\":[342.05950927734375,88],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"I usually just lazily draw masks in Load Image nodes (with some random image loaded), but for the sake of reproducibility, here's another approach.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":712,\"type\":\"Note\",\"pos\":[-210,-520],\"size\":[245.76409912109375,91.6677017211914],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"So long as these masks are all the same size, the regional conditioning nodes will handle resizing to the image size for you.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":676,\"type\":\"InvertMask\",\"pos\":[20,-370],\"size\":[142.42074584960938,26],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":2073}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[2083],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"InvertMask\"},\"widgets_values\":[]},{\"id\":7,\"type\":\"VAEEncodeAdvanced\",\"pos\":[719.6110229492188,16.752899169921875],\"size\":[261.2217712402344,279.3136901855469],\"flags\":{},\"order\":16,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":18}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":1},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[],\"slot_index\":2},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[1399],\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[\"false\",1024,1024,\"red\",false,\"16_channels\"]},{\"id\":710,\"type\":\"MaskPreview\",\"pos\":[180,-190],\"size\":[210,246],\"flags\":{},\"order\":17,\"mode\":0,\"inputs\":[{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"link\":2054}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"MaskPreview\"},\"widgets_values\":[]},{\"id\":397,\"type\":\"VAEDecode\",\"pos\":[1382.3662109375,-374.17059326171875],\"size\":[210,46],\"flags\":{},\"order\":20,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":2096},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":1328}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1329],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[]},{\"id\":715,\"type\":\"SolidMask\",\"pos\":[-220,-370],\"size\":[210,106],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[2073],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SolidMask\"},\"widgets_values\":[1,1024,1024]},{\"id\":716,\"type\":\"SolidMask\",\"pos\":[-220,-220],\"size\":[210,106],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[2065],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SolidMask\"},\"widgets_values\":[1,384,864]},{\"id\":709,\"type\":\"MaskComposite\",\"pos\":[190,-370],\"size\":[210,126],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[{\"name\":\"destination\",\"localized_name\":\"destination\",\"type\":\"MASK\",\"link\":2083},{\"name\":\"source\",\"localized_name\":\"source\",\"type\":\"MASK\",\"link\":2065}],\"outputs\":[{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":[2054,2091],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"MaskComposite\"},\"widgets_values\":[256,160,\"add\"]},{\"id\":704,\"type\":\"Note\",\"pos\":[101.74818420410156,112.67951965332031],\"size\":[290.7107238769531,155.35317993164062],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ClownRegionalConditionings:\\n\\nTry raising or lowering weight, and changing the weight scheduler from beta57 to Karras (weakens more quickly), or to linear quadratic (stronger late).\\n\\nTry changing region_bleed_start_step (earlier will make the image blend together more), and end_step.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":722,\"type\":\"ClownRegionalConditioning2\",\"pos\":[690,-370],\"size\":[287.75750732421875,330],\"flags\":{},\"order\":18,\"mode\":0,\"inputs\":[{\"name\":\"conditioning_masked\",\"localized_name\":\"conditioning_masked\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2094},{\"name\":\"conditioning_unmasked\",\"localized_name\":\"conditioning_unmasked\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2093},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":2091},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"region_bleeds\",\"localized_name\":\"region_bleeds\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"conditioning\",\"localized_name\":\"conditioning\",\"type\":\"CONDITIONING\",\"links\":[2095],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownRegionalConditioning2\"},\"widgets_values\":[1,0,0,\"constant\",0,-1,\"boolean_masked\",32,false]},{\"id\":703,\"type\":\"Note\",\"pos\":[423.10699462890625,-96.14085388183594],\"size\":[241.9689483642578,386.7543640136719],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"edge_width also creates some overlap around the edges of the mask.\\n\\nboolean_masked means that the masked area can \\\"see\\\" the rest of the image, but the unmasked area cannot. \\\"boolean\\\" would mean neither area could see the rest of the image.\\n\\nTry setting to boolean_unmasked and see what happens!\\n\\nIf you still have blur, try reducing edge_width (and if you have seams, try increasing it, or setting end_step to something like 20). \\n\\nAlso verify that you can generate the background prompt alone without blur (if you can't, this won't work). 
And don't get stuck on one seed.\\n\\nVaguely human-shaped masks also tend to work better than the blocky one used here.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":725,\"type\":\"ReSDPatcher\",\"pos\":[1012.9199829101562,-651.4929809570312],\"size\":[210,82],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":2097}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[2098],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ReSDPatcher\"},\"widgets_values\":[\"float64\",true]},{\"id\":724,\"type\":\"CheckpointLoaderSimple\",\"pos\":[549.1465454101562,-653.311767578125],\"size\":[416.2424011230469,98],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MODEL\",\"localized_name\":\"MODEL\",\"type\":\"MODEL\",\"links\":[2097],\"slot_index\":0},{\"name\":\"CLIP\",\"localized_name\":\"CLIP\",\"type\":\"CLIP\",\"links\":[2099],\"slot_index\":1},{\"name\":\"VAE\",\"localized_name\":\"VAE\",\"type\":\"VAE\",\"links\":[2100],\"slot_index\":2}],\"properties\":{\"Node name for S&R\":\"CheckpointLoaderSimple\"},\"widgets_values\":[\"_SDXL_/juggernautXL_v9Rundiffusionphoto2.safetensors\"]},{\"id\":730,\"type\":\"CLIPTextEncode\",\"pos\":[712.8302612304688,358.5015869140625],\"size\":[273.04931640625,94.66851806640625],\"flags\":{\"collapsed\":false},\"order\":15,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":2112}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2113],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"low quality, low detail, blurry, unsharp, low resolution, jpeg artifacts\"]},{\"id\":662,\"type\":\"CLIPTextEncode\",\"pos\":[460,-370],\"size\":[210,88],\"flags\":{\"collapsed\":false},\"order\":13,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1939}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2094],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"a woman wearing a red flannel shirt and a cute shark plush blue hat\"]},{\"id\":723,\"type\":\"CLIPTextEncode\",\"pos\":[460,-240],\"size\":[210,88],\"flags\":{\"collapsed\":false},\"order\":14,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":2092}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2093],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"a photo from the ground of a college 
campus\"]},{\"id\":401,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1010,-370],\"size\":[340.55120849609375,666.8208618164062],\"flags\":{},\"order\":19,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":1967},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2095},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2113},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1399},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[2096],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[0.5,\"exponential/res_3s\",\"karras\",60,-1,1,7,2,\"fixed\",\"standard\",true]}],\"links\":[[18,14,0,7,4,\"VAE\"],[1328,14,0,397,1,\"VAE\"],[1329,397,0,398,0,\"IMAGE\"],[1399,7,3,401,3,\"LATENT\"],[1939,490,0,662,0,\"CLIP\"],[1967,13,0,401,0,\"MODEL\"],[2054,709,0,710,0,\"MASK\"],[2065,716,0,709,1,\"MASK\"],[2073,715,0,676,0,\"MASK\"],[2083,676,0,709,0,\"MASK\"],[2091,709,0,722,2,\"MASK\"],[2092,490,0,723,0,\"CLIP\"],[2093,723,0,722,1,\"CONDITIONING\"],[2094,662,0,722,0,\"CONDITIONING\"],[2095,722,0,401,1,\"CONDITIONING\"],[2096,401,0,397,0,\"LATENT\"],[2097,724,0,725,0,\"MODEL\"],[2098,725,0,13,0,\"*\"],[2099,724,1,490,0,\"*\"],[2100,724,2,14,0,\"*\"],[2112,490,0,730,0,\"CLIP\"],[2113,730,0,401,2,\"CONDITIONING\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":2.322515441988848,\"offset\":[1367.132902556087,589.0262767308418]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"ue_links\":[],\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true},\"version\":0.4}"
  },
  {
    "path": "example_workflows/sdxl style transfer.json",
    "content": "{\"last_node_id\":1394,\"last_link_id\":3744,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[13508.9013671875,-109.2831802368164],\"size\":[75,26],\"flags\":{},\"order\":18,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3741}],\"outputs\":[{\"name\":\"\",\"type\":\"MODEL\",\"links\":[3740],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":14,\"type\":\"Reroute\",\"pos\":[13508.9013671875,-29.283178329467773],\"size\":[75,26],\"flags\":{},\"order\":16,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3744}],\"outputs\":[{\"name\":\"\",\"type\":\"VAE\",\"links\":[18,2696],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":490,\"type\":\"Reroute\",\"pos\":[13508.9013671875,-69.28317260742188],\"size\":[75,26],\"flags\":{},\"order\":15,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3743}],\"outputs\":[{\"name\":\"\",\"type\":\"CLIP\",\"links\":[2881,3581],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1308,\"type\":\"ClownGuide_Style_Beta\",\"pos\":[14108.255859375,675.60693359375],\"size\":[246.31312561035156,286],\"flags\":{},\"order\":26,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":3709},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":3699}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[3604],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Style_Beta\"},\"widgets_values\":[\"positive\",\"WCT\",1,1,\"constant\",0,-1,false]},{\"id\":970,\"type\":\"CLIPTextEncode\",\"pos\":[13688.255859375,165.60690307617188],\"size\":[281.9206848144531,109.87118530273438],\"flags\":{},\"order\":19,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":2881}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2882,3627],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"blurry, out of focus, shallow depth of field, jpeg artifacts, low quality, bad quality, unsharp\"]},{\"id\":1378,\"type\":\"Reroute\",\"pos\":[13184.07421875,533.128662109375],\"size\":[75,26],\"flags\":{},\"order\":13,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3721}],\"outputs\":[{\"name\":\"\",\"type\":\"IMAGE\",\"links\":[3724,3729],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1379,\"type\":\"Reroute\",\"pos\":[13185.853515625,168.15780639648438],\"size\":[75,26],\"flags\":{},\"order\":17,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3725}],\"outputs\":[{\"name\":\"\",\"type\":\"IMAGE\",\"links\":[3726],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":909,\"type\":\"SaveImage\",\"pos\":[15220,-259.5838928222656],\"size\":[457.3382263183594,422.2065124511719],\"flags\":{},\"order\":31,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2697}],\"outputs\":[],\"properties\":{\"Node name for 
S&R\":\"SaveImage\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"ComfyUI\"]},{\"id\":7,\"type\":\"VAEEncodeAdvanced\",\"pos\":[13400,560],\"size\":[261.2217712402344,298],\"flags\":{\"collapsed\":true},\"order\":24,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":3688},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":3727},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":18},{\"name\":\"width\",\"type\":\"INT\",\"pos\":[10,160.00003051757812],\"widget\":{\"name\":\"width\"},\"link\":3732},{\"name\":\"height\",\"type\":\"INT\",\"pos\":[10,184.00003051757812],\"widget\":{\"name\":\"height\"},\"link\":3733}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[2983,3710],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":[3709],\"slot_index\":1},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[],\"slot_index\":2},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":[],\"slot_index\":4},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":[],\"slot_index\":5}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[\"false\",1344,768,\"red\",false,\"16_channels\"]},{\"id\":1371,\"type\":\"Image Repeat Tile To Size\",\"pos\":[13390,500],\"size\":[210,146],\"flags\":{\"collapsed\":true},\"order\":21,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":3726},{\"name\":\"width\",\"type\":\"INT\",\"pos\":[10,36],\"widget\":{\"name\":\"width\"},\"link\":3730},{\"name\":\"height\",\"type\":\"INT\",\"pos\":[10,60],\"widget\":{\"name\":\"height\"},\"link\":3731}],\"outputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"links\":[3727,3728],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"Image Repeat Tile To Size\"},\"widgets_values\":[1024,1024,true]},{\"id\":1380,\"type\":\"SetImageSize\",\"pos\":[13380,320],\"size\":[210,102],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":[3730,3732],\"slot_index\":0},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":[3731,3733],\"slot_index\":1}],\"properties\":{\"Node name for S&R\":\"SetImageSize\"},\"widgets_values\":[1344,768]},{\"id\":1377,\"type\":\"Image Comparer 
(rgthree)\",\"pos\":[15742.4619140625,-253.3526153564453],\"size\":[461.9190368652344,413.5953369140625],\"flags\":{},\"order\":32,\"mode\":0,\"inputs\":[{\"name\":\"image_a\",\"type\":\"IMAGE\",\"dir\":3,\"link\":3720},{\"name\":\"image_b\",\"type\":\"IMAGE\",\"dir\":3,\"link\":3729}],\"outputs\":[],\"properties\":{\"comparer_mode\":\"Slide\"},\"widgets_values\":[[{\"name\":\"A\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_ogxbu_00017_.png&type=temp&subfolder=&rand=0.8732033562598724\"},{\"name\":\"B\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_ogxbu_00018_.png&type=temp&subfolder=&rand=0.08327234118228466\"}]]},{\"id\":908,\"type\":\"VAEDecode\",\"pos\":[15217.7802734375,-312.1965637207031],\"size\":[210,46],\"flags\":{\"collapsed\":true},\"order\":30,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":3469},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":2696}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2697,3720],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[]},{\"id\":1383,\"type\":\"Note\",\"pos\":[14428.40234375,580.1749877929688],\"size\":[261.9539489746094,88],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Samplers like res_2s in this cycling node will also work and are faster. res_2m and res_3m are even faster, but sometimes the effect takes longer in wall time to fully kick in.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1384,\"type\":\"Note\",\"pos\":[14793.0322265625,518.4120483398438],\"size\":[261.9539489746094,88],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"res_2m or res_3m can be used here instead and are faster, but are less likely to fully clean up lingering artifacts.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1385,\"type\":\"Note\",\"pos\":[14398.345703125,768.2096557617188],\"size\":[261.9539489746094,88],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"method = AdaIN is faster and uses less memory, but is less accurate. 
Some prefer the effect.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1328,\"type\":\"ClownOptions_SDE_Beta\",\"pos\":[14186.4755859375,-132.6126251220703],\"size\":[315,266],\"flags\":{\"collapsed\":true},\"order\":4,\"mode\":0,\"inputs\":[{\"name\":\"etas\",\"localized_name\":\"etas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"etas_substep\",\"localized_name\":\"etas_substep\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[3707],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_SDE_Beta\"},\"widgets_values\":[\"gaussian\",\"gaussian\",\"hard\",\"hard\",0.5,0.75,-1,\"fixed\"]},{\"id\":1381,\"type\":\"Note\",\"pos\":[13881.6279296875,-217.62835693359375],\"size\":[261.9539489746094,88],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Increase or decrease \\\"steps_to_run\\\" in ClownsharKSampler to change the effective denoise level.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1382,\"type\":\"Note\",\"pos\":[14718.0498046875,-295.4144592285156],\"size\":[268.1851806640625,124.49711608886719],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Increasing cycles will increase the amount of change, but take longer.\\n\\nCycles will rerun the same step over and over, forwards and backwards, iteratively refining an image at a controlled noise level.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1373,\"type\":\"LoadImage\",\"pos\":[12810.2314453125,534.0346069335938],\"size\":[315,314],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3721],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"title\":\"Load Image (Composition)\",\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"pasted/image (476).png\",\"image\"]},{\"id\":1362,\"type\":\"PreviewImage\",\"pos\":[13380,620],\"size\":[210,246],\"flags\":{},\"order\":23,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":3682}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"PreviewImage\"},\"widgets_values\":[]},{\"id\":1390,\"type\":\"Note\",\"pos\":[13148.0439453125,257.643310546875],\"size\":[210,88],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Color Match SOMETIMES helps accelerate style transfer.\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1376,\"type\":\"Note\",\"pos\":[13710.3271484375,473.56817626953125],\"size\":[265.1909484863281,137.36415100097656],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Increase or decrease weight in ClownGuide to alter adherence to the input image.\\n\\nFor now, set to low weights or bypass if using any model except HiDream. The HiDream code was adapted so that this composition guide doesn't fight the style guide. 
Others will be added soon.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1350,\"type\":\"ColorMatch\",\"pos\":[13380,160],\"size\":[210,102],\"flags\":{\"collapsed\":false},\"order\":22,\"mode\":0,\"inputs\":[{\"name\":\"image_ref\",\"localized_name\":\"image_ref\",\"type\":\"IMAGE\",\"link\":3728},{\"name\":\"image_target\",\"localized_name\":\"image_target\",\"type\":\"IMAGE\",\"link\":3724}],\"outputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"links\":[3682,3688],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ColorMatch\"},\"widgets_values\":[\"mkl\",0]},{\"id\":981,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[14758.255859375,-64.39308166503906],\"size\":[340.20001220703125,510],\"flags\":{},\"order\":29,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":3698},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3469],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_2s\",-1,4,\"resample\",true]},{\"id\":1393,\"type\":\"ReSDPatcher\",\"pos\":[13246.306640625,-162.28057861328125],\"size\":[210,82],\"flags\":{},\"order\":14,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":3742}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[3741],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ReSDPatcher\"},\"widgets_values\":[\"float64\",true]},{\"id\":1394,\"type\":\"CheckpointLoaderSimple\",\"pos\":[12837.810546875,-94.67196655273438],\"size\":[375.491943359375,98],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MODEL\",\"localized_name\":\"MODEL\",\"type\":\"MODEL\",\"links\":[3742],\"slot_index\":0},{\"name\":\"CLIP\",\"localized_name\":\"CLIP\",\"type\":\"CLIP\",\"links\":[3743],\"slot_index\":1},{\"name\":\"VAE\",\"localized_name\":\"VAE\",\"type\":\"VAE\",\"links\":[3744],\"slot_index\":2}],\"properties\":{\"Node name for S&R\":\"CheckpointLoaderSimple\"},\"widgets_values\":[\"_SDXL_/zavychromaxl_v70.safetensors\"]},{\"id\":1374,\"type\":\"LoadImage\",\"pos\":[12805.896484375,167.56053161621094],\"size\":[315,314],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3725],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"title\":\"Load Image (Style Guide)\",\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"ChatGPT Image May 13, 2025, 09_18_45 
AM.png\",\"image\"]},{\"id\":1333,\"type\":\"CLIPTextEncode\",\"pos\":[13688.255859375,-44.393089294433594],\"size\":[280.6252746582031,164.06936645507812],\"flags\":{\"collapsed\":false},\"order\":20,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":3581}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[3602,3626],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"the inside of a car driving down a creepy road\"]},{\"id\":1318,\"type\":\"ClownGuide_Beta\",\"pos\":[13828.255859375,675.60693359375],\"size\":[263.102783203125,290],\"flags\":{},\"order\":25,\"mode\":4,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":3710},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[3699,3708],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"inversion\",false,false,0.25,1,\"constant\",0,-1,false]},{\"id\":1317,\"type\":\"ClownOptions_Cycles_Beta\",\"pos\":[14418.0478515625,-325.06365966796875],\"size\":[265.2884826660156,202],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[3533],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_Cycles_Beta\"},\"widgets_values\":[10,1,-1,\"none\",-1,4,false]},{\"id\":907,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[14008.255859375,-64.39308166503906],\"size\":[340.55120849609375,666.8208618164062],\"flags\":{},\"order\":27,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":3740},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":3602},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2882},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2983},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":3708},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3578],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownsharKSampler_Beta\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[0.5,\"exponential/res_2s\",\"beta57\",20,14,1,4,201,\"fixed\",\"unsample\",true]},{\"id\":980,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[14378.255859375,-64.39308166503906],\"size\":[340.20001220703125,570],\"flags\":{},\"order\":28,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":3626},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":3627},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":3578},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":3604},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":3533},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":3707},{\"name\":\"options 3\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3698],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_2s\",1,4,\"resample\",true]}],\"links\":[[18,14,0,7,4,\"VAE\"],[2696,14,0,908,1,\"VAE\"],[2697,908,0,909,0,\"IMAGE\"],[2881,490,0,970,0,\"CLIP\"],[2882,970,0,907,2,\"CONDITIONING\"],[2983,7,0,907,3,\"LATENT\"],[3469,981,0,908,0,\"LATENT\"],[3533,1317,0,980,6,\"OPTIONS\"],[3578,907,0,980,4,\"LATENT\"],[3581,490,0,1333,0,\"CLIP\"],[3602,1333,0,907,1,\"CONDITIONING\"],[3604,1308,0,980,5,\"GUIDES\"],[3626,1333,0,980,1,\"CONDITIONING\"],[3627,970,0,980,2,\"CONDITIONING\"],[3682,1350,0,1362,0,\"IMAGE\"],[3688,1350,0,7,0,\"IMAGE\"],[3698,980,0,981,4,\"LATENT\"],[3699,1318,0,1308,3,\"GUIDES\"],[3707,1328,0,980,7,\"OPTIONS\"],[3708,1318,0,907,5,\"GUIDES\"],[3709,7,1,1308,0,\"LATENT\"],[3710,7,0,1318,0,\"LATENT\"],[3720,908,0,1377,0,\"IMAGE\"],[3721,1373,0,1378,0,\"*\"],[3724,1378,0,1350,1,\"IMAGE\"],[3725,1374,0,1379,0,\"*\"],[3726,1379,0,1371,0,\"IMAGE\"],[3727,1371,0,7,1,\"IMAGE\"],[3728,1371,0,1350,0,\"IMAGE\"],[3729,1378,0,1377,1,\"IMAGE\"],[3730,1380,0,1371,1,\"INT\"],[3731,1380,1,1371,2,\"INT\"],[3732,1380,0,7,5,\"INT\"],[3733,1380,1,7,6,\"INT\"],[3740,13,0,907,0,\"MODEL\"],[3741,1393,0,13,0,\"*\"],[3742,1394,0,1393,0,\"MODEL\"],[3743,1394,1,490,0,\"*\"],[3744,1394,2,14,0,\"*\"]],\"groups\":[{\"id\":1,\"title\":\"Model Loaders\",\"bounding\":[12796.72265625,-401.9004211425781,822.762451171875,436.0693359375],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":2,\"title\":\"Sampling\",\"bounding\":[13652.6533203125,-402.70721435546875,1470.8076171875,1409.0289306640625],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":3,\"title\":\"Input Prep\",\"bounding\":[12797.1396484375,77.69412231445312,817.4218139648438,820.6239624023438],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":4,\"title\":\"Save and 
Compare\",\"bounding\":[15180.705078125,-399.09112548828125,1050.6468505859375,615.8845825195312],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}}],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.486436280241595,\"offset\":[-10958.961513232216,457.651089011118]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"ue_links\":[],\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true},\"version\":0.4}"
  },
  {
    "path": "example_workflows/style transfer.json",
    "content": "{\"last_node_id\":1408,\"last_link_id\":3768,\"nodes\":[{\"id\":14,\"type\":\"Reroute\",\"pos\":[13508.9013671875,-29.283178329467773],\"size\":[75,26],\"flags\":{},\"order\":22,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3737}],\"outputs\":[{\"name\":\"\",\"type\":\"VAE\",\"links\":[18,2696,3767],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":490,\"type\":\"Reroute\",\"pos\":[13508.9013671875,-69.28317260742188],\"size\":[75,26],\"flags\":{},\"order\":21,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3736}],\"outputs\":[{\"name\":\"\",\"type\":\"CLIP\",\"links\":[2881,3581],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":970,\"type\":\"CLIPTextEncode\",\"pos\":[13688.255859375,165.60690307617188],\"size\":[281.9206848144531,109.87118530273438],\"flags\":{},\"order\":25,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":2881}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[2882,3627],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"blurry, out of focus, shallow depth of field, jpeg artifacts, low quality, bad quality, unsharp\"]},{\"id\":1379,\"type\":\"Reroute\",\"pos\":[13185.853515625,168.15780639648438],\"size\":[75,26],\"flags\":{},\"order\":23,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3747}],\"outputs\":[{\"name\":\"\",\"type\":\"IMAGE\",\"links\":[3726],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":909,\"type\":\"SaveImage\",\"pos\":[15220,-259.5838928222656],\"size\":[457.3382263183594,422.2065124511719],\"flags\":{},\"order\":39,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":2697}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"SaveImage\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"ComfyUI\"]},{\"id\":1380,\"type\":\"SetImageSize\",\"pos\":[13324.7197265625,323.0480041503906],\"size\":[210,102],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":[3730,3732],\"slot_index\":0},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":[3731,3733],\"slot_index\":1}],\"properties\":{\"Node name for S&R\":\"SetImageSize\"},\"widgets_values\":[1344,768]},{\"id\":1377,\"type\":\"Image Comparer 
(rgthree)\",\"pos\":[15742.4619140625,-253.3526153564453],\"size\":[461.9190368652344,413.5953369140625],\"flags\":{},\"order\":40,\"mode\":0,\"inputs\":[{\"name\":\"image_a\",\"type\":\"IMAGE\",\"dir\":3,\"link\":3720},{\"name\":\"image_b\",\"type\":\"IMAGE\",\"dir\":3,\"link\":3768}],\"outputs\":[],\"properties\":{\"comparer_mode\":\"Slide\"},\"widgets_values\":[[{\"name\":\"A\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_zdjno_00005_.png&type=temp&subfolder=&rand=0.40554525758657745\"},{\"name\":\"B\",\"selected\":true,\"url\":\"/api/view?filename=rgthree.compare._temp_zdjno_00006_.png&type=temp&subfolder=&rand=0.28640062579003533\"}]]},{\"id\":908,\"type\":\"VAEDecode\",\"pos\":[15217.7802734375,-312.1965637207031],\"size\":[210,46],\"flags\":{\"collapsed\":true},\"order\":38,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":3469},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":2696}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[2697,3720],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[]},{\"id\":1383,\"type\":\"Note\",\"pos\":[14428.40234375,580.1749877929688],\"size\":[261.9539489746094,88],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Samplers like res_2s in this cycling node will also work and are faster. res_2m and res_3m are even faster, but sometimes the effect takes longer in wall time to fully kick in.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1384,\"type\":\"Note\",\"pos\":[14793.0322265625,518.4120483398438],\"size\":[261.9539489746094,88],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"res_2m or res_3m can be used here instead and are faster, but are less likely to fully clean up lingering artifacts.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1328,\"type\":\"ClownOptions_SDE_Beta\",\"pos\":[14186.4755859375,-132.6126251220703],\"size\":[315,266],\"flags\":{\"collapsed\":true},\"order\":3,\"mode\":0,\"inputs\":[{\"name\":\"etas\",\"localized_name\":\"etas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"etas_substep\",\"localized_name\":\"etas_substep\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[3707],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownOptions_SDE_Beta\"},\"widgets_values\":[\"gaussian\",\"gaussian\",\"hard\",\"hard\",0.5,0.75,-1,\"fixed\"]},{\"id\":1381,\"type\":\"Note\",\"pos\":[13881.6279296875,-217.62835693359375],\"size\":[261.9539489746094,88],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Increase or decrease \\\"steps_to_run\\\" in ClownsharKSampler to change the effective denoise level.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1385,\"type\":\"Note\",\"pos\":[14429.50390625,729.0418701171875],\"size\":[261.9539489746094,88],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"method = AdaIN is faster and uses less memory, but is less accurate. 
Some prefer the effect.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1386,\"type\":\"ClownModelLoader\",\"pos\":[12855.7509765625,-269.1963806152344],\"size\":[335.2314453125,266],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[3734],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[3736],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[3737],\"slot_index\":2}],\"properties\":{\"Node name for S&R\":\"ClownModelLoader\"},\"widgets_values\":[\"sd3.5_medium.safetensors\",\"default\",\"clip_g_sd35.safetensors\",\"clip_l_sd35.safetensors\",\"t5xxl_fp16.safetensors\",\".none\",\"sd3\",\"sd35_vae.safetensors\"]},{\"id\":1378,\"type\":\"Reroute\",\"pos\":[13184.07421875,533.128662109375],\"size\":[75,26],\"flags\":{},\"order\":24,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3751}],\"outputs\":[{\"name\":\"\",\"type\":\"IMAGE\",\"links\":[3742],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1333,\"type\":\"CLIPTextEncode\",\"pos\":[13688.255859375,-44.393089294433594],\"size\":[280.6252746582031,164.06936645507812],\"flags\":{\"collapsed\":false},\"order\":26,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":3581}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[3602,3626],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\",\"cnr_id\":\"comfy-core\",\"ver\":\"0.3.29\"},\"widgets_values\":[\"evil blacklight mountains by a frozen lake at night at night, wild dangerous looking illustration ,dark pop art style, glowing inverted blackness, nothing\"]},{\"id\":980,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[14378.255859375,-64.39308166503906],\"size\":[340.20001220703125,570],\"flags\":{},\"order\":36,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":3626},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":3627},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":3578},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":3763},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":3533},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":3707},{\"name\":\"options 3\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3698],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_5s\",1,7,\"resample\",true]},{\"id\":907,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[14008.255859375,-64.39308166503906],\"size\":[340.55120849609375,666.8208618164062],\"flags\":{},\"order\":35,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":3765},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":3602},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":2882},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":2983},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":3708},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3578],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[0.5,\"multistep/res_2m\",\"beta57\",20,14,1,1,202,\"fixed\",\"unsample\",true]},{\"id\":981,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[14758.255859375,-64.39308166503906],\"size\":[340.20001220703125,510],\"flags\":{},\"order\":37,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":3698},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[3469],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_5s\",-1,7,\"resample\",true]},{\"id\":1373,\"type\":\"LoadImage\",\"pos\":[12835.318359375,168.2541046142578],\"size\":[315,314],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3747],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"title\":\"Load Image (Composition)\",\"properties\":{\"Node name for 
S&R\":\"LoadImage\"},\"widgets_values\":[\"ComfyUI_00492_.png\",\"image\"]},{\"id\":431,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[13212.6708984375,-154.3930206298828],\"size\":[260.3999938964844,126],\"flags\":{},\"order\":31,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":3735},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":1398}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[3764],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ModelSamplingAdvancedResolution\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":13,\"type\":\"Reroute\",\"pos\":[13508.9013671875,-109.2831802368164],\"size\":[75,26],\"flags\":{},\"order\":33,\"mode\":0,\"inputs\":[{\"name\":\"\",\"type\":\"*\",\"link\":3764}],\"outputs\":[{\"name\":\"\",\"type\":\"MODEL\",\"links\":[3765],\"slot_index\":0}],\"properties\":{\"showOutputText\":false,\"horizontal\":false}},{\"id\":1387,\"type\":\"ReSD35Patcher\",\"pos\":[13242.98046875,-303.1613464355469],\"size\":[210,82],\"flags\":{},\"order\":20,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":3734}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[3735],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ReSD35Patcher\"},\"widgets_values\":[\"float64\",true]},{\"id\":1308,\"type\":\"ClownGuide_Style_Beta\",\"pos\":[14122.4169921875,684.2660522460938],\"size\":[246.31312561035156,286],\"flags\":{},\"order\":32,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":3709},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":3740}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[3762],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Style_Beta\"},\"widgets_values\":[\"positive\",\"scattersort\",1,1,\"constant\",0,-1,false]},{\"id\":1389,\"type\":\"ClownGuide_Style_TileSize\",\"pos\":[14761.21484375,704.8385009765625],\"size\":[223.3114471435547,106],\"flags\":{},\"order\":34,\"mode\":0,\"inputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":3762}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[3763],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Style_TileSize\"},\"widgets_values\":[256,192,64]},{\"id\":1400,\"type\":\"Note\",\"pos\":[14773.240234375,866.2615966796875],\"size\":[298.4509582519531,104.02301025390625],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Your image dimensions need to be neatly divisible by these tile dimensions or you will get an error. This node currently will only have an effect with \\\"scattersort\\\". 
It will cause the image to follow your style reference's composition as well.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1376,\"type\":\"Note\",\"pos\":[13703.93359375,509.9842529296875],\"size\":[261.9539489746094,88],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Increase or decrease weight in ClownGuide to alter adherence to the input image.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1318,\"type\":\"ClownGuide_Beta\",\"pos\":[13823.8046875,679.1676025390625],\"size\":[263.102783203125,290],\"flags\":{},\"order\":29,\"mode\":4,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":3710},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[3708,3740],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Beta\"},\"widgets_values\":[\"inversion\",false,false,0.5,1,\"constant\",0,-1,false]},{\"id\":1401,\"type\":\"Note\",\"pos\":[13818.6318359375,1056.417724609375],\"size\":[271.7456970214844,88],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"This bypassed node can improve adherence to the composition, but the tradeoff is less movement with the style.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1402,\"type\":\"Note\",\"pos\":[14120.05859375,1058.747314453125],\"size\":[271.7456970214844,88],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"WCT is slower, but also an excellent style mode.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1390,\"type\":\"LoadImage\",\"pos\":[12836.228515625,550.88427734375],\"size\":[315,314],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3751],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"title\":\"Load Image (Style Guide)\",\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"6a985aaa-8a95-4382-97a9-91cdf96f43d3-Moraine_Lake_Dennis_Frates_Alamy_Stock_Photo.jpg\",\"image\"]},{\"id\":1403,\"type\":\"Note\",\"pos\":[12890.732421875,-557.807373046875],\"size\":[271.7456970214844,88],\"flags\":{},\"order\":13,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"If you wish to use another model, just load it in the ClownModelLoader (which is an efficiency node) or via your usual loader nodes. There is a Flux loader specifically for loading Redux as well. 
\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1405,\"type\":\"Note\",\"pos\":[12480.912109375,-186.05596923828125],\"size\":[271.7456970214844,88],\"flags\":{},\"order\":14,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"If you load the wrong clip, you may get some very strange errors from ComfyUI about an \\\"attn_mask\\\" etc.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1404,\"type\":\"Note\",\"pos\":[13214.4140625,-591.9750366210938],\"size\":[561.9423828125,149.42193603515625],\"flags\":{},\"order\":15,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"You will need to use the appropriate patcher node to use other models.\\n\\nSD1.5, SDXL: ReSDPatcher\\nStable Cascade: natively supported by https://github.com/ClownsharkBatwing/UltraCascade\\nSD3.5: ReSD3.5Patcher\\nFlux: ReFluxPatcher\\nHiDream: ReHiDreamPatcher\\nAuraFlow: ReAuraPatcher\\nWAN: ReWanPatcher\\nLTXV: ReLTXVPatcher\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1406,\"type\":\"Note\",\"pos\":[14420.2861328125,-528.9069213867188],\"size\":[261.9539489746094,88],\"flags\":{},\"order\":16,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"res_5s is a very high quality sampler that can really help SD3.5M become a much more coherent model. It is slow, however. Try res_2s or even res_2m if you want more speed.\"],\"color\":\"#322\",\"bgcolor\":\"#533\"},{\"id\":1371,\"type\":\"Image Repeat Tile To Size\",\"pos\":[13345.26171875,497.8262939453125],\"size\":[210,146],\"flags\":{\"collapsed\":true},\"order\":27,\"mode\":4,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":3726},{\"name\":\"width\",\"type\":\"INT\",\"pos\":[10,36],\"widget\":{\"name\":\"width\"},\"link\":3730},{\"name\":\"height\",\"type\":\"INT\",\"pos\":[10,60],\"widget\":{\"name\":\"height\"},\"link\":3731}],\"outputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"links\":[3727],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"Image Repeat Tile To Size\"},\"widgets_values\":[1024,1024,true]},{\"id\":1407,\"type\":\"Note\",\"pos\":[13314.3076171875,171.45277404785156],\"size\":[271.7456970214844,88],\"flags\":{},\"order\":17,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Enable the bypassed ImageRepeatToTile node if you're using Flux and getting blurry outputs.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1382,\"type\":\"Note\",\"pos\":[14718.0498046875,-295.4144592285156],\"size\":[288.7483215332031,156.81048583984375],\"flags\":{},\"order\":18,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Increasing cycles will increase the amount of change, but take longer.\\n\\nCycles will rerun the same step over and over, forwards and backwards, iteratively refining an image at a controlled noise level.\\n\\nTry reducing cycles if you want to stay very close to the original composition.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":1317,\"type\":\"ClownOptions_Cycles_Beta\",\"pos\":[14418.048828125,-327.3294982910156],\"size\":[265.2884826660156,202],\"flags\":{},\"order\":19,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[3533],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ClownOptions_Cycles_Beta\"},\"widgets_values\":[5,1,0.5,\"none\",-1,7,true]},{\"id\":7,\"type\":\"VAEEncodeAdvanced\",\"pos\":[13343.19140625,556.8784790039062],\"size\":[261.2217712402344,298],\"flags\":{\"collapsed\":false},\"order\":28,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":3742},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":3727},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":18},{\"name\":\"width\",\"type\":\"INT\",\"pos\":[10,160],\"widget\":{\"name\":\"width\"},\"link\":3732},{\"name\":\"height\",\"type\":\"INT\",\"pos\":[10,184],\"widget\":{\"name\":\"height\"},\"link\":3733}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[2983,3710,3766],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":[3709],\"slot_index\":1},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":[],\"slot_index\":2},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[1398],\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":[],\"slot_index\":4},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":[],\"slot_index\":5}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\",\"cnr_id\":\"RES4LYF\",\"ver\":\"5ce9b5a77c227bf864e447a1e65305bf6cada5c2\"},\"widgets_values\":[\"false\",1344,768,\"red\",false,\"16_channels\"]},{\"id\":1408,\"type\":\"VAEDecode\",\"pos\":[15377.6826171875,-315.0729064941406],\"size\":[210,46],\"flags\":{\"collapsed\":true},\"order\":30,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":3766},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":3767}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[3768],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"VAEDecode\"}}],\"links\":[[18,14,0,7,4,\"VAE\"],[1398,7,3,431,1,\"LATENT\"],[2696,14,0,908,1,\"VAE\"],[2697,908,0,909,0,\"IMAGE\"],[2881,490,0,970,0,\"CLIP\"],[2882,970,0,907,2,\"CONDITIONING\"],[2983,7,0,907,3,\"LATENT\"],[3469,981,0,908,0,\"LATENT\"],[3533,1317,0,980,6,\"OPTIONS\"],[3578,907,0,980,4,\"LATENT\"],[3581,490,0,1333,0,\"CLIP\"],[3602,1333,0,907,1,\"CONDITIONING\"],[3626,1333,0,980,1,\"CONDITIONING\"],[3627,970,0,980,2,\"CONDITIONING\"],[3698,980,0,981,4,\"LATENT\"],[3707,1328,0,980,7,\"OPTIONS\"],[3708,1318,0,907,5,\"GUIDES\"],[3709,7,1,1308,0,\"LATENT\"],[3710,7,0,1318,0,\"LATENT\"],[3720,908,0,1377,0,\"IMAGE\"],[3726,1379,0,1371,0,\"IMAGE\"],[3727,1371,0,7,1,\"IMAGE\"],[3730,1380,0,1371,1,\"INT\"],[3731,1380,1,1371,2,\"INT\"],[3732,1380,0,7,5,\"INT\"],[3733,1380,1,7,6,\"INT\"],[3734,1386,0,1387,0,\"MODEL\"],[3735,1387,0,431,0,\"MODEL\"],[3736,1386,1,490,0,\"*\"],[3737,1386,2,14,0,\"*\"],[3740,1318,0,1308,3,\"GUIDES\"],[3742,1378,0,7,0,\"IMAGE\"],[3747,1373,0,1379,0,\"*\"],[3751,1390,0,1378,0,\"*\"],[3762,1308,0,1389,0,\"GUIDES\"],[3763,1389,0,980,5,\"GUIDES\"],[3764,431,0,13,0,\"*\"],[3765,13,0,907,0,\"MODEL\"],[3766,7,0,1408,0,\"LATENT\"],[3767,14,0,1408,1,\"VAE\"],[3768,1408,0,1377,1,\"IMAGE\"]],\"groups\":[{\"id\":1,\"title\":\"Model Loaders\",\"bounding\":[12796.72265625,-401.9004211425781,822.762451171875,436.0693359375],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":2,\"title\":\"Sampling\",\"bounding\":[13652.6533203125,-402.70721435546875,1470.8076171875,1409.0289306640625],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":3,\"title\":\"Input Prep\",\"bounding\":[12797.1396484375,77.69412231445312,817.4218139648438,820.6239624023438],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}},{\"id\":4,\"title\":\"Save and Compare\",\"bounding\":[15180.705078125,-399.09112548828125,1050.6468505859375,615.8845825195312],\"color\":\"#3f789e\",\"font_size\":24,\"flags\":{}}],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.188365497732567,\"offset\":[-11346.93636409885,735.4056846100609]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"ue_links\":[],\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true},\"version\":0.4}"
  },
  {
    "path": "example_workflows/ultracascade txt2img style transfer.json",
    "content": "{\"last_node_id\":43,\"last_link_id\":52,\"nodes\":[{\"id\":1,\"type\":\"VAEDecode\",\"pos\":[2240,3610],\"size\":[210,46],\"flags\":{\"collapsed\":false},\"order\":37,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":2,\"slot_index\":1}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"shape\":3,\"links\":[5],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\"},\"widgets_values\":[]},{\"id\":2,\"type\":\"LoraLoader\",\"pos\":[-24.50164031982422,3718.225341796875],\"size\":[359.7619323730469,126],\"flags\":{},\"order\":18,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":3},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":4}],\"outputs\":[{\"name\":\"MODEL\",\"localized_name\":\"MODEL\",\"type\":\"MODEL\",\"links\":[7],\"slot_index\":0},{\"name\":\"CLIP\",\"localized_name\":\"CLIP\",\"type\":\"CLIP\",\"links\":[6,8],\"slot_index\":1}],\"properties\":{\"Node name for S&R\":\"LoraLoader\"},\"widgets_values\":[\"csbw_cascade_dark_ema.safetensors\",1,1]},{\"id\":4,\"type\":\"SharkOptions_UltraCascade_Latent_Beta\",\"pos\":[1890,4480],\"size\":[310.79998779296875,82],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[22],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SharkOptions_UltraCascade_Latent_Beta\"},\"widgets_values\":[1536,1536]},{\"id\":5,\"type\":\"SharkOptions_UltraCascade_Latent_Beta\",\"pos\":[797.6149291992188,4484.87158203125],\"size\":[310.79998779296875,82],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[12],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SharkOptions_UltraCascade_Latent_Beta\"},\"widgets_values\":[24,24]},{\"id\":6,\"type\":\"SharkOptions_UltraCascade_Latent_Beta\",\"pos\":[1157.109375,4484.87158203125],\"size\":[310.79998779296875,82],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[17],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SharkOptions_UltraCascade_Latent_Beta\"},\"widgets_values\":[36,36]},{\"id\":8,\"type\":\"VAELoader\",\"pos\":[1900,3600],\"size\":[294.6280212402344,58],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"VAE\",\"localized_name\":\"VAE\",\"type\":\"VAE\",\"links\":[2,51],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAELoader\"},\"widgets_values\":[\"stage_a_ft_hq.safetensors\"]},{\"id\":10,\"type\":\"UltraCascade_Loader\",\"pos\":[-394.08612060546875,3670.32373046875],\"size\":[345.5117492675781,82.95540618896484],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MODEL\",\"localized_name\":\"MODEL\",\"type\":\"MODEL\",\"shape\":3,\"links\":[3],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"UltraCascade_Loader\"},\"widgets_values\":[\"stage_c_bf16.safetensors\",\"ultrapixel_t2i.safetensors\"]},{\"id\":13,\"type\":\"CLIPTextEncode\",\"pos\":[355.95135498046875,3972.858154296875],\"size\":[356.2470703125,110.6326904296875],\"flags\":{},\"order\":25,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":8}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[11],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"low quality, bad quality, low detail, blurry, unsharp\"]},{\"id\":9,\"type\":\"CLIPLoader\",\"pos\":[-394.50164794921875,3810.115478515625],\"size\":[344.635498046875,98],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"CLIP\",\"localized_name\":\"CLIP\",\"type\":\"CLIP\",\"links\":[4],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPLoader\"},\"widgets_values\":[\"cascade_text_encoder.safetensors\",\"stable_cascade\",\"default\"]},{\"id\":20,\"type\":\"VAELoader\",\"pos\":[-376.8145751953125,3973.57080078125],\"size\":[315,58],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"VAE\",\"localized_name\":\"VAE\",\"type\":\"VAE\",\"links\":[24,25]}],\"properties\":{\"Node name for S&R\":\"VAELoader\"},\"widgets_values\":[\"effnet_encoder.safetensors\"]},{\"id\":22,\"type\":\"UltraCascade_StageC_VAEEncode_Exact\",\"pos\":[-140,4520],\"size\":[302.3999938964844,102],\"flags\":{},\"order\":21,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":34},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":25}],\"outputs\":[{\"name\":\"stage_c\",\"localized_name\":\"stage_c\",\"type\":\"LATENT\",\"links\":[31,32],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"UltraCascade_StageC_VAEEncode_Exact\"},\"widgets_values\":[36,36]},{\"id\":19,\"type\":\"UltraCascade_StageC_VAEEncode_Exact\",\"pos\":[-140,4160],\"size\":[302.3999938964844,102],\"flags\":{},\"order\":20,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":33},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":24}],\"outputs\":[{\"name\":\"stage_c\",\"localized_name\":\"stage_c\",\"type\":\"LATENT\",\"links\":[27,28],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"UltraCascade_StageC_VAEEncode_Exact\"},\"widgets_values\":[24,24]},{\"id\":17,\"type\":\"ClownGuide_Style_Beta\",\"pos\":[190,4160],\"size\":[244.26441955566406,286],\"flags\":{},\"order\":26,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":27},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[23],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ClownGuide_Style_Beta\"},\"widgets_values\":[\"positive\",\"WCT\",1,1,\"constant\",0,-1,false]},{\"id\":18,\"type\":\"ClownGuide_Style_Beta\",\"pos\":[470,4160],\"size\":[244.26441955566406,286],\"flags\":{},\"order\":29,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":28},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":23}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[29],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Style_Beta\"},\"widgets_values\":[\"negative\",\"WCT\",1,1,\"constant\",0,-1,false]},{\"id\":12,\"type\":\"UltraCascade_PerturbedAttentionGuidance\",\"pos\":[361.78070068359375,3621.58740234375],\"size\":[344.3999938964844,58],\"flags\":{},\"order\":23,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":7}],\"outputs\":[{\"name\":\"MODEL\",\"localized_name\":\"MODEL\",\"type\":\"MODEL\",\"links\":[9],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"UltraCascade_PerturbedAttentionGuidance\"},\"widgets_values\":[3]},{\"id\":3,\"type\":\"SaveImage\",\"pos\":[2240,3720],\"size\":[753.4503784179688,734.7869262695312],\"flags\":{},\"order\":38,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":5}],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ComfyUI\"]},{\"id\":27,\"type\":\"ClownOptions_Cycles_Beta\",\"pos\":[1158.6995849609375,3539.621337890625],\"size\":[315,130],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[]}],\"properties\":{\"Node name for S&R\":\"ClownOptions_Cycles_Beta\"},\"widgets_values\":[10,1,0.5,5.5]},{\"id\":21,\"type\":\"ClownGuide_Style_Beta\",\"pos\":[190,4520],\"size\":[244.26441955566406,286],\"flags\":{},\"order\":27,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":32},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[26],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Style_Beta\"},\"widgets_values\":[\"positive\",\"WCT\",1,1,\"constant\",0,20,false]},{\"id\":11,\"type\":\"CLIPTextEncode\",\"pos\":[359.33685302734375,3742.75537109375],\"size\":[351.592529296875,173.00360107421875],\"flags\":{},\"order\":24,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":6}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[10],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"impasto oil painting by Yayoi Kusama and Lisa Frank, thick paint textures, tunning contrasts at night with stylish roughly drawn thick black 
lines, a nuclear explosion destroying a city, its towering wide glowing nuclear mushroom cloud enveloping the entire skyline, the nuclear fireball lighting up the dark sky\"]},{\"id\":7,\"type\":\"UNETLoader\",\"pos\":[1520,3580],\"size\":[356.544677734375,82],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MODEL\",\"localized_name\":\"MODEL\",\"type\":\"MODEL\",\"links\":[40],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"UNETLoader\"},\"widgets_values\":[\"stage_b_lite_CSBW_v1.1.safetensors\",\"default\"]},{\"id\":31,\"type\":\"UltraCascade_StageB_Patcher\",\"pos\":[1901.8192138671875,3508.625244140625],\"size\":[235.1999969482422,26],\"flags\":{},\"order\":19,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":40}],\"outputs\":[{\"name\":\"MODEL\",\"localized_name\":\"MODEL\",\"type\":\"MODEL\",\"links\":[41],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"UltraCascade_StageB_Patcher\"},\"widgets_values\":[]},{\"id\":15,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1155.5926513671875,3724.48974609375],\"size\":[314.421142578125,693.9824829101562],\"flags\":{},\"order\":34,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":16},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":30},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":17},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[35],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_3s\",\"beta57\",30,1,1,5.5,100,\"fixed\",\"standard\",true]},{\"id\":26,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[1520.32470703125,3723.215087890625],\"size\":[315,510],\"flags\":{},\"order\":36,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":35},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":37},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_3s\",-1,5.5,\"resample\",true]},{\"id\":14,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[796.9224243164062,3725.34375],\"size\":[311.41375732421875,693.9824829101562],\"flags\":{},\"order\":32,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":9},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":10},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":11},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":29},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":12},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[16,43],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_3s\",\"beta57\",30,-1,1,5.5,1,\"fixed\",\"standard\",true]},{\"id\":23,\"type\":\"ClownGuide_Style_Beta\",\"pos\":[470,4520],\"size\":[244.26441955566406,286],\"flags\":{},\"order\":30,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":31},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":26}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[30,37],\"slot_index\":0}],\"properties\":{\"Node 
name for S&R\":\"ClownGuide_Style_Beta\"},\"widgets_values\":[\"negative\",\"WCT\",1,1,\"constant\",0,20,false]},{\"id\":24,\"type\":\"LoadImage\",\"pos\":[-497.6204833984375,4160.34375],\"size\":[315,314],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[33,34,49],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"ChatGPT Image May 13, 2025, 09_38_14 AM.png\",\"image\"]},{\"id\":16,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1890,3720],\"size\":[309.2452087402344,691.814208984375],\"flags\":{},\"order\":35,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":41},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":43},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":52},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":22},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_3s\",\"beta57\",30,-1,1,1,-1,\"fixed\",\"standard\",true]},{\"id\":38,\"type\":\"Note\",\"pos\":[-398.6913757324219,3401.711669921875],\"size\":[336.9422302246094,88],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Check out the \\\"ultracascade txt2img\\\" workflow for non-style related explanations of this workflow.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":39,\"type\":\"Note\",\"pos\":[-515.8250732421875,4543.421875],\"size\":[342.7132263183594,118.7740249633789],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"This image serves as a style/color palette reference.\\n\\nInclude something about the style in the prompt (painting, illustration, pen drawing, etc.) 
or use ClipVision (which is very good with Cascade) if you wish to ensure that more than just the color palette is transferred.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":32,\"type\":\"ClownGuide_Style_Beta\",\"pos\":[2040,4710],\"size\":[236.5709686279297,286],\"flags\":{},\"order\":33,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":47},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":45}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[52],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Style_Beta\"},\"widgets_values\":[\"negative\",\"WCT\",1,1,\"constant\",0,-1,false]},{\"id\":33,\"type\":\"ClownGuide_Style_Beta\",\"pos\":[1775.3868408203125,4709.03857421875],\"size\":[238.49423217773438,286],\"flags\":{},\"order\":31,\"mode\":0,\"inputs\":[{\"name\":\"guide\",\"localized_name\":\"guide\",\"type\":\"LATENT\",\"shape\":7,\"link\":48},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"shape\":7,\"link\":null},{\"name\":\"weights\",\"localized_name\":\"weights\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"links\":[45],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ClownGuide_Style_Beta\"},\"widgets_values\":[\"positive\",\"WCT\",1,1,\"constant\",0,-1,false]},{\"id\":34,\"type\":\"VAEEncode\",\"pos\":[1598.26904296875,4709.12841796875],\"size\":[140,46],\"flags\":{},\"order\":28,\"mode\":0,\"inputs\":[{\"name\":\"pixels\",\"localized_name\":\"pixels\",\"type\":\"IMAGE\",\"link\":50},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":51}],\"outputs\":[{\"name\":\"LATENT\",\"localized_name\":\"LATENT\",\"type\":\"LATENT\",\"links\":[47,48],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEEncode\"},\"widgets_values\":[]},{\"id\":35,\"type\":\"ImageResize+\",\"pos\":[1359.3343505859375,4709.12890625],\"size\":[210,218],\"flags\":{},\"order\":22,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":49}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[50],\"slot_index\":0},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ImageResize+\"},\"widgets_values\":[1536,1536,\"lanczos\",\"stretch\",\"always\",0]},{\"id\":40,\"type\":\"Note\",\"pos\":[778.7919921875,4684.98095703125],\"size\":[342.7132263183594,118.7740249633789],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Set end_step to -1 (which means \\\"infinity\\\", \\\"run until the end\\\") or 10000, etc. if you wish to use the style guide for all steps. Sometimes this can cause a bit of a CFG burned look, so mileage may vary. 
\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":36,\"type\":\"Note\",\"pos\":[1889.730712890625,3353.24462890625],\"size\":[314.823486328125,88],\"flags\":{},\"order\":13,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"This patcher is only needed if you wish to use the style guide with stage B. It'll improve adherence to the colors in the style guide.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":37,\"type\":\"Note\",\"pos\":[1153.2738037109375,3351.5126953125],\"size\":[410.0306701660156,88],\"flags\":{},\"order\":14,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Connect ClownOptions Cycles to the node below to increase the effect even more. It will cause it to rerun the single step this node is set to run (steps_to_run == 1), by unsampling, sampling, unsampling, sampling, etc. in a loop.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":42,\"type\":\"SharkOptions_Beta\",\"pos\":[478.9419860839844,3353.24462890625],\"size\":[230.37158203125,130],\"flags\":{},\"order\":15,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"SharkOptions_Beta\"},\"widgets_values\":[\"perlin\",1,1,false]},{\"id\":43,\"type\":\"Note\",\"pos\":[97.72860717773438,3353.8193359375],\"size\":[336.9422302246094,88],\"flags\":{},\"order\":17,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"TIP: Try connecting the options nodes to the right to some of the samplers. It'll replace the default noise types with perlin, which can be quite good with Cascade.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":41,\"type\":\"ClownOptions_SDE_Beta\",\"pos\":[801.105712890625,3352.283203125],\"size\":[301.5363464355469,266],\"flags\":{},\"order\":16,\"mode\":0,\"inputs\":[{\"name\":\"etas\",\"localized_name\":\"etas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"etas_substep\",\"localized_name\":\"etas_substep\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownOptions_SDE_Beta\"},\"widgets_values\":[\"perlin\",\"perlin\",\"hard\",\"hard\",0.5,0.5,-1,\"fixed\"]}],\"links\":[[1,16,0,1,0,\"LATENT\"],[2,8,0,1,1,\"VAE\"],[3,10,0,2,0,\"MODEL\"],[4,9,0,2,1,\"CLIP\"],[5,1,0,3,0,\"IMAGE\"],[6,2,1,11,0,\"CLIP\"],[7,2,0,12,0,\"MODEL\"],[8,2,1,13,0,\"CLIP\"],[9,12,0,14,0,\"MODEL\"],[10,11,0,14,1,\"CONDITIONING\"],[11,13,0,14,2,\"CONDITIONING\"],[12,5,0,14,6,\"OPTIONS\"],[16,14,0,15,3,\"LATENT\"],[17,6,0,15,6,\"OPTIONS\"],[22,4,0,16,6,\"OPTIONS\"],[23,17,0,18,3,\"GUIDES\"],[24,20,0,19,1,\"VAE\"],[25,20,0,22,1,\"VAE\"],[26,21,0,23,3,\"GUIDES\"],[27,19,0,17,0,\"LATENT\"],[28,19,0,18,0,\"LATENT\"],[29,18,0,14,5,\"GUIDES\"],[30,23,0,15,5,\"GUIDES\"],[31,22,0,23,0,\"LATENT\"],[32,22,0,21,0,\"LATENT\"],[33,24,0,19,0,\"IMAGE\"],[34,24,0,22,0,\"IMAGE\"],[35,15,0,26,4,\"LATENT\"],[37,23,0,26,5,\"GUIDES\"],[40,7,0,31,0,\"MODEL\"],[41,31,0,16,0,\"MODEL\"],[43,14,0,16,3,\"LATENT\"],[45,33,0,32,3,\"GUIDES\"],[47,34,0,32,0,\"LATENT\"],[48,34,0,33,0,\"LATENT\"],[49,24,0,35,0,\"IMAGE\"],[50,35,0,34,0,\"IMAGE\"],[51,8,0,34,1,\"VAE\"],[52,32,0,16,5,\"GUIDES\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.2100000000000006,\"offset\":[2416.6858398230765,-3132.1930084977703]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0},\"version\":0.4}"
  },
  {
    "path": "example_workflows/ultracascade txt2img.json",
    "content": "{\"last_node_id\":33,\"last_link_id\":23,\"nodes\":[{\"id\":1,\"type\":\"VAEDecode\",\"pos\":[1867.32421875,3610.962158203125],\"size\":[210,46],\"flags\":{\"collapsed\":false},\"order\":29,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":2,\"slot_index\":1}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"shape\":3,\"links\":[5],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\"},\"widgets_values\":[]},{\"id\":2,\"type\":\"LoraLoader\",\"pos\":[-24.50164031982422,3718.225341796875],\"size\":[359.7619323730469,126],\"flags\":{},\"order\":22,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":3},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":4}],\"outputs\":[{\"name\":\"MODEL\",\"localized_name\":\"MODEL\",\"type\":\"MODEL\",\"links\":[7],\"slot_index\":0},{\"name\":\"CLIP\",\"localized_name\":\"CLIP\",\"type\":\"CLIP\",\"links\":[6,8],\"slot_index\":1}],\"properties\":{\"Node name for S&R\":\"LoraLoader\"},\"widgets_values\":[\"csbw_cascade_dark_ema.safetensors\",1,1]},{\"id\":4,\"type\":\"SharkOptions_UltraCascade_Latent_Beta\",\"pos\":[1522.302734375,4481.47900390625],\"size\":[310.79998779296875,82],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[22],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SharkOptions_UltraCascade_Latent_Beta\"},\"widgets_values\":[1536,1536]},{\"id\":5,\"type\":\"SharkOptions_UltraCascade_Latent_Beta\",\"pos\":[797.6149291992188,4484.87158203125],\"size\":[310.79998779296875,82],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[12],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SharkOptions_UltraCascade_Latent_Beta\"},\"widgets_values\":[24,24]},{\"id\":6,\"type\":\"SharkOptions_UltraCascade_Latent_Beta\",\"pos\":[1157.109375,4484.87158203125],\"size\":[310.79998779296875,82],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[17],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"SharkOptions_UltraCascade_Latent_Beta\"},\"widgets_values\":[36,36]},{\"id\":7,\"type\":\"UNETLoader\",\"pos\":[1149.8580322265625,3582.3779296875],\"size\":[356.544677734375,82],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MODEL\",\"localized_name\":\"MODEL\",\"type\":\"MODEL\",\"links\":[18],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"UNETLoader\"},\"widgets_values\":[\"stage_b_lite_CSBW_v1.1.safetensors\",\"default\"]},{\"id\":8,\"type\":\"VAELoader\",\"pos\":[1533.0584716796875,3605.814697265625],\"size\":[294.6280212402344,58],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"VAE\",\"localized_name\":\"VAE\",\"type\":\"VAE\",\"links\":[2],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"VAELoader\"},\"widgets_values\":[\"stage_a_ft_hq.safetensors\"]},{\"id\":10,\"type\":\"UltraCascade_Loader\",\"pos\":[-394.08612060546875,3670.32373046875],\"size\":[345.5117492675781,82.95540618896484],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MODEL\",\"localized_name\":\"MODEL\",\"type\":\"MODEL\",\"shape\":3,\"links\":[3],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"UltraCascade_Loader\"},\"widgets_values\":[\"stage_c_bf16.safetensors\",\"ultrapixel_t2i.safetensors\"]},{\"id\":11,\"type\":\"CLIPTextEncode\",\"pos\":[359.33685302734375,3742.75537109375],\"size\":[351.592529296875,173.00360107421875],\"flags\":{},\"order\":24,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":6}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[10,14,19],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"impasto oil painting by Yayoi Kusama and Lisa Frank, thick paint textures, tunning contrasts at  night with stylish roughly drawn thick black lines, a nuclear explosion destroying a city, its towering wide glowing nuclear mushroom cloud enveloping the entire skyline, the nuclear fireball lighting up the dark sky\"]},{\"id\":12,\"type\":\"UltraCascade_PerturbedAttentionGuidance\",\"pos\":[361.78070068359375,3621.58740234375],\"size\":[344.3999938964844,58],\"flags\":{},\"order\":23,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":7}],\"outputs\":[{\"name\":\"MODEL\",\"localized_name\":\"MODEL\",\"type\":\"MODEL\",\"links\":[9,13],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"UltraCascade_PerturbedAttentionGuidance\"},\"widgets_values\":[3]},{\"id\":13,\"type\":\"CLIPTextEncode\",\"pos\":[355.95135498046875,3972.858154296875],\"size\":[356.2470703125,110.6326904296875],\"flags\":{},\"order\":25,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":8}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[11,15,20],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"low quality, bad quality, low detail, blurry, unsharp\"]},{\"id\":14,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[796.9224243164062,3725.34375],\"size\":[311.41375732421875,693.9824829101562],\"flags\":{},\"order\":26,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":9},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":10},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":11},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":12},{\"name\":\"options 
2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[16],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_3s\",\"beta57\",30,-1,1,5.5,1,\"fixed\",\"standard\",true]},{\"id\":16,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1522.29052734375,3722.670654296875],\"size\":[309.2452087402344,691.814208984375],\"flags\":{},\"order\":28,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":18},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":19},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":20},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":21},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":22},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_3s\",\"beta57\",30,-1,1,1,-1,\"fixed\",\"standard\",true]},{\"id\":9,\"type\":\"CLIPLoader\",\"pos\":[-394.50164794921875,3810.115478515625],\"size\":[344.635498046875,98],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"CLIP\",\"localized_name\":\"CLIP\",\"type\":\"CLIP\",\"links\":[4],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPLoader\"},\"widgets_values\":[\"cascade_text_encoder.safetensors\",\"stable_cascade\",\"default\"]},{\"id\":15,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1155.5926513671875,3724.48974609375],\"size\":[314.421142578125,693.9824829101562],\"flags\":{},\"order\":27,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":13},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":14},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":15},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":16},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":17},{\"name\":\"options 
2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[21],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_3s\",\"beta57\",30,-1,1,5.5,100,\"fixed\",\"standard\",true]},{\"id\":20,\"type\":\"Note\",\"pos\":[1150,4640],\"size\":[331.63720703125,415.29815673828125],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Stage UP: a patched version of Stable Cascade stage C (\\\"UltraPixel\\\"). \\n\\nThe key with these dimensions is to keep the aspect ratio the same as the stage C latent. Typically, best results are with a 1.5x upscale. 2.0x works, but will result in somewhat more issues with doubling, and can be a lot slower. However, the detail level will also be very high.\\n\\nSome viable resolutions are listed below. Asterisks signify ones that have been verified to work particularly well.\\n\\n32x32\\n36x36 **\\n40x40\\n42x42\\n48x48 *\\n\\n40x24\\n50x30\\n60x36 **\\n70x42\\n80x48 *\\n\\n72x36 \\n80x40 *\\n96x48 (very slow!)\\n\\n\\n\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":21,\"type\":\"Note\",\"pos\":[1520,4640],\"size\":[331.63720703125,415.29815673828125],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Stage B: the Stable Cascade superresolution model.\\n\\nAs with stage UP, the key with these dimensions is to keep the aspect ratio the same as the prior latents. Theoretically, any resolution may be used, though some odd distortions can occur when the ideal upscale ratio is not used. It's not entirely clear what those ratios are, so some experimentation may be necessary. \\n\\nSome resolutions that work particularly well are:\\n\\n1536x1536 *\\n2048x2048 *\\n\\n1600x960\\n2560x1536 **\\n2880x1792 *\\n3200x1920\\n\\nIf you use stage B lite, you can hit 4k resolutions without even using more than 12GB of VRAM.\\n\\nIt's highly recommended to use the CSBW finetune of stage B, as it fixes many of the severe artifact problems the original release had.\\n\\nNote: CFG is not needed for this stage!\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":19,\"type\":\"Note\",\"pos\":[780,4640],\"size\":[331.63720703125,415.29815673828125],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Stage C: the original Stable Cascade version. \\n\\nStable Cascade latents are actually quite small: typically, a 1024x1024 image will be generated from a stage C latent that is only 24x24 (for comparison, with SDXL or SD1.5, the dimensions are 128x128). \\n\\n\\\"Compression\\\" is just a shorthand method of determining these dimensions, such as 24x24 (1024 / 42 = 24.38, which means a \\\"compression\\\" of 42).\\n\\nThis poses a problem though: Cascade was only trained on a handful of resolutions. The difference between 24x24 and 25x25 is a significant drop in quality and coherence. 
Therefore, it is best to just set these dimensions directly.\\n\\nThe best trained resolutions are:\\n\\n24x24 > 32x32\\n30x16 > 40x24 \\n\\n48x24 also works, but seems to result in more doubling problems than the others.\\n\\n\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":23,\"type\":\"Note\",\"pos\":[-1140,3810],\"size\":[715.61083984375,89.37511444091797],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Any clip G will do. The Cascade version is available at:\\n\\nhttps://huggingface.co/stabilityai/stable-cascade/blob/main/text_encoder/model.bf16.safetensors\\n\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":22,\"type\":\"Note\",\"pos\":[-1140,3590],\"size\":[717.709228515625,165.61032104492188],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"I recommend the BF16 version of stage C. There is no visible difference vs. the full precision weights, and it halves the disk space requirements.\\n\\nhttps://huggingface.co/stabilityai/stable-cascade/blob/main/stage_c_bf16.safetensors\\n\\nIMPORTANT: The original UltraPixel \\\"safetensors\\\" is not a safetensors at all - it is a PICKLE, where they lazily (at best) changed the file extension to \\\".safetensors\\\"!\\n\\nI converted it to a real safetensors file, and it's available below:\\n\\nhttps://huggingface.co/ClownsharkBatwing/ultrapixel_convert/blob/main/ultrapixel_t2i.safetensors\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":26,\"type\":\"Note\",\"pos\":[570,3250],\"size\":[457.5304870605469,94.27093505859375],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"This is a checkpoint that, for convenience, includes the stage B lite CSBW finetune, clip G, and stage A (the FT_HQ finetune).\\n\\nhttps://huggingface.co/ClownsharkBatwing/CSBW_Style/blob/main/cascade_B-lite_refined_CSBW_v1.1.safetensors\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":27,\"type\":\"Note\",\"pos\":[1050,3420],\"size\":[457.5304870605469,94.27093505859375],\"flags\":{},\"order\":13,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"This is the stage B lite CSBW finetune (model only).\\n\\nhttps://huggingface.co/ClownsharkBatwing/Cascade_Stage_B_CSBW_Refined/blob/main/stage_b_lite_CSBW_v1.1.safetensors\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":25,\"type\":\"Note\",\"pos\":[305.43292236328125,3455.5634765625],\"size\":[457.5304870605469,94.27093505859375],\"flags\":{},\"order\":14,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Perturbed attention guidance (PAG) makes an enormous difference with Stable Cascade stages C and UP. Like CFG, it will double the runtime.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":29,\"type\":\"Note\",\"pos\":[1534.365478515625,3422.38427734375],\"size\":[547.0546875,91.47331237792969],\"flags\":{},\"order\":15,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"This is a finetune of stage A. 
You will get a sharper image, but in images with large white areas, small circular grey halos are sometimes visible.\\n\\nhttps://huggingface.co/madebyollin/stage-a-ft-hq/blob/main/stage_a_ft_hq.safetensors\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":28,\"type\":\"CheckpointLoaderSimple\",\"pos\":[1054.370849609375,3250],\"size\":[452.7829895019531,102.89583587646484],\"flags\":{},\"order\":16,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"MODEL\",\"localized_name\":\"MODEL\",\"type\":\"MODEL\",\"links\":null},{\"name\":\"CLIP\",\"localized_name\":\"CLIP\",\"type\":\"CLIP\",\"links\":null},{\"name\":\"VAE\",\"localized_name\":\"VAE\",\"type\":\"VAE\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"CheckpointLoaderSimple\"},\"widgets_values\":[\"cascade_B-lite_refined_CSBW_v1.1.safetensors\"]},{\"id\":24,\"type\":\"Note\",\"pos\":[-1140,3960],\"size\":[715.61083984375,113.57872772216797],\"flags\":{},\"order\":17,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"The LORA was trained with OneTrainer (https://github.com/Nerogar/OneTrainer) on some of my own SDXL generations. It has deep colors and is strong with wacky paint, illustration, and vector art styles. \\n\\nCascade learns extremely quickly and is very adept with artistic styles (it knows many artist names).\\n\\nhttps://huggingface.co/ClownsharkBatwing/CSBW_Style/blob/main/csbw_cascade_dark_ema.safetensors\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":30,\"type\":\"Note\",\"pos\":[796.0823364257812,3575.965576171875],\"size\":[315.20135498046875,88],\"flags\":{},\"order\":18,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"res_3s can be replaced with res_2s or even res_2m or res_3m (in the multistep folder in the sampler_name dropdown) if more speed is desired.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":33,\"type\":\"Note\",\"pos\":[-220,4190],\"size\":[336.9422302246094,88],\"flags\":{},\"order\":19,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"TIP: Try connecting the options nodes to the right to some of the samplers. 
It'll replace the default noise types with perlin, which can be quite good with Cascade.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":31,\"type\":\"SharkOptions_Beta\",\"pos\":[150,4190],\"size\":[234.2189178466797,130],\"flags\":{},\"order\":20,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"SharkOptions_Beta\"},\"widgets_values\":[\"perlin\",1,1,false]},{\"id\":32,\"type\":\"ClownOptions_SDE_Beta\",\"pos\":[420,4190],\"size\":[281.34088134765625,266],\"flags\":{},\"order\":21,\"mode\":0,\"inputs\":[{\"name\":\"etas\",\"localized_name\":\"etas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"etas_substep\",\"localized_name\":\"etas_substep\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownOptions_SDE_Beta\"},\"widgets_values\":[\"perlin\",\"perlin\",\"hard\",\"hard\",0.5,0.5,-1,\"fixed\"]},{\"id\":3,\"type\":\"SaveImage\",\"pos\":[1871.823974609375,3716.926025390625],\"size\":[670.7464599609375,700.1661987304688],\"flags\":{},\"order\":30,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":5}],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ComfyUI\"]}],\"links\":[[1,16,0,1,0,\"LATENT\"],[2,8,0,1,1,\"VAE\"],[3,10,0,2,0,\"MODEL\"],[4,9,0,2,1,\"CLIP\"],[5,1,0,3,0,\"IMAGE\"],[6,2,1,11,0,\"CLIP\"],[7,2,0,12,0,\"MODEL\"],[8,2,1,13,0,\"CLIP\"],[9,12,0,14,0,\"MODEL\"],[10,11,0,14,1,\"CONDITIONING\"],[11,13,0,14,2,\"CONDITIONING\"],[12,5,0,14,6,\"OPTIONS\"],[13,12,0,15,0,\"MODEL\"],[14,11,0,15,1,\"CONDITIONING\"],[15,13,0,15,2,\"CONDITIONING\"],[16,14,0,15,3,\"LATENT\"],[17,6,0,15,6,\"OPTIONS\"],[18,7,0,16,0,\"MODEL\"],[19,11,0,16,1,\"CONDITIONING\"],[20,13,0,16,2,\"CONDITIONING\"],[21,15,0,16,3,\"LATENT\"],[22,4,0,16,6,\"OPTIONS\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.2100000000000006,\"offset\":[2786.903339088035,-3170.107825364122]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0},\"version\":0.4}"
  },
  {
    "path": "example_workflows/wan img2vid 720p (fp8 fast).json",
    "content": "{\"last_node_id\":67,\"last_link_id\":138,\"nodes\":[{\"id\":56,\"type\":\"PreviewImage\",\"pos\":[480,600],\"size\":[210,246],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":118}],\"outputs\":[],\"properties\":{\"Node name for S&R\":\"PreviewImage\"},\"widgets_values\":[]},{\"id\":8,\"type\":\"VAEDecode\",\"pos\":[1140,80],\"size\":[210,46],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":121},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":137}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[56],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\"},\"widgets_values\":[]},{\"id\":6,\"type\":\"CLIPTextEncode\",\"pos\":[30,20],\"size\":[422.84503173828125,164.31304931640625],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":134}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[97],\"slot_index\":0}],\"title\":\"CLIP Text Encode (Positive Prompt)\",\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"trump and putin kissing, two men in love making out\"],\"color\":\"#232\",\"bgcolor\":\"#353\"},{\"id\":61,\"type\":\"LoadImage\",\"pos\":[-169.0706024169922,588.6607666015625],\"size\":[315,314],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[128],\"slot_index\":0},{\"name\":\"MASK\",\"localized_name\":\"MASK\",\"type\":\"MASK\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"LoadImage\"},\"widgets_values\":[\"pasted/image (371).png\",\"image\"]},{\"id\":55,\"type\":\"ImageResize+\",\"pos\":[190.57818603515625,590.173583984375],\"size\":[251.91366577148438,218],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":128}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[118,119,120],\"slot_index\":0},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":null},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ImageResize+\"},\"widgets_values\":[1280,720,\"nearest\",\"fill / crop\",\"always\",0]},{\"id\":51,\"type\":\"CLIPVisionEncode\",\"pos\":[191.15573120117188,457.861572265625],\"size\":[253.60000610351562,78],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"clip_vision\",\"localized_name\":\"clip_vision\",\"type\":\"CLIP_VISION\",\"link\":94},{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"link\":120}],\"outputs\":[{\"name\":\"CLIP_VISION_OUTPUT\",\"localized_name\":\"CLIP_VISION_OUTPUT\",\"type\":\"CLIP_VISION_OUTPUT\",\"links\":[107],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"CLIPVisionEncode\"},\"widgets_values\":[\"none\"]},{\"id\":7,\"type\":\"CLIPTextEncode\",\"pos\":[29.393102645874023,230.72264099121094],\"size\":[425.27801513671875,180.6060791015625],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":135}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[98],\"slot_index\":0}],\"title\":\"CLIP Text Encode (Negative Prompt)\",\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"Overexposure, static, blurred details, subtitles, paintings, pictures, still, overall gray, worst quality, low quality, JPEG compression residue, ugly, mutilated, redundant fingers, poorly painted hands, poorly painted faces, deformed, disfigured, deformed limbs, fused fingers, cluttered background, three legs, a lot of people in the background, upside down\"],\"color\":\"#322\",\"bgcolor\":\"#533\"},{\"id\":49,\"type\":\"CLIPVisionLoader\",\"pos\":[-169.1327362060547,459.3064880371094],\"size\":[315,58],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"CLIP_VISION\",\"localized_name\":\"CLIP_VISION\",\"type\":\"CLIP_VISION\",\"links\":[94],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPVisionLoader\"},\"widgets_values\":[\"clip_vision_vit_h.safetensors\"]},{\"id\":66,\"type\":\"ClownModelLoader\",\"pos\":[-330.852294921875,28.57785415649414],\"size\":[315,266],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[138],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[134,135],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[136,137],\"slot_index\":2}],\"properties\":{\"Node name for S&R\":\"ClownModelLoader\"},\"widgets_values\":[\"wan2.1_i2v_720p_14B_fp8_e4m3fn.safetensors\",\"fp8_e4m3fn_fast\",\"umt5_xxl_fp8_e4m3fn_scaled.safetensors\",\".none\",\".none\",\".none\",\"wan\",\"wan_2.1_vae.safetensors\"]},{\"id\":54,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[780,190],\"size\":[337.16485595703125,661.9249267578125],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":114},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":115},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":113},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[121],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for 
S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_2m\",\"beta57\",30,-1,1,5.5,0,\"fixed\",\"standard\",true]},{\"id\":65,\"type\":\"TorchCompileModels\",\"pos\":[479.10052490234375,-32.837005615234375],\"size\":[273.09326171875,178],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":138}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"TorchCompileModels\"},\"widgets_values\":[\"inductor\",false,\"default\",false,64,0]},{\"id\":50,\"type\":\"WanImageToVideo\",\"pos\":[478.8801574707031,204.63995361328125],\"size\":[269.6244201660156,210],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"link\":97},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"link\":98},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":136},{\"name\":\"clip_vision_output\",\"localized_name\":\"clip_vision_output\",\"type\":\"CLIP_VISION_OUTPUT\",\"shape\":7,\"link\":107},{\"name\":\"start_image\",\"localized_name\":\"start_image\",\"type\":\"IMAGE\",\"shape\":7,\"link\":119}],\"outputs\":[{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"links\":[114],\"slot_index\":0},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"links\":[115],\"slot_index\":1},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"links\":[113],\"slot_index\":2}],\"properties\":{\"Node name for S&R\":\"WanImageToVideo\"},\"widgets_values\":[1280,720,33,1]},{\"id\":28,\"type\":\"SaveAnimatedWEBP\",\"pos\":[1140,190],\"size\":[595.4246215820312,665.2847290039062],\"flags\":{},\"order\":13,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":56}],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ComfyUI\",16,false,100,\"default\"]},{\"id\":67,\"type\":\"Note\",\"pos\":[208.15240478515625,-120.98509979248047],\"size\":[244.7659149169922,88],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"TorchCompileModels may not work on older GPUs. After the first run, should lead to significant time savings with GPUs such as the 4090.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"}],\"links\":[[56,8,0,28,0,\"IMAGE\"],[94,49,0,51,0,\"CLIP_VISION\"],[97,6,0,50,0,\"CONDITIONING\"],[98,7,0,50,1,\"CONDITIONING\"],[107,51,0,50,3,\"CLIP_VISION_OUTPUT\"],[113,50,2,54,3,\"LATENT\"],[114,50,0,54,1,\"CONDITIONING\"],[115,50,1,54,2,\"CONDITIONING\"],[118,55,0,56,0,\"IMAGE\"],[119,55,0,50,4,\"IMAGE\"],[120,55,0,51,1,\"IMAGE\"],[121,54,0,8,0,\"LATENT\"],[128,61,0,55,0,\"IMAGE\"],[134,66,1,6,0,\"CLIP\"],[135,66,1,7,0,\"CLIP\"],[136,66,2,50,2,\"VAE\"],[137,66,2,8,1,\"VAE\"],[138,66,0,65,0,\"MODEL\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.6105100000000012,\"offset\":[2635.71214060565,417.84191139269006]},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0},\"version\":0.4}"
  },
  {
    "path": "example_workflows/wan txt2img (fp8 fast).json",
    "content": "{\"last_node_id\":698,\"last_link_id\":1748,\"nodes\":[{\"id\":676,\"type\":\"CLIPTextEncode\",\"pos\":[2651.457763671875,139.2773895263672],\"size\":[311.1542663574219,134.35691833496094],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1745}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[1743],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"a woman picks up a coffee cup and smiles, then suddenly throws it out the window in her dirty apartment\"]},{\"id\":7,\"type\":\"CLIPTextEncode\",\"pos\":[2650.5888671875,336.779296875],\"size\":[310.6131286621094,150.69346618652344],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1746}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[1630],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"色调艳丽，过曝，静态，细节模糊不清，字幕，风格，作品，画作，画面，静止，整体发灰，最差质量，低质量，JPEG压缩残留，丑陋的，残缺的，多余的手指，画得不好的手部，画得不好的脸部，畸形的，毁容的，形态畸形的肢体，手指融合，静止不动的画面，杂乱的背景，三条腿，背景人很多，倒着走\"]},{\"id\":666,\"type\":\"EmptyHunyuanLatentVideo\",\"pos\":[2751.183349609375,552.1126708984375],\"size\":[210,130],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"LATENT\",\"localized_name\":\"LATENT\",\"type\":\"LATENT\",\"links\":[1631,1741],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"EmptyHunyuanLatentVideo\"},\"widgets_values\":[480,480,65,1]},{\"id\":696,\"type\":\"ClownModelLoader\",\"pos\":[2220,340],\"size\":[382.9175109863281,266],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1744],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[1745,1746],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[1747],\"slot_index\":2}],\"properties\":{\"Node name for S&R\":\"ClownModelLoader\"},\"widgets_values\":[\"wan2.1_t2v_14B_fp8_e4m3fn.safetensors\",\"fp8_e4m3fn_fast\",\"umt5_xxl_fp8_e4m3fn_scaled.safetensors\",\".none\",\".none\",\".none\",\"wan\",\"wan_2.1_vae.safetensors\"]},{\"id\":698,\"type\":\"Note\",\"pos\":[2347.1943359375,-37.566280364990234],\"size\":[244.7659149169922,88],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"TorchCompileModels may not work on older GPUs. 
After the first run, it should lead to significant time savings with GPUs such as the 4090.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":346,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[2340,140],\"size\":[260.3999938964844,126],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1744},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":1741}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1684,1748],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ModelSamplingAdvancedResolution\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":665,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[3010,140],\"size\":[310.3046875,656.2719116210938],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":1684},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1743},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":1630},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1631},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1643],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0.5,\"multistep/res_3m\",\"beta57\",20,-1,1,5.5,896816,\"fixed\",\"standard\",true]},{\"id\":667,\"type\":\"SaveAnimatedWEBP\",\"pos\":[3360,140],\"size\":[315,366],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":1632}],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ComfyUI\",16,false,100,\"default\"]},{\"id\":697,\"type\":\"TorchCompileModels\",\"pos\":[2673.776611328125,-98.98099517822266],\"size\":[260.8105163574219,178],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1748}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"TorchCompileModels\"},\"widgets_values\":[\"inductor\",false,\"default\",false,64,0]},{\"id\":668,\"type\":\"VAEDecode\",\"pos\":[3359.884521484375,32.89006805419922],\"size\":[210,46],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":1643},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":1747}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1632],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"VAEDecode\"},\"widgets_values\":[]}],\"links\":[[1630,7,0,665,2,\"CONDITIONING\"],[1631,666,0,665,3,\"LATENT\"],[1632,668,0,667,0,\"IMAGE\"],[1643,665,0,668,0,\"LATENT\"],[1684,346,0,665,0,\"MODEL\"],[1741,666,0,346,1,\"LATENT\"],[1743,676,0,665,1,\"CONDITIONING\"],[1744,696,0,346,0,\"MODEL\"],[1745,696,1,676,0,\"CLIP\"],[1746,696,1,7,0,\"CLIP\"],[1747,696,2,668,1,\"VAE\"],[1748,346,0,697,0,\"MODEL\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.6105100000000008,\"offset\":[-558.9420074905141,402.3405679733133]},\"node_versions\":{\"comfy-core\":\"0.3.26\",\"comfyui_controlnet_aux\":\"1e9eac6377c882da8bb360c7544607036904362c\",\"ComfyUI-VideoHelperSuite\":\"c36626c6028faca912eafcedbc71f1d342fb4d2a\"},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true},\"version\":0.4}"
  },
  {
    "path": "example_workflows/wan vid2vid.json",
    "content": "{\"last_node_id\":406,\"last_link_id\":1039,\"nodes\":[{\"id\":7,\"type\":\"CLIPTextEncode\",\"pos\":[971.2105712890625,537.63671875],\"size\":[436.48480224609375,118.3749771118164],\"flags\":{},\"order\":11,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1017}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[832],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"色调艳丽，过曝，静态，细节模糊不清，字幕，风格，作品，画作，画面，静止，整体发灰，最差质量，低质量，JPEG压缩残留，丑陋的，残缺的，多余的手指，画得不好的手部，画得不好的脸部，畸形的，毁容的，形态畸形的肢体，手指融合，静止不动的画面，杂乱的背景，三条腿，背景人很多，倒着走\"]},{\"id\":346,\"type\":\"ModelSamplingAdvancedResolution\",\"pos\":[1152.6932373046875,133.92713928222656],\"size\":[260.3999938964844,126],\"flags\":{},\"order\":13,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1018},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"link\":1027}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1010,1011],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"ModelSamplingAdvancedResolution\"},\"widgets_values\":[\"exponential\",1.35,0.85]},{\"id\":391,\"type\":\"TorchCompileModels\",\"pos\":[1438.64501953125,80.51760864257812],\"size\":[258.1737365722656,178],\"flags\":{},\"order\":14,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"link\":1010}],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"TorchCompileModels\"},\"widgets_values\":[\"inductor\",false,\"default\",false,64,0]},{\"id\":365,\"type\":\"SaveAnimatedWEBP\",\"pos\":[2500,310],\"size\":[315,366],\"flags\":{},\"order\":19,\"mode\":0,\"inputs\":[{\"name\":\"images\",\"localized_name\":\"images\",\"type\":\"IMAGE\",\"link\":945}],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"ComfyUI\",16,false,100,\"default\",\"\"]},{\"id\":393,\"type\":\"ClownModelLoader\",\"pos\":[626.4608154296875,313.0701904296875],\"size\":[315,266],\"flags\":{},\"order\":0,\"mode\":0,\"inputs\":[],\"outputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"links\":[1018],\"slot_index\":0},{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"links\":[1016,1017],\"slot_index\":1},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"links\":[1012,1013],\"slot_index\":2}],\"properties\":{\"Node name for 
S&R\":\"ClownModelLoader\"},\"widgets_values\":[\"wan2.1_t2v_14B_fp8_e4m3fn.safetensors\",\"fp8_e4m3fn\",\"umt5_xxl_fp8_e4m3fn_scaled.safetensors\",\".none\",\".none\",\".none\",\"wan\",\"wan_2.1_vae.safetensors\"]},{\"id\":394,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[1799.4302978515625,313.5021667480469],\"size\":[315,530],\"flags\":{},\"order\":16,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1028},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":1029},{\"name\":\"options 2\",\"type\":\"OPTIONS\",\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1033],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_2s\",1,5.5,\"resample\",true]},{\"id\":324,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[1433.78466796875,314.1369934082031],\"size\":[337.03857421875,670],\"flags\":{},\"order\":15,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":1011},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":997},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":832},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1026},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1028],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharKSampler_Beta\"},\"widgets_values\":[0,\"multistep/res_2m\",\"beta57\",20,12,1,1,0,\"fixed\",\"unsample\",true]},{\"id\":395,\"type\":\"ClownOptions_Cycles_Beta\",\"pos\":[1843.936767578125,125.13945007324219],\"size\":[210,130],\"flags\":{},\"order\":1,\"mode\":0,\"inputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":[1029],\"slot_index\":0}],\"properties\":{\"Node name for 
S&R\":\"ClownOptions_Cycles_Beta\"},\"widgets_values\":[10,1,0.5,5.5]},{\"id\":6,\"type\":\"CLIPTextEncode\",\"pos\":[966.9983520507812,314.1016540527344],\"size\":[447.32421875,169.55857849121094],\"flags\":{},\"order\":10,\"mode\":0,\"inputs\":[{\"name\":\"clip\",\"localized_name\":\"clip\",\"type\":\"CLIP\",\"link\":1016}],\"outputs\":[{\"name\":\"CONDITIONING\",\"localized_name\":\"CONDITIONING\",\"type\":\"CONDITIONING\",\"links\":[997],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"CLIPTextEncode\"},\"widgets_values\":[\"A pretty black woman with thick gorgeous hair walks slowly through a tall, modern colonnade of concrete and glass, cradling a sleek silver laptop under her arm. She wears a sand-colored coat with a high collar and sharp tailoring, the buttons neatly fastened, exuding a quiet, focused confidence. Her complexion is porcelain-smooth, lightly touched by the soft overcast light that filters down through the glass canopy. Dark, straight hair is neatly parted and tucked behind one ear, moving ever so slightly as she walks. Her expression is thoughtful, eyes cast downward in introspection, lips gently pressed into a faint, unreadable line.\\n\\nThe camera begins off-center, panning slowly to align with the corridor’s clean architectural symmetry. Repeating vertical columns frame her movement, creating a visual rhythm that guides the viewer’s eye toward the vanishing point ahead. As she walks, she shifts just slightly to the side, a natural adjustment that causes the fabric of her coat to pull gently at the seams, adding a subtle sense of motion.\\n\\nReflections drift along the windows beside her — faint, soft, and ghostlike. The ambient light is cool and diffused, lending the scene a contemplative, almost suspended feeling. Her presence is calm, deliberate, as though she’s carrying not just the laptop, but something unspoken — a sense of purpose shaped quietly in her mind.\"]},{\"id\":397,\"type\":\"Note\",\"pos\":[639.0505981445312,128.31825256347656],\"size\":[301.3404235839844,112.45540618896484],\"flags\":{},\"order\":2,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Sometimes the first frame looks noisy with WAN. You can either throw it away, use more steps, use a more accurate sampler (2s > 2m, 3s > 2s), or ensure you aren't using a \\\"fast\\\" mode for the weights, such as fp8_e4m3fn_fast, which results in a significant hit to quality.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":398,\"type\":\"Note\",\"pos\":[1451.546142578125,1060.8258056640625],\"size\":[295.7769470214844,88],\"flags\":{},\"order\":3,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"More \\\"steps_to_run\\\" will increase the amount of denoise. 
Values between 12 and 15 are a good place to start.\\n\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":403,\"type\":\"Frames Slice Latent\",\"pos\":[555.5986938476562,1154.378173828125],\"size\":[210,82],\"flags\":{},\"order\":4,\"mode\":0,\"inputs\":[{\"name\":\"frames\",\"localized_name\":\"frames\",\"type\":\"LATENT\",\"link\":null}],\"outputs\":[{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"Frames Slice Latent\"},\"widgets_values\":[0,1]},{\"id\":402,\"type\":\"Frames Slice\",\"pos\":[555.5987548828125,990.6537475585938],\"size\":[210,82],\"flags\":{},\"order\":5,\"mode\":0,\"inputs\":[{\"name\":\"frames\",\"localized_name\":\"frames\",\"type\":\"IMAGE\",\"link\":null}],\"outputs\":[{\"name\":\"image\",\"localized_name\":\"image\",\"type\":\"IMAGE\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"Frames Slice\"},\"widgets_values\":[0,1]},{\"id\":401,\"type\":\"Note\",\"pos\":[550.497802734375,712.9439086914062],\"size\":[239.34762573242188,200.06405639648438],\"flags\":{},\"order\":6,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Can use anything that will load a video as a sequence of frames. The core node \\\"Load Image\\\" will work in place of this one, if you are loading an animated .webp.\\n\\nThis node allows you to set the number of frames loaded.\\n\\nThe nodes below will also allow you to pick and choose ranges of frames. Be sure to use Image Preview to verify you're picking the ones you want!\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":316,\"type\":\"VHS_LoadVideo\",\"pos\":[808.8834228515625,711.6345825195312],\"size\":[319.19403076171875,808.9393920898438],\"flags\":{},\"order\":7,\"mode\":0,\"inputs\":[{\"name\":\"meta_batch\",\"localized_name\":\"meta_batch\",\"type\":\"VHS_BatchManager\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[1014],\"slot_index\":0},{\"name\":\"frame_count\",\"localized_name\":\"frame_count\",\"type\":\"INT\",\"links\":null},{\"name\":\"audio\",\"localized_name\":\"audio\",\"type\":\"AUDIO\",\"links\":null},{\"name\":\"video_info\",\"localized_name\":\"video_info\",\"type\":\"VHS_VIDEOINFO\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"VHS_LoadVideo\"},\"widgets_values\":{\"video\":\"3206567-hd_1080_1920_25fps.mp4\",\"force_rate\":0,\"force_size\":\"Disabled\",\"custom_width\":512,\"custom_height\":512,\"frame_load_cap\":35,\"skip_first_frames\":0,\"select_every_nth\":1,\"choose video to 
upload\":\"image\",\"videopreview\":{\"hidden\":false,\"paused\":false,\"params\":{\"force_rate\":0,\"frame_load_cap\":35,\"skip_first_frames\":0,\"select_every_nth\":1,\"filename\":\"3206567-hd_1080_1920_25fps.mp4\",\"type\":\"input\",\"format\":\"video/mp4\"},\"muted\":false}}},{\"id\":392,\"type\":\"VAEEncodeAdvanced\",\"pos\":[1157.1488037109375,712.4218139648438],\"size\":[244.18490600585938,278],\"flags\":{},\"order\":12,\"mode\":0,\"inputs\":[{\"name\":\"image_1\",\"localized_name\":\"image_1\",\"type\":\"IMAGE\",\"shape\":7,\"link\":1014},{\"name\":\"image_2\",\"localized_name\":\"image_2\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"IMAGE\",\"shape\":7,\"link\":null},{\"name\":\"latent\",\"localized_name\":\"latent\",\"type\":\"LATENT\",\"shape\":7,\"link\":null},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"shape\":7,\"link\":1013}],\"outputs\":[{\"name\":\"latent_1\",\"localized_name\":\"latent_1\",\"type\":\"LATENT\",\"links\":[1026,1027],\"slot_index\":0},{\"name\":\"latent_2\",\"localized_name\":\"latent_2\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"mask\",\"localized_name\":\"mask\",\"type\":\"MASK\",\"links\":null},{\"name\":\"empty_latent\",\"localized_name\":\"empty_latent\",\"type\":\"LATENT\",\"links\":[],\"slot_index\":3},{\"name\":\"width\",\"localized_name\":\"width\",\"type\":\"INT\",\"links\":[],\"slot_index\":4},{\"name\":\"height\",\"localized_name\":\"height\",\"type\":\"INT\",\"links\":[],\"slot_index\":5}],\"properties\":{\"Node name for S&R\":\"VAEEncodeAdvanced\"},\"widgets_values\":[\"false\",368,640,\"red\",false,\"16_channels\"]},{\"id\":396,\"type\":\"ClownsharkChainsampler_Beta\",\"pos\":[2146.74755859375,313.50225830078125],\"size\":[315,510],\"flags\":{},\"order\":17,\"mode\":0,\"inputs\":[{\"name\":\"model\",\"localized_name\":\"model\",\"type\":\"MODEL\",\"shape\":7,\"link\":null},{\"name\":\"positive\",\"localized_name\":\"positive\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"negative\",\"localized_name\":\"negative\",\"type\":\"CONDITIONING\",\"shape\":7,\"link\":null},{\"name\":\"sigmas\",\"localized_name\":\"sigmas\",\"type\":\"SIGMAS\",\"shape\":7,\"link\":null},{\"name\":\"latent_image\",\"localized_name\":\"latent_image\",\"type\":\"LATENT\",\"shape\":7,\"link\":1033},{\"name\":\"guides\",\"localized_name\":\"guides\",\"type\":\"GUIDES\",\"shape\":7,\"link\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"shape\":7,\"link\":null}],\"outputs\":[{\"name\":\"output\",\"localized_name\":\"output\",\"type\":\"LATENT\",\"links\":[1035],\"slot_index\":0},{\"name\":\"denoised\",\"localized_name\":\"denoised\",\"type\":\"LATENT\",\"links\":null},{\"name\":\"options\",\"localized_name\":\"options\",\"type\":\"OPTIONS\",\"links\":null}],\"properties\":{\"Node name for S&R\":\"ClownsharkChainsampler_Beta\"},\"widgets_values\":[0.5,\"exponential/res_2s\",-1,5.5,\"resample\",true]},{\"id\":400,\"type\":\"Note\",\"pos\":[2090.21728515625,70.1985855102539],\"size\":[324.38916015625,177.81007385253906],\"flags\":{},\"order\":8,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Each full cycle reruns the node twice:\\n\\nresample -> unsample -> resample -> ... \\n\\nHigher values will change the video more.\\n\\nres_2m and 3m will preserve more of the initial structure. 
Res_2s and especially 3s will result in more dramatic change.\\n\\nIf you use more steps_to_run in ClownsharKSampler, you'll need fewer cycles here.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":399,\"type\":\"Note\",\"pos\":[2153.567626953125,887.000244140625],\"size\":[303.7249755859375,88],\"flags\":{},\"order\":9,\"mode\":0,\"inputs\":[],\"outputs\":[],\"properties\":{},\"widgets_values\":[\"Using a sampler such as res_2s instead of res_2m in this node can reduce or eliminate first frame noise. It's not always necessary, mileage may vary.\"],\"color\":\"#432\",\"bgcolor\":\"#653\"},{\"id\":325,\"type\":\"VAEDecode\",\"pos\":[2496.82080078125,203.3095703125],\"size\":[210,46],\"flags\":{},\"order\":18,\"mode\":0,\"inputs\":[{\"name\":\"samples\",\"localized_name\":\"samples\",\"type\":\"LATENT\",\"link\":1035},{\"name\":\"vae\",\"localized_name\":\"vae\",\"type\":\"VAE\",\"link\":1012}],\"outputs\":[{\"name\":\"IMAGE\",\"localized_name\":\"IMAGE\",\"type\":\"IMAGE\",\"links\":[945],\"slot_index\":0}],\"properties\":{\"Node name for S&R\":\"VAEDecode\"},\"widgets_values\":[]}],\"links\":[[832,7,0,324,2,\"CONDITIONING\"],[945,325,0,365,0,\"IMAGE\"],[997,6,0,324,1,\"CONDITIONING\"],[1010,346,0,391,0,\"MODEL\"],[1011,346,0,324,0,\"MODEL\"],[1012,393,2,325,1,\"VAE\"],[1013,393,2,392,4,\"VAE\"],[1014,316,0,392,0,\"IMAGE\"],[1016,393,1,6,0,\"CLIP\"],[1017,393,1,7,0,\"CLIP\"],[1018,393,0,346,0,\"MODEL\"],[1026,392,0,324,3,\"LATENT\"],[1027,392,0,346,1,\"LATENT\"],[1028,324,0,394,4,\"LATENT\"],[1029,395,0,394,6,\"OPTIONS\"],[1033,394,0,396,4,\"LATENT\"],[1035,396,0,325,0,\"LATENT\"]],\"groups\":[],\"config\":{},\"extra\":{\"ds\":{\"scale\":1.464100000000001,\"offset\":[1126.1541105871463,24.96236469373386]},\"node_versions\":{\"comfy-core\":\"0.3.26\",\"comfyui_controlnet_aux\":\"1e9eac6377c882da8bb360c7544607036904362c\",\"ComfyUI-VideoHelperSuite\":\"c36626c6028faca912eafcedbc71f1d342fb4d2a\"},\"VHS_latentpreview\":false,\"VHS_latentpreviewrate\":0,\"VHS_MetadataImage\":true,\"VHS_KeepIntermediate\":true},\"version\":0.4}"
  },
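  {
    "path": "examples/inspect_wan_v2v_workflow.py",
    "content": "# Hypothetical helper, not part of the original repo: a minimal sketch showing\n# how to read the WAN V2V workflow export above. ComfyUI's UI-format export is\n# a top-level \"nodes\" list whose entries carry \"id\", \"type\", and positional\n# \"widgets_values\", so the sampler settings discussed in the workflow's Note\n# nodes can be located before editing them. The filename below is an\n# assumption; point it at wherever the workflow JSON is saved.\nimport json\n\nwith open(\"wan_v2v_resample_workflow.json\") as f:  # assumed filename\n    workflow = json.load(f)\n\n# List every node so the ids match what the ComfyUI canvas shows.\nfor node in workflow[\"nodes\"]:\n    print(node[\"id\"], node[\"type\"])\n\n# Widget values are stored positionally per node type; print them and verify\n# against the UI before changing anything (e.g. steps_to_run or cycles).\nfor node in workflow[\"nodes\"]:\n    if node[\"type\"].startswith(\"Clownshar\"):\n        print(node[\"id\"], node[\"widgets_values\"])\n"
  },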
  {
    "path": "flux/controlnet.py",
    "content": "#Original code can be found on: https://github.com/XLabs-AI/x-flux/blob/main/src/flux/controlnet.py\n#modified to support different types of flux controlnets\n\nimport torch\nimport math\nfrom torch import Tensor, nn\nfrom einops import rearrange, repeat\n\nfrom .layers import (DoubleStreamBlock, EmbedND, LastLayer,\n                                 MLPEmbedder, SingleStreamBlock,\n                                 timestep_embedding)\n\nfrom .model import Flux\nimport comfy.ldm.common_dit\n\nclass MistolineCondDownsamplBlock(nn.Module):\n    def __init__(self, dtype=None, device=None, operations=None):\n        super().__init__()\n        self.encoder = nn.Sequential(\n            operations.Conv2d(3, 16, 3, padding=1, dtype=dtype, device=device),\n            nn.SiLU(),\n            operations.Conv2d(16, 16, 1, dtype=dtype, device=device),\n            nn.SiLU(),\n            operations.Conv2d(16, 16, 3, padding=1, dtype=dtype, device=device),\n            nn.SiLU(),\n            operations.Conv2d(16, 16, 3, padding=1, stride=2, dtype=dtype, device=device),\n            nn.SiLU(),\n            operations.Conv2d(16, 16, 3, padding=1, dtype=dtype, device=device),\n            nn.SiLU(),\n            operations.Conv2d(16, 16, 3, padding=1, stride=2, dtype=dtype, device=device),\n            nn.SiLU(),\n            operations.Conv2d(16, 16, 3, padding=1, dtype=dtype, device=device),\n            nn.SiLU(),\n            operations.Conv2d(16, 16, 3, padding=1, stride=2, dtype=dtype, device=device),\n            nn.SiLU(),\n            operations.Conv2d(16, 16, 1, dtype=dtype, device=device),\n            nn.SiLU(),\n            operations.Conv2d(16, 16, 3, padding=1, dtype=dtype, device=device)\n        )\n\n    def forward(self, x):\n        return self.encoder(x)\n\nclass MistolineControlnetBlock(nn.Module):\n    def __init__(self, hidden_size, dtype=None, device=None, operations=None):\n        super().__init__()\n        self.linear = operations.Linear(hidden_size, hidden_size, dtype=dtype, device=device)\n        self.act = nn.SiLU()\n\n    def forward(self, x):\n        return self.act(self.linear(x))\n\n\nclass ControlNetFlux(Flux):\n    def __init__(self, latent_input=False, num_union_modes=0, mistoline=False, control_latent_channels=None, image_model=None, dtype=None, device=None, operations=None, **kwargs):\n        super().__init__(final_layer=False, dtype=dtype, device=device, operations=operations, **kwargs)\n\n        self.main_model_double = 19\n        self.main_model_single = 38\n\n        self.mistoline = mistoline\n        # add ControlNet blocks\n        if self.mistoline:\n            control_block = lambda : MistolineControlnetBlock(self.hidden_size, dtype=dtype, device=device, operations=operations)\n        else:\n            control_block = lambda : operations.Linear(self.hidden_size, self.hidden_size, dtype=dtype, device=device)\n\n        self.controlnet_blocks = nn.ModuleList([])\n        for _ in range(self.params.depth):\n            self.controlnet_blocks.append(control_block())\n\n        self.controlnet_single_blocks = nn.ModuleList([])\n        for _ in range(self.params.depth_single_blocks):\n            self.controlnet_single_blocks.append(control_block())\n\n        self.num_union_modes = num_union_modes\n        self.controlnet_mode_embedder = None\n        if self.num_union_modes > 0:\n            self.controlnet_mode_embedder = operations.Embedding(self.num_union_modes, self.hidden_size, dtype=dtype, device=device)\n\n        
self.gradient_checkpointing = False\n        self.latent_input = latent_input\n        if control_latent_channels is None:\n            control_latent_channels = self.in_channels\n        else:\n            control_latent_channels *= 2 * 2 #patch size\n\n        self.pos_embed_input = operations.Linear(control_latent_channels, self.hidden_size, bias=True, dtype=dtype, device=device)\n        if not self.latent_input:\n            if self.mistoline:\n                self.input_cond_block = MistolineCondDownsamplBlock(dtype=dtype, device=device, operations=operations)\n            else:\n                self.input_hint_block = nn.Sequential(\n                    operations.Conv2d(3, 16, 3, padding=1, dtype=dtype, device=device),\n                    nn.SiLU(),\n                    operations.Conv2d(16, 16, 3, padding=1, dtype=dtype, device=device),\n                    nn.SiLU(),\n                    operations.Conv2d(16, 16, 3, padding=1, stride=2, dtype=dtype, device=device),\n                    nn.SiLU(),\n                    operations.Conv2d(16, 16, 3, padding=1, dtype=dtype, device=device),\n                    nn.SiLU(),\n                    operations.Conv2d(16, 16, 3, padding=1, stride=2, dtype=dtype, device=device),\n                    nn.SiLU(),\n                    operations.Conv2d(16, 16, 3, padding=1, dtype=dtype, device=device),\n                    nn.SiLU(),\n                    operations.Conv2d(16, 16, 3, padding=1, stride=2, dtype=dtype, device=device),\n                    nn.SiLU(),\n                    operations.Conv2d(16, 16, 3, padding=1, dtype=dtype, device=device)\n                )\n\n    def forward_orig(\n        self,\n        img: Tensor,\n        img_ids: Tensor,\n        controlnet_cond: Tensor,\n        txt: Tensor,\n        txt_ids: Tensor,\n        timesteps: Tensor,\n        y: Tensor,\n        guidance: Tensor = None,\n        control_type: Tensor = None,\n    ) -> Tensor:\n        if img.ndim != 3 or txt.ndim != 3:\n            raise ValueError(\"Input img and txt tensors must have 3 dimensions.\")\n\n        # running on sequences img\n        img = self.img_in(img)\n\n        controlnet_cond = self.pos_embed_input(controlnet_cond)\n        img = img + controlnet_cond\n        vec = self.time_in(timestep_embedding(timesteps, 256))\n        if self.params.guidance_embed:\n            vec = vec + self.guidance_in(timestep_embedding(guidance, 256))\n        vec = vec + self.vector_in(y)\n        txt = self.txt_in(txt)\n\n        if self.controlnet_mode_embedder is not None and len(control_type) > 0:\n            control_cond = self.controlnet_mode_embedder(torch.tensor(control_type, device=img.device), out_dtype=img.dtype).unsqueeze(0).repeat((txt.shape[0], 1, 1))\n            txt = torch.cat([control_cond, txt], dim=1)\n            txt_ids = torch.cat([txt_ids[:,:1], txt_ids], dim=1)\n\n        ids = torch.cat((txt_ids, img_ids), dim=1)\n        pe = self.pe_embedder(ids)\n\n        controlnet_double = ()\n\n        for i in range(len(self.double_blocks)):\n            img, txt = self.double_blocks[i](img=img, txt=txt, vec=vec, pe=pe)\n            controlnet_double = controlnet_double + (self.controlnet_blocks[i](img),)\n\n        img = torch.cat((txt, img), 1)\n\n        controlnet_single = ()\n\n        for i in range(len(self.single_blocks)):\n            img = self.single_blocks[i](img, vec=vec, pe=pe)\n            controlnet_single = controlnet_single + (self.controlnet_single_blocks[i](img[:, txt.shape[1] :, ...]),)\n\n        repeat = 
math.ceil(self.main_model_double / len(controlnet_double))\n        if self.latent_input:\n            out_input = ()\n            for x in controlnet_double:\n                    out_input += (x,) * repeat\n        else:\n            out_input = (controlnet_double * repeat)\n\n        out = {\"input\": out_input[:self.main_model_double]}\n        if len(controlnet_single) > 0:\n            repeat = math.ceil(self.main_model_single / len(controlnet_single))\n            out_output = ()\n            if self.latent_input:\n                for x in controlnet_single:\n                        out_output += (x,) * repeat\n            else:\n                out_output = (controlnet_single * repeat)\n            out[\"output\"] = out_output[:self.main_model_single]\n        return out\n\n    def forward(self, x, timesteps, context, y, guidance=None, hint=None, **kwargs):\n        patch_size = 2\n        if self.latent_input:\n            hint = comfy.ldm.common_dit.pad_to_patch_size(hint, (patch_size, patch_size))\n        elif self.mistoline:\n            hint = hint * 2.0 - 1.0\n            hint = self.input_cond_block(hint)\n        else:\n            hint = hint * 2.0 - 1.0\n            hint = self.input_hint_block(hint)\n\n        hint = rearrange(hint, \"b c (h ph) (w pw) -> b (h w) (c ph pw)\", ph=patch_size, pw=patch_size)\n\n        bs, c, h, w = x.shape\n        x = comfy.ldm.common_dit.pad_to_patch_size(x, (patch_size, patch_size))\n\n        img = rearrange(x, \"b c (h ph) (w pw) -> b (h w) (c ph pw)\", ph=patch_size, pw=patch_size)\n\n        h_len = ((h + (patch_size // 2)) // patch_size)\n        w_len = ((w + (patch_size // 2)) // patch_size)\n        img_ids = torch.zeros((h_len, w_len, 3), device=x.device, dtype=x.dtype)\n        img_ids[..., 1] = img_ids[..., 1] + torch.linspace(0, h_len - 1, steps=h_len, device=x.device, dtype=x.dtype)[:, None]\n        img_ids[..., 2] = img_ids[..., 2] + torch.linspace(0, w_len - 1, steps=w_len, device=x.device, dtype=x.dtype)[None, :]\n        img_ids = repeat(img_ids, \"h w c -> b (h w) c\", b=bs)\n\n        txt_ids = torch.zeros((bs, context.shape[1], 3), device=x.device, dtype=x.dtype)\n        return self.forward_orig(img, img_ids, hint, context, txt_ids, timesteps, y, guidance, control_type=kwargs.get(\"control_type\", []))\n"
  },
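  {
    "path": "examples/patchify_shapes.py",
    "content": "# Hypothetical shape check, not part of the original repo: a minimal sketch of\n# the patch-size-2 packing that ControlNetFlux.forward applies to latents and\n# hints. comfy.ldm.common_dit.pad_to_patch_size is approximated with a plain\n# F.pad here so the sketch runs without ComfyUI installed.\nimport torch\nimport torch.nn.functional as F\nfrom einops import rearrange\n\npatch_size = 2\nx = torch.randn(1, 16, 93, 160)  # odd height to exercise the padding path\n\n# Pad H and W up to multiples of patch_size (stand-in for pad_to_patch_size).\npad_h = (patch_size - x.shape[-2] % patch_size) % patch_size\npad_w = (patch_size - x.shape[-1] % patch_size) % patch_size\nx_pad = F.pad(x, (0, pad_w, 0, pad_h))\n\n# Fold each 2x2 spatial patch into the channel dim: 16 channels -> 64 per token.\nimg = rearrange(x_pad, \"b c (h ph) (w pw) -> b (h w) (c ph pw)\", ph=patch_size, pw=patch_size)\n\n# Token-grid size computed from the *unpadded* H/W, as in forward().\nh_len = (x.shape[-2] + patch_size // 2) // patch_size\nw_len = (x.shape[-1] + patch_size // 2) // patch_size\nassert img.shape == (1, h_len * w_len, 16 * patch_size * patch_size)\nprint(img.shape)  # torch.Size([1, 3760, 64])\n"
  },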
  {
    "path": "flux/layers.py",
    "content": "# Adapted from: https://github.com/black-forest-labs/flux\n\nimport math\nimport torch\nfrom torch import Tensor, nn\n\nfrom typing import Optional, Callable, Tuple, Dict, Any, Union, TYPE_CHECKING, TypeVar\n\nimport torch.nn.functional as F\nimport einops\nfrom einops import rearrange\nfrom torch import Tensor\nfrom dataclasses import dataclass\n\nfrom .math import attention, rope, apply_rope\nimport comfy.ldm.common_dit\n\nclass EmbedND(nn.Module):\n    def __init__(self, dim: int, theta: int, axes_dim: list):\n        super().__init__()\n        self.dim      = dim\n        self.theta    = theta\n        self.axes_dim = axes_dim\n\n    def forward(self, ids: Tensor) -> Tensor:\n        n_axes = ids.shape[-1]\n        emb = torch.cat(\n            [rope(ids[..., i], self.axes_dim[i], self.theta) for i in range(n_axes)],\n            dim=-3,\n        )\n        return emb.unsqueeze(1)\n\ndef timestep_embedding(t: Tensor, dim, max_period=10000, time_factor: float = 1000.0):\n    \"\"\"\n    Create sinusoidal timestep embeddings.\n    :param t: a 1-D Tensor of N indices, one per batch element. \n                    These may be fractional.\n    :param dim: the dimension of the output.\n    :param max_period: controls the minimum frequency of the embeddings.\n    :return: an (N, D) Tensor of positional embeddings.\n    \"\"\"\n    t = time_factor * t\n    half = dim // 2\n    freqs = torch.exp(-math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32, device=t.device) / half)\n\n    args = t[:, None].float() * freqs[None]\n    embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)\n    if dim % 2:\n        embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1)\n    if torch.is_floating_point(t):\n        embedding = embedding.to(t)\n    return embedding\n\nclass MLPEmbedder(nn.Module):\n    def __init__(self, in_dim: int, hidden_dim: int, dtype=None, device=None, operations=None):\n        super().__init__()\n        self.in_layer  = operations.Linear(    in_dim, hidden_dim, bias=True, dtype=dtype, device=device)\n        self.silu      = nn.SiLU()\n        self.out_layer = operations.Linear(hidden_dim, hidden_dim, bias=True, dtype=dtype, device=device)\n\n    def forward(self, x: Tensor) -> Tensor:\n        return self.out_layer(self.silu(self.in_layer(x)))\n\n\nclass RMSNorm(torch.nn.Module):\n    def __init__(self, dim: int, dtype=None, device=None, operations=None):\n        super().__init__()\n        self.scale = nn.Parameter(torch.empty((dim), dtype=dtype, device=device))    # self.scale.shape = 128\n\n    def forward(self, x: Tensor):\n        return comfy.ldm.common_dit.rms_norm(x, self.scale, 1e-6)\n\n\nclass QKNorm(torch.nn.Module):\n    def __init__(self, dim: int, dtype=None, device=None, operations=None):\n        super().__init__()\n        self.query_norm = RMSNorm(dim, dtype=dtype, device=device, operations=operations)\n        self.key_norm   = RMSNorm(dim, dtype=dtype, device=device, operations=operations)\n\n    def forward(self, q: Tensor, k: Tensor, v: Tensor) -> tuple:\n        q = self.query_norm(q)\n        k = self.key_norm(k)\n        return q.to(v), k.to(v)\n\n\nclass SelfAttention(nn.Module):\n    def __init__(self, dim: int, num_heads: int = 8, qkv_bias: bool = False, dtype=None, device=None, operations=None):\n        super().__init__()\n        self.num_heads = num_heads    # 24\n        head_dim  = dim // num_heads   # 128 = 3072 / 24\n\n        self.qkv  = operations.Linear(dim, dim * 3, 
bias=qkv_bias, dtype=dtype, device=device)\n        self.norm = QKNorm(head_dim,                               dtype=dtype, device=device, operations=operations)\n        self.proj = operations.Linear(dim, dim,                    dtype=dtype, device=device)    # dim is usually 3072\n\n\n@dataclass\nclass ModulationOut:\n    shift: Tensor\n    scale: Tensor\n    gate:  Tensor\n\nclass Modulation(nn.Module):\n    def __init__(self, dim: int, double: bool, dtype=None, device=None, operations=None):\n        super().__init__()\n        self.is_double  = double\n        self.multiplier = 6 if double else 3\n        self.lin        = operations.Linear(dim, self.multiplier * dim, bias=True, dtype=dtype, device=device)\n\n    def forward(self, vec: Tensor) -> tuple:\n        out = self.lin(nn.functional.silu(vec))[:, None, :].chunk(self.multiplier, dim=-1)\n        return (ModulationOut(*out[:3]),    ModulationOut(*out[3:]) if self.is_double else None,)\n\n\nclass DoubleStreamBlock(nn.Module):\n    def __init__(self, hidden_size: int, num_heads: int, mlp_ratio: float, qkv_bias: bool = False, dtype=None, device=None, operations=None, idx=-1):\n        super().__init__()\n\n        self.idx         = idx\n\n        mlp_hidden_dim   = int(hidden_size * mlp_ratio)\n        self.num_heads   = num_heads\n        self.hidden_size = hidden_size\n        \n        self.img_mod     = Modulation(hidden_size, double=True,                                   dtype=dtype, device=device, operations=operations) # in_features=3072, out_features=18432 (3072*6)\n        self.txt_mod     = Modulation(hidden_size, double=True,                                   dtype=dtype, device=device, operations=operations) # in_features=3072, out_features=18432 (3072*6)\n\n        self.img_attn    = SelfAttention(dim=hidden_size, num_heads=num_heads, qkv_bias=qkv_bias, dtype=dtype, device=device, operations=operations) # .qkv: in_features=3072, out_features=9216   .proj: 3072,3072\n        self.txt_attn    = SelfAttention(dim=hidden_size, num_heads=num_heads, qkv_bias=qkv_bias, dtype=dtype, device=device, operations=operations) # .qkv: in_features=3072, out_features=9216   .proj: 3072,3072\n\n        self.img_norm1   = operations.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6,  dtype=dtype, device=device)\n        self.txt_norm1   = operations.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6,  dtype=dtype, device=device)\n\n        self.img_norm2   = operations.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6,  dtype=dtype, device=device)\n        self.txt_norm2   = operations.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6,  dtype=dtype, device=device)\n\n        self.img_mlp = nn.Sequential(\n            operations.Linear(hidden_size, mlp_hidden_dim, bias=True, dtype=dtype, device=device),\n            nn.GELU(approximate=\"tanh\"),\n            operations.Linear(mlp_hidden_dim, hidden_size, bias=True, dtype=dtype, device=device),\n        ) # 3072->12288, 12288->3072  (3072*4)\n        \n        self.txt_mlp = nn.Sequential(\n            operations.Linear(hidden_size, mlp_hidden_dim, bias=True, dtype=dtype, device=device),\n            nn.GELU(approximate=\"tanh\"),\n            operations.Linear(mlp_hidden_dim, hidden_size, bias=True, dtype=dtype, device=device),\n        ) # 3072->12288, 12288->3072  (3072*4)\n\n    def forward(self, img: Tensor, txt: Tensor, vec: Tensor, pe: Tensor, mask=None, idx=0, update_cross_attn=None, style_block=None) -> Tuple[Tensor, Tensor]: # vec 1,3072  # vec 
1,3072    #mask.shape 4608,4608  #img_attn.shape 1,4096,3072    txt_attn.shape 1,512,3072\n\n        img_len = img.shape[-2]\n        txt_len = txt.shape[-2]\n\n        img_mod1, img_mod2  = self.img_mod(vec) # -> 3072, 3072\n        txt_mod1, txt_mod2  = self.txt_mod(vec)\n        \n        img_norm = self.img_norm1(img)\n        txt_norm = self.txt_norm1(txt)\n        \n        img_norm = style_block.img(img_norm, \"attn_norm\")\n        txt_norm = style_block.txt(txt_norm, \"attn_norm\")\n\n        img_norm = img_norm * (1+img_mod1.scale) + img_mod1.shift\n        txt_norm = txt_norm * (1+txt_mod1.scale) + txt_mod1.shift\n\n        img_norm = style_block.img(img_norm, \"attn_norm_mod\")\n        txt_norm = style_block.txt(txt_norm, \"attn_norm_mod\")\n        \n        \n        \n        ### ATTN ###\n        img_qkv             = self.img_attn.qkv(img_norm)\n        img_q, img_k, img_v = img_qkv.view(img_qkv.shape[0], img_qkv.shape[1], 3, self.num_heads, -1).permute(2, 0, 3, 1, 4)\n        \n        img_q = style_block.img.ATTN(img_q, \"q_proj\")\n        img_k = style_block.img.ATTN(img_k, \"k_proj\")\n        img_v = style_block.img.ATTN(img_v, \"v_proj\")\n        \n        img_q, img_k        = self.img_attn.norm(img_q, img_k, img_v)\n        \n        img_q = style_block.img.ATTN(img_q, \"q_norm\")\n        img_k = style_block.img.ATTN(img_k, \"k_norm\")\n        \n        txt_qkv             = self.txt_attn.qkv(txt_norm)\n        txt_q, txt_k, txt_v = txt_qkv.view(txt_qkv.shape[0], txt_qkv.shape[1], 3, self.num_heads, -1).permute(2, 0, 3, 1, 4)\n        \n        txt_q = style_block.txt.ATTN(txt_q, \"q_proj\")\n        txt_k = style_block.txt.ATTN(txt_k, \"k_proj\")\n        txt_v = style_block.txt.ATTN(txt_v, \"v_proj\")\n        \n        txt_q, txt_k        = self.txt_attn.norm(txt_q, txt_k, txt_v)\n        \n        txt_q = style_block.txt.ATTN(txt_q, \"q_norm\")\n        txt_k = style_block.txt.ATTN(txt_k, \"k_norm\")\n\n        q, k, v = torch.cat((txt_q, img_q), dim=2), torch.cat((txt_k, img_k), dim=2), torch.cat((txt_v, img_v), dim=2)\n        attn = attention(q, k, v, pe=pe, mask=mask)\n        \n        txt_attn = attn[:,:txt_len]                         # 1, 768,3072\n        img_attn = attn[:,txt_len:]  \n        \n        img_attn = style_block.img.ATTN(img_attn, \"out\")\n        txt_attn = style_block.txt.ATTN(txt_attn, \"out\")\n        \n        img_attn = self.img_attn.proj(img_attn)    #to_out\n        txt_attn = self.txt_attn.proj(txt_attn)\n        ### ATTN ###\n\n\n\n        img_attn = style_block.img(img_attn, \"attn\")\n        txt_attn = style_block.txt(txt_attn, \"attn\")\n        \n        img_attn *= img_mod1.gate\n        txt_attn *= txt_mod1.gate\n        \n        img_attn = style_block.img(img_attn, \"attn_gated\")\n        txt_attn = style_block.txt(txt_attn, \"attn_gated\")\n        \n        img += img_attn\n        txt += txt_attn\n        \n        img = style_block.img(img, \"attn_res\")\n        txt = style_block.txt(txt, \"attn_res\")\n        \n        \n        \n        img_norm = self.img_norm2(img)\n        txt_norm = self.txt_norm2(txt)\n        \n        img_norm = style_block.img(img_norm, \"ff_norm\")\n        txt_norm = style_block.txt(txt_norm, \"ff_norm\")\n        \n        img_norm = img_norm * (1+img_mod2.scale) + img_mod2.shift\n        txt_norm = txt_norm * (1+txt_mod2.scale) + txt_mod2.shift\n        \n        img_norm = style_block.img(img_norm, \"ff_norm_mod\")\n        txt_norm = style_block.txt(txt_norm, 
\"ff_norm_mod\")\n        \n        img_mlp = self.img_mlp(img_norm)\n        txt_mlp = self.txt_mlp(txt_norm)\n        \n        img_mlp = style_block.img(img_mlp, \"ff\")\n        txt_mlp = style_block.txt(txt_mlp, \"ff\")\n        \n        img_mlp *= img_mod2.gate\n        txt_mlp *= txt_mod2.gate\n        \n        img_mlp = style_block.img(img_mlp, \"ff_gated\")\n        txt_mlp = style_block.txt(txt_mlp, \"ff_gated\")\n        \n        img += img_mlp\n        txt += txt_mlp\n        \n        img = style_block.img(img, \"ff_res\")\n        txt = style_block.txt(txt, \"ff_res\")\n\n        if update_cross_attn is not None:\n            if not update_cross_attn['skip_cross_attn']:\n                UNCOND      = update_cross_attn['UNCOND']\n                \n                txt_update = self.txt_norm1(txt.cpu()).float()\n                txt_update = (1 + txt_mod1.scale.to(txt_update)) * txt_update + txt_mod1.shift.to(txt_update)\n                \n                if UNCOND:\n                    t5_start    = update_cross_attn['src_t5_start']\n                    t5_end      = update_cross_attn['src_t5_end']\n                \n                    txt_src    = txt_update[:,t5_start:t5_end,:].cpu() #.float()\n                    self.c_src = txt_src.transpose(-2,-1).squeeze(0)    # shape [C,1]\n                else:\n                    t5_start    = update_cross_attn['tgt_t5_start']\n                    t5_end      = update_cross_attn['tgt_t5_end']\n                    \n                    lamb  = update_cross_attn['lamb']\n                    erase = update_cross_attn['erase']\n\n                    c_guide   = txt_update[:,t5_start:t5_end,:].transpose(-2,-1).squeeze(0)  # [C,1]\n                    \n                    Wv_old       = self.txt_attn.qkv.weight.data.to(c_guide)              # [C,C]\n\n                    v_star       = Wv_old @ c_guide                             # [C,1]\n\n                    c_src        = self.c_src  #.cpu()                                   # [C,1]\n\n                    lamb         = lamb\n                    erase_scale  = erase\n                    d            = c_src.shape[0]\n\n                    C            = c_src @ c_src.T                              # [C,C]\n                    I            = torch.eye(d, device=C.device, dtype=C.dtype)\n\n                    mat1_v       = lamb*Wv_old + erase_scale*(v_star @ c_src.T)     # [C,C]\n                    mat2_v       = lamb*I      + erase_scale*(C)                    # [C,C]\n                    \n                    I       = I.to(\"cpu\")\n                    C       = C.to(\"cpu\")\n                    c_src   = c_src.to(\"cpu\")\n                    self.c_src   = self.c_src.to(\"cpu\")\n                    v_star  = v_star.to(\"cpu\")\n                    Wv_old  = Wv_old.to(\"cpu\")\n                    c_guide = c_guide.to(\"cpu\")\n                    del I, C, c_src, self.c_src, v_star, Wv_old, c_guide\n\n                    #Wv_new       = mat1_v @ torch.inverse(mat2_v.float()).to(mat1_v)                   # [C,C]\n                    Wv_new = torch.linalg.solve(mat2_v.T, mat1_v.T).T\n\n                    mat1_v = mat1_v.to(\"cpu\")\n                    mat2_v = mat2_v.to(\"cpu\")\n                    del mat1_v, mat2_v\n\n                    update_q = update_cross_attn['update_q']\n                    update_k = update_cross_attn['update_k']\n                    update_v = update_cross_attn['update_v']\n                    \n                    if not update_q:\n              
          Wv_new[:3072,    :] = self.txt_attn.qkv.weight.data[:3072,    :].to(Wv_new)\n                    if not update_k:\n                        Wv_new[3072:6144,:] = self.txt_attn.qkv.weight.data[3072:6144,:].to(Wv_new)\n                    if not update_v:\n                        Wv_new[6144:    ,:] = self.txt_attn.qkv.weight.data[6144:    ,:].to(Wv_new)\n                    \n                    self.txt_attn.qkv.weight.data.copy_(Wv_new.to(self.txt_attn.qkv.weight.data.dtype))\n                    \n                    Wv_new = Wv_new.to(\"cpu\")\n                    del Wv_new\n                    #torch.cuda.empty_cache()\n                    \n        return img, txt\n        \n\nclass SingleStreamBlock(nn.Module):      #attn.shape = 1,4608,3072       mlp.shape = 1,4608,12288     4096*3 = 12288\n    \"\"\"\n    A DiT block with parallel linear layers as described in\n    https://arxiv.org/abs/2302.05442 and adapted modulation interface.\n    \"\"\"\n    def __init__(self, hidden_size: int,  num_heads: int, mlp_ratio: float = 4.0, qk_scale: float = None, dtype=None, device=None, operations=None, idx=-1):\n        super().__init__()\n        self.idx            = idx\n        self.hidden_dim     = hidden_size #3072\n        self.num_heads      = num_heads    #24\n        head_dim            = hidden_size // num_heads\n        self.scale          = qk_scale or head_dim**-0.5   #0.08838834764831845\n\n        self.mlp_hidden_dim = int(hidden_size * mlp_ratio)    #12288== 3072 * 4\n        # qkv and mlp_in\n        self.linear1        = operations.Linear(hidden_size, 3*hidden_size + self.mlp_hidden_dim, dtype=dtype, device=device)\n        # proj and mlp_out\n        self.linear2        = operations.Linear(hidden_size + self.mlp_hidden_dim, hidden_size,     dtype=dtype, device=device)\n\n        self.norm           = QKNorm(head_dim,                                                      dtype=dtype, device=device, operations=operations)\n\n        self.hidden_size    = hidden_size #3072\n        self.pre_norm       = operations.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6, dtype=dtype, device=device)\n\n        self.mlp_act        = nn.GELU(approximate=\"tanh\")\n        self.modulation     = Modulation(hidden_size, double=False,                                 dtype=dtype, device=device, operations=operations)\n\n\n    # vec 1,3072    x 1,9984,3072\n    def forward(self, img: Tensor, vec: Tensor, pe: Tensor, mask=None, idx=0, style_block=None) -> Tensor:   # x 1,9984,3072 if 2 reg embeds, 1,9472,3072 if none    # 9216x4096 = 16x1536x1536\n        mod, _    = self.modulation(vec)\n        \n        img_norm = self.pre_norm(img)\n        img_norm = style_block.img(img_norm, \"attn_norm\")\n        \n        img_norm  = (1 + mod.scale) * img_norm + mod.shift   # mod => vec\n        img_norm = style_block.img(img_norm, \"attn_norm_mod\")\n        \n        \n        \n        ### ATTN ###\n        qkv, mlp = torch.split(self.linear1(img_norm), [3*self.hidden_size, self.mlp_hidden_dim], dim=-1)\n\n        q, k, v  = qkv.view(qkv.shape[0], qkv.shape[1], 3, self.num_heads, -1).permute(2, 0, 3, 1, 4)     #q, k, v  = rearrange(qkv, \"B L (K H D) -> K B H L D\", K=3, H=self.num_heads)\n        \n        q = style_block.img.ATTN(q, \"q_proj\")\n        k = style_block.img.ATTN(k, \"k_proj\")\n        v = style_block.img.ATTN(v, \"v_proj\")\n        \n        q, k     = self.norm(q, k, v)\n        \n        q = style_block.img.ATTN(q, \"q_norm\")\n        k = 
style_block.img.ATTN(k, \"k_norm\")\n        \n        attn = attention(q, k, v, pe=pe, mask=mask)\n        attn = style_block.img.ATTN(attn, \"out\")\n        ### ATTN ###\n\n\n\n        mlp = style_block.img(mlp, \"ff_norm\")\n\n        mlp_act = self.mlp_act(mlp)\n        mlp_act = style_block.img(mlp_act, \"ff_norm_mod\")\n\n        img_ff_i  = self.linear2(torch.cat((attn, mlp_act), 2))   # effectively FF smooshed into one line\n        img_ff_i = style_block.img(img_ff_i, \"ff\")\n\n        img_ff_i *= mod.gate\n        img_ff_i = style_block.img(img_ff_i, \"ff_gated\")\n\n        img      += img_ff_i\n        img = style_block.img(img, \"ff_res\")\n        \n        return img\n\n\n\nclass LastLayer(nn.Module):\n    def __init__(self, hidden_size: int, patch_size: int, out_channels: int,                                        dtype=None,  device=None, operations=None):\n        super().__init__()\n        self.norm_final       = operations.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6,               dtype=dtype, device=device)\n        self.linear           = operations.Linear(hidden_size, patch_size * patch_size * out_channels,   bias=True, dtype=dtype, device=device)\n        self.adaLN_modulation = nn.Sequential(nn.SiLU(), operations.Linear(hidden_size, 2 * hidden_size, bias=True, dtype=dtype, device=device))\n\n    def forward(self, x: Tensor, vec: Tensor) -> Tensor:\n        shift, scale = self.adaLN_modulation(vec).chunk(2, dim=1)\n        \n        x = (1 + scale[:, None, :]) * self.norm_final(x) + shift[:, None, :]\n        x = self.linear(x)\n        return x\n\n    def forward_scale_shift(self, x: Tensor, vec: Tensor) -> Tensor:\n        shift, scale = self.adaLN_modulation(vec).chunk(2, dim=1)\n        \n        x = (1 + scale[:, None, :]) * self.norm_final(x) + shift[:, None, :]\n        return x\n\n    def forward_linear(self, x: Tensor, vec: Tensor) -> Tensor:\n        x = self.linear(x)\n        return x\n\n\n\n\n\n\n"
  },
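  {
    "path": "examples/timestep_embedding_check.py",
    "content": "# Hypothetical sanity check, not part of the original repo. The\n# timestep_embedding() in flux/layers.py depends only on torch and math, so it\n# is reproduced verbatim below to show the (N, dim) sinusoidal output that\n# feeds MLPEmbedder(in_dim=256) as time_in, including the t * 1000 scaling.\nimport math\nimport torch\n\n\ndef timestep_embedding(t, dim, max_period=10000, time_factor: float = 1000.0):\n    t = time_factor * t\n    half = dim // 2\n    freqs = torch.exp(-math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32, device=t.device) / half)\n    args = t[:, None].float() * freqs[None]\n    embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)\n    if dim % 2:\n        embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1)\n    if torch.is_floating_point(t):\n        embedding = embedding.to(t)\n    return embedding\n\n\n# Sigmas in [0, 1] are scaled by time_factor before the sinusoid, matching\n# how forward_blocks() calls time_in(timestep_embedding(timesteps, 256)).\nemb = timestep_embedding(torch.tensor([0.0, 0.5, 1.0]), 256)\nprint(emb.shape)  # torch.Size([3, 256])\n"
  },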
  {
    "path": "flux/math.py",
    "content": "import torch\nfrom einops import rearrange\nfrom torch import Tensor\nfrom comfy.ldm.modules.attention import attention_pytorch\n\nimport comfy.model_management\n\nimport math\n\n\ndef attention(q: Tensor, k: Tensor, v: Tensor, pe: Tensor, mask=None) -> Tensor:\n\n    q, k = apply_rope(q, k, pe)\n\n    heads = q.shape[1]\n\n    x = attention_pytorch(q, k, v, heads, skip_reshape=True, mask=mask)\n\n    return x\n\n\n\n\ndef rope(pos: Tensor, dim: int, theta: int) -> Tensor:\n    assert dim % 2 == 0\n    if comfy.model_management.is_device_mps(pos.device) or comfy.model_management.is_intel_xpu() or comfy.model_management.is_directml_enabled():\n        device = torch.device(\"cpu\")\n    else:\n        device = pos.device\n\n    scale = torch.linspace(0, (dim - 2) / dim, steps=dim//2, dtype=torch.float64, device=device)\n    omega = 1.0 / (theta**scale)\n    out = torch.einsum(\"...n,d->...nd\", pos.to(dtype=torch.float32, device=device), omega)\n    out = torch.stack([torch.cos(out), -torch.sin(out), torch.sin(out), torch.cos(out)], dim=-1)\n    out = rearrange(out, \"b n d (i j) -> b n d i j\", i=2, j=2)\n    return out.to(dtype=torch.float32, device=pos.device)\n\n\ndef apply_rope(xq: Tensor, xk: Tensor, freqs_cis: Tensor):\n    xq_ = xq.float().reshape(*xq.shape[:-1], -1, 1, 2)\n    xk_ = xk.float().reshape(*xk.shape[:-1], -1, 1, 2)\n    xq_out = freqs_cis[..., 0] * xq_[..., 0] + freqs_cis[..., 1] * xq_[..., 1]\n    xk_out = freqs_cis[..., 0] * xk_[..., 0] + freqs_cis[..., 1] * xk_[..., 1]\n    return xq_out.reshape(*xq.shape).type_as(xq), xk_out.reshape(*xk.shape).type_as(xk)\n\n\n\n\n"
  },
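  {
    "path": "examples/rope_rotation_check.py",
    "content": "# Hypothetical sanity check, not part of the original repo: rope()/apply_rope()\n# from flux/math.py with the comfy.model_management device special-casing\n# dropped (and pos cast to float64 to match omega's dtype) so it runs\n# standalone. apply_rope is a per-pair 2D rotation, so it must preserve the\n# norms of q and k exactly.\nimport torch\nfrom einops import rearrange\n\n\ndef rope(pos, dim, theta):\n    assert dim % 2 == 0\n    scale = torch.linspace(0, (dim - 2) / dim, steps=dim // 2, dtype=torch.float64)\n    omega = 1.0 / (theta ** scale)\n    out = torch.einsum(\"...n,d->...nd\", pos.to(torch.float64), omega)\n    out = torch.stack([torch.cos(out), -torch.sin(out), torch.sin(out), torch.cos(out)], dim=-1)\n    out = rearrange(out, \"b n d (i j) -> b n d i j\", i=2, j=2)\n    return out.to(torch.float32)\n\n\ndef apply_rope(xq, xk, freqs_cis):\n    xq_ = xq.float().reshape(*xq.shape[:-1], -1, 1, 2)\n    xk_ = xk.float().reshape(*xk.shape[:-1], -1, 1, 2)\n    xq_out = freqs_cis[..., 0] * xq_[..., 0] + freqs_cis[..., 1] * xq_[..., 1]\n    xk_out = freqs_cis[..., 0] * xk_[..., 0] + freqs_cis[..., 1] * xk_[..., 1]\n    return xq_out.reshape(*xq.shape).type_as(xq), xk_out.reshape(*xk.shape).type_as(xk)\n\n\npos = torch.arange(8, dtype=torch.float32).unsqueeze(0)  # (batch, seq)\nfreqs = rope(pos, 16, 10000).unsqueeze(1)  # add head dim, as EmbedND.forward does\nq = torch.randn(1, 2, 8, 16)  # (batch, heads, seq, head_dim)\nk = torch.randn(1, 2, 8, 16)\nq_r, k_r = apply_rope(q, k, freqs)\nassert torch.allclose(q.norm(dim=-1), q_r.norm(dim=-1), atol=1e-4)\nassert torch.allclose(k.norm(dim=-1), k_r.norm(dim=-1), atol=1e-4)\nprint(\"rotation preserves norms:\", q_r.shape)\n"
  },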
  {
    "path": "flux/model.py",
    "content": "# Adapted from: https://github.com/black-forest-labs/flux\n\nimport torch\nimport torch.nn.functional as F\nfrom torch import Tensor, nn\nfrom typing import Optional, Callable, Tuple, Dict, List, Any, Union\n\nfrom ..helper import ExtraOptions\n\nfrom dataclasses import dataclass\nimport copy\n\nfrom .layers import (\n    DoubleStreamBlock,\n    EmbedND,\n    LastLayer,\n    MLPEmbedder,\n    SingleStreamBlock,\n    timestep_embedding,\n)\n\nfrom . import layers\n\n#from comfy.ldm.flux.layers import timestep_embedding\nfrom comfy.ldm.flux.model import Flux as Flux\n\nimport math\nimport einops\nfrom einops import rearrange, repeat\nimport comfy.ldm.common_dit\n\nfrom ..latents import tile_latent, untile_latent, gaussian_blur_2d, median_blur_2d\nfrom ..style_transfer import apply_scattersort_masked, apply_scattersort_tiled, adain_seq_inplace, adain_patchwise_row_batch_med, adain_patchwise_row_batch, StyleMMDiT_Model\n#from ..latents import interpolate_spd\n\n@dataclass\nclass FluxParams:\n    in_channels        : int\n    out_channels       : int\n    vec_in_dim         : int\n    context_in_dim     : int\n    hidden_size        : int\n    mlp_ratio          : float\n    num_heads          : int\n    depth              : int\n    depth_single_blocks: int\n    axes_dim           : list\n    theta              : int\n    patch_size         : int\n    qkv_bias           : bool\n    guidance_embed     : bool\n\nclass ReFlux(Flux):\n    def __init__(self, image_model=None, final_layer=True, dtype=None, device=None, operations=None, **kwargs):\n        super().__init__()\n        self.dtype         = dtype\n        self.timestep      = -1.0\n        self.threshold_inv = False\n        params             = FluxParams(**kwargs)\n        \n        self.params        = params #self.params FluxParams(in_channels=16, out_channels=16, vec_in_dim=768, context_in_dim=4096, hidden_size=3072, mlp_ratio=4.0, num_heads=24, depth=19, depth_single_blocks=38, axes_dim=[16, 56, 56], theta=10000, patch_size=2, qkv_bias=True, guidance_embed=False)\n        self.patch_size    = params.patch_size\n        self.in_channels   = params.in_channels  * params.patch_size * params.patch_size    # in_channels 64\n        self.out_channels  = params.out_channels * params.patch_size * params.patch_size    # out_channels 64\n        \n        if params.hidden_size % params.num_heads != 0:\n            raise ValueError(f\"Hidden size {params.hidden_size} must be divisible by num_heads {params.num_heads}\")\n        pe_dim = params.hidden_size // params.num_heads\n        if sum(params.axes_dim) != pe_dim:\n            raise ValueError(f\"Got {params.axes_dim} but expected positional dim {pe_dim}\")\n        \n        self.hidden_size   = params.hidden_size  # 3072\n        self.num_heads     = params.num_heads    # 24\n        self.pe_embedder   = EmbedND(dim=pe_dim, theta=params.theta, axes_dim=params.axes_dim)\n        \n        self.img_in        = operations.Linear(     self.in_channels, self.hidden_size, bias=True,                                                    dtype=dtype, device=device)   # in_features=  64, out_features=3072\n        self.txt_in        = operations.Linear(params.context_in_dim, self.hidden_size,                                                               dtype=dtype, device=device)   # in_features=4096, out_features=3072, bias=True\n\n        self.time_in       = MLPEmbedder(           in_dim=256, hidden_dim=self.hidden_size,                                                          
dtype=dtype, device=device, operations=operations)\n        self.vector_in     = MLPEmbedder(params.vec_in_dim,                self.hidden_size,                                                          dtype=dtype, device=device, operations=operations) # in_features=768, out_features=3072 (first layer) second layer 3072,3072\n        self.guidance_in   =(MLPEmbedder(           in_dim=256, hidden_dim=self.hidden_size,                                                          dtype=dtype, device=device, operations=operations) if params.guidance_embed else nn.Identity())\n\n        self.double_blocks = nn.ModuleList([DoubleStreamBlock(self.hidden_size, self.num_heads, mlp_ratio=params.mlp_ratio, qkv_bias=params.qkv_bias, dtype=dtype, device=device, operations=operations, idx=_) for _ in range(params.depth)])\n        self.single_blocks = nn.ModuleList([SingleStreamBlock(self.hidden_size, self.num_heads, mlp_ratio=params.mlp_ratio,                           dtype=dtype, device=device, operations=operations, idx=_) for _ in range(params.depth_single_blocks)])\n\n        if final_layer:\n            self.final_layer = layers.LastLayer(self.hidden_size, 1, self.out_channels,                                                                      dtype=dtype, device=device, operations=operations)\n\n\n    \n    \n    def forward_blocks(self,\n                        img      : Tensor,\n                        img_ids  : Tensor,\n                        txt      : Tensor,\n                        txt_ids  : Tensor,\n                        timesteps: Tensor,\n                        y        : Tensor,\n                        guidance : Tensor   = None,\n                        control             = None,\n                        update_cross_attn   = None,\n                        transformer_options = {},\n                        UNCOND : bool = False,\n                        SIGMA = None,\n                        StyleMMDiT_Model = None,\n                        RECON_MODE=False,\n                        ) -> Tensor:\n        \n        if img.ndim != 3 or txt.ndim != 3:\n            raise ValueError(\"Input img and txt tensors must have 3 dimensions.\")\n\n        # running on sequences img   img -> 1,4096,3072\n        img = self.img_in(img)    # 1,9216,64  == 768x192       # 1,9216,64   ==   1,16,128,256 + 1,16,64,64    # 1,8192,64 with uncond/cond   #:,:,64 -> :,:,3072\n        vec = self.time_in(timestep_embedding(timesteps, 256).to(img.dtype)) # 1 -> 1,3072\n        \n        if self.params.guidance_embed:\n            if guidance is None:\n                print(\"Guidance strength is none, not using distilled guidance.\")\n            else:\n                vec = vec + self.guidance_in(timestep_embedding(guidance, 256).to(img.dtype))\n\n        vec = vec + self.vector_in(y)  #y.shape=1,768  y==all 0s\n        \n        txt = self.txt_in(txt)\n\n        ids = torch.cat((txt_ids, img_ids), dim=1) # img_ids.shape=1,8192,3    txt_ids.shape=1,512,3    #ids.shape=1,8704,3\n        pe  = self.pe_embedder(ids)                 # pe.shape 1,1,8704,64,2,2\n        \n        \n        weight    = -1 * transformer_options.get(\"regional_conditioning_weight\", 0.0)\n        floor     = -1 * transformer_options.get(\"regional_conditioning_floor\",  0.0)\n        mask_zero = None\n        mask = None\n        \n        text_len = txt.shape[1] \n        \n        if not UNCOND and 'AttnMask' in transformer_options: \n            AttnMask = transformer_options['AttnMask']\n            mask = 
transformer_options['AttnMask'].attn_mask.mask.to('cuda')\n            if mask_zero is None:\n                mask_zero = torch.ones_like(mask)\n                img_len = transformer_options['AttnMask'].img_len\n                mask_zero[:text_len, :] = mask[:text_len, :]\n                mask_zero[:, :text_len] = mask[:, :text_len]\n            if weight == 0:\n                mask = None\n            \n        if UNCOND and 'AttnMask_neg' in transformer_options: \n            AttnMask = transformer_options['AttnMask_neg']\n            mask = transformer_options['AttnMask_neg'].attn_mask.mask.to('cuda')\n            if mask_zero is None:\n                mask_zero = torch.ones_like(mask)\n                img_len = transformer_options['AttnMask_neg'].img_len\n                mask_zero[:text_len, :] = mask[:text_len, :]\n                mask_zero[:, :text_len] = mask[:, :text_len]\n            if weight == 0:\n                mask = None\n            \n        elif UNCOND and 'AttnMask' in transformer_options:\n            AttnMask = transformer_options['AttnMask']\n            mask = transformer_options['AttnMask'].attn_mask.mask.to('cuda')\n            if mask_zero is None:\n                mask_zero = torch.ones_like(mask)\n                img_len = transformer_options['AttnMask'].img_len\n                mask_zero[:text_len, :] = mask[:text_len, :]\n                mask_zero[:, :text_len] = mask[:, :text_len]\n            if weight == 0:\n                mask = None\n\n        if mask is not None and not type(mask[0][0].item()) == bool:\n            mask = mask.to(img.dtype)\n        if mask_zero is not None and not type(mask_zero[0][0].item()) == bool:\n            mask_zero = mask_zero.to(img.dtype)\n\n        total_layers = len(self.double_blocks) + len(self.single_blocks)\n        \n        ca_idx = 0\n        for i, block in enumerate(self.double_blocks):\n\n            if   weight > 0 and mask is not None and     weight  <=      i/total_layers:\n                img, txt = block(img=img, txt=txt, vec=vec, pe=pe, mask=mask_zero, idx=i, update_cross_attn=update_cross_attn)\n                \n            elif (weight < 0 and mask is not None and abs(weight) <= (1 - i/total_layers)):\n                img_tmpZ, txt_tmpZ = img.clone(), txt.clone()\n                img_tmpZ, txt = block(img=img_tmpZ, txt=txt_tmpZ, vec=vec, pe=pe, mask=mask, idx=i, update_cross_attn=update_cross_attn)\n                img, txt_tmpZ = block(img=img     , txt=txt     , vec=vec, pe=pe, mask=mask_zero, idx=i, update_cross_attn=update_cross_attn)\n                \n            elif floor > 0 and mask is not None and     floor  >=      i/total_layers:\n                mask_tmp = mask.clone()\n                mask_tmp[text_len:, text_len:] = 1.0\n                img, txt = block(img=img, txt=txt, vec=vec, pe=pe, mask=mask_tmp, idx=i, update_cross_attn=update_cross_attn)\n                \n            elif floor < 0 and mask is not None and abs(floor) >= (1 - i/total_layers):\n                mask_tmp = mask.clone()\n                mask_tmp[text_len:, text_len:] = 1.0\n                img, txt = block(img=img, txt=txt, vec=vec, pe=pe, mask=mask_tmp, idx=i, update_cross_attn=update_cross_attn)\n\n            else:\n                img, txt = block(img=img, txt=txt, vec=vec, pe=pe, mask=mask, idx=i, update_cross_attn=update_cross_attn)\n\n\n            if control is not None: \n                control_i = control.get(\"input\")\n                if i < len(control_i):\n                    add = control_i[i]\n        
            if add is not None:\n                        img[:1] += add\n                        \n            if hasattr(self, \"pulid_data\"):\n                if self.pulid_data:\n                    if i % self.pulid_double_interval == 0:\n                        for _, node_data in self.pulid_data.items():\n                            if torch.any((node_data['sigma_start'] >= timesteps) & (timesteps >= node_data['sigma_end'])):\n                                img = img + node_data['weight'] * self.pulid_ca[ca_idx](node_data['embedding'], img)\n                        ca_idx += 1\n\n        img = torch.cat((txt, img), 1)   #first 256 is txt embed\n        for i, block in enumerate(self.single_blocks):\n\n            if   weight > 0 and mask is not None and     weight  <=      (i+len(self.double_blocks))/total_layers:\n                img = block(img, vec=vec, pe=pe, mask=mask_zero)\n                \n            elif weight < 0 and mask is not None and abs(weight) <= (1 - (i+len(self.double_blocks))/total_layers):\n                img = block(img, vec=vec, pe=pe, mask=mask_zero)\n                \n            elif floor > 0 and mask is not None and     floor  >=      (i+len(self.double_blocks))/total_layers:\n                mask_tmp = mask.clone()\n                mask_tmp[text_len:, text_len:] = 1.0\n                img = block(img, vec=vec, pe=pe, mask=mask_tmp)\n                \n            elif floor < 0 and mask is not None and abs(floor) >= (1 - (i+len(self.double_blocks))/total_layers):\n                mask_tmp = mask.clone()\n                mask_tmp[text_len:, text_len:] = 1.0\n                img = block(img, vec=vec, pe=pe, mask=mask_tmp)\n                \n            else:\n                img = block(img, vec=vec, pe=pe, mask=mask)\n\n\n\n            if control is not None: # Controlnet\n                control_o = control.get(\"output\")\n                if i < len(control_o):\n                    add = control_o[i]\n                    if add is not None:\n                        img[:1, txt.shape[1] :, ...] 
+= add\n                        \n            if hasattr(self, \"pulid_data\"):\n                # PuLID attention\n                if self.pulid_data:\n                    real_img, txt = img[:, txt.shape[1]:, ...], img[:, :txt.shape[1], ...]\n                    if i % self.pulid_single_interval == 0:\n                        # Will calculate influence of all nodes at once\n                        for _, node_data in self.pulid_data.items():\n                            if torch.any((node_data['sigma_start'] >= timesteps) & (timesteps >= node_data['sigma_end'])):\n                                real_img = real_img + node_data['weight'] * self.pulid_ca[ca_idx](node_data['embedding'], real_img)\n                        ca_idx += 1\n                    img = torch.cat((txt, real_img), 1)\n\n        img = img[:, txt.shape[1] :, ...]\n        img = self.final_layer(img, vec)  # (N, T, patch_size ** 2 * out_channels)     1,8192,3072 -> 1,8192,64 \n        return img\n    \n    def process_img(self, x, index=0, h_offset=0, w_offset=0):\n        bs, c, h, w = x.shape\n        patch_size = self.patch_size\n        x = comfy.ldm.common_dit.pad_to_patch_size(x, (patch_size, patch_size))\n\n        img = rearrange(x, \"b c (h ph) (w pw) -> b (h w) (c ph pw)\", ph=patch_size, pw=patch_size)\n        h_len = ((h + (patch_size // 2)) // patch_size)\n        w_len = ((w + (patch_size // 2)) // patch_size)\n\n        h_offset = ((h_offset + (patch_size // 2)) // patch_size)\n        w_offset = ((w_offset + (patch_size // 2)) // patch_size)\n\n        img_ids = torch.zeros((h_len, w_len, 3), device=x.device, dtype=x.dtype)\n        img_ids[:, :, 0] = img_ids[:, :, 1] + index\n        img_ids[:, :, 1] = img_ids[:, :, 1] + torch.linspace(h_offset, h_len - 1 + h_offset, steps=h_len, device=x.device, dtype=x.dtype).unsqueeze(1)\n        img_ids[:, :, 2] = img_ids[:, :, 2] + torch.linspace(w_offset, w_len - 1 + w_offset, steps=w_len, device=x.device, dtype=x.dtype).unsqueeze(0)\n        return img, repeat(img_ids, \"h w c -> b (h w) c\", b=bs)\n\n    \n    def _get_img_ids(self, x, bs, h_len, w_len, h_start, h_end, w_start, w_end):\n        img_ids          = torch.zeros(  (h_len,   w_len, 3),              device=x.device, dtype=x.dtype)\n        img_ids[..., 1] += torch.linspace(h_start, h_end - 1, steps=h_len, device=x.device, dtype=x.dtype)[:, None]\n        img_ids[..., 2] += torch.linspace(w_start, w_end - 1, steps=w_len, device=x.device, dtype=x.dtype)[None, :]\n        img_ids          = repeat(img_ids, \"h w c -> b (h w) c\", b=bs)\n        return img_ids\n\n    def forward(self,\n                x,\n                timestep,\n                context,\n                y,\n                guidance,\n                ref_latents=None, \n                control             = None,\n                transformer_options = {},\n                mask                = None,\n                **kwargs\n                ):\n        t = timestep\n        self.max_seq = (128 * 128) // (2 * 2)\n        x_orig      = x.clone()\n        b, c, h, w  = x.shape\n        h_len = ((h + (self.patch_size // 2)) // self.patch_size) # h_len 96\n        w_len = ((w + (self.patch_size // 2)) // self.patch_size) # w_len 96\n        img_len = h_len * w_len\n        img_slice = slice(-img_len, None) #slice(None, img_len)\n        txt_slice = slice(None, -img_len)\n        SIGMA = t[0].clone() #/ 1000\n        EO = transformer_options.get(\"ExtraOptions\", ExtraOptions(\"\"))\n        if EO is not None:\n            EO.mute = True\n\n  
      if EO(\"zero_heads\"):\n            HEADS = 0\n        else:\n            HEADS = 24\n\n        StyleMMDiT = transformer_options.get('StyleMMDiT', StyleMMDiT_Model())        \n        StyleMMDiT.set_len(h_len, w_len, img_slice, txt_slice, HEADS=HEADS)\n        StyleMMDiT.Retrojector = self.Retrojector if hasattr(self, \"Retrojector\") else None\n        transformer_options['StyleMMDiT'] = None\n        \n        x_tmp = transformer_options.get(\"x_tmp\")\n        if x_tmp is not None:\n            x_tmp = x_tmp.expand(x.shape[0], -1, -1, -1).clone()\n            img = comfy.ldm.common_dit.pad_to_patch_size(x_tmp, (self.patch_size, self.patch_size))\n        else:\n            img = comfy.ldm.common_dit.pad_to_patch_size(x, (self.patch_size, self.patch_size))\n        \n        y0_style, img_y0_style = None, None\n\n        img_orig, t_orig, y_orig, context_orig = clone_inputs(img, t, y, context)\n    \n        weight    = -1 * transformer_options.get(\"regional_conditioning_weight\", 0.0)\n        floor     = -1 * transformer_options.get(\"regional_conditioning_floor\",  0.0)\n        update_cross_attn = transformer_options.get(\"update_cross_attn\")\n    \n        z_ = transformer_options.get(\"z_\")   # initial noise and/or image+noise from start of rk_sampler_beta() \n        rk_row = transformer_options.get(\"row\") # for \"smart noise\"\n        if z_ is not None:\n            x_init = z_[rk_row].to(x)\n        elif 'x_init' in transformer_options:\n            x_init = transformer_options.get('x_init').to(x)\n\n        # recon loop to extract exact noise pred for scattersort guide assembly\n        RECON_MODE = StyleMMDiT.noise_mode == \"recon\"\n        recon_iterations = 2 if StyleMMDiT.noise_mode == \"recon\" else 1\n        for recon_iter in range(recon_iterations):\n            y0_style = StyleMMDiT.guides\n            y0_style_active = True if type(y0_style) == torch.Tensor else False\n            \n            RECON_MODE = True     if StyleMMDiT.noise_mode == \"recon\" and recon_iter == 0     else False\n            \n            if StyleMMDiT.noise_mode == \"recon\" and recon_iter == 1:\n                x_recon = x_tmp if x_tmp is not None else x_orig\n                noise_prediction = x_recon + (1-SIGMA.to(x_recon)) * eps.to(x_recon)\n                denoised = x_recon - SIGMA.to(x_recon) * eps.to(x_recon)\n                \n                denoised = StyleMMDiT.apply_recon_lure(denoised, y0_style)\n\n                new_x = (1-SIGMA.to(denoised)) * denoised + SIGMA.to(denoised) * noise_prediction\n                img_orig = img = comfy.ldm.common_dit.pad_to_patch_size(new_x, (self.patch_size, self.patch_size))\n                \n                x_init = noise_prediction\n            elif StyleMMDiT.noise_mode == \"bonanza\":\n                x_init = torch.randn_like(x_init)\n\n            if y0_style_active:\n                if y0_style.sum() == 0.0 and y0_style.std() == 0.0:\n                    y0_style = img_orig.clone()\n                else:\n                    SIGMA_ADAIN         = (SIGMA * EO(\"eps_adain_sigma_factor\", 1.0)).to(y0_style)\n                    y0_style_noised     = (1-SIGMA_ADAIN) * y0_style + SIGMA_ADAIN * x_init[0:1].to(y0_style)   #always only use first batch of noise to avoid broadcasting\n                    img_y0_style_orig   = comfy.ldm.common_dit.pad_to_patch_size(y0_style_noised, (self.patch_size, self.patch_size))\n\n            mask_zero = None\n            \n            out_list = []\n            for cond_iter in 
range(len(transformer_options['cond_or_uncond'])):\n                UNCOND = transformer_options['cond_or_uncond'][cond_iter] == 1\n                \n                if update_cross_attn is not None:\n                    update_cross_attn['UNCOND'] = UNCOND\n\n                bsz_style = y0_style.shape[0] if y0_style_active else 0\n                bsz       = 1 if RECON_MODE else bsz_style + 1\n\n                img, t, y, context = clone_inputs(img_orig, t_orig, y_orig, context_orig, index=cond_iter)\n                \n                mask = None\n                if not UNCOND and 'AttnMask' in transformer_options: # and weight != 0:\n                    AttnMask = transformer_options['AttnMask']\n                    mask = transformer_options['AttnMask'].attn_mask.mask.to(x.device)\n                    if mask_zero is None:\n                        mask_zero = torch.ones_like(mask)\n                        mask_zero[txt_slice, txt_slice] = mask[txt_slice, txt_slice]\n\n                    context = transformer_options['RegContext'].context.to(context.dtype).to(context.device)\n                    if weight == 0:\n                        mask = None\n\n                if UNCOND and 'AttnMask_neg' in transformer_options: # and weight != 0:\n                    AttnMask = transformer_options['AttnMask_neg']\n                    mask = transformer_options['AttnMask_neg'].attn_mask.mask.to(x.device)\n                    if mask_zero is None:\n                        mask_zero = torch.ones_like(mask)\n                        mask_zero[txt_slice, txt_slice] = mask[txt_slice, txt_slice]\n\n                    context = transformer_options['RegContext_neg'].context.to(context.dtype).to(context.device)\n                    if weight == 0:\n                        mask = None\n\n                elif UNCOND and 'AttnMask' in transformer_options:\n                    AttnMask = transformer_options['AttnMask']\n                    mask = transformer_options['AttnMask'].attn_mask.mask.to(x.device)\n                    \n                    if mask_zero is None:\n                        mask_zero = torch.ones_like(mask)\n                        mask_zero[txt_slice, txt_slice] = mask[txt_slice, txt_slice]\n                    if weight == 0:                                                                             # ADDED 5/23/2025\n                        context = transformer_options['RegContext'].context.to(context.dtype).to(context.device)  # ADDED 5/26/2025 14:53\n                        mask = None\n                    else:\n                        A       = context\n                        B       = transformer_options['RegContext'].context\n                        context = A.repeat(1,    (B.shape[1] // A.shape[1]) + 1, 1)[:,   :B.shape[1], :]\n\n\n                if y0_style_active and not RECON_MODE:\n                    if mask is None:\n                        context, y, _ = StyleMMDiT.apply_style_conditioning(\n                            UNCOND = UNCOND,\n                            base_context       = context,\n                            base_y             = y,\n                            base_llama3        = None,\n                        )\n                    else:\n                        
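# With a regional attention mask active, the style guides reuse the base\n                        # conditioning: context/y are simply tiled across the style batch rather\n                        # than routed through apply_style_conditioning().\n                        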
context = context.repeat(bsz_style + 1, 1, 1)\n                        y = y.repeat(bsz_style + 1, 1)                   if y      is not None else None\n                    img_y0_style = img_y0_style_orig.clone()\n\n                if mask is not None and mask.dtype != torch.bool:\n                    mask = mask.to(x.dtype)\n                if mask_zero is not None and mask_zero.dtype != torch.bool:\n                    mask_zero = mask_zero.to(x.dtype)\n\n                clip = self.time_in(timestep_embedding(t, 256).to(x.dtype)) # 1 -> 1,3072\n                if self.params.guidance_embed:\n                    if guidance is None:\n                        print(\"Guidance strength is none, not using distilled guidance.\")\n                    else:\n                        clip = clip + self.guidance_in(timestep_embedding(guidance, 256).to(x.dtype))\n                clip = clip + self.vector_in(y[:,:self.params.vec_in_dim])  #y.shape=1,768  y==all 0s\n                clip = clip.to(x)\n        \n                img_in_dtype = self.img_in.weight.data.dtype\n                if img_in_dtype not in {torch.bfloat16, torch.float16, torch.float32, torch.float64}:\n                    img_in_dtype = x.dtype\n                \n                if ref_latents is not None:\n                    h, w = 0, 0\n                    img, img_ids = self.process_img(x)   # process the base latent once, then append each reference latent\n                    for ref in ref_latents:\n                        h_offset = 0\n                        w_offset = 0\n                        if ref.shape[-2] + h > ref.shape[-1] + w:\n                            w_offset = w\n                        else:\n                            h_offset = h\n\n                        kontext, kontext_ids = self.process_img(ref, index=1, h_offset=h_offset, w_offset=w_offset)\n                        #kontext = self.img_in(kontext.to(img_in_dtype))\n                        img = torch.cat([img, kontext], dim=1)\n                        img_ids = torch.cat([img_ids, kontext_ids], dim=1)\n                        h = max(h, ref.shape[-2] + h_offset)\n                        w = max(w, ref.shape[-1] + w_offset)\n                    img = self.img_in(img.to(img_in_dtype))\n                    \n                    img_slice = slice(-2*img_len, None)   # note: assumes a single reference latent of the same size\n                    StyleMMDiT.KONTEXT = 1\n                    for style_block in StyleMMDiT.double_blocks + StyleMMDiT.single_blocks:\n                        style_block.KONTEXT = 1\n                        for style_block_imgtxt in [style_block.img, getattr(style_block, \"txt\")]:\n                            style_block_imgtxt.KONTEXT = 1\n                            style_block_imgtxt.ATTN.KONTEXT = 1\n                    StyleMMDiT.datashock_ref = ref_latents[0]\n                else:\n                    \n                    img = rearrange(x, \"b c (h ph) (w pw) -> b (h w) (c ph pw)\", ph=self.patch_size, pw=self.patch_size)\n                    img = self.img_in(img.to(img_in_dtype))\n                    img_ids = self._get_img_ids(img, bsz, h_len, w_len, 0, h_len, 0, w_len)\n\n                if y0_style_active and not RECON_MODE:\n                    img_y0_style = rearrange(img_y0_style_orig, \"b c (h ph) (w pw) -> b (h w) (c ph pw)\", ph=self.patch_size, pw=self.patch_size)\n                    img_y0_style = self.img_in(img_y0_style.to(img_in_dtype))  # hidden_states 1,4032,2560         for 1024x1024: -> 1,4096,2560      ,64 -> ,2560 (x40)\n                    if ref_latents is not None:\n                        
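# Kontext case: append the reference-latent tokens to each style-batch entry\n                        # so the style stream matches the base stream's [img, kontext] sequence length.\n                        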
img_kontext  = self.img_in(kontext.to(img_in_dtype))\n                        #img_base = rearrange(x, \"b c (h ph) (w pw) -> b (h w) (c ph pw)\", ph=self.patch_size, pw=self.patch_size)\n                        #img_base = self.img_in(img_base.to(img_in_dtype))\n                        #img_ids = self._get_img_ids(img, bsz, h_len, w_len, 0, h_len, 0, w_len)\n                        img_ids      = img_ids     .repeat(bsz,1,1)\n                        #img_y0_style = img_y0_style.repeat(1,bsz,1) # torch.cat([img, img_y0_style], dim=0)\n                        img_y0_style = torch.cat([img_y0_style, img_kontext.repeat(bsz-1,1,1)], dim=1)\n                        \n                        StyleMMDiT.KONTEXT = 2\n                        for style_block in StyleMMDiT.double_blocks + StyleMMDiT.single_blocks:\n                            style_block.KONTEXT = 2\n                            for style_block_imgtxt in [style_block.img, getattr(style_block, \"txt\")]:\n                                style_block_imgtxt.KONTEXT = 2\n                                style_block_imgtxt.ATTN.KONTEXT = 2\n                        StyleMMDiT.datashock_ref = None\n                        \n                    img = torch.cat([img, img_y0_style], dim=0)\n\n\n                # txt_ids -> 1,414,3\n                txt_ids = torch.zeros((bsz, context.shape[-2], 3), device=img.device, dtype=x.dtype) \n                ids     = torch.cat((txt_ids, img_ids), dim=-2)   # ids -> 1,4446,3       # flipped from hidream\n                rope    = self.pe_embedder(ids)                  # rope -> 1, 4446, 1, 64, 2, 2\n\n                txt_init = self.txt_in(context)\n                txt_init_len = txt_init.shape[-2]                                       # 271\n\n                img = StyleMMDiT(img, \"proj_in\")\n                \n                img = img.to(x) if img is not None else None\n                \n                total_layers = len(self.double_blocks) + len(self.single_blocks)\n                \n                # DOUBLE STREAM\n                ca_idx = 0\n                for bid, (block, style_block) in enumerate(zip(self.double_blocks, StyleMMDiT.double_blocks)):\n                    txt = txt_init\n                    if   weight > 0 and mask is not None and     weight  <      bid/total_layers:\n                        img, txt_init = block(img, txt, clip, rope, mask_zero, style_block=style_block)\n                        \n                    elif (weight < 0 and mask is not None and abs(weight) < (1 - bid/total_layers)):\n                        img_tmpZ, txt_tmpZ = img.clone(), txt.clone()\n\n                        # more efficient than the commented lines below being used instead in the loop?\n                        img_tmpZ, txt_init = block(img_tmpZ, txt_tmpZ, clip, rope, mask, style_block=style_block)\n                        img     , txt_tmpZ = block(img     , txt     , clip, rope, mask_zero, style_block=style_block)\n                        \n                    elif floor > 0 and mask is not None and     floor  >      bid/total_layers:\n                        mask_tmp = mask.clone()\n                        mask_tmp[img_slice,img_slice] = 1.0\n                        img, txt_init = block(img, txt, clip, rope, mask_tmp, style_block=style_block)\n                        \n                    elif floor < 0 and mask is not None and abs(floor) > (1 - bid/total_layers):\n                        mask_tmp = mask.clone()\n                        mask_tmp[img_slice,img_slice] = 1.0\n                     
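   # negative floor: img-img attention is fully unmasked for the later blocks,\n                        # while the regional txt masking stays in effect\n                     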
   img, txt_init = block(img, txt, clip, rope, mask_tmp, style_block=style_block)\n                        \n                    elif update_cross_attn is not None and update_cross_attn['skip_cross_attn']:\n                        img, txt_init = block(img, txt, clip, rope, mask, update_cross_attn=update_cross_attn)\n                        \n                    else:\n                        img, txt_init = block(img, txt, clip, rope, mask, update_cross_attn=update_cross_attn, style_block=style_block)\n\n                    if control is not None: \n                        control_i = control.get(\"input\")\n                        if bid < len(control_i):\n                            add = control_i[bid]\n                            if add is not None:\n                                img[:1] += add\n                    \n                    if hasattr(self, \"pulid_data\"):\n                        if self.pulid_data:\n                            if bid % self.pulid_double_interval == 0:\n                                for _, node_data in self.pulid_data.items():\n                                    if torch.any((node_data['sigma_start'] >= timestep) & (timestep >= node_data['sigma_end'])):\n                                        img = img + node_data['weight'] * self.pulid_ca[ca_idx](node_data['embedding'], img)\n                                ca_idx += 1\n\n                # END DOUBLE STREAM\n\n                #img = img[0:1]\n                #txt_init = txt_init[0:1]\n                img       = torch.cat([txt_init, img], dim=-2)   # 4032 + 271 -> 4303     # txt embed from double stream block   # flipped from hidream\n\n                double_layers = len(self.double_blocks)\n\n                # SINGLE STREAM\n                for bid, (block, style_block) in enumerate(zip(self.single_blocks, StyleMMDiT.single_blocks)):\n\n                    if   mask is not None and ((weight > 0 and     weight  <      (bid+double_layers)/total_layers) or\n                                               (weight < 0 and abs(weight) < (1 - (bid+double_layers)/total_layers))):\n                        img = block(img, clip, rope, mask_zero, style_block=style_block)\n                    \n                    elif mask is not None and ((floor > 0 and     floor  >      (bid+double_layers)/total_layers) or\n                                               (floor < 0 and abs(floor) > (1 - (bid+double_layers)/total_layers))):\n                        mask_tmp = mask.clone()\n                        mask_tmp[img_slice,img_slice] = 1.0\n                        img = block(img, clip, rope, mask_tmp, style_block=style_block)\n                    \n                    else:\n                        img = block(img, clip, rope, mask, style_block=style_block)\n                    \n                    if control is not None: # Controlnet\n                        control_o = control.get(\"output\")\n                        if bid < len(control_o):\n                            add = control_o[bid]\n                            if add is not None:\n                                img[:1, txt_slice, ...] 
+= add\n                                \n                    if hasattr(self, \"pulid_data\"):\n                        # PuLID attention\n                        if self.pulid_data:\n                            real_img, txt = img[:, img_slice, ...], img[:, txt_slice, ...]\n                            if bid % self.pulid_single_interval == 0:\n                                # Will calculate influence of all nodes at once\n                                for _, node_data in self.pulid_data.items():\n                                    if torch.any((node_data['sigma_start'] >= timestep) & (timestep >= node_data['sigma_end'])):\n                                        real_img = real_img + node_data['weight'] * self.pulid_ca[ca_idx](node_data['embedding'], real_img)\n                                ca_idx += 1\n                            img = torch.cat((txt, real_img), 1)\n                            \n                # END SINGLE STREAM\n                \n                img = img[..., img_slice, :]\n                #img = self.final_layer(img, clip)   # 4096,2560 -> 4096,64\n                shift, scale = self.final_layer.adaLN_modulation(clip).chunk(2,dim=1)\n                img = (1 + scale[:, None, :]) * self.final_layer.norm_final(img) + shift[:, None, :]\n                \n                img = StyleMMDiT(img, \"proj_out\")\n\n                if y0_style_active and not RECON_MODE:\n                    img = img[0:1]\n                    #img = img[1:2]\n                \n                #img = self.final_layer.linear(img.to(self.final_layer.linear.weight.data))\n                img = self.final_layer.linear(img)\n\n                #img = self.unpatchify(img, img_sizes)\n                img = img[:,:img_len]  # accommodate kontext: keep only the base image tokens\n                img = rearrange(img, \"b (h w) (c ph pw) -> b c (h ph) (w pw)\", h=h_len, w=w_len, ph=self.patch_size, pw=self.patch_size)\n                out_list.append(img)\n                \n            output = torch.cat(out_list, dim=0)\n            eps = output[:, :, :x_orig.shape[-2], :x_orig.shape[-1]]   # h/w may have been shadowed by the ref_latents loop above\n            \n            if recon_iter == 1:\n                denoised = new_x - SIGMA.to(new_x) * eps.to(new_x)\n                if x_tmp is not None:\n                    eps = (x_tmp - denoised.to(x_tmp)) / SIGMA.to(x_tmp)\n                else:\n                    eps = (x_orig - denoised.to(x_orig)) / SIGMA.to(x_orig)\n        # end recon loop\n\n        freqsep_lowpass_method = transformer_options.get(\"freqsep_lowpass_method\")\n        freqsep_sigma          = transformer_options.get(\"freqsep_sigma\")\n        freqsep_kernel_size    = transformer_options.get(\"freqsep_kernel_size\")\n        freqsep_inner_kernel_size    = transformer_options.get(\"freqsep_inner_kernel_size\")\n        freqsep_stride    = transformer_options.get(\"freqsep_stride\")\n        \n        freqsep_lowpass_weight = transformer_options.get(\"freqsep_lowpass_weight\")\n        freqsep_highpass_weight= transformer_options.get(\"freqsep_highpass_weight\")\n        freqsep_mask           = transformer_options.get(\"freqsep_mask\")\n\n        y0_style_pos = transformer_options.get(\"y0_style_pos\")\n        y0_style_neg = transformer_options.get(\"y0_style_neg\")\n        \n        if self.style_dtype is None:\n            self.style_dtype = torch.float32\n        dtype = self.style_dtype\n        \n        if y0_style_pos is not None:\n            y0_style_pos_weight    = 
transformer_options.get(\"y0_style_pos_weight\")\n            y0_style_pos_synweight = transformer_options.get(\"y0_style_pos_synweight\")\n            y0_style_pos_synweight *= y0_style_pos_weight\n            y0_style_pos_mask = transformer_options.get(\"y0_style_pos_mask\")\n            y0_style_pos_mask_edge = transformer_options.get(\"y0_style_pos_mask_edge\")\n\n            y0_style_pos = y0_style_pos.to(dtype)\n            x   = x_orig.to(dtype)\n            eps = eps.to(dtype)\n            eps_orig = eps.clone()\n            \n            sigma = SIGMA #t_orig[0].to(torch.float32) / 1000\n            denoised = x - sigma * eps\n            \n            denoised_embed = self.Retrojector.embed(denoised)\n            y0_adain_embed = self.Retrojector.embed(y0_style_pos)\n            \n            if transformer_options['y0_style_method'] == \"scattersort\":\n                tile_h, tile_w = transformer_options.get('y0_style_tile_height'), transformer_options.get('y0_style_tile_width')\n                pad = transformer_options.get('y0_style_tile_padding')\n                if pad is not None and tile_h is not None and tile_w is not None:\n                    \n                    denoised_spatial = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_spatial = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    \n                    if EO(\"scattersort_median_LP\"):\n                        denoised_spatial_LP = median_blur_2d(denoised_spatial, kernel_size=EO(\"scattersort_median_LP\",7))\n                        y0_adain_spatial_LP = median_blur_2d(y0_adain_spatial, kernel_size=EO(\"scattersort_median_LP\",7))\n                        \n                        denoised_spatial_HP = denoised_spatial - denoised_spatial_LP\n                        y0_adain_spatial_HP = y0_adain_spatial - y0_adain_spatial_LP\n                        \n                        denoised_spatial_LP = apply_scattersort_tiled(denoised_spatial_LP, y0_adain_spatial_LP, tile_h, tile_w, pad)\n                        \n                        denoised_spatial = denoised_spatial_LP + denoised_spatial_HP\n                        denoised_embed = rearrange(denoised_spatial, \"b c h w -> b (h w) c\")\n                    else:\n                        denoised_spatial = apply_scattersort_tiled(denoised_spatial, y0_adain_spatial, tile_h, tile_w, pad)\n                    \n                    denoised_embed = rearrange(denoised_spatial, \"b c h w -> b (h w) c\")\n                    \n                else:\n                    denoised_embed = apply_scattersort_masked(denoised_embed, y0_adain_embed, y0_style_pos_mask, y0_style_pos_mask_edge, h_len, w_len)\n\n\n\n            elif transformer_options['y0_style_method'] == \"AdaIN\":\n                if freqsep_mask is not None:\n                    freqsep_mask = freqsep_mask.view(1, 1, *freqsep_mask.shape[-2:]).float()\n                    freqsep_mask = F.interpolate(freqsep_mask.float(), size=(h_len, w_len), mode='nearest-exact')\n                \n                if hasattr(self, \"adain_tile\"):\n                    tile_h, tile_w = self.adain_tile\n                    \n                    denoised_pretile = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_pretile = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    \n                    if self.adain_flag:\n                        h_off = 
tile_h // 2\n                        w_off = tile_w // 2\n                        denoised_pretile = denoised_pretile[:,:,h_off:-h_off, w_off:-w_off]\n                        self.adain_flag = False\n                    else:\n                        h_off = 0\n                        w_off = 0\n                        self.adain_flag = True\n                    \n                    tiles,    orig_shape, grid, strides = tile_latent(denoised_pretile, tile_size=(tile_h,tile_w))\n                    y0_tiles, orig_shape, grid, strides = tile_latent(y0_adain_pretile, tile_size=(tile_h,tile_w))\n                    \n                    tiles_out = []\n                    for i in range(tiles.shape[0]):\n                        tile = tiles[i].unsqueeze(0)\n                        y0_tile = y0_tiles[i].unsqueeze(0)\n                        \n                        tile    = rearrange(tile,    \"b c h w -> b (h w) c\", h=tile_h, w=tile_w)\n                        y0_tile = rearrange(y0_tile, \"b c h w -> b (h w) c\", h=tile_h, w=tile_w)\n                        \n                        tile = adain_seq_inplace(tile, y0_tile)\n                        tiles_out.append(rearrange(tile, \"b (h w) c -> b c h w\", h=tile_h, w=tile_w))\n                    \n                    tiles_out_tensor = torch.cat(tiles_out, dim=0)\n                    tiles_out_tensor = untile_latent(tiles_out_tensor, orig_shape, grid, strides)\n\n                    if h_off == 0:\n                        denoised_pretile = tiles_out_tensor\n                    else:\n                        denoised_pretile[:,:,h_off:-h_off, w_off:-w_off] = tiles_out_tensor\n                    denoised_embed = rearrange(denoised_pretile, \"b c h w -> b (h w) c\", h=h_len, w=w_len)\n\n                elif freqsep_lowpass_method is not None and freqsep_lowpass_method.endswith(\"pw\"): #EO(\"adain_pw\"):\n\n                    denoised_spatial = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_spatial = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n\n                    if   freqsep_lowpass_method == \"median_pw\":\n                        denoised_spatial_new = adain_patchwise_row_batch_med(denoised_spatial.clone(), y0_adain_spatial.clone().repeat(denoised_spatial.shape[0],1,1,1), sigma=freqsep_sigma, kernel_size=freqsep_kernel_size, use_median_blur=True, lowpass_weight=freqsep_lowpass_weight, highpass_weight=freqsep_highpass_weight)\n                    elif freqsep_lowpass_method == \"gaussian_pw\": \n                        denoised_spatial_new = adain_patchwise_row_batch(denoised_spatial.clone(), y0_adain_spatial.clone().repeat(denoised_spatial.shape[0],1,1,1), sigma=freqsep_sigma, kernel_size=freqsep_kernel_size)\n                    \n                    denoised_embed = rearrange(denoised_spatial_new, \"b c h w -> b (h w) c\", h=h_len, w=w_len)\n\n                elif freqsep_lowpass_method is not None: \n                    denoised_spatial = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_spatial = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    \n                    if   freqsep_lowpass_method == \"median\":\n                        denoised_spatial_LP = median_blur_2d(denoised_spatial, kernel_size=freqsep_kernel_size)\n                        y0_adain_spatial_LP = median_blur_2d(y0_adain_spatial, kernel_size=freqsep_kernel_size)\n                    elif 
freqsep_lowpass_method == \"gaussian\":\n                        denoised_spatial_LP = gaussian_blur_2d(denoised_spatial, sigma=freqsep_sigma, kernel_size=freqsep_kernel_size)\n                        y0_adain_spatial_LP = gaussian_blur_2d(y0_adain_spatial, sigma=freqsep_sigma, kernel_size=freqsep_kernel_size)\n                    \n                    denoised_spatial_HP = denoised_spatial - denoised_spatial_LP\n                    \n                    if EO(\"adain_fs_uhp\"):\n                        y0_adain_spatial_HP = y0_adain_spatial - y0_adain_spatial_LP\n                        \n                        denoised_spatial_ULP = gaussian_blur_2d(denoised_spatial, sigma=EO(\"adain_fs_uhp_sigma\", 1.0), kernel_size=EO(\"adain_fs_uhp_kernel_size\", 3))\n                        y0_adain_spatial_ULP = gaussian_blur_2d(y0_adain_spatial, sigma=EO(\"adain_fs_uhp_sigma\", 1.0), kernel_size=EO(\"adain_fs_uhp_kernel_size\", 3))\n                        \n                        denoised_spatial_UHP = denoised_spatial_HP  - denoised_spatial_ULP\n                        y0_adain_spatial_UHP = y0_adain_spatial_HP  - y0_adain_spatial_ULP\n                        \n                        #denoised_spatial_HP  = y0_adain_spatial_ULP + denoised_spatial_UHP\n                        denoised_spatial_HP  = denoised_spatial_ULP + y0_adain_spatial_UHP\n                    \n                    denoised_spatial_new = freqsep_lowpass_weight * y0_adain_spatial_LP + freqsep_highpass_weight * denoised_spatial_HP\n                    denoised_embed = rearrange(denoised_spatial_new, \"b c h w -> b (h w) c\", h=h_len, w=w_len)\n\n                else:\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    \n                for adain_iter in range(EO(\"style_iter\", 0)):\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    denoised_embed = self.Retrojector.embed(self.Retrojector.unembed(denoised_embed))\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    \n            elif transformer_options['y0_style_method'] == \"WCT\":\n                self.StyleWCT.set(y0_adain_embed)\n                denoised_embed = self.StyleWCT.get(denoised_embed)\n                \n                if transformer_options.get('y0_standard_guide') is not None:\n                    y0_standard_guide = transformer_options.get('y0_standard_guide')\n                    \n                    y0_standard_guide_embed = self.Retrojector.embed(y0_standard_guide)\n                    f_cs = self.StyleWCT.get(y0_standard_guide_embed)\n                    self.y0_standard_guide = self.Retrojector.unembed(f_cs)\n\n                if transformer_options.get('y0_inv_standard_guide') is not None:\n                    y0_inv_standard_guide = transformer_options.get('y0_inv_standard_guide')\n\n                    y0_inv_standard_guide_embed = self.Retrojector.embed(y0_inv_standard_guide)\n                    f_cs = self.StyleWCT.get(y0_inv_standard_guide_embed)\n                    self.y0_inv_standard_guide = self.Retrojector.unembed(f_cs)\n\n            elif transformer_options['y0_style_method'] == \"WCT2\":\n                self.WaveletStyleWCT.set(y0_adain_embed, h_len, w_len)\n                denoised_embed = self.WaveletStyleWCT.get(denoised_embed, h_len, w_len)\n                \n                if transformer_options.get('y0_standard_guide') is not None:\n                    
y0_standard_guide = transformer_options.get('y0_standard_guide')\n                    \n                    y0_standard_guide_embed = self.Retrojector.embed(y0_standard_guide)\n                    f_cs = self.WaveletStyleWCT.get(y0_standard_guide_embed, h_len, w_len)\n                    self.y0_standard_guide = self.Retrojector.unembed(f_cs)\n\n                if transformer_options.get('y0_inv_standard_guide') is not None:\n                    y0_inv_standard_guide = transformer_options.get('y0_inv_standard_guide')\n\n                    y0_inv_standard_guide_embed = self.Retrojector.embed(y0_inv_standard_guide)\n                    f_cs = self.WaveletStyleWCT.get(y0_inv_standard_guide_embed, h_len, w_len)\n                    self.y0_inv_standard_guide = self.Retrojector.unembed(f_cs)\n\n            denoised_approx = self.Retrojector.unembed(denoised_embed)\n            \n            eps = (x - denoised_approx) / sigma\n\n            if not UNCOND:\n                if eps.shape[0] == 2:\n                    eps[1] = eps_orig[1] + y0_style_pos_weight * (eps[1] - eps_orig[1])\n                    eps[0] = eps_orig[0] + y0_style_pos_synweight * (eps[0] - eps_orig[0])\n                else:\n                    eps[0] = eps_orig[0] + y0_style_pos_weight * (eps[0] - eps_orig[0])\n            elif eps.shape[0] == 1 and UNCOND:\n                eps[0] = eps_orig[0] + y0_style_pos_synweight * (eps[0] - eps_orig[0])\n            \n            #eps = eps.float()\n        \n        if y0_style_neg is not None:\n            y0_style_neg_weight    = transformer_options.get(\"y0_style_neg_weight\")\n            y0_style_neg_synweight = transformer_options.get(\"y0_style_neg_synweight\")\n            y0_style_neg_synweight *= y0_style_neg_weight\n            y0_style_neg_mask = transformer_options.get(\"y0_style_neg_mask\")\n            y0_style_neg_mask_edge = transformer_options.get(\"y0_style_neg_mask_edge\")\n            \n            y0_style_neg = y0_style_neg.to(dtype)\n            x   = x_orig.to(dtype)\n            eps = eps.to(dtype)\n            eps_orig = eps.clone()\n            \n            sigma = SIGMA #t_orig[0].to(torch.float32) / 1000\n            denoised = x - sigma * eps\n\n            denoised_embed = self.Retrojector.embed(denoised)\n            y0_adain_embed = self.Retrojector.embed(y0_style_neg)\n            \n            if transformer_options['y0_style_method'] == \"scattersort\":\n                tile_h, tile_w = transformer_options.get('y0_style_tile_height'), transformer_options.get('y0_style_tile_width')\n                pad = transformer_options.get('y0_style_tile_padding')\n                if pad is not None and tile_h is not None and tile_w is not None:\n                    \n                    denoised_spatial = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_spatial = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    \n                    denoised_spatial = apply_scattersort_tiled(denoised_spatial, y0_adain_spatial, tile_h, tile_w, pad)\n                    \n                    denoised_embed = rearrange(denoised_spatial, \"b c h w -> b (h w) c\")\n\n                else:\n                    denoised_embed = apply_scattersort_masked(denoised_embed, y0_adain_embed, y0_style_neg_mask, y0_style_neg_mask_edge, h_len, w_len)\n            \n            \n            elif transformer_options['y0_style_method'] == \"AdaIN\":\n                denoised_embed = 
adain_seq_inplace(denoised_embed, y0_adain_embed)\n                for adain_iter in range(EO(\"style_iter\", 0)):\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    denoised_embed = self.Retrojector.embed(self.Retrojector.unembed(denoised_embed))\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    \n            elif transformer_options['y0_style_method'] == \"WCT\":\n                self.StyleWCT.set(y0_adain_embed)\n                denoised_embed = self.StyleWCT.get(denoised_embed)\n\n            elif transformer_options['y0_style_method'] == \"WCT2\":\n                self.WaveletStyleWCT.set(y0_adain_embed, h_len, w_len)\n                denoised_embed = self.WaveletStyleWCT.get(denoised_embed, h_len, w_len)\n\n            denoised_approx = self.Retrojector.unembed(denoised_embed)\n\n            eps = (x - denoised_approx) / sigma   # recompute before both branches, mirroring the y0_style_pos path\n\n            if UNCOND:\n                eps[0] = eps_orig[0] + y0_style_neg_weight * (eps[0] - eps_orig[0])\n                if eps.shape[0] == 2:\n                    eps[1] = eps_orig[1] + y0_style_neg_synweight * (eps[1] - eps_orig[1])\n            elif eps.shape[0] == 1 and not UNCOND:\n                eps[0] = eps_orig[0] + y0_style_neg_synweight * (eps[0] - eps_orig[0])\n            \n            #eps = eps.float()\n        if EO(\"model_eps_out\"):\n            self.eps_out = eps.clone()\n        return eps\n    \n\n    def expand_timesteps(self, t, batch_size, device):\n        if not torch.is_tensor(t):\n            is_mps = device.type == \"mps\"\n            if isinstance(t, float):\n                dtype = torch.float32 if is_mps else torch.float64\n            else:\n                dtype = torch.int32   if is_mps else torch.int64\n            t = torch.tensor([t], dtype=dtype, device=device)  # torch.Tensor() does not accept dtype/device kwargs\n        elif len(t.shape) == 0:\n            t = t[None].to(device)\n        # broadcast to batch dimension in a way that's compatible with ONNX/Core ML\n        t = t.expand(batch_size)\n        return t\n\ndef clone_inputs(*args, index: int=None):\n\n    if index is None:\n        return tuple(x.clone() for x in args)\n    else:\n        return tuple(x[index].unsqueeze(0).clone() for x in args)\n\n\n\n"
  },
  {
    "path": "flux/redux.py",
    "content": "import torch\nimport comfy.ops\nimport torch.nn\nimport torch.nn.functional as F\n\nops = comfy.ops.manual_cast\n\nclass ReReduxImageEncoder(torch.nn.Module):\n    def __init__(\n        self,\n        redux_dim: int = 1152,\n        txt_in_features: int = 4096,\n        device=None,\n        dtype=None,\n    ) -> None:\n        super().__init__()\n\n        self.redux_dim = redux_dim\n        self.device = device\n        self.dtype = dtype\n        \n        self.style_dtype = None\n\n        self.redux_up = ops.Linear(redux_dim, txt_in_features * 3, dtype=dtype)\n        self.redux_down = ops.Linear(txt_in_features * 3, txt_in_features, dtype=dtype)\n\n    def forward(self, sigclip_embeds) -> torch.Tensor:\n        projected_x = self.redux_down(torch.nn.functional.silu(self.redux_up(sigclip_embeds)))\n        return projected_x\n    \n    def feature_match(self, cond, clip_vision_output, mode=\"WCT\"):\n        sigclip_embeds = clip_vision_output.last_hidden_state\n        dense_embed = torch.nn.functional.silu(self.redux_up(sigclip_embeds))\n        t_sqrt = int(dense_embed.shape[-2] ** 0.5)\n        dense_embed_sq = dense_embed.view(dense_embed.shape[-3], t_sqrt, t_sqrt, dense_embed.shape[-1])\n        \n        t_cond_sqrt = int(cond[0][0].shape[-2] ** 0.5) \n        dense_embed256 = F.interpolate(dense_embed_sq.transpose(-3,-1), size=(t_cond_sqrt,t_cond_sqrt), mode=\"bicubic\")\n        dense_embed256 = dense_embed256.flatten(-2,-1).transpose(-2,-1)\n        \n        dtype = self.style_dtype if hasattr(self, \"style_dtype\") and self.style_dtype is not None else dense_embed.dtype\n        pinv_dtype = torch.float32 if dtype != torch.float64 else dtype\n        \n        W = self.redux_down.weight.data.to(dtype)   # shape [2560, 64]\n        b = self.redux_down.bias.data.to(dtype)     # shape [2560]\n        \n        cond_256 = cond[0][0].clone()\n        \n        if not hasattr(self, \"W_pinv\"):\n            self.W_pinv = torch.linalg.pinv(W.to(pinv_dtype).cuda()).to(W)\n        \n        #cond_256_embed = (cond_256 - b) @ torch.linalg.pinv(W.to(pinv_dtype)).T.to(dtype)\n        cond_embed256 = (cond_256 - b.to(cond_256)) @ self.W_pinv.T.to(cond_256)\n        \n        \n        \n        \n        \n        if mode == \"AdaIN\":\n            cond_embed256 = adain_seq_inplace(cond_embed256, dense_embed256)\n            #for adain_iter in range(EO(\"style_iter\", 0)):\n            #    cond_embed256 = adain_seq_inplace(cond_embed256, dense_embed256)\n            #    cond_embed256 = (cond_embed256 - b) @ torch.linalg.pinv(W.to(pinv_dtype)).T.to(dtype)\n            #    cond_embed256 = F.linear(cond_embed256         .to(W), W, b).to(img)\n            #    cond_embed256 = adain_seq_inplace(cond_embed256, dense_embed256)\n\n        elif mode == \"WCT\":\n            if not hasattr(self, \"dense_embed256\") or self.dense_embed256 is None or self.dense_embed256.shape != dense_embed256.shape or torch.norm(self.dense_embed256 - dense_embed256) > 0:\n                self.dense_embed256 = dense_embed256\n                \n                f_s          = dense_embed256[0].clone()\n                self.mu_s    = f_s.mean(dim=0, keepdim=True)\n                f_s_centered = f_s - self.mu_s\n                \n                cov = (f_s_centered.T.double() @ f_s_centered.double()) / (f_s_centered.size(0) - 1)\n\n                S_eig, U_eig = torch.linalg.eigh((cov + 1e-5 * torch.eye(cov.size(0), dtype=cov.dtype, device=cov.device)).cuda())\n                S_eig = 
S_eig.to(cov)\n                U_eig = U_eig.to(cov)\n                \n                S_eig_sqrt    = S_eig.clamp(min=0).sqrt() # eigenvalues -> singular values\n                \n                color = U_eig @ torch.diag(S_eig_sqrt) @ U_eig.T   # coloring transform (cov^0.5 of the style features)\n                self.y0_color  = color.to(f_s_centered)\n\n            for wct_i in range(cond_embed256.shape[-3]):\n                f_c          = cond_embed256[wct_i].clone()\n                mu_c         = f_c.mean(dim=0, keepdim=True)\n                f_c_centered = f_c - mu_c\n                \n                cov = (f_c_centered.T.double() @ f_c_centered.double()) / (f_c_centered.size(0) - 1)\n\n                S_eig, U_eig  = torch.linalg.eigh(cov + 1e-5 * torch.eye(cov.size(0), dtype=cov.dtype, device=cov.device))\n                S_eig = S_eig.to(cov)\n                U_eig = U_eig.to(cov)\n                \n                inv_sqrt_eig  = S_eig.clamp(min=0).rsqrt() \n                \n                whiten = U_eig @ torch.diag(inv_sqrt_eig) @ U_eig.T\n                whiten = whiten.to(f_c_centered)\n\n                f_c_whitened = f_c_centered @ whiten.T\n                f_cs         = f_c_whitened @ self.y0_color.T + self.mu_s   # recolor with the style statistics\n                \n                cond_embed256[wct_i] = f_cs\n    \n        cond[0][0] = self.redux_down(cond_embed256)\n        return (cond,)\n\n\ndef adain_seq_inplace(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:\n    mean_c = content.mean(1, keepdim=True)\n    std_c  = content.std (1, keepdim=True).add_(eps)\n    mean_s = style.mean  (1, keepdim=True)\n    std_s  = style.std   (1, keepdim=True).add_(eps)\n\n    content.sub_(mean_c).div_(std_c).mul_(std_s).add_(mean_s)\n    return content\n\n\ndef adain_seq(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:\n    return ((content - content.mean(1, keepdim=True)) / (content.std(1, keepdim=True) + eps)) * (style.std(1, keepdim=True) + eps) + style.mean(1, keepdim=True)\n\n"
  },
  {
    "path": "helper.py",
    "content": "import torch\nimport torch.nn.functional as F\nfrom typing import Optional, Callable, Tuple, Dict, Any, Union, TYPE_CHECKING, TypeVar, List\n\nimport re\nimport functools\nimport copy\n\nfrom comfy.samplers import SCHEDULER_NAMES\n\nfrom .res4lyf import RESplain\n\n\n\n\n# EXTRA_OPTIONS OPS\n\nclass ExtraOptions():\n    def __init__(self, extra_options):\n        self.extra_options = extra_options\n        self.mute          = False\n    \n    # debugMode 0: Follow self.mute only\n    # debugMode 1: Print with debug flag if not muted\n    # debugMode 2: Never print\n    def __call__(self, option, default=None, ret_type=None, match_all_flags=False, debugMode=0):\n        if isinstance(option, (tuple, list)):\n            if match_all_flags:\n                return all(self(single_option, default, ret_type) for single_option in option)\n            else:\n                return any(self(single_option, default, ret_type) for single_option in option)\n\n        if default is None: # get flag\n            pattern = rf\"^(?:{re.escape(option)}\\s*$|{re.escape(option)}=)\"\n            return bool(re.search(pattern, self.extra_options, flags=re.MULTILINE))\n        elif ret_type is None:\n            ret_type = type(default)\n        \n            if ret_type.__module__ != \"builtins\":\n                mod = __import__(default.__module__)\n                ret_type = lambda v: getattr(mod, v, None)\n        \n        if ret_type == list:\n            pattern = rf\"^{re.escape(option)}\\s*=\\s*([a-zA-Z0-9_.,+-]+)\\s*$\"\n            match   = re.search(pattern, self.extra_options, flags=re.MULTILINE)\n            \n            if match:\n                value = match.group(1)\n                if not self.mute and debugMode != 2:\n                    if debugMode == 1:\n                        RESplain(\"Set extra_option: \", option, \"=\", value, debug=True)\n                    else:\n                        RESplain(\"Set extra_option: \", option, \"=\", value)\n            else:\n                value = default\n                \n            if type(value) == str: \n                value = value.split(',')\n            \n                if type(default[0]) == type:\n                    ret_type = default[0]\n                else:\n                    ret_type = type(default[0])\n                \n                value = [ret_type(value[_]) for _ in range(len(value))]\n        \n        else:\n            pattern = rf\"^{re.escape(option)}\\s*=\\s*([a-zA-Z0-9_.+-]+)\\s*$\"\n            match = re.search(pattern, self.extra_options, flags=re.MULTILINE)\n            if match:\n                if ret_type == bool:\n                    value_str = match.group(1).lower()\n                    value = value_str in (\"true\", \"1\", \"yes\", \"on\")\n                else:\n                    value = ret_type(match.group(1))\n                if not self.mute and debugMode != 2:\n                    if debugMode == 1:\n                        RESplain(\"Set extra_option: \", option, \"=\", value, debug=True)\n                    else:\n                        RESplain(\"Set extra_option: \", option, \"=\", value)\n            else:\n                value = default\n        \n        return value\n\n\n\n\ndef extra_options_flag(flag, extra_options):\n    pattern = rf\"^(?:{re.escape(flag)}\\s*$|{re.escape(flag)}=)\"\n    return bool(re.search(pattern, extra_options, flags=re.MULTILINE))\n\ndef get_extra_options_kv(key, default, extra_options, ret_type=None):\n    ret_type = type(default) 
if ret_type is None else ret_type\n\n    pattern = rf\"^{re.escape(key)}\\s*=\\s*([a-zA-Z0-9_.+-]+)\\s*$\"\n    match = re.search(pattern, extra_options, flags=re.MULTILINE)\n    \n    if match:\n        value = match.group(1)\n    else:\n        value = default\n        \n    return ret_type(value)\n\ndef get_extra_options_list(key, default, extra_options, ret_type=None):\n    default = [default] if type(default) != list else default\n    \n    #ret_type = type(default)    if ret_type is None else ret_type\n    ret_type = type(default[0]) if ret_type is None else ret_type\n\n    pattern = rf\"^{re.escape(key)}\\s*=\\s*([a-zA-Z0-9_.,+-]+)\\s*$\"\n    match   = re.search(pattern, extra_options, flags=re.MULTILINE)\n    \n    if match:\n        value = match.group(1)\n    else:\n        value = default\n    \n    if type(value) == str:\n        value = value.split(',')\n    \n    value = [ret_type(value[_]) for _ in range(len(value))]\n        \n    return value\n\n\n\nclass OptionsManager:\n    APPEND_OPTIONS = {\"extra_options\"}\n\n    def __init__(self, options, **kwargs):\n        self.options_list = []\n        if options is not None:\n            self.options_list.append(options)\n\n        for key, value in kwargs.items():\n            if key.startswith('options') and value is not None:\n                self.options_list.append(value)\n\n        self._merged_dict = None\n\n    def add_option(self, option):\n        \"\"\"Add a single options dictionary\"\"\"\n        if option is not None:\n            self.options_list.append(option)\n            self._merged_dict = None # invalidate cached merged options\n\n    @property\n    def merged(self):\n        \"\"\"Get merged options with proper priority handling\"\"\"\n        if self._merged_dict is None:\n            self._merged_dict = {}\n\n            special_string_options = {\n                key: [] for key in self.APPEND_OPTIONS\n            }\n\n            for options_dict in self.options_list:\n                if options_dict is not None:\n                    for key, value in options_dict.items():\n                        if key in self.APPEND_OPTIONS and value:\n                            special_string_options[key].append(value)\n                        elif isinstance(value, dict):\n                            # Deep merge dictionaries\n                            if key not in self._merged_dict:\n                                self._merged_dict[key] = {}\n\n                            if isinstance(self._merged_dict[key], dict):\n                                self._deep_update(self._merged_dict[key], value)\n                            else:\n                                self._merged_dict[key] = value.copy()\n                        # Special case for FrameWeightsManager\n                        elif key == \"frame_weights_mgr\" and hasattr(value, \"_weight_configs\"):\n                            if key not in self._merged_dict:\n                                self._merged_dict[key] = copy.deepcopy(value)\n                            else:\n                                existing_mgr = self._merged_dict[key]\n                                \n                                if hasattr(value, \"device\") and value.device != torch.device('cpu'):\n                                    existing_mgr.device = value.device\n                                \n                                if hasattr(value, \"dtype\") and value.dtype != torch.float64:\n                                    existing_mgr.dtype = 
value.dtype\n                                \n                                # Merge all weight_configs\n                                if hasattr(value, \"_weight_configs\"):\n                                    for name, config in value._weight_configs.items():\n                                        config_kwargs = config.copy()\n                                        existing_mgr.add_weight_config(name, **config_kwargs)\n                        else:\n                            self._merged_dict[key] = value\n\n            # append special case string options (e.g. extra_options)\n            for key, value in special_string_options.items():\n                if value:\n                    self._merged_dict[key] = \"\\n\".join(value)\n\n        return self._merged_dict\n\n    def update(self, key_or_dict, value=None, append=False):\n        \"\"\"Update options with a single key-value pair or a dictionary\"\"\"\n        if value is not None or isinstance(key_or_dict, (str, list)):\n            # single key-value update\n            key_path = key_or_dict\n            if isinstance(key_path, str):\n                key_path = key_path.split('.')\n\n            update_dict = {}\n            current = update_dict\n\n            for i, key in enumerate(key_path[:-1]):\n                current[key] = {}\n                current = current[key]\n\n            current[key_path[-1]] = value\n\n            self.add_option(update_dict)\n        else:\n            # dictionary update\n            flat_updates = {}\n\n            def _flatten_dict(d, prefix=\"\"):\n                for key, value in d.items():\n                    full_key = f\"{prefix}.{key}\" if prefix else key\n                    if isinstance(value, dict):\n                        _flatten_dict(value, full_key)\n                    else:\n                        flat_updates[full_key] = value\n\n            _flatten_dict(key_or_dict)\n\n            for key_path, value in flat_updates.items():\n                self.update(key_path, value)  # Recursive call\n\n        return self\n\n    def get(self, key, default=None):\n        return self.merged.get(key, default)\n\n    def _deep_update(self, target_dict, source_dict):\n        for key, value in source_dict.items():\n            if isinstance(value, dict) and key in target_dict and isinstance(target_dict[key], dict):\n                # recursive dict update\n                self._deep_update(target_dict[key], value)\n            else:\n                target_dict[key] = value\n\n    def __getitem__(self, key):\n        \"\"\"Allow dictionary-like access to options\"\"\"\n        return self.merged[key]\n\n    def __contains__(self, key):\n        \"\"\"Allow 'in' operator for options\"\"\"\n        return key in self.merged\n\n    def as_dict(self):\n        \"\"\"Return the merged options as a dictionary\"\"\"\n        return self.merged.copy()\n\n    def __bool__(self):\n        \"\"\"Return True if there are any options\"\"\"\n        return len(self.options_list) > 0 and any(opt is not None for opt in self.options_list)\n\n    def debug_print_options(self):\n        for i, options_dict in enumerate(self.options_list):\n            RESplain(f\"Options {i}:\", debug=True)\n            if options_dict is not None:\n                for key, value in options_dict.items():\n                    RESplain(f\"  {key}: {value}\", debug=True)\n            else:\n                RESplain(\"  None\", \"\\n\", debug=True)\n\n\n\n\n# MISCELLANEOUS OPS\n\ndef has_nested_attr(obj, 
attr_path):\n    attrs = attr_path.split('.')\n    for attr in attrs:\n        if not hasattr(obj, attr):\n            return False\n        obj = getattr(obj, attr)\n    return True\n\ndef safe_get_nested(d, keys, default=None):\n    for key in keys:\n        if isinstance(d, dict):\n            d = d.get(key, default)\n        else:\n            return default\n    return d\n\nclass AlwaysTrueList:\n    def __contains__(self, item):\n        return True\n\n    def __iter__(self):\n        while True:\n            yield True # kapow \n\n\ndef parse_range_string(s):\n    if \"all\" in s:\n        return AlwaysTrueList()\n\n    result = []\n    for part in s.split(','):\n        part = part.strip()\n        if not part:\n            continue\n        val = float(part) if '.' in part else int(part)\n        result.append(val)\n    return result\n\ndef parse_range_string_int(s):\n    if \"all\" in s:\n        return AlwaysTrueList()\n    \n    result = []\n    for part in s.split(','):\n        if '-' in part:\n            start, end = part.split('-')\n            result.extend(range(int(start), int(end) + 1))\n        elif part.strip() != '':\n            result.append(int(part))\n    return result\n\ndef parse_tile_sizes(tile_sizes: str):\n    \"\"\"\n    Converts multiline string like:\n        \"1024,1024\\n768,1344\\n1344,768\"\n    into:\n        [(1024, 1024), (768, 1344), (1344, 768)]\n    \"\"\"\n    return [tuple(map(int, line.strip().split(',')))\n            for line in tile_sizes.strip().splitlines()\n            if line.strip()]\n    \n\n\n# COMFY OPS\n\ndef is_video_model(model):\n    is_video_model = False\n    try:\n        is_video_model =    'video'  in model.inner_model.inner_model.model_config.unet_config['image_model'] or \\\n                            'cosmos' in model.inner_model.inner_model.model_config.unet_config['image_model'] or \\\n                            'wan2'   in model.inner_model.inner_model.model_config.unet_config['image_model'] or \\\n                            'ltxv'   in model.inner_model.inner_model.model_config.unet_config['image_model']    \n    except Exception:\n        pass\n    return is_video_model\n\ndef is_RF_model(model):\n    from comfy import model_sampling\n    modelsampling = model.inner_model.inner_model.model_sampling\n    return isinstance(modelsampling, model_sampling.CONST)\n\ndef get_res4lyf_scheduler_list():\n    scheduler_names = SCHEDULER_NAMES.copy()\n    if \"beta57\" not in scheduler_names:\n        scheduler_names.append(\"beta57\")\n    return scheduler_names\n\ndef move_to_same_device(*tensors):\n    if not tensors:\n        return tensors\n    device = tensors[0].device\n    return tuple(tensor.to(device) for tensor in tensors)\n\ndef conditioning_set_values(conditioning, values={}):\n    c = []\n    for t in conditioning:\n        n = [t[0], t[1].copy()]\n        for k in values:\n            n[1][k] = values[k]\n        c.append(n)\n    return c\n\n\n\n\n\n# MISC OPS\n\ndef initialize_or_scale(tensor, value, steps):\n    if tensor is None:\n        return torch.full((steps,), value)\n    else:\n        return value * tensor\n\n\ndef pad_tensor_list_to_max_len(tensors: List[torch.Tensor], dim: int = -2) -> List[torch.Tensor]:\n    \"\"\"Zero-pad each tensor in `tensors` along `dim` up to their common maximum length.\"\"\"\n    max_len = max(t.shape[dim] for t in tensors)\n    padded = []\n    for t in tensors:\n        cur = t.shape[dim]\n        if cur < max_len:\n            pad_shape = list(t.shape)\n            
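# zero block matching t in every dim except dim, sized to cover the shortfall\n            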
pad_shape[dim] = max_len - cur\n            zeros = torch.zeros(*pad_shape, dtype=t.dtype, device=t.device)\n            t = torch.cat((t, zeros), dim=dim)\n        padded.append(t)\n    return padded\n\n\n\nclass PrecisionTool:\n    def __init__(self, cast_type='fp64'):\n        self.cast_type = cast_type\n\n    def cast_tensor(self, func):\n        @functools.wraps(func)\n        def wrapper(*args, **kwargs):\n            if self.cast_type not in ['fp64', 'fp32', 'fp16']:\n                return func(*args, **kwargs)\n\n            target_device = None\n            for arg in args:\n                if torch.is_tensor(arg):\n                    target_device = arg.device\n                    break\n            if target_device is None:\n                for v in kwargs.values():\n                    if torch.is_tensor(v):\n                        target_device = v.device\n                        break\n            \n        # recursively zs_recast tensors in nested dictionaries\n            def cast_and_move_to_device(data):\n                if torch.is_tensor(data):\n                    if self.cast_type == 'fp64':\n                        return data.to(torch.float64).to(target_device)\n                    elif self.cast_type == 'fp32':\n                        return data.to(torch.float32).to(target_device)\n                    elif self.cast_type == 'fp16':\n                        return data.to(torch.float16).to(target_device)\n                elif isinstance(data, dict):\n                    return {k: cast_and_move_to_device(v) for k, v in data.items()}\n                return data\n\n            new_args = [cast_and_move_to_device(arg) for arg in args]\n            new_kwargs = {k: cast_and_move_to_device(v) for k, v in kwargs.items()}\n            \n            return func(*new_args, **new_kwargs)\n        return wrapper\n\n    def set_cast_type(self, new_value):\n        if new_value in ['fp64', 'fp32', 'fp16']:\n            self.cast_type = new_value\n        else:\n            self.cast_type = 'fp64'\n\nprecision_tool = PrecisionTool(cast_type='fp64')\n\n\n\n\nclass FrameWeightsManager:\n    def __init__(self):\n        self._weight_configs = {}\n        \n        self._default_config = {\n            \"frame_weights\": None,  # Tensor of weights if directly specified\n            \"dynamics\": \"linear\",   # Function type for dynamic period\n            \"schedule\": \"moderate_early\",  # Schedule type\n            \"scale\": 0.5,           # Amount of change\n            \"is_reversed\": False,   # Whether to reverse weights\n            \"custom_string\": None,  # Per-configuration custom string\n        }\n        self.dtype = torch.float64\n        self.device = torch.device('cpu')\n    \n    def set_device_and_dtype(self, device=None, dtype=None):\n        \"\"\"Set the device and dtype for generated weights\"\"\"\n        if device is not None:\n            self.device = device\n        if dtype is not None:\n            self.dtype = dtype\n        return self\n    \n    def set_custom_weights(self, config_name, weights):\n        \"\"\"Set custom weights for a specific configuration\"\"\"\n        if config_name not in self._weight_configs:\n            self._weight_configs[config_name] = self._default_config.copy()\n\n        self._weight_configs[config_name][\"frame_weights\"] = weights\n        return self\n    \n    def add_weight_config(self, name, **kwargs):\n        if name not in self._weight_configs:\n            self._weight_configs[name] = 
self._default_config.copy()\n        \n        for key, value in kwargs.items():\n            if key in self._default_config:\n                self._weight_configs[name][key] = value\n            # ignore unknown parameters\n        \n        return self\n    \n    def get_weight_config(self, name):\n        if name not in self._weight_configs:\n            return None\n        return self._weight_configs[name].copy()\n    \n    def get_frame_weights_by_name(self, name, num_frames, step=None):\n        config = self.get_weight_config(name)\n        if config is None:\n            return None\n\n        weights_tensor =  self._generate_frame_weights(\n            num_frames,\n            config[\"dynamics\"],\n            config[\"schedule\"],\n            config[\"scale\"],\n            config[\"is_reversed\"],\n            config[\"frame_weights\"],\n            step=step,\n            custom_string=config[\"custom_string\"]\n        )\n\n        if config[\"custom_string\"] is not None and config[\"custom_string\"].strip() != \"\" and weights_tensor is not None:\n            # ensure that the custom_string has more than just lines that begin with non-numeric characters\n            custom_string = config[\"custom_string\"].strip()\n            custom_string = re.sub(r\"^[^0-9].*\", \"\", custom_string, flags=re.MULTILINE)\n            custom_string = re.sub(r\"^\\s*$\", \"\", custom_string, flags=re.MULTILINE)\n            if custom_string.strip() != \"\":\n                # If the custom_string is not empty, show the custom weights\n                formatted_weights = [f\"{w:.2f}\" for w in weights_tensor.tolist()]\n                RESplain(f\"Custom '{name}' for step {step}: {formatted_weights}\", debug=True)\n        elif weights_tensor is None:\n            weights_tensor = torch.ones(num_frames, dtype=self.dtype, device=self.device)\n\n        return weights_tensor\n\n    def _generate_custom_weights(self, num_frames, custom_string, step=None):\n        \"\"\"\n        Generate custom weights based on the provided frame weights from a string with one line per step.\n        \n        Args:\n            num_frames: Number of frames to generate weights for\n            custom_string: The custom weights string to parse\n            step: Specific step to use (0-indexed). If None, uses the last line.\n        \n        Features:\n        - Each line represents weights for one step\n        - Add *[multiplier] at the end of a line to scale those weights (e.g., \"1.0, 0.8, 0.6*1.5\")\n        - Include \"interpolate\" on its own line to interpolate each line to match num_frames\n        - Prefix line with the steps to apply it to (e.g. 
\"0-5: 1.0, 0.8, 0.6\")\n        \n        Example:\n        0-5:1.0, 0.8, 0.6, 0.4, 0.2, 0.0\n        6-10:0.0, 0.2, 0.4, 0.6, 0.8, 1.0*1.5\n        11-30:0.0, 0.5, 1.0, 0.5, 0.0, 0.0*0.8\n        interpolate\n        \"\"\"\n        if custom_string is not None:\n            interpolate_frames = \"interpolate\" in custom_string\n            \n            lines = custom_string.strip().split('\\n')\n            lines = [line for line in lines if line.strip() and not line.strip().startswith(\"interp\")]\n            \n            if not lines:\n                return None\n            \n            if step is not None:\n                matching_line = None\n                for line in lines:\n                    # Check if line has a step range prefix\n                    step_range_match = re.match(r'^(\\d+)-(\\d+):(.*)', line.strip())\n                    if step_range_match:\n                        start_step = int(step_range_match.group(1))\n                        end_step = int(step_range_match.group(2))\n                        if start_step <= step <= end_step:\n                            matching_line = step_range_match.group(3).strip()\n                    \n                if matching_line is not None:\n                    weights_str = matching_line\n                else:\n                    # if no matching line, try to use the step number line or the last line\n                    if step < len(lines):\n                        line_index = step\n                    else:\n                        line_index = len(lines) - 1\n                    \n                    if line_index < 0:\n                        return None\n                    \n                    weights_str = lines[line_index].strip()\n\n                    if \":\" in weights_str:\n                        weights_str = weights_str.split(\":\", 1)[1].strip()\n            else:\n                # When no specific step is provided, use the last line\n                line_index = len(lines) - 1\n                weights_str = lines[line_index].strip()\n                if \":\" in weights_str:\n                    weights_str = weights_str.split(\":\", 1)[1].strip()\n            \n            if not weights_str:\n                return None\n            \n            multiplier = 1.0\n            if \"*\" in weights_str:\n                parts = weights_str.rsplit(\"*\", 1)\n                if len(parts) == 2:\n                    weights_str = parts[0].strip()\n                    try:\n                        multiplier = float(parts[1].strip())\n                    except ValueError as e:\n                        RESplain(f\"Invalid multiplier format: {parts[1]}\")\n            \n            try:\n                weights = [float(w.strip()) for w in weights_str.split(',')]\n                weights_tensor = torch.tensor(weights, dtype=self.dtype, device=self.device)\n                \n                if multiplier != 1.0:\n                    weights_tensor = weights_tensor * multiplier\n                \n                if interpolate_frames and len(weights_tensor) != num_frames:\n                    if len(weights_tensor) > 1:\n                        orig_positions = torch.linspace(0, 1, len(weights_tensor), dtype=self.dtype, device=self.device)\n                        new_positions = torch.linspace(0, 1, num_frames, dtype=self.dtype, device=self.device)\n                        \n                        weights_tensor = torch.nn.functional.interpolate(\n                            weights_tensor.view(1, 
1, -1), \n                            size=num_frames, \n                            mode='linear',\n                            align_corners=True\n                        ).squeeze()\n                    else:\n                        # If only one weight, repeat it for all frames\n                        weights_tensor = weights_tensor.repeat(num_frames)\n                else:\n                    if len(weights_tensor) < num_frames:\n                        # If fewer weights than frames, repeat the last weight\n                        weights_tensor = torch.cat([\n                            weights_tensor, \n                            torch.full((num_frames - len(weights_tensor),), weights_tensor[-1], \n                                    dtype=self.dtype, device=self.device)\n                        ])\n                    \n                    # Trim if too many weights\n                    if len(weights_tensor) > num_frames:\n                        weights_tensor = weights_tensor[:num_frames]\n\n                return weights_tensor\n                    \n            except (ValueError, IndexError) as e:\n                RESplain(f\"Error parsing custom frame weights: {e}\")\n                return None\n        \n        return None\n    \n    def _generate_frame_weights(self, num_frames, dynamics, schedule, scale, is_reversed, frame_weights, step=None, custom_string=None):\n        # Look for the multiplier= parameter in the custom string and store it as a float value\n        multiplier = None\n        rate_factor = None\n        start_change_factor = None\n        if custom_string is not None:\n            if \"multiplier\" in custom_string:\n                multiplier_match = re.search(r\"multiplier\\s*=\\s*([0-9.]+)\", custom_string)\n                if multiplier_match:\n                    multiplier = float(multiplier_match.group(1))\n                    # Remove the multiplier= from the custom string\n                    custom_string = re.sub(r\"multiplier\\s*=\\s*[0-9.]+\", \"\", custom_string).strip()\n                    RESplain(f\"Custom multiplier detected: {multiplier}\", debug=True)\n            if \"rate_factor\" in custom_string:\n                rate_factor_match = re.search(r\"rate_factor\\s*=\\s*([0-9.]+)\", custom_string)\n                if rate_factor_match:\n                    rate_factor = float(rate_factor_match.group(1))\n                    # Remove the rate_factor= from the custom string\n                    custom_string = re.sub(r\"rate_factor\\s*=\\s*[0-9.]+\", \"\", custom_string).strip()\n                    RESplain(f\"Custom rate factor detected: {rate_factor}\", debug=True)\n            if \"start_change_factor\" in custom_string:\n                start_change_factor_match = re.search(r\"start_change_factor\\s*=\\s*([0-9.]+)\", custom_string)\n                if start_change_factor_match:\n                    start_change_factor = float(start_change_factor_match.group(1))\n                    # Remove the start_change_factor= from the custom string\n                    custom_string = re.sub(r\"start_change_factor\\s*=\\s*[0-9.]+\", \"\", custom_string).strip()\n                    RESplain(f\"Custom start change factor detected: {start_change_factor}\", debug=True)\n            \n\n        if custom_string is not None and custom_string.strip() != \"\" and step is not None:\n            custom_weights = self._generate_custom_weights(num_frames, custom_string, step)\n            if custom_weights is not None:\n                weights = 
custom_weights\n                weights = torch.flip(weights, [0]) if is_reversed else weights\n                return weights\n            else:\n                RESplain(\"custom frame weights failed to parse, doing the normal thing...\", debug=True)\n\n        if rate_factor is None:\n            if \"fast\" in schedule:\n                rate_factor = 0.25\n            elif \"slow\" in schedule:\n                rate_factor = 1.0\n            else: # moderate\n                rate_factor = 0.5\n\n        if start_change_factor is None:\n            if \"early\" in schedule:\n                start_change_factor = 0.0\n            elif \"late\" in schedule:\n                start_change_factor = 0.2\n            else:\n                start_change_factor = 0.0\n\n        change_frames = max(round(num_frames * rate_factor), 2)\n        change_start = round(num_frames * start_change_factor)\n        low_value = 1.0 - scale\n\n        if frame_weights is not None:\n            weights = torch.cat([frame_weights, torch.full((num_frames,), frame_weights[-1])])\n            weights = weights[:num_frames]\n        else:\n            if dynamics == \"constant\":\n                weights = self._generate_constant_schedule(change_start, change_frames, low_value, num_frames)\n            elif dynamics == \"linear\":\n                weights = self._generate_linear_schedule(change_start, change_frames, low_value, num_frames)\n            elif dynamics == \"ease_out\":\n                weights = self._generate_easeout_schedule(change_start, change_frames, low_value, num_frames)\n            elif dynamics == \"ease_in\":\n                weights = self._generate_easein_schedule(change_start, change_frames, low_value, num_frames)\n            elif dynamics == \"middle\":\n                weights = self._generate_middle_schedule(change_start, change_frames, low_value, num_frames)\n            elif dynamics == \"trough\":\n                weights = self._generate_trough_schedule(change_start, change_frames, low_value, num_frames)\n            else:\n                raise ValueError(f\"Invalid schedule: {dynamics}\")\n        \n        if multiplier is None:\n            multiplier = 1.0\n        \n        weights = torch.flip(weights, [0]) if is_reversed else weights\n        weights = weights * multiplier\n        weights = torch.clamp(weights, min=0.0, max=(max(1.0, multiplier)))\n        weights = weights.to(dtype=self.dtype, device=self.device)\n\n        return weights\n\n    def _generate_constant_schedule(self, change_start, change_frames, low_value, num_frames):\n        \"\"\"constant schedule with the scale as the low weight\"\"\"\n        return torch.ones(num_frames) * low_value\n    \n    def _generate_linear_schedule(self, change_start, change_frames, low_value, num_frames):\n        \"\"\"linear schedule from 1 to the low weight\"\"\"\n        weights = torch.linspace(1, low_value, change_frames)\n\n        weights = torch.cat([torch.full((change_start,), 1.0), weights])\n        weights = torch.cat([weights, torch.full((num_frames,), weights[-1])])\n        weights = weights[:num_frames]\n        return weights\n    \n    def _generate_easeout_schedule(self, change_start, change_frames, low_value, num_frames, k=4.0):\n        \"\"\"exponential schedule from 1 to the low weight\"\"\"\n        change_frames = max(change_frames, 4)\n        t = torch.linspace(0, 1, change_frames, dtype=self.dtype, device=self.device)\n        weights = 1.0 - (1.0 - low_value) * (1.0 - torch.exp(-k * t))\n     
   weights = torch.cat([torch.full((change_start,), 1.0), weights])\n        weights = torch.cat([weights, torch.full((num_frames,), weights[-1])])\n        weights = weights[:num_frames]\n        return weights\n\n    def _generate_easein_schedule(self, change_start, change_frames, low_value, num_frames):\n        \"\"\"a monomial power schedule from 1 to the low weight\"\"\"\n        change_frames = max(change_frames, 4)\n        t = torch.linspace(0, 1, change_frames, dtype=self.dtype, device=self.device)\n        weights = 1 - (1 - low_value) * torch.pow(t, 2)\n        # Prepend with change_start frames of 1.0\n        weights = torch.cat([torch.full((change_start,), 1.0), weights])\n        total_frames_to_pad = num_frames - len(weights)\n        if (total_frames_to_pad > 1):\n            mid_value_between_low_value_and_second_to_last_value = (weights[-2] + low_value) / 2.0\n            weights[-1] = mid_value_between_low_value_and_second_to_last_value\n        # Fill remaining with final value\n        weights = torch.cat([weights, torch.full((num_frames,), weights[-1])])\n        weights = weights[:num_frames]\n        return weights\n\n    def _generate_middle_schedule(self, change_start, change_frames, low_value, num_frames):\n        \"\"\"gaussian middle peaking schedule from 1 to the low weight\"\"\"\n\n        change_frames = max(change_frames, 4)\n        t = torch.linspace(0, 1, change_frames, dtype=self.dtype, device=self.device)\n        weights = torch.exp(-0.5 * ((t - 0.5) / 0.2) ** 2)\n        weights = weights / torch.max(weights)\n        weights = low_value + (1 - low_value) * weights\n        total_frames_to_pad = num_frames - len(weights)\n        pad_left = total_frames_to_pad // 2\n        pad_right = total_frames_to_pad - pad_left\n        weights = torch.cat([torch.full((pad_left,), low_value), weights, torch.full((pad_right,), low_value)])\n        if change_start > 0:\n            # Pad the beginning with the first value, and truncate to num_frames\n            weights = torch.cat([torch.full((change_start,), low_value), weights])\n            weights = weights[:num_frames]     \n\n        return weights\n    \n    def _generate_trough_schedule(self, change_start, change_frames, low_value, num_frames):\n        \"\"\"\n        Trough schedule with both ends at 1 and the middle at the low weight.\n        When change_start > 0, creates asymmetry with shorter decay at beginning and longer at end.\n        \"\"\"\n        change_frames = max(change_frames, 4)\n        \n        # Calculate sigma based on change_frames - controls overall decay rate\n        sigma = max(0.2, change_frames / num_frames)\n        \n        if change_start == 0:\n            t = torch.linspace(-1, 1, num_frames, dtype=self.dtype, device=self.device)\n        else:\n\n            asymmetry_factor = min(0.5, change_start / num_frames)\n            \n            split_point = 0.5 - asymmetry_factor\n            \n            first_size = int(split_point * num_frames)\n            first_size = max(1, first_size)  # at least one frame\n            t1 = torch.linspace(-1, 0, first_size, dtype=self.dtype, device=self.device)\n            \n            second_size = num_frames - first_size\n            t2 = torch.linspace(0, 1, second_size, dtype=self.dtype, device=self.device)\n            \n            t = torch.cat([t1, t2])\n        \n        # shape using Gaussian function\n        trough = 1.0 - torch.exp(-0.5 * (t / sigma) ** 2)\n        \n        weights = low_value + (1.0 - 
low_value) * trough\n        \n        return weights\n\n\n\n\ndef check_projection_consistency(x, W, b):\n    \"\"\"Measure the reconstruction error of projecting (x - b) through W; a small error means x lies (approximately) in the affine subspace defined by W and b.\"\"\"\n    W_pinv = torch.linalg.pinv(W.T)\n    x_proj = (x - b) @ W_pinv\n    x_recon = x_proj @ W.T + b\n    error = torch.norm(x - x_recon)\n    in_subspace = error < 1e-3\n    return error, in_subspace\n\n\n\n\ndef get_max_dtype(device='cpu'):\n    \"\"\"Return the widest float dtype usable on `device` (falls back to float32 where float64 is unavailable, e.g. on MPS).\"\"\"\n    if torch.backends.mps.is_available():\n        MAX_DTYPE = torch.float32\n    else:\n        try:\n            torch.tensor([0.0], dtype=torch.float64, device=device)\n            MAX_DTYPE = torch.float64\n        except (RuntimeError, TypeError):\n            MAX_DTYPE = torch.float32\n    return MAX_DTYPE\n\n\n"
  },
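The helper module above drives all frame weighting through `FrameWeightsManager` and the per-step `custom_string` mini-format documented in `_generate_custom_weights` ("start-end:" step ranges, a trailing `*multiplier`, and an `interpolate` directive). A minimal driver sketch of that format follows; the import path is an assumption, and the package's own imports must resolve (e.g. inside a ComfyUI install):

```python
# Hypothetical usage sketch for FrameWeightsManager; the import path is assumed.
import torch
from RES4LYF.helper import FrameWeightsManager

mgr = FrameWeightsManager().set_device_and_dtype(torch.device("cpu"), torch.float32)

# One weights line per step range; "*1.5" scales the 6-10 line, and the
# "interpolate" directive stretches each line to num_frames.
mgr.add_weight_config(
    "fade",
    dynamics="linear",
    schedule="moderate_early",
    scale=0.5,
    custom_string="0-5:1.0, 0.8, 0.6\n6-10:0.0, 0.5, 1.0*1.5\ninterpolate",
)

w = mgr.get_frame_weights_by_name("fade", num_frames=16, step=3)
print(w.shape)  # torch.Size([16]): step 3 hits the "0-5" line, whose
                # 1.0 -> 0.6 ramp is linearly resampled to 16 frames
```

Steps outside every declared range fall back to the step-indexed line (or the last line), and a config whose `custom_string` fails to parse falls through to the `dynamics`/`schedule` generators.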
  {
    "path": "helper_sigma_preview_image_preproc.py",
    "content": "import torch\nimport torch.nn.functional as F\n\nfrom typing import Optional, Callable, Tuple, Dict, Any, Union\n\nimport numpy as np\nimport folder_paths\nfrom PIL.PngImagePlugin import PngInfo\nfrom PIL import Image\nimport json\nimport os \nimport random\nimport copy\n\nfrom io import BytesIO\n\nimport matplotlib.pyplot as plt\nimport matplotlib\nmatplotlib.use('Agg')  # use the Agg backend for non-interactive rendering... prevent crashes by not using tkinter (which requires running in the main thread)\nfrom comfy.cli_args import args\n\nimport comfy.samplers\nimport comfy.utils\n\nfrom nodes import MAX_RESOLUTION\n\n\n\nfrom .beta.rk_method_beta        import RK_Method_Beta\nfrom .beta.rk_noise_sampler_beta import RK_NoiseSampler, NOISE_MODE_NAMES\nfrom .helper                     import get_res4lyf_scheduler_list\nfrom .sigmas                     import get_sigmas\nfrom .images                     import image_resize\nfrom .res4lyf                    import RESplain\n\n\n\nclass SaveImage:\n    def __init__(self):\n        self.output_dir = folder_paths.get_output_directory()\n        self.type = \"output\"\n        self.prefix_append = \"\"\n        self.compress_level = 4\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                \"images\":          (\"IMAGE\",  {                      \"tooltip\": \"The images to save.\"}),\n                \"filename_prefix\": (\"STRING\", {\"default\": \"ComfyUI\", \"tooltip\": \"The prefix for the file to save. This may include formatting information such as %date:yyyy-MM-dd% or %Empty Latent Image.width% to include values from nodes.\"})\n            },\n            \"hidden\": {\n                \"prompt\": \"PROMPT\", \"extra_pnginfo\": \"EXTRA_PNGINFO\"\n            },\n        }\n\n    RETURN_TYPES = ()\n    FUNCTION = \"save_images\"\n\n    OUTPUT_NODE = True\n\n    CATEGORY = \"image\"\n    DESCRIPTION = \"Saves the input images to your ComfyUI output directory.\"\n\n    def save_images(self,\n                    images,\n                    filename_prefix = \"ComfyUI\",\n                    prompt          = None,\n                    extra_pnginfo   = None\n                    ):\n        \n        filename_prefix += self.prefix_append\n        full_output_folder, filename, counter, subfolder, filename_prefix = folder_paths.get_save_image_path(filename_prefix, self.output_dir, images[0].shape[1], images[0].shape[0])\n        results = list()\n        for (batch_number, image) in enumerate(images):\n            i = 255. 
* image.cpu().numpy()\n            img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))\n            metadata = None\n            if not args.disable_metadata:\n                metadata = PngInfo()\n                if prompt is not None:\n                    metadata.add_text(\"prompt\", json.dumps(prompt))\n                if extra_pnginfo is not None:\n                    for x in extra_pnginfo:\n                        metadata.add_text(x, json.dumps(extra_pnginfo[x]))\n\n            filename_with_batch_num = filename.replace(\"%batch_num%\", str(batch_number))\n            file = f\"{filename_with_batch_num}_{counter:05}_.png\"\n            img.save(os.path.join(full_output_folder, file), pnginfo=metadata, compress_level=self.compress_level)\n            results.append({\n                \"filename\": file,\n                \"subfolder\": subfolder,\n                \"type\": self.type\n            })\n            counter += 1\n\n        return { \"ui\": { \"images\": results } }\n\n\n\n# adapted from https://github.com/Extraltodeus/sigmas_tools_and_the_golden_scheduler\nclass SigmasPreview(SaveImage):\n    def __init__(self):\n        self.output_dir = folder_paths.get_temp_directory()\n        self.type = \"temp\"\n        self.prefix_append = \"_temp_\" + ''.join(random.choice(\"abcdefghijklmnopqrstupvxyz1234567890\") for x in range(5))\n        self.compress_level = 4\n\n    @classmethod\n    def INPUT_TYPES(self):\n        return {\n            \"required\": {\n                \"sigmas\":         (\"SIGMAS\",),\n                \"print_as_list\" : (\"BOOLEAN\", {\"default\": False}),\n                \"line_color\":     (\"STRING\", {\"default\": \"blue\"}),\n            },\n        }\n\n    RETURN_TYPES = (\"IMAGE\",)\n\n    FUNCTION = \"sigmas_preview\"\n    OUTPUT_NODE = True\n    CATEGORY = 'RES4LYF/sigmas'\n\n    @staticmethod\n    def tensor_to_graph_image(tensor, color='blue'):\n        \n        plt.figure()\n        plt.plot(tensor.numpy(), marker='o', linestyle='-', color=color)\n        plt.title(\"Graph from Tensor\")\n        plt.xlabel(\"Step Number\")\n        plt.ylabel(\"Sigma Value\")\n        \n        with BytesIO() as buf:\n            plt.savefig(buf, format='png')\n            buf.seek(0)\n            image = Image.open(buf).copy()\n            \n        plt.close()\n        return image\n\n    def sigmas_preview(self, sigmas, print_as_list, line_color):\n        \n        if print_as_list:\n            # Convert to list with 4 decimal places\n            sigmas_list = [round(float(s), 4) for s in sigmas.tolist()]\n            \n            # Print header using RESplain\n            RESplain(\"\\n\" + \"=\"*60)\n            RESplain(\"SIGMAS PREVIEW - PRINT LIST\")\n            RESplain(\"=\"*60)\n            \n            # Print basic stats\n            RESplain(f\"Total steps: {len(sigmas_list)}\")\n            RESplain(f\"Min sigma:   {min(sigmas_list):.4f}\")\n            RESplain(f\"Max sigma:   {max(sigmas_list):.4f}\")\n          \n            # Print the clean sigma values\n            RESplain(f\"\\nSigma values ({len(sigmas_list)} steps):\")\n            RESplain(\"-\" * 40)\n            \n            # Print in rows of 5 for readability\n            for i in range(0, len(sigmas_list), 5):\n                row = sigmas_list[i:i+5]\n                row_str = \"  \".join(f\"{val:8.4f}\" for val in row)\n                RESplain(f\"Step {i:2d}-{min(i+4, len(sigmas_list)-1):2d}: {row_str}\")\n            \n            # Calculate and print 
percentages (normalized 0-1)\n            sigmas_percentages = ((sigmas-sigmas.min())/(sigmas.max()-sigmas.min())).tolist()\n            sigmas_percentages = [round(p, 4) for p in sigmas_percentages]\n            \n            RESplain(f\"\\nNormalized percentages (0.0-1.0):\")\n            RESplain(\"-\" * 40)\n            \n            # Print step-by-step breakdown\n            RESplain(\"Step | Sigma    | Normalized | Step Size\")\n            RESplain(\"-----|----------|------------|----------\")\n            for i, (sigma, pct) in enumerate(zip(sigmas_list, sigmas_percentages)):\n                if i > 0:\n                    step_size = sigmas_list[i-1] - sigma\n                    RESplain(f\"{i:4d} | {sigma:8.4f} | {pct:10.4f} | {step_size:8.4f}\")\n                else:\n                    RESplain(f\"{i:4d} | {sigma:8.4f} | {pct:10.4f} | {'--':>8}\")\n            \n            RESplain(\"=\"*60 + \"\\n\")\n            \n        sigmas_graph = self.tensor_to_graph_image(sigmas.cpu(), line_color)\n        numpy_image = np.array(sigmas_graph)\n        numpy_image = numpy_image / 255.0\n        tensor_image = torch.from_numpy(numpy_image)\n        tensor_image = tensor_image.unsqueeze(0)\n        images_tensor = torch.cat([tensor_image], 0)\n        output = self.save_images(images_tensor, \"SigmasPreview\")\n        output[\"result\"] = (images_tensor,)\n\n        return output\n\n\n\n\nclass VAEEncodeAdvanced:\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                \"resize_to_input\": ([\"false\", \"image_1\", \"image_2\", \"mask\", \"latent\"], {\"default\": \"false\"},),\n                \"width\":           (\"INT\",                                             {\"default\": 1024, \"min\": 0, \"max\": MAX_RESOLUTION, \"step\": 1, }),\n                \"height\":          (\"INT\",                                             {\"default\": 1024, \"min\": 0, \"max\": MAX_RESOLUTION, \"step\": 1, }),\n                \"mask_channel\":    ([\"red\", \"green\", \"blue\", \"alpha\"],),\n                \"invert_mask\":     (\"BOOLEAN\",                                         {\"default\": False}),\n                \"latent_type\":     ([\"4_channels\", \"16_channels\"],                     {\"default\": \"16_channels\",}),\n            },\n            \n            \"optional\": {\n                \"image_1\":         (\"IMAGE\",),\n                \"image_2\":         (\"IMAGE\",),\n                \"mask\":            (\"IMAGE\",),\n                \"latent\":          (\"LATENT\",),\n                \"vae\":             (\"VAE\", ),\n            }\n        }\n\n    RETURN_TYPES = (\"LATENT\",\n                    \"LATENT\",\n                    \"MASK\",\n                    \"LATENT\",\n                    \"INT\",\n                    \"INT\",\n                    )\n                    \n    RETURN_NAMES = (\"latent_1\",\n                    \"latent_2\",\n                    \"mask\",\n                    \"empty_latent\",\n                    \"width\",\n                    \"height\",\n                    )\n    \n    FUNCTION = \"main\"\n    CATEGORY = \"RES4LYF/vae\"\n\n    def main(self,\n            width,\n            height,\n            resize_to_input = \"false\",\n            image_1         = None,\n            image_2         = None,\n            mask            = None,\n            invert_mask     = False,\n            method          = \"stretch\",\n            interpolation   = \"lanczos\",\n            
condition       = \"always\",\n            multiple_of     = 0,\n            keep_proportion = False,\n            mask_channel    = \"red\",\n            latent          = None, \n            latent_type     = \"16_channels\", \n            vae             = None\n            ):\n        \n        ratio = 8 # latent compression factor\n        \n        # this is unfortunately required to avoid apparent non-deterministic outputs. \n        # without setting the seed each time, the outputs of the VAE encode will change with every generation.\n        torch     .manual_seed    (42)          \n        torch.cuda.manual_seed_all(42)\n\n        image_1 = image_1.clone() if image_1 is not None else None\n        image_2 = image_2.clone() if image_2 is not None else None\n\n        if latent is not None and resize_to_input == \"latent\":\n            height, width = latent['samples'].shape[-2:]\n\n            #height, width = latent['samples'].shape[2:4]\n            height, width = height * ratio, width * ratio\n            \n        elif image_1 is not None and resize_to_input == \"image_1\":\n            height, width = image_1.shape[1:3]\n            \n        elif image_2 is not None and resize_to_input == \"image_2\":\n            height, width = image_2.shape[1:3]       \n            \n        elif mask is not None and resize_to_input == \"mask\":\n            height, width =    mask.shape[1:3]   \n            \n        if latent is not None:\n            c = latent['samples'].shape[1]\n        else:\n            if latent_type == \"4_channels\":\n                c = 4\n            else:\n                c = 16\n            if   image_1 is not None:\n                b = image_1.shape[0]\n            elif image_2 is not None:\n                b = image_2.shape[0]\n            else:\n                b = 1\n                \n            latent = {\"samples\": torch.zeros((b, c, height // ratio, width // ratio))}\n        \n        latent_1, latent_2 = None, None\n        if image_1 is not None:\n            image_1  = image_resize(image_1, width, height, method, interpolation, condition, multiple_of, keep_proportion)\n            latent_1 = {\"samples\": vae.encode(image_1[:,:,:,:3])}\n        if image_2 is not None:\n            image_2  = image_resize(image_2, width, height, method, interpolation, condition, multiple_of, keep_proportion)\n            latent_2 = {\"samples\": vae.encode(image_2[:,:,:,:3])}\n        \n        if mask is not None and mask.shape[-1] > 1:\n            channels = [\"red\", \"green\", \"blue\", \"alpha\"]\n            mask = mask[:, :, :, channels.index(mask_channel)]\n            \n        if mask is not None:\n            mask = F.interpolate(mask.unsqueeze(0), size=(height, width), mode='bilinear', align_corners=False).squeeze(0)\n            if invert_mask:\n                mask = 1.0 - mask\n\n        return (latent_1, \n                latent_2, \n                mask, \n                latent,\n                width, \n                height,\n                )\n\n\n\n\nclass VAEStyleTransferLatent:\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                \"method\":    ([\"AdaIN\", \"WCT\"], {\"default\": \"AdaIN\"}),\n                \"latent\":    (\"LATENT\",),\n                \"style_ref\": (\"LATENT\",),\n                \"vae\":       (\"VAE\", ),\n            },\n            \n            \"optional\": {\n            }\n        }\n\n    RETURN_TYPES = (\"LATENT\",)\n                    \n    
RETURN_NAMES = (\"latent\",)\n    \n    FUNCTION = \"main\"\n    CATEGORY = \"RES4LYF/vae\"\n\n    def main(self,\n            method    = None,\n            latent    = None,\n            style_ref = None,\n            vae       = False,\n            ):\n        \n        from comfy.ldm.cascade.stage_c_coder import StageC_coder\n        \n        # this is unfortunately required to avoid apparent non-deterministic outputs. \n        # without setting the seed each time, the outputs of the VAE encode will change with every generation.\n        torch     .manual_seed    (42)          \n        torch.cuda.manual_seed_all(42)\n        \n        denoised = latent   .get('state_info', {}).get('raw_x')\n        y0       = style_ref.get('state_info', {}).get('raw_x')\n        \n        denoised = latent['samples'] if denoised is None else denoised\n        y0       = style_ref['samples'] if y0 is None else y0\n            \n        #denoised = latent.get('state_info', latent['samples'].get('raw_x', latent['samples']))\n        #y0       = style_ref.get('state_info', style_ref['samples'].get('raw_x', style_ref['samples']))\n        \n        if denoised.ndim > 4:\n            denoised = denoised.squeeze(0)\n        if y0.ndim > 4:\n            y0 = y0.squeeze(0)\n        \n        if   hasattr(vae.first_stage_model, \"up_blocks\"): # probably stable cascade stage A\n            x_embedder = copy.deepcopy(vae.first_stage_model.up_blocks[0][0]).to(torch.float64)\n            denoised_embed = x_embedder(denoised.to(x_embedder.weight))\n            y0_embed       = x_embedder(y0.to(x_embedder.weight))\n            \n            denoised_embed = apply_style_to_latent(denoised_embed, y0_embed, method)\n            \n            denoised_styled = invert_conv2d(x_embedder, denoised_embed, denoised.shape).to(denoised)\n            \n            \n        elif hasattr(vae.first_stage_model, \"decoder\"):   # probably sd15, sdxl, sd35, flux, wan, etc. vae\n            x_embedder = copy.deepcopy(vae.first_stage_model.decoder.conv_in).to(torch.float64)\n            denoised_embed = x_embedder(denoised.to(x_embedder.weight))\n            y0_embed       = x_embedder(y0.to(x_embedder.weight))\n            \n            denoised_embed = apply_style_to_latent(denoised_embed, y0_embed, method)\n            \n            denoised_styled = invert_conv2d(x_embedder, denoised_embed, denoised.shape).to(denoised)\n        \n        elif type(vae.first_stage_model) == StageC_coder:\n            x_embedder = copy.deepcopy(vae.first_stage_model.encoder.mapper[0]).to(torch.float64)\n            #x_embedder = copy.deepcopy(vae.first_stage_model.previewer.blocks[0]).to(torch.float64) # use with strategy for decoder above, but exploding latent problem, 1.E30 etc. 
quick to nan\n\n            denoised_embed = invert_conv2d(x_embedder, denoised, denoised.shape)\n            y0_embed       = invert_conv2d(x_embedder, y0, y0.shape)\n            \n            denoised_embed = apply_style_to_latent(denoised_embed, y0_embed, method)\n            \n            denoised_styled = x_embedder(denoised_embed.to(x_embedder.weight))\n            \n            \n            \n        \n        latent_out = latent.copy() \n        #latent_out['state_info'] = copy.deepcopy(latent['state_info'])\n\n        if latent_out.get('state_info', {}).get('raw_x') is not None:\n            latent_out['state_info']['raw_x'] = denoised_styled\n        latent_out['samples'] = denoised_styled\n        \n        return (latent_out, )\n\n\n\n\n\n\n\ndef apply_style_to_latent(denoised_embed, y0_embed, method=\"WCT\"):\n    from einops import rearrange\n    import torch.nn as nn\n    \n    denoised_embed_shape = denoised_embed.shape\n\n    denoised_embed = rearrange(denoised_embed, \"B C H W -> B (H W) C\")\n    y0_embed       = rearrange(y0_embed,       \"B C H W -> B (H W) C\")\n    \n    if method == \"AdaIN\":\n        denoised_embed = adain_seq_inplace(denoised_embed, y0_embed)\n    \n    elif method == \"WCT\":\n        f_s  = y0_embed[0].clone()           # batched style guides not supported\n        mu_s = f_s.mean(dim=0, keepdim=True)\n        f_s_centered = f_s - mu_s\n        \n        cov = (f_s_centered.transpose(-2,-1).double() @ f_s_centered.double()) / (f_s_centered.size(0) - 1)\n\n        S_eig, U_eig = torch.linalg.eigh(cov + 1e-5 * torch.eye(cov.size(0), dtype=cov.dtype, device=cov.device))\n        S_eig_sqrt    = S_eig.clamp(min=0).sqrt() # eigenvalues -> singular values\n        \n        whiten = U_eig @ torch.diag(S_eig_sqrt) @ U_eig.transpose(-2,-1)\n        y0_color  = whiten.to(f_s_centered)\n\n        for wct_i in range(denoised_embed_shape[0]):\n            f_c          = denoised_embed[wct_i].clone()\n            mu_c         = f_c.mean(dim=0, keepdim=True)\n            f_c_centered = f_c - mu_c\n            \n            cov = (f_c_centered.transpose(-2,-1).double() @ f_c_centered.double()) / (f_c_centered.size(0) - 1)\n\n            S_eig, U_eig  = torch.linalg.eigh(cov + 1e-5 * torch.eye(cov.size(0), dtype=cov.dtype, device=cov.device))\n            inv_sqrt_eig  = S_eig.clamp(min=0).rsqrt() \n            \n            whiten = U_eig @ torch.diag(inv_sqrt_eig) @ U_eig.transpose(-2,-1)\n            whiten = whiten.to(f_c_centered)\n\n            f_c_whitened = f_c_centered @ whiten.transpose(-2,-1)\n            f_cs         = f_c_whitened @ y0_color.transpose(-2,-1).to(f_c_whitened) + mu_s.to(f_c_whitened)\n            \n            denoised_embed[wct_i] = f_cs\n    \n    denoised_embed = rearrange(denoised_embed, \"B (H W) C -> B C H W\", W=denoised_embed_shape[-1])\n    \n    return denoised_embed\n\n\n\ndef invert_conv2d(\n    conv: torch.nn.Conv2d,\n    z:    torch.Tensor,\n    original_shape: torch.Size,\n) -> torch.Tensor:\n    import torch.nn.functional as F\n\n    B, C_in, H, W = original_shape\n    C_out, _, kH, kW = conv.weight.shape\n    stride_h, stride_w = conv.stride\n    pad_h,    pad_w    = conv.padding\n\n    if conv.bias is not None:\n        b = conv.bias.view(1, C_out, 1, 1).to(z)\n        z_nobias = z - b\n    else:\n        z_nobias = z\n\n    W_flat = conv.weight.view(C_out, -1).to(z)  \n    W_pinv = torch.linalg.pinv(W_flat)    \n\n    Bz, Co, Hp, Wp = z_nobias.shape\n    z_flat = z_nobias.reshape(Bz, Co, -1)  \n\n    x_patches = 
W_pinv @ z_flat   \n\n    x_sum = F.fold(\n        x_patches,\n        output_size=(H + 2*pad_h, W + 2*pad_w),\n        kernel_size=(kH, kW),\n        stride=(stride_h, stride_w),\n    )\n    ones = torch.ones_like(x_patches)\n    count = F.fold(\n        ones,\n        output_size=(H + 2*pad_h, W + 2*pad_w),\n        kernel_size=(kH, kW),\n        stride=(stride_h, stride_w),\n    )  \n\n    x_recon = x_sum / count.clamp(min=1e-6)\n    if pad_h > 0 or pad_w > 0:\n        x_recon = x_recon[..., pad_h:pad_h+H, pad_w:pad_w+W]\n\n    return x_recon\n\n\n\"\"\"def invert_conv3d(conv: torch.nn.Conv3d,\n                z: torch.Tensor, original_shape: torch.Size, grid_sizes: Optional[Tuple[int,int,int]] = None) -> torch.Tensor:\n\n    import torch.nn.functional as F\n    B, C_in, D, H, W = original_shape\n    pD, pH, pW = 1,2,2\n    sD, sH, sW = pD, pH, pW\n\n    if z.ndim == 3:\n        # [B, S, C_out] -> reshape to [B, C_out, D', H', W']\n        S = z.shape[1]\n        if grid_sizes is None:\n            Dp = D // pD\n            Hp = H // pH\n            Wp = W // pW\n        else:\n            Dp, Hp, Wp = grid_sizes\n        C_out = z.shape[2]\n        z = z.transpose(1, 2).reshape(B, C_out, Dp, Hp, Wp)\n    else:\n        B2, C_out, Dp, Hp, Wp = z.shape\n        assert B2 == B, \"Batch size mismatch... ya sharked it.\"\n\n    # kncokout bias\n    if conv.bias is not None:\n        b = conv.bias.view(1, C_out, 1, 1, 1)\n        z_nobias = z - b\n    else:\n        z_nobias = z\n\n    # 2D filter -> pinv\n    w3 = conv.weight         # [C_out, C_in, 1, pH, pW]\n    w2 = w3.squeeze(2)                       # [C_out, C_in, pH, pW]\n    out_ch, in_ch, kH, kW = w2.shape\n    \n    W_flat = w2.view(out_ch, -1)            # [C_out, in_ch*pH*pW]\n    W_pinv = torch.linalg.pinv(W_flat)      # [in_ch*pH*pW, C_out]\n\n    # merge depth for 2D unfold wackiness\n    z2 = z_nobias.permute(0,2,1,3,4).reshape(B*Dp, C_out, Hp, Wp)\n\n    # apply pinv ... 
get patch vectors\n    z_flat    = z2.reshape(B*Dp, C_out, -1)  # [B*Dp, C_out, L]\n    x_patches = W_pinv @ z_flat              # [B*Dp, in_ch*pH*pW, L]\n\n    # fold -> spatial frames\n    x2 = F.fold(\n        x_patches,\n        output_size=(H, W),\n        kernel_size=(pH, pW),\n        stride=(sH, sW)\n    )  # → [B*Dp, C_in, H, W]\n\n    # un-merge depth\n    x2 = x2.reshape(B, Dp, in_ch, H, W)           # [B, Dp,  C_in, H, W]\n    x_recon = x2.permute(0,2,1,3,4).contiguous()  # [B, C_in,   D, H, W]\n    return x_recon\n\"\"\"\n\n\n\ndef adain_seq_inplace(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:\n    mean_c = content.mean(1, keepdim=True)\n    std_c  = content.std (1, keepdim=True).add_(eps)  # in-place add\n    mean_s = style.mean  (1, keepdim=True)\n    std_s  = style.std   (1, keepdim=True).add_(eps)\n\n    content.sub_(mean_c).div_(std_c).mul_(std_s).add_(mean_s)  # in-place chain\n    return content\n\n\n\n\nclass LatentUpscaleWithVAE:\n    def __init__(self):\n        pass\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                \"latent\": (\"LATENT\", ),\n                \"width\" : (\"INT\", {\"default\": 1024, \"min\": 8, \"max\": 1024 ** 2, \"step\": 8}),\n                \"height\": (\"INT\", {\"default\": 1024, \"min\": 8, \"max\": 1024 ** 2, \"step\": 8}),\n                \"vae\":    (\"VAE\", ),\n            },\n        }\n\n    RETURN_TYPES = (\"LATENT\",)\n    RETURN_NAMES = (\"latent\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/latents\"\n\n    def main(self,\n            latent,\n            width,\n            height,\n            vae,\n            method          = \"stretch\",\n            interpolation   = \"lanczos\",\n            condition       = \"always\",\n            multiple_of     = 0,\n            keep_proportion = False,\n            ):\n        \n        ratio = 8 # latent compression factor\n        \n        # this is unfortunately required to avoid apparent non-deterministic outputs. 
\n        # without setting the seed each time, the outputs of the VAE encode will change with every generation.\n        torch     .manual_seed    (42)          \n        torch.cuda.manual_seed_all(42)\n        \n        images_prev_list, latent_prev_list = [], []\n        \n        if 'state_info' in latent:\n            #images      = vae.decode(latent['state_info']['raw_x']  ) # .to(latent['samples']) )\n            images      = vae.decode(latent['state_info']['denoised']  ) # .to(latent['samples']) )\n            \n            data_prev_ = latent['state_info']['data_prev_'].squeeze(0)\n            for i in range(data_prev_.shape[0]):\n                images_prev_list.append(   vae.decode(data_prev_[i])  ) # .to(latent['samples'])  )\n        else:\n            images = vae.decode(latent['samples'])\n            \n        if len(images.shape) == 5: #Combine batches\n            images = images.reshape(-1, images.shape[-3], images.shape[-2], images.shape[-1])\n            \n        images = image_resize(images, width, height, method, interpolation, condition, multiple_of, keep_proportion)\n        latent_tensor = vae.encode(images[:,:,:,:3])\n        \n        if images_prev_list:\n            for i in range(data_prev_.shape[0]):\n                image_data_p = image_resize(images_prev_list[i], width, height, method, interpolation, condition, multiple_of, keep_proportion)\n                latent_prev_list.append(   vae.encode(image_data_p[:,:,:,:3])   )\n            latent_prev = torch.stack(latent_prev_list).unsqueeze(0)     #.view_as(latent['state_info']['data_prev_'])\n            #images_prev = image_resize(images_prev, width, height, method, interpolation, condition, multiple_of, keep_proportion)\n            #latent_tensor = vae.encode(image_1[:,:,:,:3])\n        \n        if 'state_info' in latent:\n            #latent['state_info']['raw_x']      = latent_tensor\n            latent['state_info']['denoised']   = latent_tensor\n            latent['state_info']['data_prev_'] = latent_prev\n            \n        latent['samples'] = latent_tensor.to(latent['samples'])\n\n        return (latent,)\n\n\n\nclass SigmasSchedulePreview(SaveImage):\n    def __init__(self):\n        self.output_dir = folder_paths.get_temp_directory()\n        self.type = \"temp\"\n        self.prefix_append = \"_temp_\" + ''.join(random.choice(\"abcdefghijklmnopqrstupvxyz1234567890\") for x in range(5))\n        self.compress_level = 4\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                \"model\":       (\"MODEL\",),\n                \"noise_mode\":  (NOISE_MODE_NAMES,             {\"default\": 'hard',                                        \"tooltip\": \"How noise scales with the sigma schedule. 
Hard is the most aggressive, the others start strong and drop rapidly.\"}),\n                \"eta\":         (\"FLOAT\",                      {\"default\": 0.25, \"step\": 0.01, \"min\": -1000.0, \"max\": 1000.0}),\n                \"s_noise\":     (\"FLOAT\",                      {\"default\": 1.00, \"step\": 0.01, \"min\": -1000.0, \"max\": 1000.0}),\n                \"denoise\":     (\"FLOAT\",                      {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01}),\n                \"denoise_alt\": (\"FLOAT\",                      {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01}),\n                \"scheduler\":   (get_res4lyf_scheduler_list(), {\"default\": \"beta57\"},),\n                \"steps\":       (\"INT\",                        {\"default\": 30, \"min\": 1, \"max\": 10000}),\n                \"plot_max\":    (\"FLOAT\",                      {\"default\": 2.1, \"min\": -10000, \"max\": 10000, \"step\":0.01, \"tooltip\": \"Set to a negative value to have the plot scale automatically.\"}),\n                \"plot_min\":    (\"FLOAT\",                      {\"default\": 0.0, \"min\": -10000, \"max\": 10000, \"step\":0.01, \"tooltip\": \"Set to a negative value to have the plot scale automatically.\"}),\n            },\n            \"optional\": {\n                \"sigmas\":      (\"SIGMAS\",),\n            },\n        }\n\n    FUNCTION = \"plot_schedule\"\n    CATEGORY = \"RES4LYF/sigmas\"\n    OUTPUT_NODE = True\n\n\n    @staticmethod\n    def tensor_to_graph_image(tensors, labels, colors, plot_min, plot_max, input_params):\n        plt.figure(figsize=(6.4, 6.4), dpi=320) \n        ax = plt.gca()\n        ax.set_facecolor(\"black\") \n        ax.patch.set_alpha(1.0)  \n\n        for _ in range(50):\n            for tensor, color in zip(tensors, colors):\n                plt.plot(tensor.numpy(), color=color, alpha=0.1)\n\n        plt.axhline(y=1.0, color='gray', linestyle='dotted', linewidth=1.5)\n\n        plt.xlabel(\"Step\", color=\"white\", weight=\"bold\", antialiased=False)\n        plt.ylabel(\"Value\", color=\"white\", weight=\"bold\", antialiased=False)\n        ax.tick_params(colors=\"white\") \n\n        if plot_max > 0:\n            plt.ylim(plot_min, plot_max)\n\n        input_text = (\n            f\"noise_mode:  {input_params['noise_mode']}  |  \"\n            f\"eta:         {input_params['eta']}  |  \"\n            f\"s_noise:     {input_params['s_noise']}  |  \"\n            f\"denoise:     {input_params['denoise']}  |  \"\n            f\"denoise_alt: {input_params['denoise_alt']}  |  \"\n            f\"scheduler:   {input_params['scheduler']}\"\n        )\n        plt.text(0.5, 1.05, input_text, ha='center', va='center', color='white', fontsize=8, transform=ax.transAxes)\n\n        from matplotlib.lines import Line2D\n        legend_handles = [Line2D([0], [0], color=color, lw=2, label=label) for label, color in zip(labels, colors)]\n        plt.legend(handles=legend_handles, facecolor=\"black\", edgecolor=\"white\", labelcolor=\"white\", framealpha=1.0)\n\n        with BytesIO() as buf:\n            plt.savefig(buf, format='png', facecolor=\"black\")\n            buf.seek(0)\n            image = Image.open(buf).copy()\n        plt.close()\n        return image\n\n\n    def plot_schedule(self, model, noise_mode, eta, s_noise, denoise, denoise_alt, scheduler, steps, plot_min, plot_max, sigmas=None):\n        sigma_vals               = []\n        sigma_next_vals          = []\n        sigma_down_vals          = []\n        
sigma_up_vals            = []\n        sigma_plus_up_vals       = []\n        sigma_hat_vals           = []\n        alpha_ratio_vals         = []\n        sigma_step_size_vals     = []\n        sigma_step_size_sde_vals = []\n        \n        rk_type = \"res_2s\"\n        noise_anchor = 1.0\n\n        if sigmas is not None:\n            sigmas = sigmas.clone()\n        else: \n            sigmas = get_sigmas(model, scheduler, steps, denoise)\n        sigmas *= denoise_alt\n\n        RK = RK_Method_Beta.create(model, rk_type, noise_anchor, model_device=sigmas.device, work_device=sigmas.device, dtype=sigmas.dtype, extra_options=\"\")\n        NS = RK_NoiseSampler(RK, model, device=sigmas.device, dtype=sigmas.dtype, extra_options=\"\")\n\n        for i in range(len(sigmas) - 1):\n            sigma = sigmas[i]\n            sigma_next = sigmas[i + 1]\n            \n            su, sigma_hat, sd, alpha_ratio = NS.get_sde_step(sigma, sigma_next, eta, noise_mode_override=noise_mode, )\n            #su, sigma_hat, sd, alpha_ratio = get_res4lyf_step_with_model(model, sigma, sigma_next, eta, noise_mode)\n\n            su = su * s_noise\n            \n            sigma_vals              .append(sigma)\n            sigma_next_vals         .append(sigma_next)\n            sigma_down_vals         .append(sd)\n            sigma_up_vals           .append(su)\n            sigma_plus_up_vals      .append(sigma + su)\n            alpha_ratio_vals        .append(alpha_ratio)\n            sigma_step_size_vals    .append(sigma - sigma_next)\n            sigma_step_size_sde_vals.append(sigma + su - sd)\n\n            if sigma_hat != sigma:\n                sigma_hat_vals.append(sigma_hat)\n\n        sigma_tensor               = torch.tensor(sigma_vals)\n        sigma_next_tensor          = torch.tensor(sigma_next_vals)\n        sigma_down_tensor          = torch.tensor(sigma_down_vals)\n        sigma_up_tensor            = torch.tensor(sigma_up_vals)\n        sigma_plus_up_tensor       = torch.tensor(sigma_plus_up_vals)\n        alpha_ratio_tensor         = torch.tensor(alpha_ratio_vals)\n        sigma_step_size_tensor     = torch.tensor(sigma_step_size_vals)\n        sigma_step_size_sde_tensor = torch.tensor(sigma_step_size_sde_vals)\n\n        tensors = [sigma_tensor, sigma_next_tensor, sigma_down_tensor, sigma_up_tensor]\n        labels = [\"$σ$\", \"$σ_{next}$\", \"$σ_{down}$\", \"$σ_{up}$\"]\n        colors = [\"white\", \"dodgerblue\", \"green\", \"red\"]\n        \n        if torch.norm(sigma_next_tensor - sigma_down_tensor) < 1e-2:\n            tensors = [sigma_tensor, sigma_next_tensor, sigma_up_tensor]\n            labels = [\"$σ$\", \"$σ_{next,down}$\", \"$σ_{up}$\"]\n            colors = [\"white\", \"cyan\", \"red\"]\n            \n        elif torch.norm(sigma_next_tensor - sigma_up_tensor) < 1e-2:\n            tensors = [sigma_tensor, sigma_next_tensor, sigma_down_tensor]\n            labels = [\"$σ$\", \"$σ_{next,up}$\", \"$σ_{down}$\"]\n            colors = [\"white\", \"violet\", \"green\",]\n        \n        if torch.norm(sigma_tensor - sigma_plus_up_tensor) > 1e-2:\n            tensors.append(sigma_plus_up_tensor)\n            labels.append(\"$σ + σ_{up}$\")\n            colors.append(\"brown\")\n        \n        if torch.norm(sigma_step_size_tensor - sigma_step_size_sde_tensor) > 1e-2:\n            tensors.append(sigma_step_size_sde_tensor)\n            labels.append(r\"$Δ \\hat{t}$\")\n            colors.append(\"gold\")\n            \n        if sigma_hat_vals:\n            sigma_hat_tensor = torch.tensor(sigma_hat_vals)\n            tensors.append(sigma_hat_tensor)\n            labels.append(\"$σ̂$\")\n            colors.append(\"maroon\")\n            \n            tensors.append(sigma_step_size_tensor)\n            labels.append(\"$σ̂ - σ_{next}$\")\n            colors.append(\"darkorange\")\n        else:\n            tensors.append(sigma_step_size_tensor)\n            #labels.append(\"$σ - σ_{next}$\")\n            labels.append(\"$Δt$\")\n            colors.append(\"darkorange\")\n        \n        tensors.append(alpha_ratio_tensor)\n        labels.append(\"$α_{ratio}$\")\n        colors.append(\"grey\")\n        \n        \n        graph_image = self.tensor_to_graph_image(\n            tensors, labels, colors, plot_min, plot_max,\n            input_params={\n                \"noise_mode\": noise_mode,\n                \"eta\": eta,\n                \"s_noise\": s_noise,\n                \"denoise\": denoise,\n                \"denoise_alt\": denoise_alt,\n                \"scheduler\": scheduler,\n            }\n        )\n\n        numpy_image   = np.array(graph_image)\n        numpy_image   = numpy_image / 255.0\n        tensor_image  = torch.from_numpy(numpy_image)\n        tensor_image  = tensor_image.unsqueeze(0)\n        images_tensor = torch.cat([tensor_image], 0)\n\n        return self.save_images(images_tensor, \"SigmasSchedulePreview\")\n    \n    "
  },
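`VAEStyleTransferLatent` above restyles latents inside the VAE's conv-in feature space and then undoes that embedding with `invert_conv2d`, which recovers input patches via a pseudo-inverse of the flattened kernel and reassembles them with `F.fold` plus count normalization. A minimal round-trip sanity check follows; the import path is an assumption, and the module's ComfyUI imports must resolve:

```python
# Round-trip sketch for invert_conv2d: conv -> pseudo-inverse -> original input.
import torch
import torch.nn as nn
from RES4LYF.helper_sigma_preview_image_preproc import invert_conv2d  # assumed path

torch.manual_seed(0)

# Patch-embed-style conv: non-overlapping 2x2 patches with
# C_out (64) >= C_in * kH * kW (16), so the flattened weight matrix has
# full column rank and the pseudo-inverse recovers every patch exactly.
conv = nn.Conv2d(4, 64, kernel_size=2, stride=2, bias=True).double()
x    = torch.randn(1, 4, 16, 16, dtype=torch.float64)

z     = conv(x)                          # forward "embedding"
x_rec = invert_conv2d(conv, z, x.shape)  # pseudo-inverse + fold

print(torch.allclose(x, x_rec, atol=1e-6))  # True, up to numerical error
```

When the flattened kernel has fewer rows than patch elements (C_out < C_in*kH*kW), the pseudo-inverse yields only a least-squares patch estimate, so the inversion becomes approximate; the fold's count buffer then averages the overlapping estimates.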
  {
    "path": "hidream/model.py",
    "content": "import torch\nimport torch.nn.functional as F\nimport math\n\nimport torch.nn as nn\nfrom torch import Tensor, FloatTensor\nfrom typing import Optional, Callable, Tuple, List, Dict, Any, Union, TYPE_CHECKING, TypeVar\nfrom dataclasses import dataclass\n\nimport einops\nfrom einops import repeat, rearrange\n\nfrom comfy.ldm.lightricks.model import TimestepEmbedding, Timesteps\nimport torch.nn.functional as F\n\nfrom comfy.ldm.flux.math import apply_rope, rope\n#from comfy.ldm.flux.layers import LastLayer\n#from ..flux.layers import LastLayer\n\nfrom comfy.ldm.modules.attention import optimized_attention, attention_pytorch\nimport comfy.model_management\nimport comfy.ldm.common_dit\n\nfrom ..helper  import ExtraOptions\nfrom ..latents import slerp_tensor, interpolate_spd, tile_latent, untile_latent, gaussian_blur_2d, median_blur_2d\nfrom ..style_transfer import StyleMMDiT_Model, apply_scattersort_masked, apply_scattersort_tiled, adain_seq_inplace, adain_patchwise_row_batch_med, adain_patchwise_row_batch, adain_seq, apply_scattersort\n\n@dataclass\nclass ModulationOut:\n    shift: Tensor\n    scale: Tensor\n    gate : Tensor\n    \n\n\nclass BlockType:\n    Double = 2\n    Single = 1\n    Zero   = 0\n\n\n#########################################################################################################################################################################\nclass HDBlock(nn.Module):\n    def __init__(\n        self,\n        dim                   : int,\n        heads                 : int,\n        head_dim              : int,\n        num_routed_experts    : int = 4,\n        num_activated_experts : int = 2,\n        block_type            : BlockType = BlockType.Zero,\n        dtype=None, device=None, operations=None\n    ):\n        super().__init__()\n        block_classes = {\n            BlockType.Double : HDBlockDouble,\n            BlockType.Single : HDBlockSingle,\n        }\n        self.block = block_classes[block_type](dim, heads, head_dim, num_routed_experts, num_activated_experts, dtype=dtype, device=device, operations=operations)\n\n    def forward(\n        self,\n        img       :          FloatTensor,\n        img_masks : Optional[FloatTensor]  = None,\n        txt       : Optional[FloatTensor]  = None,\n        clip      :          FloatTensor   = None,\n        rope      :          FloatTensor   = None,\n        mask      : Optional[FloatTensor]  = None,\n        update_cross_attn : Optional[Dict] = None,\n        style_block  = None,\n    ) -> FloatTensor:\n        return self.block(img, img_masks, txt, clip, rope, mask, update_cross_attn, style_block=style_block)\n\n\n\n# Copied from https://github.com/black-forest-labs/flux/blob/main/src/flux/modules/layers.py\nclass EmbedND(nn.Module):\n    def __init__(self, theta: int, axes_dim: List[int]):\n        super().__init__()\n        self.theta    = theta\n        self.axes_dim = axes_dim\n\n    def forward(self, ids: Tensor) -> Tensor:\n        n_axes = ids.shape[-1]\n        emb = torch.cat([  rope(ids[..., i], self.axes_dim[i], self.theta)   for i in range(n_axes)],    dim=-3,)\n        return emb.unsqueeze(2)\n\nclass PatchEmbed(nn.Module):\n    def __init__(\n        self,\n        patch_size   = 2,\n        in_channels  = 4,\n        out_channels = 1024,\n        dtype=None, device=None, operations=None\n    ):\n        super().__init__()\n        self.patch_size   = patch_size\n        self.out_channels = out_channels\n        self.proj         = operations.Linear(in_channels * 
patch_size * patch_size, out_channels, bias=True, dtype=dtype, device=device)\n\n    def forward(self, latent):\n        latent = self.proj(latent)\n        return latent\n\nclass PooledEmbed(nn.Module):\n    def __init__(self, text_emb_dim, hidden_size, dtype=None, device=None, operations=None):\n        super().__init__()\n        self.pooled_embedder = TimestepEmbedding(in_channels=text_emb_dim, time_embed_dim=hidden_size, dtype=dtype, device=device, operations=operations)\n\n    def forward(self, pooled_embed):\n        return self.pooled_embedder(pooled_embed)\n\nclass TimestepEmbed(nn.Module):\n    def __init__(self, hidden_size, frequency_embedding_size=256, dtype=None, device=None, operations=None):\n        super().__init__()\n        self.time_proj         = Timesteps       (num_channels=frequency_embedding_size, flip_sin_to_cos=True, downscale_freq_shift=0)\n        self.timestep_embedder = TimestepEmbedding(in_channels=frequency_embedding_size, time_embed_dim=hidden_size, dtype=dtype, device=device, operations=operations)\n\n    def forward(self, t, wdtype):\n        t_emb = self.time_proj(t).to(dtype=wdtype)\n        t_emb = self.timestep_embedder(t_emb)\n        return t_emb\n\nclass TextProjection(nn.Module):\n    def __init__(self, in_features, hidden_size, dtype=None, device=None, operations=None):\n        super().__init__()\n        self.linear = operations.Linear(in_features=in_features, out_features=hidden_size, bias=False, dtype=dtype, device=device)\n\n    def forward(self, caption):\n        hidden_states = self.linear(caption)\n        return hidden_states\n\n\n\n\n\n\nclass HDFeedForwardSwiGLU(nn.Module):\n    def __init__(\n        self,\n        dim                : int,\n        hidden_dim         : int,\n        multiple_of        : int             = 256,\n        ffn_dim_multiplier : Optional[float] = None,\n        dtype=None, device=None, operations=None\n    ):\n        super().__init__()\n        hidden_dim = int(2 * hidden_dim / 3)\n        \n        if ffn_dim_multiplier is not None:  # custom dim factor multiplier\n            hidden_dim = int(ffn_dim_multiplier * hidden_dim)\n        hidden_dim = multiple_of * ((hidden_dim + multiple_of - 1) // multiple_of)\n\n        self.w1 = operations.Linear(dim, hidden_dim, bias=False, dtype=dtype, device=device)\n        self.w2 = operations.Linear(hidden_dim, dim, bias=False, dtype=dtype, device=device)\n        self.w3 = operations.Linear(dim, hidden_dim, bias=False, dtype=dtype, device=device)\n\n    def forward(self, x, style_block=None): # 1,4096,2560 -> \n        if style_block is not None and x.shape[0] > 1 and x.ndim == 3:\n            x1 = self.w1(x)\n            x1 = style_block(x1, \"ff_1\")\n            \n            x1 = torch.nn.functional.silu(x1)\n            x1 = style_block(x1, \"ff_1_silu\")\n            \n            x3 = self.w3(x)\n            x3 = style_block(x3, \"ff_3\")\n            \n            x13 = x1 * x3\n            x13 = style_block(x13, \"ff_13\")\n            \n            x2 = self.w2(x13)\n            x2 = style_block(x2, \"ff_2\")\n            \n            return x2\n        else:\n            return self.w2(torch.nn.functional.silu(self.w1(x)) * self.w3(x))\n\n# Modified from https://github.com/deepseek-ai/DeepSeek-V3/blob/main/inference/model.py\nclass HDMoEGate(nn.Module):\n    def __init__(self, dim, num_routed_experts=4, num_activated_experts=2, dtype=None, device=None):\n        super().__init__()\n        self.top_k            = num_activated_experts # 2\n        
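# DeepSeek-V3 style router: forward() softmaxes the per-token logits over the routed experts and\n        # returns the top_k scores and indices (sorted=False), which are then used to weight the selected\n        # expert outputs in HDMOEFeedForwardSwiGLU.forward below.\n        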
self.n_routed_experts = num_routed_experts    # 4\n        self.gating_dim       = dim                   # 2560\n        self.weight           = nn.Parameter(torch.empty((self.n_routed_experts, self.gating_dim), dtype=dtype, device=device))\n\n    def forward(self, x):\n        dtype = self.weight.dtype\n        if dtype not in {torch.bfloat16, torch.float16, torch.float32, torch.float64}:\n            dtype = torch.float32\n            self.weight.data = self.weight.data.to(dtype)\n        \n        logits = F.linear(x.to(dtype), self.weight.to(x.device), None)\n        scores = logits.softmax(dim=-1).to(x)       # logits.shape == 4032,4   scores.shape == 4032,4\n        return torch.topk(scores, k=self.top_k, dim=-1, sorted=False)\n\nclass HDMOEFeedForwardSwiGLU(nn.Module):\n    def __init__(\n        self,\n        dim                   : int,\n        hidden_dim            : int,\n        num_routed_experts    : int,\n        num_activated_experts : int,\n        dtype=None, device=None, operations=None\n    ):\n        super().__init__()\n        self.shared_experts =                HDFeedForwardSwiGLU(dim, hidden_dim // 2,  dtype=dtype, device=device, operations=operations)\n        self.experts        = nn.ModuleList([HDFeedForwardSwiGLU(dim, hidden_dim     ,  dtype=dtype, device=device, operations=operations) for i in range(num_routed_experts)])\n        self.gate           = HDMoEGate(dim, num_routed_experts, num_activated_experts, dtype=dtype, device=device)\n        self.num_activated_experts = num_activated_experts\n\n    def forward(self, x, style_block=None):\n        y_shared = self.shared_experts(x, style_block.FF_SHARED)   # 1,4096,2560 -> 1,4096,2560 \n        y_shared = style_block(y_shared, \"shared\")\n\n        topk_weight, topk_idx = self.gate(x) # -> 4096,2   4096,2\n        topk_weight = style_block(topk_weight, \"topk_weight\")\n                \n        if y_shared.shape[0] > 1 and style_block.gate[0] and not HDModel.RECON_MODE:\n            topk_idx[0] = topk_idx[1]\n        tk_idx_flat = topk_idx.view(topk_idx.shape[0], -1) \n        \n        x = x.repeat_interleave(self.num_activated_experts, dim=-2)\n        y = torch.empty_like(x)\n        \n        if style_block.gate[0] and not HDModel.RECON_MODE and y_shared.shape[0] > 1:\n            for i, expert in enumerate(self.experts): # TODO: check for empty expert lists and continue if found to avoid CUBLAS errors\n                x_list = []\n                for b in range(x.shape[0]):\n                    x_sel = x[b][tk_idx_flat[b]==i]\n                    x_list.append(x_sel)\n                x_list = torch.stack(x_list, dim=0)\n                x_out = expert(x_list, style_block.FF_SEPARATE).to(x.dtype)\n                for b in range(y.shape[0]):\n                    y[b][tk_idx_flat[b]==i] = x_out[b]\n        else:\n            for i, expert in enumerate(self.experts): \n                x_sel = x[tk_idx_flat == i, :]\n                if x_sel.shape[0] == 0:\n                    continue \n                y[tk_idx_flat == i, :] = expert(x_sel).to(x.dtype)\n                \n        y = style_block(y, \"separate\")\n\n        y_sum = torch.einsum('abk,abkd->abd', topk_weight, y.view(*topk_weight.shape, -1))\n        \n        y_sum = style_block(y_sum, \"sum\")\n        \n        y_sum = y_sum.view_as(y_shared) + y_shared\n\n        y_sum = style_block(y_sum, \"out\")\n        \n        return y_sum\n\n\ndef apply_passthrough(denoised_embed, *args, **kwargs):\n    return denoised_embed\n\nclass 
AttentionBuffer:\n    buffer = {}\n\n\ndef attention(q: Tensor, k: Tensor, v: Tensor, rope: Tensor, mask: Optional[Tensor] = None):\n    q, k = apply_rope(q, k, rope)\n    if mask is not None:\n        AttentionBuffer.buffer = attention_pytorch(\n            q.view(q.shape[0], -1, q.shape[-1] * q.shape[-2]), \n            k.view(k.shape[0], -1, k.shape[-1] * k.shape[-2]), \n            v.view(v.shape[0], -1, v.shape[-1] * v.shape[-2]), \n            q.shape[2],\n            mask=mask,\n            )\n    else:\n        AttentionBuffer.buffer = optimized_attention(\n            q.view(q.shape[0], -1, q.shape[-1] * q.shape[-2]), \n            k.view(k.shape[0], -1, k.shape[-1] * k.shape[-2]), \n            v.view(v.shape[0], -1, v.shape[-1] * v.shape[-2]), \n            q.shape[2],\n            mask=mask,\n            )\n    return AttentionBuffer.buffer\n\nclass HDAttention(nn.Module):\n    def __init__(\n        self,\n        query_dim        : int,\n        heads            : int   = 8,\n        dim_head         : int   = 64,\n\n        eps              : float = 1e-5,\n        out_dim          : int   = None,\n        single           : bool  = False,\n        dtype=None, device=None, operations=None\n    ):\n\n        super().__init__()\n        self.inner_dim          = out_dim if out_dim is not None else dim_head * heads\n        self.query_dim          = query_dim\n        self.out_dim            = out_dim if out_dim is not None else query_dim\n\n        self.heads              = out_dim // dim_head if out_dim is not None else heads\n        self.single             = single\n\n        self.to_q               = operations.Linear (self.query_dim, self.inner_dim, dtype=dtype, device=device)\n        self.to_k               = operations.Linear (self.inner_dim, self.inner_dim, dtype=dtype, device=device)\n        self.to_v               = operations.Linear (self.inner_dim, self.inner_dim, dtype=dtype, device=device)\n        self.to_out             = operations.Linear (self.inner_dim, self.out_dim,   dtype=dtype, device=device)\n        self.q_rms_norm         = operations.RMSNorm(self.inner_dim, eps,            dtype=dtype, device=device)\n        self.k_rms_norm         = operations.RMSNorm(self.inner_dim, eps,            dtype=dtype, device=device)\n\n        if not single:\n            self.to_q_t         = operations.Linear (self.query_dim, self.inner_dim, dtype=dtype, device=device)\n            self.to_k_t         = operations.Linear (self.inner_dim, self.inner_dim, dtype=dtype, device=device)\n            self.to_v_t         = operations.Linear (self.inner_dim, self.inner_dim, dtype=dtype, device=device)\n            self.to_out_t       = operations.Linear (self.inner_dim, self.out_dim,   dtype=dtype, device=device)\n            self.q_rms_norm_t   = operations.RMSNorm(self.inner_dim, eps,            dtype=dtype, device=device)\n            self.k_rms_norm_t   = operations.RMSNorm(self.inner_dim, eps,            dtype=dtype, device=device)\n\n    def forward(\n        self,\n        img       :          FloatTensor,\n        img_masks : Optional[FloatTensor] = None,\n        txt       : Optional[FloatTensor] = None,\n        rope      :          FloatTensor  = None,\n        mask      : Optional[FloatTensor] = None,\n        update_cross_attn : Optional[Dict]= None,\n        style_block = None,\n    ) -> Tensor:\n        bsz = img.shape[0]\n        \n        img_q = self.to_q(img)\n        img_k = self.to_k(img)\n        img_v = self.to_v(img)\n        \n        img_q = 
style_block.img.ATTN(img_q, \"q_proj\")\n        img_k = style_block.img.ATTN(img_k, \"k_proj\")\n        img_v = style_block.img.ATTN(img_v, \"v_proj\")\n        \n        img_q = self.q_rms_norm(img_q)\n        img_k = self.k_rms_norm(img_k)\n        \n        img_q = style_block.img.ATTN(img_q, \"q_norm\")\n        img_k = style_block.img.ATTN(img_k, \"k_norm\")\n\n        inner_dim = img_k.shape[-1]\n        head_dim  = inner_dim // self.heads\n\n        img_q = img_q.view(bsz, -1, self.heads, head_dim)\n        img_k = img_k.view(bsz, -1, self.heads, head_dim)\n        img_v = img_v.view(bsz, -1, self.heads, head_dim)\n        \n        if img_masks is not None:\n            img_k = img_k * img_masks.view(bsz, -1, 1, 1)\n\n        if self.single:\n            attn = attention(img_q, img_k, img_v, rope=rope, mask=mask)\n            attn = style_block.img.ATTN(attn, \"out\")\n            return self.to_out(attn)\n        else:\n            txt_q = self.to_q_t(txt)\n            txt_k = self.to_k_t(txt)\n            txt_v = self.to_v_t(txt)\n            \n            txt_q = style_block.txt.ATTN(txt_q, \"q_proj\")\n            txt_k = style_block.txt.ATTN(txt_k, \"k_proj\")\n            txt_v = style_block.txt.ATTN(txt_v, \"v_proj\")\n            \n            txt_q = self.q_rms_norm_t(txt_q)\n            txt_k = self.k_rms_norm_t(txt_k)\n            \n            txt_q = style_block.txt.ATTN(txt_q, \"q_norm\")\n            txt_k = style_block.txt.ATTN(txt_k, \"k_norm\")\n\n            txt_q   = txt_q.view(bsz, -1, self.heads, head_dim)\n            txt_k   = txt_k.view(bsz, -1, self.heads, head_dim)\n            txt_v   = txt_v.view(bsz, -1, self.heads, head_dim)\n            \n            img_len = img_q.shape[1]\n            txt_len = txt_q.shape[1]\n            \n            attn    = attention(torch.cat([img_q, txt_q], dim=1), \n                                torch.cat([img_k, txt_k], dim=1), \n                                torch.cat([img_v, txt_v], dim=1), rope=rope, mask=mask)\n            \n            img_attn, txt_attn = torch.split(attn, [img_len, txt_len], dim=1)   #1, 4480, 2560\n            \n            img_attn = style_block.img.ATTN(img_attn, \"out\")\n            txt_attn = style_block.txt.ATTN(txt_attn, \"out\")\n\n            if update_cross_attn is not None:\n                if not update_cross_attn['skip_cross_attn']:\n                    UNCOND      = update_cross_attn['UNCOND']\n                    \n                    if UNCOND:\n                        llama_start = update_cross_attn['src_llama_start']\n                        llama_end   = update_cross_attn['src_llama_end']\n                        t5_start    = update_cross_attn['src_t5_start']\n                        t5_end      = update_cross_attn['src_t5_end']\n                    \n                        txt_src    = torch.cat([txt[:,t5_start:t5_end,:], txt[:,128+llama_start:128+llama_end,:], txt[:,256+llama_start:256+llama_end],], dim=-2).float()\n                        self.c_src = txt_src.transpose(-2,-1).squeeze(0)    # shape [C,1]\n                    else:\n                        llama_start = update_cross_attn['tgt_llama_start']\n                        llama_end   = update_cross_attn['tgt_llama_end']\n                        t5_start    = update_cross_attn['tgt_t5_start']\n                        t5_end      = update_cross_attn['tgt_t5_end']\n                        \n                        lamb  = update_cross_attn['lamb']\n                        erase = update_cross_attn['erase']\n       
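\n                        # Note: the block below is a closed-form ridge-regression style edit of the cross-attention\n                        # K/V projection weights (in the spirit of concept-editing methods such as UCE/TIME):\n                        #   W_new = (lamb*W_old + erase*(W_old @ c_guide) @ c_src^T) @ (lamb*I + erase*(c_src @ c_src^T))^-1\n                        # which steers W_new @ c_src toward W_old @ c_guide, while a large lamb keeps W_new close to W_old.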
                 \n                        txt_guide = torch.cat([txt[:,t5_start:t5_end,:], txt[:,128+llama_start:128+llama_end,:], txt[:,256+llama_start:256+llama_end],], dim=-2).float()\n                        c_guide   = txt_guide.transpose(-2,-1).squeeze(0)  # [C,1]\n                        \n                        Wv_old       = self.to_v_t.weight.data.float()              # [C,C]\n                        Wk_old       = self.to_k_t.weight.data.float()              # [C,C]\n\n                        v_star       = Wv_old @ c_guide                             # [C,1]\n                        k_star       = Wk_old @ c_guide                             # [C,1]\n\n                        c_src        = self.c_src                                   # [C,1]\n\n                        erase_scale  = erase\n                        d            = c_src.shape[0]\n\n                        C            = c_src @ c_src.T                              # [C,C]\n                        I            = torch.eye(d, device=C.device, dtype=C.dtype)\n\n                        mat1_v       = lamb*Wv_old + erase_scale*(v_star @ c_src.T)     # [C,C]\n                        mat2_v       = lamb*I      + erase_scale*(C)                    # [C,C]\n                        Wv_new       = mat1_v @ torch.inverse(mat2_v)                   # [C,C]\n\n                        mat1_k       = lamb*Wk_old + erase_scale*(k_star @ c_src.T)     # [C,C]\n                        mat2_k       = lamb*I      + erase_scale*(C)                    # [C,C]\n                        Wk_new       = mat1_k @ torch.inverse(mat2_k)                   # [C,C]\n\n                        self.to_v_t.weight.data.copy_(Wv_new.to(self.to_v_t.weight.data.dtype))\n                        self.to_k_t.weight.data.copy_(Wk_new.to(self.to_k_t.weight.data.dtype))\n                \n            return self.to_out(img_attn), self.to_out_t(txt_attn)\n\n    \n    \n    \n#########################################################################################################################################################################\nclass HDBlockDouble(nn.Module):\n    buffer = {}\n    \n    def __init__(\n        self,\n        dim                   : int,\n        heads                 : int,\n        head_dim              : int,\n        num_routed_experts    : int = 4,\n        num_activated_experts : int = 2,\n        dtype=None, device=None, operations=None\n    ):\n        super().__init__()\n        self.adaLN_modulation = nn.Sequential(\n            nn.SiLU(),\n            operations.Linear(dim, 12*dim, bias=True,                                               dtype=dtype, device=device)\n        )\n\n        self.norm1_i = operations.LayerNorm(dim, eps = 1e-06, elementwise_affine = False,           dtype=dtype, device=device)\n        self.norm1_t = operations.LayerNorm(dim, eps = 1e-06, elementwise_affine = False,           dtype=dtype, device=device)\n        \n        self.attn1   = HDAttention         (dim, heads, head_dim, single=False,                     dtype=dtype, device=device, operations=operations)\n\n        self.norm3_i = operations.LayerNorm(dim, eps = 1e-06, elementwise_affine = False,           dtype=dtype, device=device)\n        self.ff_i    = HDMOEFeedForwardSwiGLU(dim, 4*dim, num_routed_experts, num_activated_experts,  dtype=dtype, device=device, operations=operations)\n        \n        self.norm3_t = operations.LayerNorm(dim, eps = 1e-06, elementwise_affine = False,           dtype=dtype, device=device)                
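\n        # Layout note: adaLN_modulation above emits 12 chunks (attn and FF shift/scale/gate for each of the\n        # img and txt streams); the img stream gets the MoE feed-forward while the txt stream uses a plain SwiGLU.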
                 \n        self.ff_t    =  HDFeedForwardSwiGLU(dim, 4*dim,                                             dtype=dtype, device=device, operations=operations)\n\n    def forward(\n        self,\n        img       :          FloatTensor,\n        img_masks : Optional[FloatTensor] = None,\n        txt       : Optional[FloatTensor] = None,\n        clip      : Optional[FloatTensor] = None,    # clip = t + p_embedder (from pooled)\n        rope      :          FloatTensor  = None,\n        mask      : Optional[FloatTensor] = None,\n        update_cross_attn : Optional[Dict]= None,\n        style_block = None,\n    ) -> FloatTensor:\n        \n        img_msa_shift, img_msa_scale, img_msa_gate, img_mlp_shift, img_mlp_scale, img_mlp_gate, \\\n        txt_msa_shift, txt_msa_scale, txt_msa_gate, txt_mlp_shift, txt_mlp_scale, txt_mlp_gate = self.adaLN_modulation(clip)[:,None].chunk(12, dim=-1)      # 1,1,2560           \n\n        img_norm = self.norm1_i(img)\n        txt_norm = self.norm1_t(txt)\n        \n        img_norm = style_block.img(img_norm, \"attn_norm\")\n        txt_norm = style_block.txt(txt_norm, \"attn_norm\")\n        \n        img_norm = img_norm * (1+img_msa_scale) + img_msa_shift\n        txt_norm = txt_norm * (1+txt_msa_scale) + txt_msa_shift\n        \n        img_norm = style_block.img(img_norm, \"attn_norm_mod\")\n        txt_norm = style_block.txt(txt_norm, \"attn_norm_mod\")\n\n        img_attn, txt_attn = self.attn1(img_norm, img_masks, txt_norm, rope=rope, mask=mask, update_cross_attn=update_cross_attn, style_block=style_block)\n        \n        img_attn = style_block.img(img_attn, \"attn\")\n        txt_attn = style_block.txt(txt_attn, \"attn\")\n\n        img_attn *= img_msa_gate\n        txt_attn *= txt_msa_gate\n\n        img_attn = style_block.img(img_attn, \"attn_gated\")\n        txt_attn = style_block.txt(txt_attn, \"attn_gated\")\n\n        img += img_attn\n        txt += txt_attn\n\n        img = style_block.img(img, \"attn_res\")\n        txt = style_block.txt(txt, \"attn_res\")\n\n        # FEED FORWARD\n\n        img_norm = self.norm3_i(img)\n        txt_norm = self.norm3_t(txt)\n\n        img_norm = style_block.img(img_norm, \"ff_norm\")\n        txt_norm = style_block.txt(txt_norm, \"ff_norm\")\n\n        img_norm = img_norm * (1+img_mlp_scale) + img_mlp_shift\n        txt_norm = txt_norm * (1+txt_mlp_scale) + txt_mlp_shift\n\n        img_norm = style_block.img(img_norm, \"ff_norm_mod\")\n        txt_norm = style_block.txt(txt_norm, \"ff_norm_mod\")\n\n        img_ff_i = self.ff_i(img_norm, style_block.img.FF)\n        txt_ff_t = self.ff_t(txt_norm, style_block.txt.FF)\n        \n        img_ff_i = style_block.img(img_ff_i, \"ff\")\n        txt_ff_t = style_block.txt(txt_ff_t, \"ff\")\n        \n        img_ff_i *= img_mlp_gate\n        txt_ff_t *= txt_mlp_gate\n\n        img_ff_i = style_block.img(img_ff_i, \"ff_gated\")\n        txt_ff_t = style_block.txt(txt_ff_t, \"ff_gated\")\n\n        img += img_ff_i\n        txt += txt_ff_t\n        \n        img = style_block.img(img, \"ff_res\")\n        txt = style_block.txt(txt, \"ff_res\")\n\n        return img, txt\n\n\n#########################################################################################################################################################################\nclass HDBlockSingle(nn.Module):\n    buffer = {}\n    \n    def __init__(\n        self,\n        dim                   : int,\n        heads                 : int,\n        head_dim              : int,\n        
num_routed_experts    : int = 4,\n        num_activated_experts : int = 2,\n        dtype=None, device=None, operations=None\n    ):\n        super().__init__()\n        self.adaLN_modulation = nn.Sequential(\n            nn.SiLU(),\n            operations.Linear(dim, 6 * dim, bias=True,                                               dtype=dtype, device=device)\n        )\n\n        self.norm1_i = operations.LayerNorm(dim, eps = 1e-06, elementwise_affine = False,            dtype=dtype, device=device)\n        self.attn1   = HDAttention         (dim, heads, head_dim, single=True,                       dtype=dtype, device=device, operations=operations)\n\n        self.norm3_i = operations.LayerNorm(dim, eps = 1e-06, elementwise_affine = False,            dtype=dtype, device=device)\n        self.ff_i    = HDMOEFeedForwardSwiGLU(dim, 4*dim, num_routed_experts, num_activated_experts, dtype=dtype, device=device, operations=operations)\n\n    def forward(\n        self,\n        img        :          FloatTensor,\n        img_masks  : Optional[FloatTensor]  = None,\n        txt        : Optional[FloatTensor]  = None,\n        clip       : Optional[FloatTensor]  = None,\n        rope       :          FloatTensor   = None,\n        mask       : Optional[FloatTensor]  = None,\n        update_cross_attn : Optional[Dict] = None,\n        style_block = None,\n    ) -> FloatTensor:\n        \n        img_msa_shift, img_msa_scale, img_msa_gate, img_mlp_shift, img_mlp_scale, img_mlp_gate = self.adaLN_modulation(clip)[:,None].chunk(6, dim=-1)\n\n        img_norm = self.norm1_i(img)  \n        img_norm = style_block.img(img_norm, \"attn_norm\")        #\n        \n        img_norm = img_norm * (1+img_msa_scale) + img_msa_shift\n        img_norm = style_block.img(img_norm, \"attn_norm_mod\")    #\n\n        img_attn = self.attn1(img_norm, img_masks, rope=rope, mask=mask, style_block=style_block)\n        img_attn = style_block.img(img_attn, \"attn\")\n\n        img_attn *= img_msa_gate\n        img_attn = style_block.img(img_attn, \"attn_gated\")\n\n        img += img_attn\n        img = style_block.img(img, \"attn_res\")\n\n        img_norm = self.norm3_i(img)\n        img_norm = style_block.img(img_norm, \"ff_norm\")\n        \n        img_norm = img_norm * (1+img_mlp_scale) + img_mlp_shift\n        img_norm = style_block.img(img_norm, \"ff_norm_mod\")\n\n        img_ff_i = self.ff_i(img_norm, style_block.img.FF)\n        img_ff_i = style_block.img(img_ff_i, \"ff\")            # fused... 
\"ff\" + \"attn\"\n        \n        img_ff_i *= img_mlp_gate\n        img_ff_i = style_block.img(img_ff_i, \"ff_gated\")         # \n\n        img += img_ff_i\n        img = style_block.img(img, \"ff_res\")       # \n\n        return img\n\n\n#########################################################################################################################################################################\nclass HDModel(nn.Module):\n    CHANNELS   = 2560\n    RECON_MODE = False\n\n    def __init__(\n        self,\n        patch_size            : Optional[int]   = None,\n        in_channels           : int             = 64,\n        out_channels          : Optional[int]   = None,\n        num_layers            : int             = 16,\n        num_single_layers     : int             = 32,\n        attention_head_dim    : int             = 128,\n        num_attention_heads   : int             = 20,\n        caption_channels      : List[int]       = None,\n        text_emb_dim          : int             = 2048,\n        num_routed_experts    : int             = 4,\n        num_activated_experts : int             = 2,\n        axes_dims_rope        : Tuple[int, int] = ( 32,  32),\n        max_resolution        : Tuple[int, int] = (128, 128),\n        llama_layers          : List[int]       = None,\n        image_model                             = None,     # unused, what was this supposed to be??\n        dtype=None, device=None, operations=None\n    ):\n        self.patch_size          = patch_size\n        self.num_attention_heads = num_attention_heads\n        self.attention_head_dim  = attention_head_dim\n        self.num_layers          = num_layers\n        self.num_single_layers   = num_single_layers\n\n        self.gradient_checkpointing = False\n\n        super().__init__()\n        self.dtype        = dtype\n        self.out_channels = out_channels or in_channels\n        self.inner_dim    = self.num_attention_heads * self.attention_head_dim\n        self.llama_layers = llama_layers\n\n        self.t_embedder   = TimestepEmbed(              self.inner_dim, dtype=dtype, device=device, operations=operations)\n        self.p_embedder   =   PooledEmbed(text_emb_dim, self.inner_dim, dtype=dtype, device=device, operations=operations)\n        self.x_embedder   =    PatchEmbed(\n            patch_size   = patch_size,\n            in_channels  = in_channels,\n            out_channels = self.inner_dim,\n            dtype=dtype, device=device, operations=operations\n        )\n        self.pe_embedder = EmbedND(theta=10000, axes_dim=axes_dims_rope)\n\n        self.double_stream_blocks = nn.ModuleList(\n            [\n                HDBlock(\n                    dim                   = self.inner_dim,\n                    heads                 = self.num_attention_heads,\n                    head_dim              = self.attention_head_dim,\n                    num_routed_experts    = num_routed_experts,\n                    num_activated_experts = num_activated_experts,\n                    block_type            = BlockType.Double,\n                    dtype=dtype, device=device, operations=operations\n                )\n                for i in range(self.num_layers)\n            ]\n        )\n\n        self.single_stream_blocks = nn.ModuleList(\n            [\n                HDBlock(\n                    dim                   = self.inner_dim,\n                    heads                 = self.num_attention_heads,\n                    head_dim              = self.attention_head_dim,\n   
                 num_routed_experts    = num_routed_experts,\n                    num_activated_experts = num_activated_experts,\n                    block_type            = BlockType.Single,\n                    dtype=dtype, device=device, operations=operations\n                )\n                for i in range(self.num_single_layers)\n            ]\n        )\n\n        self.final_layer = HDLastLayer(self.inner_dim, patch_size, self.out_channels, dtype=dtype, device=device, operations=operations)\n\n        caption_channels   = [caption_channels[1], ] * (num_layers + num_single_layers) + [caption_channels[0], ]\n        caption_projection = []\n        for caption_channel in caption_channels:\n            caption_projection.append(TextProjection(in_features=caption_channel, hidden_size=self.inner_dim, dtype=dtype, device=device, operations=operations))\n        self.caption_projection = nn.ModuleList(caption_projection)\n        self.max_seq            = max_resolution[0] * max_resolution[1] // (patch_size * patch_size)\n\n    def prepare_contexts(self, llama3, context, bsz, img_num_fea):\n        contexts = llama3.movedim(1, 0)\n        contexts = [contexts[k] for k in self.llama_layers]    # len == 48..... of tensors that are 1,143,4096\n\n        if self.caption_projection is not None:\n            contexts_list = []\n            for i, cxt in enumerate(contexts):\n                cxt = self.caption_projection[i](cxt)                          # linear in_features=4096, out_features=2560      len(self.caption_projection) == 49\n                cxt = cxt.view(bsz, -1, img_num_fea)\n                contexts_list.append(cxt)\n            contexts = contexts_list\n            context  = self.caption_projection[-1](context)\n            context  = context.view(bsz, -1, img_num_fea)\n            \n            contexts.append(context)                      # len == 49...... of tensors that are 1,143,2560.   last chunk is T5\n\n        return contexts\n\n    ### FORWARD ... FORWARD ... FORWARD ... FORWARD ... FORWARD ... FORWARD ... FORWARD ... FORWARD ... FORWARD ... FORWARD ... FORWARD ... FORWARD ... 
FORWARD ###\n    def forward(\n        self,\n        x       :          Tensor,\n        t       :          Tensor,\n        y       : Optional[Tensor]   = None,\n        context : Optional[Tensor]   = None,\n        encoder_hidden_states_llama3 = None,  # 1,32,143,4096\n        image_cond                   = None,  # HiDream E1\n        control                      = None,\n        transformer_options          = {},\n        mask    : Optional[Tensor]   = None,\n    ) -> Tensor:\n        x_orig      = x.clone()\n        b, c, h, w  = x.shape\n        if image_cond is not None: # HiDream E1\n            x = torch.cat([x, image_cond], dim=-1)\n        h_len = ((h + (self.patch_size // 2)) // self.patch_size) # h_len 96\n        w_len = ((w + (self.patch_size // 2)) // self.patch_size) # w_len 96\n        img_len = h_len * w_len\n        txt_slice = slice(img_len, None)\n        img_slice = slice(None, img_len)\n        SIGMA = t[0].clone() / 1000\n        EO = transformer_options.get(\"ExtraOptions\", ExtraOptions(\"\"))\n        if EO is not None:\n            EO.mute = True\n\n        if EO(\"zero_heads\"):\n            HEADS = 0\n        else:\n            HEADS = 20\n\n        StyleMMDiT = transformer_options.get('StyleMMDiT', StyleMMDiT_Model())        \n        StyleMMDiT.set_len(h_len, w_len, img_slice, txt_slice, HEADS=HEADS)\n        StyleMMDiT.Retrojector = self.Retrojector if hasattr(self, \"Retrojector\") else None\n        transformer_options['StyleMMDiT'] = None\n\n        x_tmp = transformer_options.get(\"x_tmp\")\n        if x_tmp is not None:\n            x_tmp = x_tmp.expand(x.shape[0], -1, -1, -1).clone()\n            img = comfy.ldm.common_dit.pad_to_patch_size(x_tmp, (self.patch_size, self.patch_size))\n        else:\n            img = comfy.ldm.common_dit.pad_to_patch_size(x, (self.patch_size, self.patch_size))\n        \n        y0_style, img_y0_style = None, None\n\n        img_orig, t_orig, y_orig, context_orig, llama3_orig = clone_inputs(img, t, y, context, encoder_hidden_states_llama3)\n    \n        weight    = -1 * transformer_options.get(\"regional_conditioning_weight\", 0.0)\n        floor     = -1 * transformer_options.get(\"regional_conditioning_floor\",  0.0)\n        update_cross_attn = transformer_options.get(\"update_cross_attn\")\n    \n        z_ = transformer_options.get(\"z_\")   # initial noise and/or image+noise from start of rk_sampler_beta() \n        rk_row = transformer_options.get(\"row\") # for \"smart noise\"\n        if z_ is not None:\n            x_init = z_[rk_row].to(x)\n        elif 'x_init' in transformer_options:\n            x_init = transformer_options.get('x_init').to(x)\n\n        # recon loop to extract exact noise pred for scattersort guide assembly\n        HDModel.RECON_MODE = StyleMMDiT.noise_mode == \"recon\"\n        recon_iterations = 2 if StyleMMDiT.noise_mode == \"recon\" else 1\n        for recon_iter in range(recon_iterations):\n            y0_style = StyleMMDiT.guides\n            y0_style_active = isinstance(y0_style, torch.Tensor)\n            \n            HDModel.RECON_MODE = (StyleMMDiT.noise_mode == \"recon\" and recon_iter == 0)\n            \n            if StyleMMDiT.noise_mode == \"recon\" and recon_iter == 1:\n                x_recon = x_tmp if x_tmp is not None else x_orig\n                noise_prediction = x_recon + (1-SIGMA.to(x_recon)) * eps.to(x_recon)\n                denoised = x_recon - SIGMA.to(x_recon) * eps.to(x_recon)\n                \n                
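# Recon pass 2: using eps from the first (guide-free) pass, recover both trajectory endpoints:\n                #   denoised         = x - sigma*eps      (x0 estimate)\n                #   noise_prediction = x + (1-sigma)*eps  (x1 estimate, since x = (1-sigma)*x0 + sigma*x1 and eps = x1 - x0)\n                # the lured denoised below is then re-noised along the same line via new_x = (1-sigma)*x0 + sigma*x1.\n                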
denoised = StyleMMDiT.apply_recon_lure(denoised, y0_style)\n\n                new_x = (1-SIGMA.to(denoised)) * denoised + SIGMA.to(denoised) * noise_prediction\n                img_orig = img = comfy.ldm.common_dit.pad_to_patch_size(new_x, (self.patch_size, self.patch_size))\n                \n                x_init = noise_prediction\n            elif StyleMMDiT.noise_mode == \"bonanza\":\n                x_init = torch.randn_like(x_init)\n\n            if y0_style_active:\n                SIGMA_ADAIN         = (SIGMA * EO(\"eps_adain_sigma_factor\", 1.0)).to(y0_style)\n                y0_style_noised     = (1-SIGMA_ADAIN) * y0_style + SIGMA_ADAIN * x_init[0:1].to(y0_style)   # always use only the first batch of noise to avoid broadcasting\n                img_y0_style_orig   = comfy.ldm.common_dit.pad_to_patch_size(y0_style_noised, (self.patch_size, self.patch_size))\n\n            mask_zero = None\n            \n            out_list = []\n            for cond_iter in range(len(transformer_options['cond_or_uncond'])):\n                UNCOND = transformer_options['cond_or_uncond'][cond_iter] == 1\n                \n                if update_cross_attn is not None:\n                    update_cross_attn['UNCOND'] = UNCOND\n\n                bsz_style = y0_style.shape[0] if y0_style_active else 0\n                bsz       = 1 if HDModel.RECON_MODE else bsz_style + 1\n\n                img, t, y, context, llama3 = clone_inputs(img_orig, t_orig, y_orig, context_orig, llama3_orig, index=cond_iter)\n                \n                mask = None\n                if not UNCOND and 'AttnMask' in transformer_options: # and weight != 0:\n                    AttnMask = transformer_options['AttnMask']\n                    mask = transformer_options['AttnMask'].attn_mask.mask.to(x.device)\n                    if mask_zero is None:\n                        mask_zero = torch.ones_like(mask)\n                        #img_len = transformer_options['AttnMask'].img_len\n                        mask_zero[img_len:, img_len:] = mask[img_len:, img_len:]\n\n                    if weight == 0:\n                        context = transformer_options['RegContext'].context.to(context.dtype).to(context.device)\n                        context = context.view(128, -1, context.shape[-1]).sum(dim=-2)                                    # 128 !!!\n                        llama3  = transformer_options['RegContext'].llama3 .to(llama3 .dtype).to(llama3 .device)\n                        mask = None\n                    else:\n                        context = transformer_options['RegContext'].context.to(context.dtype).to(context.device)\n                        llama3  = transformer_options['RegContext'].llama3 .to(llama3 .dtype).to(llama3 .device)\n\n                if UNCOND and 'AttnMask_neg' in transformer_options: # and weight != 0:\n                    AttnMask = transformer_options['AttnMask_neg']\n                    mask = transformer_options['AttnMask_neg'].attn_mask.mask.to(x.device)\n                    if mask_zero is None:\n                        mask_zero = torch.ones_like(mask)\n                        img_len = transformer_options['AttnMask_neg'].img_len\n                        mask_zero[img_len:, img_len:] = mask[img_len:, img_len:]\n\n                    if weight == 0:\n                        context = transformer_options['RegContext_neg'].context.to(context.dtype).to(context.device)\n                        context = context.view(128, -1, context.shape[-1]).sum(dim=-2)                                    # 128 
!!!\n                        llama3  = transformer_options['RegContext_neg'].llama3 .to(llama3 .dtype).to(llama3 .device)\n                        mask = None\n\n                    else:\n                        context = transformer_options['RegContext_neg'].context.to(context.dtype).to(context.device)\n                        llama3  = transformer_options['RegContext_neg'].llama3 .to(llama3 .dtype).to(llama3 .device)\n\n                elif UNCOND and 'AttnMask' in transformer_options:\n                    AttnMask = transformer_options['AttnMask']\n                    mask = transformer_options['AttnMask'].attn_mask.mask.to(x.device)\n                    \n                    if mask_zero is None:\n                        mask_zero = torch.ones_like(mask)\n                        #img_len = transformer_options['AttnMask'].img_len\n                        mask_zero[img_len:, img_len:] = mask[img_len:, img_len:]\n                    if weight == 0:                                                                             # ADDED 5/23/2025\n                        context = transformer_options['RegContext'].context.to(context.dtype).to(context.device)  # ADDED 5/26/2025 14:53\n                        context = context.view(128, -1, context.shape[-1]).sum(dim=-2)                                    # 128 !!!\n                        llama3  = transformer_options['RegContext'].llama3 .to(llama3 .dtype).to(llama3 .device)\n                        mask = None\n                    else:\n                        A       = context\n                        B       = transformer_options['RegContext'].context\n                        context = A.repeat(1,    (B.shape[1] // A.shape[1]) + 1, 1)[:,   :B.shape[1], :]\n\n                        A       = llama3\n                        B       = transformer_options['RegContext'].llama3\n                        llama3  = A.repeat(1, 1, (B.shape[2] // A.shape[2]) + 1, 1)[:,:, :B.shape[2], :]\n\n                if y0_style_active and not HDModel.RECON_MODE:\n                    if mask is None:\n                        context, y, llama3 = StyleMMDiT.apply_style_conditioning(\n                            UNCOND = UNCOND,\n                            base_context       = context,\n                            base_y             = y,\n                            base_llama3        = llama3,\n                        )\n                    else:\n                        context = context.repeat(bsz_style + 1, 1, 1)\n                        y = y.repeat(bsz_style + 1, 1)                   if y      is not None else None\n                        llama3  =  llama3.repeat(bsz_style + 1, 1, 1, 1) if llama3 is not None else None\n                    img_y0_style = img_y0_style_orig.clone()\n\n                if mask is not None and mask.dtype != torch.bool:\n                    mask = mask.to(x.dtype)\n                if mask_zero is not None and mask_zero.dtype != torch.bool:\n                    mask_zero = mask_zero.to(x.dtype)\n\n                # prep embeds\n                t    = self.expand_timesteps(t, bsz, x.device)\n                t    = self.t_embedder      (t,      x.dtype)\n                clip = t + self.p_embedder(y)\n                \n        \n                x_embedder_dtype = self.x_embedder.proj.weight.data.dtype\n                if x_embedder_dtype not in {torch.bfloat16, torch.float16, torch.float32, torch.float64}:\n                    x_embedder_dtype = x.dtype\n                \n                img_sizes = None\n        
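        # patchify flattens (B, C, H, W) into a token sequence (B, h_len*w_len, C*p*p); img_ids records each\n                # token's (row, col) grid position, from which pe_embedder builds the rotary (RoPE) frequencies\n                # shared by the image and text tokens below.\n        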
        img, img_masks, img_sizes = self.patchify(img, self.max_seq, img_sizes)   # for 1024x1024: output is   1,4096,64   None   [[64,64]]     hidden_states rearranged not shrunk, patch_size 1x1???\n                if img_masks is None:\n                    pH, pW          = img_sizes[0]\n                    img_ids         = torch.zeros(pH, pW, 3, device=img.device)\n                    img_ids[..., 1] = img_ids[..., 1] + torch.arange(pH, device=img.device)[:, None]\n                    img_ids[..., 2] = img_ids[..., 2] + torch.arange(pW, device=img.device)[None, :]\n                    img_ids         = repeat(img_ids, \"h w c -> b (h w) c\", b=bsz)\n                img = self.x_embedder(img.to(x_embedder_dtype))\n                #img_len = img.shape[-2]\n\n                if y0_style_active and not HDModel.RECON_MODE:\n                    img_y0_style, _, _ = self.patchify(img_y0_style_orig.clone(), self.max_seq, None)   # for 1024x1024: output is   1,4096,64   None   [[64,64]]     hidden_states rearranged not shrunk, patch_size 1x1???\n                    img_y0_style = self.x_embedder(img_y0_style.to(x_embedder_dtype))  # hidden_states 1,4032,2560         for 1024x1024: -> 1,4096,2560      ,64 -> ,2560 (x40)\n                    img = torch.cat([img, img_y0_style], dim=0)\n\n                contexts = self.prepare_contexts(llama3, context, bsz, img.shape[-1])\n\n                # txt_ids -> 1,414,3\n                txt_ids = torch.zeros(bsz,   contexts[-1].shape[1] + contexts[-2].shape[1] + contexts[0].shape[1],     3,    device=img_ids.device, dtype=img_ids.dtype)\n                ids     = torch.cat((img_ids, txt_ids), dim=-2)   # ids -> 1,4446,3\n                rope    = self.pe_embedder(ids)                  # rope -> 1, 4446, 1, 64, 2, 2\n\n                txt_init     = torch.cat([contexts[-1], contexts[-2]], dim=-2)     # shape[1] == 128, 143       then on another step/call it's 128, 128...??? 
because the contexts are now 1,128,2560\n                txt_init_len = txt_init.shape[-2]                                       # 271\n\n                if mask is not None:\n                    txt_init_list = []\n                    \n                    offset_t5_start    = 0\n                    for i in range(transformer_options['AttnMask'].num_regions):\n                        offset_t5_end   = offset_t5_start + transformer_options['AttnMask'].context_lens_list[i][0]\n                        txt_init_list.append(contexts[-1][:,offset_t5_start:offset_t5_end,:])\n                        offset_t5_start = offset_t5_end\n                    \n                    offset_llama_start = 0\n                    for i in range(transformer_options['AttnMask'].num_regions):\n                        offset_llama_end   = offset_llama_start + transformer_options['AttnMask'].context_lens_list[i][1]\n                        txt_init_list.append(contexts[-2][:,offset_llama_start:offset_llama_end,:])\n                        offset_llama_start = offset_llama_end\n                    \n                    txt_init = torch.cat(txt_init_list, dim=-2)  #T5,LLAMA3 (last block)\n                    txt_init_len = txt_init.shape[-2]\n                    \n                img = StyleMMDiT(img, \"proj_in\")\n                \n                img = img.to(x) if img is not None else None\n                \n                # DOUBLE STREAM\n                for bid, (block, style_block) in enumerate(zip(self.double_stream_blocks, StyleMMDiT.double_blocks)):\n                    txt_llama = contexts[bid]\n                    txt = torch.cat([txt_init, txt_llama], dim=-2)        # 1,384,2560       # cur_contexts = T5, LLAMA3 (last block), LLAMA3 (current block)\n\n                    if   weight > 0 and mask is not None and     weight  <      bid/48:\n                        img, txt_init = block(img, img_masks, txt, clip, rope, mask_zero, style_block=style_block)\n                        \n                    elif (weight < 0 and mask is not None and abs(weight) < (1 - bid/48)):\n                        img_tmpZ, txt_tmpZ = img.clone(), txt.clone()\n\n                        # run the block twice: txt_init comes from the regional-mask pass, img from the mask_zero pass\n                        img_tmpZ, txt_init = block(img_tmpZ, img_masks, txt_tmpZ, clip, rope, mask, style_block=style_block)\n                        img     , txt_tmpZ = block(img     , img_masks, txt     , clip, rope, mask_zero, style_block=style_block)\n                        \n                    elif floor > 0 and mask is not None and     floor  >      bid/48:\n                        mask_tmp = mask.clone()\n                        mask_tmp[:img_len,:img_len] = 1.0\n                        img, txt_init = block(img, img_masks, txt, clip, rope, mask_tmp, style_block=style_block)\n                        \n                    elif floor < 0 and mask is not None and abs(floor) > (1 - bid/48):\n                        mask_tmp = mask.clone()\n                        mask_tmp[:img_len,:img_len] = 1.0\n                        img, txt_init = block(img, img_masks, txt, clip, rope, mask_tmp, style_block=style_block)\n                        \n                    elif update_cross_attn is not None and update_cross_attn['skip_cross_attn']:\n                        img, txt_init = block(img, img_masks, txt, clip, rope, mask, update_cross_attn=update_cross_attn)\n                        \n                    else:\n                        img, txt_init = block(img, img_masks, 
txt, clip, rope, mask, update_cross_attn=update_cross_attn, style_block=style_block)\n\n                    txt_init = txt_init[..., :txt_init_len, :]\n                # END DOUBLE STREAM\n\n\n\n\n\n                img       = torch.cat([img, txt_init], dim=-2)   # 4032 + 271 -> 4303     # txt embed from double stream block\n                joint_len = img.shape[-2]\n                \n                if img_masks is not None:\n                    img_masks_ones = torch.ones( (bsz, txt_init.shape[-2] + txt_llama.shape[-2]), device=img_masks.device, dtype=img_masks.dtype)   # encoder_attention_mask_ones=   padding for txt embed concatted onto end of img\n                    img_masks      = torch.cat([img_masks, img_masks_ones], dim=-2)\n\n\n\n\n\n                # SINGLE STREAM\n                for bid, (block, style_block) in enumerate(zip(self.single_stream_blocks, StyleMMDiT.single_blocks)):\n                    txt_llama = contexts[bid+16]                        # T5 pre-embedded for single stream blocks\n                    img = torch.cat([img, txt_llama], dim=-2)            # cat img,txt     opposite of flux which is txt,img       4303 + 143 -> 4446\n\n                    if   weight > 0 and mask is not None and     weight  <      (bid+16)/48:\n                        img = block(img, img_masks, None, clip, rope, mask_zero, style_block=style_block)\n                        \n                    elif weight < 0 and mask is not None and abs(weight) < (1 - (bid+16)/48):\n                        img = block(img, img_masks, None, clip, rope, mask_zero, style_block=style_block)\n                    \n                    elif floor > 0 and mask is not None and     floor  >      (bid+16)/48:\n                        mask_tmp = mask.clone()\n                        mask_tmp[:img_len,:img_len] = 1.0\n                        img = block(img, img_masks, None, clip, rope, mask_tmp, style_block=style_block)\n                        \n                    elif floor < 0 and mask is not None and abs(floor) > (1 - (bid+16)/48):\n                        mask_tmp = mask.clone()\n                        mask_tmp[:img_len,:img_len] = 1.0\n                        img = block(img, img_masks, None, clip, rope, mask_tmp, style_block=style_block)\n                        \n                    else:\n                        img = block(img, img_masks, None, clip, rope, mask, style_block=style_block)\n                        \n                    img = img[..., :joint_len, :]   # slice off txt_llama\n                # END SINGLE STREAM\n                    \n                img = img[..., :img_len, :]\n                #img = self.final_layer(img, clip)   # 4096,2560 -> 4096,64\n                shift, scale = self.final_layer.adaLN_modulation(clip).chunk(2,dim=1)\n                img = (1 + scale[:, None, :]) * self.final_layer.norm_final(img) + shift[:, None, :]\n                if not EO(\"endojector\"):\n                    img = StyleMMDiT(img, \"proj_out\")\n\n                if y0_style_active and not HDModel.RECON_MODE:\n                    img = img[0:1]\n                \n                if EO(\"endojector\"):\n                    if EO(\"dumb\"):\n                        eps_style = x_init[0:1].to(y0_style) - y0_style\n                    else:\n                        eps_style = (x_tmp[0:1].to(y0_style) - y0_style) / SIGMA.to(y0_style)\n                    eps_embed = self.Endojector.embed(eps_style)\n                    img = StyleMMDiT.scattersort_(img.to(eps_embed), eps_embed)\n                \n   
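             # final_layer is applied piecewise (instead of the commented-out one-liner above): adaLN shift/scale\n                # and norm_final ran before the proj_out / endojector hooks, and only the last linear projection back\n                # to patch pixels (2560 -> patch_size**2 * out_channels per token) happens here, so the style hooks\n                # operate in the embedding space.\n   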
             img = self.final_layer.linear(img.to(self.final_layer.linear.weight.data))\n\n                img = self.unpatchify(img, img_sizes)\n                out_list.append(img)\n                \n            output = torch.cat(out_list, dim=0)\n            eps = -output[:, :, :h, :w]\n            \n            if recon_iter == 1:\n                denoised = new_x - SIGMA.to(new_x) * eps.to(new_x)\n                if x_tmp is not None:\n                    eps = (x_tmp - denoised.to(x_tmp)) / SIGMA.to(x_tmp)\n                else:\n                    eps = (x_orig - denoised.to(x_orig)) / SIGMA.to(x_orig)\n                    \n\n\n\n\n\n\n\n\n\n\n\n\n        freqsep_lowpass_method = transformer_options.get(\"freqsep_lowpass_method\")\n        freqsep_sigma          = transformer_options.get(\"freqsep_sigma\")\n        freqsep_kernel_size    = transformer_options.get(\"freqsep_kernel_size\")\n        freqsep_inner_kernel_size    = transformer_options.get(\"freqsep_inner_kernel_size\")\n        freqsep_stride    = transformer_options.get(\"freqsep_stride\")\n        \n        freqsep_lowpass_weight = transformer_options.get(\"freqsep_lowpass_weight\")\n        freqsep_highpass_weight= transformer_options.get(\"freqsep_highpass_weight\")\n        freqsep_mask           = transformer_options.get(\"freqsep_mask\")\n\n        y0_style_pos = transformer_options.get(\"y0_style_pos\")\n        y0_style_neg = transformer_options.get(\"y0_style_neg\")\n        \n        # end recon loop\n        self.style_dtype = torch.float32 if self.style_dtype is None else self.style_dtype\n        dtype = eps.dtype if self.style_dtype is None else self.style_dtype\n        \n        if y0_style_pos is not None:\n            y0_style_pos_weight    = transformer_options.get(\"y0_style_pos_weight\")\n            y0_style_pos_synweight = transformer_options.get(\"y0_style_pos_synweight\")\n            y0_style_pos_synweight *= y0_style_pos_weight\n            y0_style_pos_mask = transformer_options.get(\"y0_style_pos_mask\")\n            y0_style_pos_mask_edge = transformer_options.get(\"y0_style_pos_mask_edge\")\n\n            y0_style_pos = y0_style_pos.to(dtype)\n            x   = x_orig.to(dtype)\n            eps = eps.to(dtype)\n            eps_orig = eps.clone()\n            \n            sigma = SIGMA #t_orig[0].to(torch.float32) / 1000\n            denoised = x - sigma * eps\n            \n            denoised_embed = self.Retrojector.embed(denoised)\n            y0_adain_embed = self.Retrojector.embed(y0_style_pos)\n            \n            if transformer_options['y0_style_method'] == \"scattersort\":\n                tile_h, tile_w = transformer_options.get('y0_style_tile_height'), transformer_options.get('y0_style_tile_width')\n                pad = transformer_options.get('y0_style_tile_padding')\n                if pad is not None and tile_h is not None and tile_w is not None:\n                    \n                    denoised_spatial = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_spatial = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    \n                    if EO(\"scattersort_median_LP\"):\n                        denoised_spatial_LP = median_blur_2d(denoised_spatial, kernel_size=EO(\"scattersort_median_LP\",7))\n                        y0_adain_spatial_LP = median_blur_2d(y0_adain_spatial, kernel_size=EO(\"scattersort_median_LP\",7))\n                        \n                        
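# Frequency-split scattersort: the median blur keeps the low-pass band; below, only the low-pass band\n                        # is distribution-matched to the guide, and the untouched high-pass detail is added back on top.\n                        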
denoised_spatial_HP = denoised_spatial - denoised_spatial_LP\n                        y0_adain_spatial_HP = y0_adain_spatial - y0_adain_spatial_LP\n                        \n                        denoised_spatial_LP = apply_scattersort_tiled(denoised_spatial_LP, y0_adain_spatial_LP, tile_h, tile_w, pad)\n                        \n                        denoised_spatial = denoised_spatial_LP + denoised_spatial_HP\n                    else:\n                        denoised_spatial = apply_scattersort_tiled(denoised_spatial, y0_adain_spatial, tile_h, tile_w, pad)\n                    \n                    denoised_embed = rearrange(denoised_spatial, \"b c h w -> b (h w) c\")\n                    \n                else:\n                    denoised_embed = apply_scattersort_masked(denoised_embed, y0_adain_embed, y0_style_pos_mask, y0_style_pos_mask_edge, h_len, w_len)\n\n\n\n            elif transformer_options['y0_style_method'] == \"AdaIN\":\n                if freqsep_mask is not None:\n                    freqsep_mask = freqsep_mask.view(1, 1, *freqsep_mask.shape[-2:]).float()\n                    freqsep_mask = F.interpolate(freqsep_mask.float(), size=(h_len, w_len), mode='nearest-exact')\n                \n                if hasattr(self, \"adain_tile\"):\n                    tile_h, tile_w = self.adain_tile\n                    \n                    denoised_pretile = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_pretile = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    \n                    if self.adain_flag:\n                        h_off = tile_h // 2\n                        w_off = tile_w // 2\n                        denoised_pretile = denoised_pretile[:,:,h_off:-h_off, w_off:-w_off]\n                        self.adain_flag = False\n                    else:\n                        h_off = 0\n                        w_off = 0\n                        self.adain_flag = True\n                    \n                    tiles,    orig_shape, grid, strides = tile_latent(denoised_pretile, tile_size=(tile_h,tile_w))\n                    y0_tiles, orig_shape, grid, strides = tile_latent(y0_adain_pretile, tile_size=(tile_h,tile_w))\n                    \n                    tiles_out = []\n                    for i in range(tiles.shape[0]):\n                        tile = tiles[i].unsqueeze(0)\n                        y0_tile = y0_tiles[i].unsqueeze(0)\n                        \n                        tile    = rearrange(tile,    \"b c h w -> b (h w) c\", h=tile_h, w=tile_w)\n                        y0_tile = rearrange(y0_tile, \"b c h w -> b (h w) c\", h=tile_h, w=tile_w)\n                        \n                        tile = adain_seq_inplace(tile, y0_tile)\n                        tiles_out.append(rearrange(tile, \"b (h w) c -> b c h w\", h=tile_h, w=tile_w))\n                    \n                    tiles_out_tensor = torch.cat(tiles_out, dim=0)\n                    tiles_out_tensor = untile_latent(tiles_out_tensor, orig_shape, grid, strides)\n\n                    if h_off == 0:\n                        denoised_pretile = tiles_out_tensor\n                    else:\n                        denoised_pretile[:,:,h_off:-h_off, w_off:-w_off] = tiles_out_tensor\n                    denoised_embed = rearrange(denoised_pretile, \"b c h w -> b (h w) c\", h=h_len, w=w_len)\n\n                
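# AdaIN variants: the \"pw\" (patchwise) paths below match mean/std per local neighborhood (row-batched)\n                # rather than globally over the token sequence, presumably to better preserve local contrast.\n                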
elif freqsep_lowpass_method is not None and freqsep_lowpass_method.endswith(\"pw\"): #EO(\"adain_pw\"):\n\n                    denoised_spatial = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_spatial = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n\n                    if   freqsep_lowpass_method == \"median_pw\":\n                        denoised_spatial_new = adain_patchwise_row_batch_med(denoised_spatial.clone(), y0_adain_spatial.clone().repeat(denoised_spatial.shape[0],1,1,1), sigma=freqsep_sigma, kernel_size=freqsep_kernel_size, use_median_blur=True, lowpass_weight=freqsep_lowpass_weight, highpass_weight=freqsep_highpass_weight)\n                    elif freqsep_lowpass_method == \"gaussian_pw\": \n                        denoised_spatial_new = adain_patchwise_row_batch(denoised_spatial.clone(), y0_adain_spatial.clone().repeat(denoised_spatial.shape[0],1,1,1), sigma=freqsep_sigma, kernel_size=freqsep_kernel_size)\n                    \n                    denoised_embed = rearrange(denoised_spatial_new, \"b c h w -> b (h w) c\", h=h_len, w=w_len)\n\n                elif freqsep_lowpass_method is not None: \n                    denoised_spatial = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_spatial = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    \n                    if   freqsep_lowpass_method == \"median\":\n                        denoised_spatial_LP = median_blur_2d(denoised_spatial, kernel_size=freqsep_kernel_size)\n                        y0_adain_spatial_LP = median_blur_2d(y0_adain_spatial, kernel_size=freqsep_kernel_size)\n                    elif freqsep_lowpass_method == \"gaussian\":\n                        denoised_spatial_LP = gaussian_blur_2d(denoised_spatial, sigma=freqsep_sigma, kernel_size=freqsep_kernel_size)\n                        y0_adain_spatial_LP = gaussian_blur_2d(y0_adain_spatial, sigma=freqsep_sigma, kernel_size=freqsep_kernel_size)\n                    \n                    denoised_spatial_HP = denoised_spatial - denoised_spatial_LP\n                    \n                    if EO(\"adain_fs_uhp\"):\n                        y0_adain_spatial_HP = y0_adain_spatial - y0_adain_spatial_LP\n                        \n                        denoised_spatial_ULP = gaussian_blur_2d(denoised_spatial, sigma=EO(\"adain_fs_uhp_sigma\", 1.0), kernel_size=EO(\"adain_fs_uhp_kernel_size\", 3))\n                        y0_adain_spatial_ULP = gaussian_blur_2d(y0_adain_spatial, sigma=EO(\"adain_fs_uhp_sigma\", 1.0), kernel_size=EO(\"adain_fs_uhp_kernel_size\", 3))\n                        \n                        denoised_spatial_UHP = denoised_spatial_HP  - denoised_spatial_ULP\n                        y0_adain_spatial_UHP = y0_adain_spatial_HP  - y0_adain_spatial_ULP\n                        \n                        #denoised_spatial_HP  = y0_adain_spatial_ULP + denoised_spatial_UHP\n                        denoised_spatial_HP  = denoised_spatial_ULP + y0_adain_spatial_UHP\n                    \n                    denoised_spatial_new = freqsep_lowpass_weight * y0_adain_spatial_LP + freqsep_highpass_weight * denoised_spatial_HP\n                    denoised_embed = rearrange(denoised_spatial_new, \"b c h w -> b (h w) c\", h=h_len, w=w_len)\n\n                else:\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    \n                for 
adain_iter in range(EO(\"style_iter\", 0)):\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    denoised_embed = self.Retrojector.embed(self.Retrojector.unembed(denoised_embed))\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    \n            elif transformer_options['y0_style_method'] == \"WCT\":\n                self.StyleWCT.set(y0_adain_embed)\n                denoised_embed = self.StyleWCT.get(denoised_embed)\n                \n                if transformer_options.get('y0_standard_guide') is not None:\n                    y0_standard_guide = transformer_options.get('y0_standard_guide')\n                    \n                    y0_standard_guide_embed = self.Retrojector.embed(y0_standard_guide)\n                    f_cs = self.StyleWCT.get(y0_standard_guide_embed)\n                    self.y0_standard_guide = self.Retrojector.unembed(f_cs)\n\n                if transformer_options.get('y0_inv_standard_guide') is not None:\n                    y0_inv_standard_guide = transformer_options.get('y0_inv_standard_guide')\n\n                    y0_inv_standard_guide_embed = self.Retrojector.embed(y0_inv_standard_guide)\n                    f_cs = self.StyleWCT.get(y0_inv_standard_guide_embed)\n                    self.y0_inv_standard_guide = self.Retrojector.unembed(f_cs)\n\n            elif transformer_options['y0_style_method'] == \"WCT2\":\n                self.WaveletStyleWCT.set(y0_adain_embed, h_len, w_len)\n                denoised_embed = self.WaveletStyleWCT.get(denoised_embed, h_len, w_len)\n                \n                if transformer_options.get('y0_standard_guide') is not None:\n                    y0_standard_guide = transformer_options.get('y0_standard_guide')\n                    \n                    y0_standard_guide_embed = self.Retrojector.embed(y0_standard_guide)\n                    f_cs = self.WaveletStyleWCT.get(y0_standard_guide_embed, h_len, w_len)\n                    self.y0_standard_guide = self.Retrojector.unembed(f_cs)\n\n                if transformer_options.get('y0_inv_standard_guide') is not None:\n                    y0_inv_standard_guide = transformer_options.get('y0_inv_standard_guide')\n\n                    y0_inv_standard_guide_embed = self.Retrojector.embed(y0_inv_standard_guide)\n                    f_cs = self.WaveletStyleWCT.get(y0_inv_standard_guide_embed, h_len, w_len)\n                    self.y0_inv_standard_guide = self.Retrojector.unembed(f_cs)\n\n            denoised_approx = self.Retrojector.unembed(denoised_embed)\n            \n            eps = (x - denoised_approx) / sigma\n\n            if not UNCOND:\n                if eps.shape[0] == 2:\n                    eps[1] = eps_orig[1] + y0_style_pos_weight * (eps[1] - eps_orig[1])\n                    eps[0] = eps_orig[0] + y0_style_pos_synweight * (eps[0] - eps_orig[0])\n                else:\n                    eps[0] = eps_orig[0] + y0_style_pos_weight * (eps[0] - eps_orig[0])\n            elif eps.shape[0] == 1 and UNCOND:\n                eps[0] = eps_orig[0] + y0_style_pos_synweight * (eps[0] - eps_orig[0])\n            \n            #eps = eps.float()\n        \n        if y0_style_neg is not None:\n            y0_style_neg_weight    = transformer_options.get(\"y0_style_neg_weight\")\n            y0_style_neg_synweight = transformer_options.get(\"y0_style_neg_synweight\")\n            y0_style_neg_synweight *= y0_style_neg_weight\n            y0_style_neg_mask = 
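transformer_options.get(\"y0_style_neg_mask\")\n            # negative/uncond style pass: mirrors the positive branch above, restyling\n            # the denoised estimate toward y0_style_neg and blending eps back in with\n            # the _neg weight/synweight\n            # reads: y0_style_neg_mask = 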
transformer_options.get(\"y0_style_neg_mask\")\n            y0_style_neg_mask_edge = transformer_options.get(\"y0_style_neg_mask_edge\")\n            \n            y0_style_neg = y0_style_neg.to(dtype)\n            x   = x_orig.to(dtype)\n            eps = eps.to(dtype)\n            eps_orig = eps.clone()\n            \n            sigma = SIGMA #t_orig[0].to(torch.float32) / 1000\n            denoised = x - sigma * eps\n\n            denoised_embed = self.Retrojector.embed(denoised)\n            y0_adain_embed = self.Retrojector.embed(y0_style_neg)\n            \n            if transformer_options['y0_style_method'] == \"scattersort\":\n                tile_h, tile_w = transformer_options.get('y0_style_tile_height'), transformer_options.get('y0_style_tile_width')\n                pad = transformer_options.get('y0_style_tile_padding')\n                if pad is not None and tile_h is not None and tile_w is not None:\n                    \n                    denoised_spatial = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_spatial = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    \n                    denoised_spatial = apply_scattersort_tiled(denoised_spatial, y0_adain_spatial, tile_h, tile_w, pad)\n                    \n                    denoised_embed = rearrange(denoised_spatial, \"b c h w -> b (h w) c\")\n\n                else:\n                    denoised_embed = apply_scattersort_masked(denoised_embed, y0_adain_embed, y0_style_neg_mask, y0_style_neg_mask_edge, h_len, w_len)\n            \n            \n            elif transformer_options['y0_style_method'] == \"AdaIN\":\n                denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                for adain_iter in range(EO(\"style_iter\", 0)):\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    denoised_embed = self.Retrojector.embed(self.Retrojector.unembed(denoised_embed))\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    \n            elif transformer_options['y0_style_method'] == \"WCT\":\n                self.StyleWCT.set(y0_adain_embed)\n                denoised_embed = self.StyleWCT.get(denoised_embed)\n\n            elif transformer_options['y0_style_method'] == \"WCT2\":\n                self.WaveletStyleWCT.set(y0_adain_embed, h_len, w_len)\n                denoised_embed = self.WaveletStyleWCT.get(denoised_embed, h_len, w_len)\n\n            denoised_approx = self.Retrojector.unembed(denoised_embed)\n\n            if UNCOND:\n                eps = (x - denoised_approx) / sigma\n                eps[0] = eps_orig[0] + y0_style_neg_weight * (eps[0] - eps_orig[0])\n                if eps.shape[0] == 2:\n                    eps[1] = eps_orig[1] + y0_style_neg_synweight * (eps[1] - eps_orig[1])\n            elif eps.shape[0] == 1 and not UNCOND:\n                eps[0] = eps_orig[0] + y0_style_neg_synweight * (eps[0] - eps_orig[0])\n            \n            #eps = eps.float()\n        \n        if EO(\"model_eps_out\"):\n            self.eps_out = eps.clone()\n        return eps\n    \n\n\n\n\n    def expand_timesteps(self, t, batch_size, device):\n        if not torch.is_tensor(t):\n            is_mps = device.type == \"mps\"\n            if isinstance(t, float):\n                dtype = torch.float32 if is_mps else torch.float64\n            else:\n                dtype = torch.int32   if 
is_mps else torch.int64\n            t = torch.tensor([t], dtype=dtype, device=device)  # torch.Tensor() does not accept dtype/device kwargs\n        elif len(t.shape) == 0:\n            t = t[None].to(device)\n        # broadcast to batch dimension in a way that's compatible with ONNX/Core ML\n        t = t.expand(batch_size)\n        return t\n\n\n    def unpatchify(self, x: Tensor, img_sizes: List[Tuple[int, int]]) -> List[Tensor]:\n        x_arr = []\n        for i, img_size in enumerate(img_sizes):   #  [[64,64]]\n            pH, pW = img_size\n            x_arr.append(\n                einops.rearrange(x[i, :pH*pW].reshape(1, pH, pW, -1), 'B H W (p1 p2 C) -> B C (H p1) (W p2)',\n                    p1=self.patch_size, p2=self.patch_size)\n            )\n        x = torch.cat(x_arr, dim=0)\n        return x\n\n\n    def patchify(self, x, max_seq, img_sizes=None):\n        pz2 = self.patch_size * self.patch_size\n        if isinstance(x, Tensor):\n            B      = x.shape[0]\n            device = x.device\n            dtype  = x.dtype\n        else:\n            B      = len(x)\n            device = x[0].device\n            dtype  = x[0].dtype\n        x_masks = torch.zeros((B, max_seq), dtype=dtype, device=device)\n\n        if img_sizes is not None:\n            for i, img_size in enumerate(img_sizes): #  [[64,64]]\n                x_masks[i, 0:img_size[0] * img_size[1]] = 1\n            x         = einops.rearrange(x, 'B C S p -> B S (p C)', p=pz2)\n        elif isinstance(x, Tensor):\n            pH, pW    = x.shape[-2] // self.patch_size, x.shape[-1] // self.patch_size\n            x         = einops.rearrange(x, 'B C (H p1) (W p2) -> B (H W) (p1 p2 C)', p1=self.patch_size, p2=self.patch_size)\n            img_sizes = [[pH, pW]] * B\n            x_masks   = None\n        else:\n            raise NotImplementedError\n        return x, x_masks, img_sizes\n    \n    \ndef clone_inputs(*args, index: int=None):\n\n    if index is None:\n        return tuple(x.clone() for x in args)\n    else:\n        return tuple(x[index].unsqueeze(0).clone() for x in args)\n\n\n\ndef attention_rescale(\n    query, \n    key, \n    value,\n    attn_mask=None\n) -> torch.Tensor:\n    L, S = query.size(-2), key.size(-2)\n    scale_factor = 1 / math.sqrt(query.size(-1))\n\n\n    attn_weight = query @ key.transpose(-2, -1) * scale_factor\n    if attn_mask is not None:\n        attn_weight *= attn_mask\n\n    attn_weight = torch.softmax(attn_weight, dim=-1)\n\n    return attn_weight @ value\n\n\n\nclass HDLastLayer(nn.Module):\n    def __init__(self, hidden_size: int, patch_size: int, out_channels: int, dtype=None, device=None, operations=None):\n        super().__init__()\n        self.norm_final       = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6, dtype=dtype, device=device)\n        self.linear           = nn.Linear(hidden_size, patch_size * patch_size * out_channels, bias=True, dtype=dtype, device=device)\n        self.adaLN_modulation = nn.Sequential(nn.SiLU(), nn.Linear(hidden_size, 2 * hidden_size, bias=True, dtype=dtype, device=device))\n\n    def forward(self, x: Tensor, vec: Tensor, modulation_dims=None) -> Tensor:\n        x_dtype = x.dtype\n        \n        dtype = self.linear.weight.dtype\n        if dtype not in {torch.bfloat16, torch.float16, torch.float32, torch.float64}:\n            dtype = torch.float32\n            self.linear.weight.data = self.linear.weight.data.to(dtype)\n            self.linear.bias.data = self.linear.bias.data.to(dtype)\n            self.adaLN_modulation[1].weight.data = 
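self.adaLN_modulation[1].weight.data.to(dtype)\n            # weights stored in a non-compute dtype (fp8 etc.) are upcast in place so\n            # the LayerNorm/Linear below run in a supported precision\n            # i.e. upcast via 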
self.adaLN_modulation[1].weight.data.to(dtype)\n            self.adaLN_modulation[1].bias.data = self.adaLN_modulation[1].bias.data.to(dtype)\n        \n        x = x.to(dtype)\n        vec = vec.to(dtype)\n        if vec.ndim == 2:\n            vec = vec[:, None, :]\n\n        shift, scale = self.adaLN_modulation(vec).chunk(2, dim=-1)\n        x = apply_mod(self.norm_final(x), (1 + scale), shift, modulation_dims)\n        x = self.linear(x)\n        return x #.to(x_dtype)\n\ndef apply_mod(tensor, m_mult, m_add=None, modulation_dims=None):\n    if modulation_dims is None:\n        if m_add is not None:\n            return tensor * m_mult + m_add\n        else:\n            return tensor * m_mult\n    else:\n        for d in modulation_dims:\n            tensor[:, d[0]:d[1]] *= m_mult[:, d[2]]\n            if m_add is not None:\n                tensor[:, d[0]:d[1]] += m_add[:, d[2]]\n        return tensor\n\n\n\n\n\n"
  },
  {
    "path": "images.py",
    "content": "import torch\nimport torch.nn.functional as F\nimport math\n\nfrom torchvision import transforms\n\nfrom torch  import Tensor\nfrom typing import Optional, Callable, Tuple, Dict, Any, Union, TYPE_CHECKING, TypeVar, List\n\nimport numpy as np\nimport kornia\nimport cv2\n\nfrom PIL import Image, ImageFilter, ImageEnhance\n\nimport comfy\n\n# tensor -> PIL\ndef tensor2pil(image):\n    return Image.fromarray(np.clip(255. * image.cpu().numpy().squeeze(), 0, 255).astype(np.uint8))\n\n# PIL -> tensor\ndef pil2tensor(image):\n    return torch.from_numpy(np.array(image).astype(np.float32) / 255.0).unsqueeze(0)\n\n\ndef freq_sep_fft(img, cutoff=5, sigma=10):\n    fft_img = torch.fft.fft2(img, dim=(-2, -1))\n    fft_shifted = torch.fft.fftshift(fft_img)\n\n    _, _, h, w = img.shape\n\n    # freq domain -> meshgrid\n    y, x = torch.meshgrid(torch.arange(h, device=img.device), torch.arange(w, device=img.device))\n    center_y, center_x = h // 2, w // 2\n    distance = torch.sqrt((x - center_x) ** 2 + (y - center_y) ** 2)\n\n    # smoother low-pass filter via gaussian filter\n    low_pass_filter = torch.exp(-distance**2 / (2 * sigma**2))\n\n    low_pass_filter = low_pass_filter.unsqueeze(0).unsqueeze(0)\n    low_pass_fft = fft_shifted * low_pass_filter\n\n    high_pass_fft = fft_shifted * (1 - low_pass_filter)\n\n    # inverse FFT -> return to spatial domain\n    low_pass_img  = torch.fft.ifft2(torch.fft.ifftshift( low_pass_fft), dim=(-2, -1)).real\n    high_pass_img = torch.fft.ifft2(torch.fft.ifftshift(high_pass_fft), dim=(-2, -1)).real\n\n    return low_pass_img, high_pass_img\n\n\ndef color_dodge_blend(base, blend):\n    return torch.clamp(base / (1 - blend + 1e-8), 0, 1)\n    \ndef color_scorch_blend(base, blend):\n    return torch.clamp(1 - (1 - base) / (1 - blend + 1e-8), 0, 1)\n\ndef divide_blend(base, blend):\n    return torch.clamp(base / (blend + 1e-8), 0, 1)\n\ndef color_burn_blend(base, blend):\n    return torch.clamp(1 - (1 - base) / (blend + 1e-8), 0, 1)\n\ndef hard_light_blend(base, blend):\n    return torch.where(blend <= 0.5, \n                       2 * base * blend, \n                       1 - 2 * (1 - base) * (1 - blend))\n\ndef hard_light_freq_sep(original, low_pass):\n    high_pass = (color_burn_blend(original, (1 - low_pass)) + divide_blend(original, low_pass)) / 2\n    return high_pass\n\ndef linear_light_blend(base, blend):\n    return torch.where(blend <= 0.5,\n                       base + 2 *  blend - 1,\n                       base + 2 * (blend - 0.5))\n\ndef linear_light_freq_sep(base, blend):\n    return (base + (1-blend)) / 2\n\ndef scale_to_range(value, min_old, max_old, min_new, max_new):\n    return (value - min_old) / (max_old - min_old) * (max_new - min_new) + min_new\n\n\ndef normalize_lab(lab_image):\n    L, A, B = lab_image[:, 0:1, :, :], lab_image[:, 1:2, :, :], lab_image[:, 2:3, :, :]\n\n    L_normalized = L / 100.0\n    A_normalized = scale_to_range(A, -128, 127, 0, 1)\n    B_normalized = scale_to_range(B, -128, 127, 0, 1)    \n\n    lab_normalized = torch.cat([L_normalized, A_normalized, B_normalized], dim=1)\n\n    return lab_normalized\n\ndef denormalize_lab(lab_normalized):\n    L_normalized, A_normalized, B_normalized = torch.split(lab_normalized, 1, dim=1)\n\n    L = L_normalized * 100.0\n    A = scale_to_range(A_normalized, 0, 1, -128, 127)\n    B = scale_to_range(B_normalized, 0, 1, -128, 127)\n\n    lab_image = torch.cat([L, A, B], dim=1)\n    return lab_image\n\n\ndef rgb_to_lab(image):\n    return 
kornia.color.rgb_to_lab(image)\n\ndef lab_to_rgb(image):\n    return kornia.color.lab_to_rgb(image)\n\n# cv2_layer() and ImageMedianBlur adapted from: https://github.com/Nourepide/ComfyUI-Allor/\n    \ndef cv2_layer(tensor, function):\n    \"\"\"\n    This function applies a given function to each channel of an input tensor and returns the result as a PyTorch tensor.\n\n    :param tensor: A PyTorch tensor of shape (H, W, C) or (N, H, W, C), where C is the number of channels, H is the height, and W is the width of the image.\n    :param function: A function that takes a numpy array of shape (H, W, C) as input and returns a numpy array of the same shape.\n    :return: A PyTorch tensor of the same shape as the input tensor, where the given function has been applied to each channel of each image in the tensor.\n    \"\"\"\n    shape_size = tensor.shape.__len__()\n\n    def produce(image):\n        channels = image[0, 0, :].shape[0]\n\n        rgb = image[:, :, 0:3].numpy()\n        result_rgb = function(rgb)\n\n        if channels <= 3:\n            return torch.from_numpy(result_rgb)\n        elif channels == 4:\n            alpha = image[:, :, 3:4].numpy()\n            result_alpha = function(alpha)[..., np.newaxis]\n            result_rgba = np.concatenate((result_rgb, result_alpha), axis=2)\n\n            return torch.from_numpy(result_rgba)\n\n    if shape_size == 3:\n        return torch.from_numpy(produce(tensor))\n    elif shape_size == 4:\n        return torch.stack([\n            produce(tensor[i]) for i in range(len(tensor))\n        ])\n    else:\n        raise ValueError(\"Incompatible tensor dimension.\")\n    \n\n# adapted from https://github.com/cubiq/ComfyUI_essentials\ndef image_resize(image,\n                width,\n                height,\n                method          = \"stretch\",\n                interpolation   = \"nearest\",\n                condition       = \"always\",\n                multiple_of     = 0,\n                keep_proportion = False):\n    \n    _, oh, ow, _ = image.shape\n    x = y = x2 = y2 = 0\n    pad_left = pad_right = pad_top = pad_bottom = 0\n\n    if keep_proportion:\n        method = \"keep proportion\"\n\n    if multiple_of > 1:\n        width = width - (width % multiple_of)\n        height = height - (height % multiple_of)\n\n    if method == 'keep proportion' or method == 'pad':\n        if width == 0 and oh < height:\n            width = MAX_RESOLUTION\n        elif width == 0 and oh >= height:\n            width = ow\n\n        if height == 0 and ow < width:\n            height = MAX_RESOLUTION\n        elif height == 0 and ow >= width:\n            height = oh\n\n        ratio = min(width / ow, height / oh)\n        new_width = round(ow*ratio)\n        new_height = round(oh*ratio)\n\n        if method == 'pad':\n            pad_left = (width - new_width) // 2\n            pad_right = width - new_width - pad_left\n            pad_top = (height - new_height) // 2\n            pad_bottom = height - new_height - pad_top\n\n        width = new_width\n        height = new_height\n    elif method.startswith('fill'):\n        width = width if width > 0 else ow\n        height = height if height > 0 else oh\n\n        ratio = max(width / ow, height / oh)\n        new_width = round(ow*ratio)\n        new_height = round(oh*ratio)\n        x = (new_width - width) // 2\n        y = (new_height - height) // 2\n        x2 = x + width\n        y2 = y + height\n        if x2 > new_width:\n            x -= (x2 - new_width)\n        if x < 0:\n           
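 x = 0\n        # 'fill' mode: clamp the crop window so it stays inside the enlarged image\n        # clamp: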
 x = 0\n        if y2 > new_height:\n            y -= (y2 - new_height)\n        if y < 0:\n            y = 0\n        width = new_width\n        height = new_height\n    else:\n        width = width if width > 0 else ow\n        height = height if height > 0 else oh\n\n    if \"always\" in condition \\\n        or (\"downscale if bigger\" == condition and (oh > height or ow > width)) or (\"upscale if smaller\" == condition and (oh < height or ow < width)) \\\n        or (\"bigger area\" in condition and (oh * ow > height * width)) or (\"smaller area\" in condition and (oh * ow < height * width)):\n\n        outputs = image.permute(0,3,1,2)\n\n        if interpolation == \"lanczos\":\n            outputs = comfy.utils.lanczos(outputs, width, height)\n        else:\n            outputs = F.interpolate(outputs, size=(height, width), mode=interpolation)\n\n        if method == 'pad':\n            if pad_left > 0 or pad_right > 0 or pad_top > 0 or pad_bottom > 0:\n                outputs = F.pad(outputs, (pad_left, pad_right, pad_top, pad_bottom), value=0)\n\n        outputs = outputs.permute(0,2,3,1)\n\n        if method.startswith('fill'):\n            if x > 0 or y > 0 or x2 > 0 or y2 > 0:\n                outputs = outputs[:, y:y2, x:x2, :]\n    else:\n        outputs = image\n\n    if multiple_of > 1 and (outputs.shape[2] % multiple_of != 0 or outputs.shape[1] % multiple_of != 0):\n        width = outputs.shape[2]\n        height = outputs.shape[1]\n        x = (width % multiple_of) // 2\n        y = (height % multiple_of) // 2\n        x2 = width - ((width % multiple_of) - x)\n        y2 = height - ((height % multiple_of) - y)\n        outputs = outputs[:, y:y2, x:x2, :]\n    \n    outputs = torch.clamp(outputs, 0, 1)\n\n    return outputs\n\n\n\nclass ImageRepeatTileToSize:\n    def __init__(self):\n        pass\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                \"image\":  (\"IMAGE\",),\n                \"width\":  (\"INT\",     {\"default\": 1024, \"min\": 1, \"max\": 1048576, \"step\": 1,}),\n                \"height\": (\"INT\",     {\"default\": 1024, \"min\": 1, \"max\": 1048576, \"step\": 1,}),\n                \"crop\":   (\"BOOLEAN\", {\"default\": True}),\n            },\n        }\n\n    RETURN_TYPES = (\"IMAGE\",)\n    RETURN_NAMES = (\"image\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/images\"\n\n    def main(self, image, width, height, crop,            \n            method          = \"stretch\",\n            interpolation   = \"lanczos\",\n            condition       = \"always\",\n            multiple_of     = 0,\n            keep_proportion = False,\n        ):\n\n        img = image.clone().detach()\n        \n        b, h, w, c = img.shape\n        \n        h_tgt = int(torch.ceil(torch.div(height, h)))\n        w_tgt = int(torch.ceil(torch.div(width,  w)))\n        \n        img_tiled = torch.tile(img, (h_tgt, w_tgt, 1))\n        \n        if crop:\n            img_tiled = img_tiled[:,:height, :width, :]\n        else:\n            img_tiled  = image_resize(img_tiled, width, height, method, interpolation, condition, multiple_of, keep_proportion)\n\n        return (img_tiled,)\n\n\n\n\n# Rewrite of the WAS Film Grain node, much improved speed and efficiency (https://github.com/WASasquatch/was-node-suite-comfyui)\n\nclass Film_Grain: \n    def __init__(self):\n        pass\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                
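\"image\":              (\"IMAGE\",),\n                # density: fraction of pixels replaced with noise; intensity: blend\n                # strength of the grain layer; highlights: post-blend brightness gain\n                # first entry: 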
\"image\":              (\"IMAGE\",),\n                \"density\":            (\"FLOAT\", {\"default\": 1.0, \"min\": 0.01, \"max\": 1.0, \"step\": 0.01}),\n                \"intensity\":          (\"FLOAT\", {\"default\": 1.0, \"min\": 0.01, \"max\": 1.0, \"step\": 0.01}),\n                \"highlights\":         (\"FLOAT\", {\"default\": 1.0, \"min\": 0.01, \"max\": 255.0, \"step\": 0.01}),\n                \"supersample_factor\": (\"INT\",   {\"default\": 4, \"min\": 1, \"max\": 8, \"step\": 1}),\n                \"repeats\":            (\"INT\",   {\"default\": 1, \"min\": 1, \"max\": 1000, \"step\": 1})\n            }\n        }\n    RETURN_TYPES = (\"IMAGE\",)\n    FUNCTION = \"main\"\n\n    CATEGORY = \"RES4LYF/images\"\n\n    def main(self, image, density, intensity, highlights, supersample_factor, repeats=1):\n        image = image.repeat(repeats, 1, 1, 1)\n        return (self.apply_film_grain(image, density, intensity, highlights, supersample_factor), )\n\n    def apply_film_grain(self, img, density=0.1, intensity=1.0, highlights=1.0, supersample_factor=4):\n\n        img_batch = img.clone()\n        img_list = []\n        for i in range(img_batch.shape[0]):\n            img = img_batch[i].unsqueeze(0)\n            img = tensor2pil(img)\n            device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n        \n            # apply grayscale noise with specified density/intensity/highlights to PIL image\n            img_gray = img.convert('L')\n            original_size = img.size\n            img_gray = img_gray.resize(\n                ((img.size[0] * supersample_factor), (img.size[1] * supersample_factor)), Image.Resampling(2))\n            num_pixels = int(density * img_gray.size[0] * img_gray.size[1])\n\n            img_gray_tensor = torch.from_numpy(np.array(img_gray).astype(np.float32) / 255.0).to(device)\n            img_gray_flat = img_gray_tensor.view(-1)\n            num_pixels = int(density * img_gray_flat.numel())\n            indices = torch.randint(0, img_gray_flat.numel(), (num_pixels,), device=img_gray_flat.device)\n            values = torch.randint(0, 256, (num_pixels,), device=img_gray_flat.device, dtype=torch.float32) / 255.0\n            \n            img_gray_flat[indices] = values\n            img_gray = img_gray_flat.view(img_gray_tensor.shape)\n            \n            img_gray_np = (img_gray.cpu().numpy() * 255).astype(np.uint8)\n            img_gray = Image.fromarray(img_gray_np)\n\n            img_noise = img_gray.convert('RGB')\n            img_noise = img_noise.filter(ImageFilter.GaussianBlur(radius=0.125))\n            img_noise = img_noise.resize(original_size, Image.Resampling(1))\n            img_noise = img_noise.filter(ImageFilter.EDGE_ENHANCE_MORE)\n            img_final = Image.blend(img, img_noise, intensity)\n            enhancer = ImageEnhance.Brightness(img_final)\n            img_highlights = enhancer.enhance(highlights)\n            \n            img_list.append(pil2tensor(img_highlights).squeeze(dim=0))\n            \n        img_highlights = torch.stack(img_list, dim=0)\n        return img_highlights\n\n\n\nclass Image_Grain_Add: \n    def __init__(self):\n        pass\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                \"image\": (\"IMAGE\",),\n                \"weight\": (\"FLOAT\", {\"default\": 0.5, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\n                #\"density\": (\"FLOAT\", {\"default\": 1.0, \"min\": 0.01, \"max\": 1.0, 
\"step\": 0.01}),\n                #\"intensity\": (\"FLOAT\", {\"default\": 1.0, \"min\": 0.01, \"max\": 1.0, \"step\": 0.01}),\n                #\"highlights\": (\"FLOAT\", {\"default\": 1.0, \"min\": 0.01, \"max\": 255.0, \"step\": 0.01}),\n                #\"supersample_factor\": (\"INT\", {\"default\": 4, \"min\": 1, \"max\": 8, \"step\": 1}),\n                #\"repeats\": (\"INT\", {\"default\": 1, \"min\": 1, \"max\": 1000, \"step\": 1})\n            }\n        }\n    RETURN_TYPES = (\"IMAGE\",)\n    FUNCTION = \"main\"\n\n    CATEGORY = \"RES4LYF/images\"\n\n    def main(self, image, weight=0.5, density=1.0, intensity=1.0, highlights=1.0, supersample_factor=1.0, repeats=1):\n        image = image.repeat(repeats, 1, 1, 1)\n        image_grain = self.apply_film_grain(image, density, intensity, highlights, supersample_factor)\n        \n        return (image + weight * (hard_light_blend(image_grain, image) - image), )\n\n\n    def apply_film_grain(self, img, density=0.1, intensity=1.0, highlights=1.0, supersample_factor=4):\n\n        img_batch = img.clone()\n        img_list = []\n        for i in range(img_batch.shape[0]):\n            img = img_batch[i].unsqueeze(0)\n            img = tensor2pil(img)\n            device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n        \n            # apply grayscale noise with specified density/intensity/highlights to PIL image\n            img_gray = img.convert('L')\n            original_size = img.size\n            img_gray = img_gray.resize(\n                ((img.size[0] * supersample_factor), (img.size[1] * supersample_factor)), Image.Resampling(2))\n            num_pixels = int(density * img_gray.size[0] * img_gray.size[1])\n\n            img_gray_tensor = torch.from_numpy(np.array(img_gray).astype(np.float32) / 255.0).to(device)\n            img_gray_flat = img_gray_tensor.view(-1)\n            num_pixels = int(density * img_gray_flat.numel())\n            indices = torch.randint(0, img_gray_flat.numel(), (num_pixels,), device=img_gray_flat.device)\n            values = torch.randint(0, 256, (num_pixels,), device=img_gray_flat.device, dtype=torch.float32) / 255.0\n            \n            img_gray_flat[indices] = values\n            img_gray = img_gray_flat.view(img_gray_tensor.shape)\n            \n            img_gray_np = (img_gray.cpu().numpy() * 255).astype(np.uint8)\n            img_gray = Image.fromarray(img_gray_np)\n\n            img_noise = img_gray.convert('RGB')\n            img_noise = img_noise.filter(ImageFilter.GaussianBlur(radius=0.125))\n            img_noise = img_noise.resize(original_size, Image.Resampling(1))\n            img_noise = img_noise.filter(ImageFilter.EDGE_ENHANCE_MORE)\n            img_final = Image.blend(img, img_noise, intensity)\n            enhancer = ImageEnhance.Brightness(img_final)\n            img_highlights = enhancer.enhance(highlights)\n            \n            img_list.append(pil2tensor(img_highlights).squeeze(dim=0))\n            \n        img_highlights = torch.stack(img_list, dim=0)\n        return img_highlights\n\n\n\n\nclass Frequency_Separation_Hard_Light:\n    def __init__(self):\n        pass\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"optional\": {\n                \"high_pass\": (\"IMAGE\",),\n                \"original\":  (\"IMAGE\",),\n                \"low_pass\":  (\"IMAGE\",),\n            },\n            \"required\": {\n            },\n        }\n        \n    RETURN_TYPES = (\"IMAGE\",\"IMAGE\",\"IMAGE\",)\n   
 RETURN_NAMES = (\"high_pass\", \"original\", \"low_pass\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/images\"\n\n    def main(self, high_pass=None, original=None, low_pass=None):\n\n        if high_pass is None:\n            high_pass = hard_light_freq_sep(original.to(torch.float64).to('cuda'), low_pass.to(torch.float64).to('cuda'))\n        \n        if original is None:\n            original = hard_light_blend(low_pass.to(torch.float64).to('cuda'), high_pass.to(torch.float64).to('cuda'))\n\n        return (high_pass, original, low_pass,)\n\n\nclass Frequency_Separation_Hard_Light_LAB:\n    def __init__(self):\n        pass\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"optional\": {\n                \"high_pass\": (\"IMAGE\",),\n                \"original\":  (\"IMAGE\",),\n                \"low_pass\":  (\"IMAGE\",),\n            },\n            \"required\": {\n            },\n        }\n        \n    RETURN_TYPES = (\"IMAGE\", \"IMAGE\", \"IMAGE\",)\n    RETURN_NAMES = (\"high_pass\", \"original\", \"low_pass\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/images\"\n\n    def main(self, high_pass=None, original=None, low_pass=None):\n\n        if original is not None:\n            lab_original = rgb_to_lab(original.to(torch.float64).permute(0, 3, 1, 2))\n            lab_original_normalized = normalize_lab(lab_original)\n        \n        if low_pass is not None:\n            lab_low_pass = rgb_to_lab(low_pass.to(torch.float64).permute(0, 3, 1, 2))\n            lab_low_pass_normalized = normalize_lab(lab_low_pass)\n\n        if high_pass is not None:\n            lab_high_pass = rgb_to_lab(high_pass.to(torch.float64).permute(0, 3, 1, 2))\n            lab_high_pass_normalized = normalize_lab(lab_high_pass)\n\n        #original_l = lab_original_normalized[:, :1, :, :]  \n        #low_pass_l = lab_low_pass_normalized[:, :1, :, :]  \n\n        if high_pass is None:\n            lab_high_pass_normalized = hard_light_freq_sep(lab_original_normalized.permute(0, 2, 3, 1), lab_low_pass_normalized.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)\n            lab_high_pass = denormalize_lab(lab_high_pass_normalized)\n            high_pass = lab_to_rgb(lab_high_pass).permute(0, 2, 3, 1)\n        if original is None:\n            lab_original_normalized = hard_light_blend(lab_low_pass_normalized.permute(0, 2, 3, 1), lab_high_pass_normalized.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)\n            lab_original = denormalize_lab(lab_original_normalized)\n            original = lab_to_rgb(lab_original).permute(0, 2, 3, 1)\n\n        return (high_pass, original, low_pass)\n    \n    \nclass Frame_Select:\n    def __init__(self):\n        pass\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n\n            \"required\": {\n                \"frames\": (\"IMAGE\",),\n                \"select\": (\"INT\",  {\"default\": 0, \"min\": 0, \"max\": 10000}),\n                \n            },\n            \"optional\": {\n            },\n        }\n        \n    RETURN_TYPES = (\"IMAGE\",)\n    RETURN_NAMES = (\"image\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/images\"\n\n    def main(self, frames=None, select=0):\n        frame = frames[select].unsqueeze(0).clone()\n        return (frame,)\n    \n    \nclass Frames_Slice:\n    def __init__(self):\n        pass\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n\n            \"required\": {\n                \"frames\": (\"IMAGE\",),\n                
\"start\":  (\"INT\",  {\"default\": 0, \"min\": 0, \"max\": 10000}),\n                \"stop\":   (\"INT\",  {\"default\": 1, \"min\": 1, \"max\": 10000}),\n            },\n            \"optional\": {\n            },\n        }\n        \n    RETURN_TYPES = (\"IMAGE\",)\n    RETURN_NAMES = (\"image\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/images\"\n\n    def main(self, frames=None, start=0, stop=1):\n        frames_slice = frames[start:stop].clone()\n        return (frames_slice,)\n\n\nclass Frames_Concat:\n    def __init__(self):\n        pass\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n\n            \"required\": {\n                \"frames_0\": (\"IMAGE\",),\n                \"frames_1\": (\"IMAGE\",),\n            },\n            \"optional\": {\n            },\n        }\n        \n    RETURN_TYPES = (\"IMAGE\",)\n    RETURN_NAMES = (\"image\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/images\"\n\n    def main(self, frames_0, frames_1):\n        frames_concat = torch.cat((frames_0, frames_1), dim=0).squeeze(0).clone()\n        return (frames_concat,)\n    \n    \n    \nclass Image_Channels_LAB:\n    def __init__(self):\n        pass\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"optional\": {\n                \"RGB\": (\"IMAGE\",),\n                \"L\": (\"IMAGE\",),\n                \"A\": (\"IMAGE\",),\n                \"B\": (\"IMAGE\",),\n            },\n            \"required\": {\n            },\n        }\n        \n    RETURN_TYPES = (\"IMAGE\",\"IMAGE\",\"IMAGE\",\"IMAGE\",)\n    RETURN_NAMES = (\"RGB\",\"L\",\"A\",\"B\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/images\"\n\n    def main(self, RGB=None, L=None, A=None, B=None):\n\n        if RGB is not None:\n            LAB = rgb_to_lab(RGB.to(torch.float64).permute(0, 3, 1, 2))\n            L, A, B = LAB[:, 0:1, :, :], LAB[:, 1:2, :, :], LAB[:, 2:3, :, :]\n        else:\n            LAB = torch.cat([L,A,B], dim=1)\n            RGB = lab_to_rgb(LAB.to(torch.float64)).permute(0,2,3,1)\n\n        return (RGB, L, A, B,)\n    \n    \n\nclass Frequency_Separation_Vivid_Light:\n    def __init__(self):\n        pass\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"optional\": {\n                \"high_pass\": (\"IMAGE\",),\n                \"original\":  (\"IMAGE\",),\n                \"low_pass\":  (\"IMAGE\",),\n            },\n            \"required\": {\n            },\n        }\n    RETURN_TYPES = (\"IMAGE\",\"IMAGE\",\"IMAGE\",)\n    RETURN_NAMES = (\"high_pass\", \"original\", \"low_pass\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/images\"\n\n    def main(self, high_pass=None, original=None, low_pass=None):\n\n        if high_pass is None:\n            high_pass = hard_light_freq_sep(low_pass.to(torch.float64), original.to(torch.float64))\n        \n        if original is None:\n            original = hard_light_blend(high_pass.to(torch.float64), low_pass.to(torch.float64))\n\n        return (high_pass, original, low_pass,)\n\n\nclass Frequency_Separation_Linear_Light:\n    def __init__(self):\n        pass\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"optional\": {\n                \"high_pass\": (\"IMAGE\",),\n                \"original\":  (\"IMAGE\",),\n                \"low_pass\":  (\"IMAGE\",),\n            },\n            \"required\": {\n            },\n        }\n        \n    RETURN_TYPES = 
(\"IMAGE\",\"IMAGE\",\"IMAGE\",)\n    RETURN_NAMES = (\"high_pass\", \"original\", \"low_pass\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/images\"\n\n    def main(self, high_pass=None, original=None, low_pass=None):\n\n        if high_pass is None:\n            high_pass = linear_light_freq_sep(original.to(torch.float64).to('cuda'), low_pass.to(torch.float64).to('cuda'))\n        \n        if original is None:\n            original = linear_light_blend(low_pass.to(torch.float64).to('cuda'), high_pass.to(torch.float64).to('cuda'))\n\n        return (high_pass, original, low_pass,)\n\n\nclass Frequency_Separation_FFT:\n    def __init__(self):\n        pass\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"optional\": {\n                \"high_pass\": (\"IMAGE\",),\n                \"original\":  (\"IMAGE\",),\n                \"low_pass\":  (\"IMAGE\",),\n            },\n            \"required\": {\n                \"cutoff\":    (\"FLOAT\", {\"default\": 5.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\n                \"sigma\":     (\"FLOAT\", {\"default\": 5.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\n            },\n        }\n        \n    RETURN_TYPES = (\"IMAGE\",\"IMAGE\",\"IMAGE\",)\n    RETURN_NAMES = (\"high_pass\", \"original\", \"low_pass\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/images\"\n\n    def main(self, high_pass=None, original=None, low_pass=None, cutoff=5.0, sigma=5.0):\n\n        if high_pass is None:\n            low_pass, high_pass = freq_sep_fft(original.to(torch.float64), cutoff=cutoff, sigma=sigma)\n        \n        if original is None:\n            original = low_pass + high_pass\n\n        return (high_pass, original, low_pass,)\n    \n    \n\n\nclass ImageSharpenFS:\n    def __init__(self):\n        pass\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                \"images\":    (\"IMAGE\",),\n                #\"method\":    ([\"hard\", \"linear\", \"vivid\"], {\"default\": \"hard\"}),\n                \"method\":    ([\"hard\", \"linear\"], {\"default\": \"hard\"}),\n                \"type\":      ([\"median\", \"gaussian\"],      {\"default\": \"median\"}),\n                \"intensity\": (\"INT\",                       {\"default\": 6, \"min\": 1, \"step\": 1,\n                }),\n            },\n        }\n\n    RETURN_TYPES = (\"IMAGE\",)\n    RETURN_NAMES = (\"image\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/images\"\n\n    def main(self, images, method, type, intensity):\n        match type:\n            case \"median\":\n                IB = ImageMedianBlur()\n            case \"gaussian\":\n                IB = ImageGaussianBlur()\n            \n        match method:\n            case \"hard\":\n                FS = Frequency_Separation_Hard_Light()\n            case \"linear\":\n                FS = Frequency_Separation_Linear_Light()\n                \n        img_lp = IB.main(images, intensity)\n        \n        fs_hp, fs_orig, fs_lp = FS.main(None, images, *img_lp)\n        \n        _, img_sharpened, _ = FS.main(high_pass=fs_hp, original=None, low_pass=images)\n        \n        return (img_sharpened,)\n\n\n    \n    \n\nclass ImageMedianBlur:\n    def __init__(self):\n        pass\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                \"images\": (\"IMAGE\",),\n                \"size\":   (\"INT\", {\"default\": 6, \"min\": 
1, \"step\": 1,}),\n            },\n        }\n\n    RETURN_TYPES = (\"IMAGE\",)\n    RETURN_NAMES = (\"image\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/images\"\n\n    def main(self, images, size):\n        size -= 1\n\n        img = images.clone().detach()\n        img = (img * 255).to(torch.uint8)\n\n        return ((cv2_layer(img, lambda x: cv2.medianBlur(x, size)) / 255),)\n\n\n\nclass ImageGaussianBlur:\n    def __init__(self):\n        pass\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                \"images\": (\"IMAGE\",),\n                \"size\":   (\"INT\", {\"default\": 6, \"min\": 1, \"step\": 1,}),\n            },\n        }\n\n    RETURN_TYPES = (\"IMAGE\",)\n    RETURN_NAMES = (\"image\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/images\"\n\n    def main(self, images, size):\n        size -= 1 \n\n        img = images.clone().detach()\n        img = (img * 255).to(torch.uint8)\n\n        return ((cv2_layer(img, lambda x: cv2.GaussianBlur(x, (size, size), 0)) / 255),)\n\n\n\ndef fast_smudge_blur_comfyui(img, kernel_size=51):\n    img = img.to('cuda').float()\n\n    # (b, h, w, c) to (b, c, h, w)\n    img = img.permute(0, 3, 1, 2)\n\n    num_channels = img.shape[1]\n\n    box_kernel_1d = torch.ones(num_channels, 1, kernel_size, device=img.device, dtype=img.dtype) / kernel_size\n\n    # apply box blur separately in horizontal and vertical directions\n    blurred_img = F.conv2d(        img, box_kernel_1d.unsqueeze(2), padding=kernel_size // 2, groups=num_channels)\n    blurred_img = F.conv2d(blurred_img, box_kernel_1d.unsqueeze(3), padding=kernel_size // 2, groups=num_channels)\n\n    # (b, c, h, w) to (b, h, w, c)\n    blurred_img = blurred_img.permute(0, 2, 3, 1)\n\n    return blurred_img\n\n\n\nclass FastSmudgeBlur:\n    def __init__(self):\n        pass\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                \"images\":      (\"IMAGE\",), \n                \"kernel_size\": (\"INT\", {\"default\": 51, \"min\": 1, \"step\": 1,}),\n            },\n        }\n\n    RETURN_TYPES = (\"IMAGE\",)\n    RETURN_NAMES = (\"image\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/images\"\n\n    def main(self, images, kernel_size):\n        img = images.clone().detach().to('cuda').float()\n        \n        # (b, h, w, c) to (b, c, h, w)\n        img = img.permute(0, 3, 1, 2)\n\n        num_channels = img.shape[1]\n\n        # box blur kernel (separable convolution)\n        box_kernel_1d = torch.ones(num_channels, 1, kernel_size, device=img.device, dtype=img.dtype) / kernel_size\n\n        padding_size = kernel_size // 2\n\n        # apply box blur in horizontal/vertical dim separately\n        blurred_img = F.conv2d(\n            img, box_kernel_1d.unsqueeze(2), padding=(padding_size, 0), groups=num_channels\n        )\n        blurred_img = F.conv2d(\n            blurred_img, box_kernel_1d.unsqueeze(3), padding=(0, padding_size), groups=num_channels\n        )\n\n        # (b, c, h, w) to (b, h, w, c)\n        blurred_img = blurred_img.permute(0, 2, 3, 1)\n\n        return (blurred_img,)\n\n\n\nclass Image_Pair_Split:\n    @classmethod\n    def INPUT_TYPES(s):\n        return {\n            \"required\": { \n                \"img_pair\": (\"IMAGE\",),\n                }\n            }\n    RETURN_TYPES = (\"IMAGE\",\"IMAGE\",)\n    RETURN_NAMES = (\"img_0\",\"img_1\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = 
\"RES4LYF/images\"\n\n    def main(self, img_pair):\n        img_0, img_1 = img_pair.chunk(2, dim=0)\n\n        return (img_0, img_1,)\n\n\n\nclass Image_Crop_Location_Exact:\n    def __init__(self):\n        pass\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                \"image\":  (\"IMAGE\",),\n                \"x\":      (\"INT\", {\"default\": 0,   \"max\": 10000000, \"min\": 0, \"step\": 1}),\n                \"y\":      (\"INT\", {\"default\": 0,   \"max\": 10000000, \"min\": 0, \"step\": 1}),\n                \"width\":  (\"INT\", {\"default\": 256, \"max\": 10000000, \"min\": 1, \"step\": 1}),\n                \"height\": (\"INT\", {\"default\": 256, \"max\": 10000000, \"min\": 1, \"step\": 1}),\n                \"edge\":   ([\"original\", \"short\", \"long\"],),\n            }\n        }\n\n    RETURN_TYPES = (\"IMAGE\", \"CROP_DATA\",)\n    RETURN_NAMES = (\"image\", \"crop_data\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/images\"\n\n    def main(self, image, x=0, y=0, width=256, height=256, edge=\"original\"):\n        if image.dim() != 4:\n            raise ValueError(\"Expected a 4D tensor (batch, channels, height, width).\")\n        \n        if edge == \"short\":\n            side = width if width < height else height\n            width, height = side, side\n        if edge == \"long\":\n            side = width if width > height else height\n            width, height = side, side\n\n        batch_size, img_height, img_width, channels = image.size()\n\n        crop_left   = max(x, 0)\n        crop_top    = max(y, 0)\n        crop_right  = min(x + width, img_width)\n        crop_bottom = min(y + height, img_height)\n\n        crop_width = crop_right - crop_left\n        crop_height = crop_bottom - crop_top\n        if crop_width <= 0 or crop_height <= 0:\n            raise ValueError(\"Invalid crop dimensions. 
Please check the values for x, y, width, and height.\")\n\n        cropped_image = image[:, crop_top:crop_bottom, crop_left:crop_right, :]\n\n        crop_data = ((crop_width, crop_height), (crop_left, crop_top, crop_right, crop_bottom))\n\n        return cropped_image, crop_data\n    \n\n\n\nclass Masks_Unpack4:\n    @classmethod\n    def INPUT_TYPES(s):\n        return {\n            \"required\": { \n                \"masks\": (\"MASK\",),\n                }\n            }\n    RETURN_TYPES = (\"MASK\",\"MASK\",\"MASK\",\"MASK\",)\n    RETURN_NAMES = (\"masks\",\"masks\",\"masks\",\"masks\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/masks\"\n    DESCRIPTION  = \"Unpack a list of masks into separate outputs.\"\n\n    def main(self, masks,):\n        return (*masks,)\n\nclass Masks_Unpack8:\n    @classmethod\n    def INPUT_TYPES(s):\n        return {\n            \"required\": { \n                \"masks\": (\"MASK\",),\n                }\n            }\n    RETURN_TYPES = (\"MASK\",\"MASK\",\"MASK\",\"MASK\",\"MASK\",\"MASK\",\"MASK\",\"MASK\",)\n    RETURN_NAMES = (\"masks\",\"masks\",\"masks\",\"masks\",\"masks\",\"masks\",\"masks\",\"masks\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/masks\"\n    DESCRIPTION  = \"Unpack a list of masks into separate outputs.\"\n\n    def main(self, masks,):\n        return (*masks,)\n\nclass Masks_Unpack16:\n    @classmethod\n    def INPUT_TYPES(s):\n        return {\n            \"required\": { \n                \"masks\": (\"MASK\",),\n                }\n            }\n    RETURN_TYPES = (\"MASK\",\"MASK\",\"MASK\",\"MASK\",\"MASK\",\"MASK\",\"MASK\",\"MASK\",\"MASK\",\"MASK\",\"MASK\",\"MASK\",\"MASK\",\"MASK\",\"MASK\",\"MASK\",)\n    RETURN_NAMES = (\"masks\",\"masks\",\"masks\",\"masks\",\"masks\",\"masks\",\"masks\",\"masks\",\"masks\",\"masks\",\"masks\",\"masks\",\"masks\",\"masks\",\"masks\",\"masks\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/masks\"\n    DESCRIPTION  = \"Unpack a list of masks into separate outputs.\"\n\n    def main(self, masks,):\n        return (*masks,)\n\n\n\n\n\n\nclass Image_Get_Color_Swatches:\n    @classmethod\n    def INPUT_TYPES(s):\n        return {\n            \"required\": { \n                \"image_color_swatches\": (\"IMAGE\",),\n                }\n            }\n    RETURN_TYPES = (\"COLOR_SWATCHES\",)\n    RETURN_NAMES = (\"color_swatches\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/images\"\n    DESCRIPTION  = \"Get color swatches, in the order they appear, from top to bottom, in an input image. For use with color masks.\"\n\n    def main(self, image_color_swatches):\n        rgb = (image_color_swatches * 255).round().clamp(0, 255).to(torch.uint8)\n        color_swatches = read_swatch_colors(rgb.squeeze().numpy(), min_fraction=0.01)\n        #color_swatches = read_swatch_colors(rgb.squeeze().numpy(), ignore=(255,255,255), min_fraction=0.01)\n\n        return (color_swatches,)\n\nclass Masks_From_Color_Swatches:\n    @classmethod\n    def INPUT_TYPES(s):\n        return {\n            \"required\": { \n                \"image_color_mask\": (\"IMAGE\",),\n                \"color_swatches\":   (\"COLOR_SWATCHES\",),\n                }\n            }\n    RETURN_TYPES = (\"MASK\",)\n    RETURN_NAMES = (\"masks\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/images\"\n    DESCRIPTION  = \"Create masks from a multicolor image using color swatches to identify regions. 
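Returns them as a list.\"\n    # pipeline: snap pixels into tol-sized color buckets, claim exact swatch\n    # matches first-wins, assign remaining painted pixels to the nearest swatch,\n    # then prune/fill small regions via cleanup_and_fill_masks()\n    # description ends: 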
Returns them as a list.\"\n\n    def main(self, image_color_mask, color_swatches):\n        rgb = (image_color_mask * 255).round().clamp(0, 255).to(torch.uint8)\n        masks = build_masks_from_swatch(rgb.squeeze().numpy(), color_swatches, tol=8)\n        masks = cleanup_and_fill_masks(masks)\n        masks = torch.stack(masks, dim=0).unsqueeze(1)\n        return (masks,)\n\n\n\nclass Masks_From_Colors:\n    @classmethod\n    def INPUT_TYPES(s):\n        return {\n            \"required\": { \n                \"image_color_swatches\": (\"IMAGE\",),\n                \"image_color_mask\":     (\"IMAGE\",),\n                }\n            }\n    RETURN_TYPES = (\"MASK\",)\n    RETURN_NAMES = (\"masks\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/images\"\n    DESCRIPTION  = \"Create masks from a multicolor image using color swatches to identify regions. Returns them as a list.\"\n\n    def main(self, image_color_swatches, image_color_mask, ):\n        rgb = (image_color_swatches * 255).round().clamp(0, 255).to(torch.uint8)\n        color_swatches = read_swatch_colors(rgb.squeeze().numpy(), min_fraction=0.01)\n        #color_swatches = read_swatch_colors(rgb.squeeze().numpy(), ignore=(255,255,255), min_fraction=0.01)\n        \n        rgb = (image_color_mask * 255).round().clamp(0, 255).to(torch.uint8)\n        masks = build_masks_from_swatch(rgb.squeeze().numpy(), color_swatches, tol=8)\n        masks = cleanup_and_fill_masks(masks)\n        \n        original_len = len(masks)\n        masks = [m for m in masks if m.sum() != 0]\n        \n        removed = original_len - len(masks)\n        print(f\"Removed {removed} empty masks.\")\n        masks = torch.stack(masks, dim=0).unsqueeze(1)\n        return (masks,)\n\n\n\n\n\n\n\nfrom PIL import Image\nimport numpy as np\n\ndef read_swatch_colors(\n    img,\n    ignore: Tuple[int,int,int] = (-1,-1,-1),\n    min_fraction: float = 0.2\n) -> List[Tuple[int,int,int]]:\n    \"\"\"\n    1. Load swatch, RGB.\n    2. Count every unique color (except `ignore`).\n    3. Discard any color whose count < (min_fraction * largest_count).\n    4. Sort the remaining by their first y-position (top→bottom).\n    \"\"\"\n    H, W, _ = img.shape\n    flat = img.reshape(-1,3)\n    \n    # count all colors\n    colors, counts = np.unique(flat, axis=0, return_counts=True)\n    # build list of (color, count), skipping white\n    cc = [\n        (tuple(c.tolist()), cnt)\n        for c, cnt in zip(colors, counts)\n        if tuple(c.tolist()) != ignore\n    ]\n    if not cc:\n        return []\n    \n    # find largest band size\n    max_cnt = max(cnt for _,cnt in cc)\n    # filter by relative size\n    kept = [c for c,cnt in cc if cnt >= max_cnt * min_fraction]\n    \n    # find first‐y for each kept color\n    first_y = {}\n    for color in kept:\n        # mask of where that color lives\n        mask = np.all(img == color, axis=-1)\n        ys, xs = np.nonzero(mask)\n        first_y[color] = int(np.min(ys))\n    \n    # sort top→bottom\n    kept.sort(key=lambda c: first_y[c])\n    return kept\n\n\n\nimport numpy as np\nimport torch\nfrom typing import List, Tuple\nfrom PIL import Image\n\n\ndef build_masks_from_swatch(\n    mask_img: np.ndarray,\n    swatch_colors: List[Tuple[int,int,int]],\n    tol: int = 8\n) -> List[torch.Tensor]:\n    \"\"\"\n    1. Normalize mask_img → uint8 H×W×3 (handles float [0,1] or [0,255], channel-first too).\n    2. Bin every pixel into buckets of size `tol`.\n    3. Detect user-painted region (non-black).\n    4. 
In swatch order, claim all exact matches (first-wins).\n    5. Fill in any *painted but unclaimed* pixel by nearest‐swatch in RGB distance.\n    Returns a list of BoolTensors [H,W], one per swatch color.\n    \"\"\"\n    # --- 1) ensure H×W×3 uint8 ---\n    img = mask_img\n    # channel-first → channel-last\n    if img.ndim == 3 and img.shape[0] == 3:\n        img = np.transpose(img, (1,2,0))\n    # float → uint8\n    if np.issubdtype(img.dtype, np.floating):\n        m = img.max()\n        if m <= 1.01:\n            img = (img * 255.0).round()\n        else:\n            img = img.round()\n    img = img.clip(0,255).astype(np.uint8)\n\n    H, W, _ = img.shape\n\n    # --- 2) bin into tol-sized buckets ---\n    binned = (img // tol) * tol  # still uint8\n\n    # --- 3) painted region mask (non-black) ---\n    painted = np.any(img != 0, axis=2)  # H×W bool\n\n    # --- snap swatch colors into same buckets ---\n    snapped = np.array([\n        ((np.array(c)//tol)*tol).astype(np.uint8)\n        for c in swatch_colors\n    ])  # C×3\n\n    claimed = np.zeros((H, W), dtype=bool)\n    masks   = []\n\n    # --- 4) first-pass exact matches ---\n    for s in snapped:\n        m = (\n            (binned[:,:,0] == s[0]) &\n            (binned[:,:,1] == s[1]) &\n            (binned[:,:,2] == s[2])\n        )\n        m &= ~claimed\n        masks.append(torch.from_numpy(m))\n        claimed |= m\n\n    # --- 5) fill-in only within painted & unclaimed pixels ---\n    miss = painted & (~claimed)\n    if miss.any():\n        flat       = binned.reshape(-1,3).astype(int)  # (H*W)×3\n        flat_miss  = miss.reshape(-1)                 # (H*W,)\n        # squared RGB distances to each swatch: → (H*W)×C\n        d2         = np.sum((flat[:,None,:] - snapped[None,:,:])**2, axis=2)\n        nearest    = np.argmin(d2, axis=1)            # (H*W,)\n\n        for i in range(len(masks)):\n            assign = (flat_miss & (nearest == i)).reshape(H, W)\n            masks[i] = masks[i] | torch.from_numpy(assign)\n\n    return masks\n\n\n\n\nimport numpy as np\nimport torch\nfrom typing import List\nfrom collections import deque\n\ndef _remove_small_components(\n    mask: np.ndarray,\n    rel_thresh: float = 0.01\n) -> np.ndarray:\n    \"\"\"\n    Remove connected components smaller than rel_thresh * max_component_size.\n    4-connectivity.\n    \"\"\"\n    H, W = mask.shape\n    visited = np.zeros_like(mask, bool)\n    comps = []  # list of (size, pixels_list)\n\n    # 1) find all components\n    for y in range(H):\n        for x in range(W):\n            if mask[y,x] and not visited[y,x]:\n                q = deque([(y,x)])\n                visited[y,x] = True\n                pix = [(y,x)]\n                while q:\n                    cy,cx = q.popleft()\n                    for dy,dx in ((1,0),(-1,0),(0,1),(0,-1)):\n                        ny,nx = cy+dy, cx+dx\n                        if 0<=ny<H and 0<=nx<W and mask[ny,nx] and not visited[ny,nx]:\n                            visited[ny,nx] = True\n                            q.append((ny,nx))\n                            pix.append((ny,nx))\n                comps.append(pix)\n\n    if not comps:\n        return np.zeros_like(mask)\n\n    # 2) compute threshold\n    sizes = [len(c) for c in comps]\n    max_size = max(sizes)\n    min_size = max_size * rel_thresh\n\n    # 3) build a new mask keeping only large comps\n    out = np.zeros_like(mask)\n    for pix in comps:\n        if len(pix) >= min_size:\n            for (y,x) in pix:\n                out[y,x] = 
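True\n\n    # note: pure-Python BFS over components; correct, but slow for large masks\n    # value: 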
True\n\n    return out\n\ndef cleanup_and_fill_masks(\n    masks: List[torch.Tensor],\n    rel_thresh: float = 0.01\n) -> List[torch.Tensor]:\n    \"\"\"\n    1) Remove any component < rel_thresh * (largest component) per mask\n    2) Then re-assign any freed pixels to nearest-swatches by neighbor-count\n    \"\"\"\n    # stack into C×H×W\n    np_masks = np.stack([m.cpu().numpy() for m in masks], axis=0)\n    C, H, W = np_masks.shape\n\n    # 1) component pruning\n    for c in range(C):\n        np_masks[c] = _remove_small_components(np_masks[c], rel_thresh)\n\n    # 2) figure out what’s still unclaimed\n    claimed = np_masks.any(axis=0)  # H×W\n\n    # 3) build neighbor‐counts to know who's closest\n    #    (reuse the same 8-neighbor idea to bias to the largest local region)\n    shifts = [(1,0),(-1,0),(0,1),(0,-1),(1,1),(1,-1),(-1,1),(-1,-1)]\n    neighbor_counts = np.zeros_like(np_masks, int)\n    for dy,dx in shifts:\n        neighbor_counts += np.roll(np.roll(np_masks, dy, axis=1), dx, axis=2)\n\n    # 4) for every pixel still unclaimed, pick the mask with the highest neighbor count\n    miss = ~claimed\n    if miss.any():\n        # which mask “wins” that pixel?\n        winner = np.argmax(neighbor_counts, axis=0)  # H×W\n        for c in range(C):\n            assign = (miss & (winner == c))\n            np_masks[c][assign] = True\n\n    # back to torch\n    cleaned = [torch.from_numpy(np_masks[c]) for c in range(C)]\n    return cleaned\n\nimport os\nimport hashlib  # used by IS_CHANGED below\n\nimport folder_paths\nimport node_helpers  # ComfyUI helper module, used by load_image_orig below\n\nfrom PIL import ImageOps, ImageSequence  # used by load_image_orig below\n\n\nclass MaskSketch:\n    @classmethod\n    def INPUT_TYPES(s):\n        input_dir = folder_paths.get_input_directory()\n        files = [f for f in os.listdir(input_dir) if os.path.isfile(os.path.join(input_dir, f))]\n        return {\"required\":\n                    {\"image\": (sorted(files), {\"image_upload\": True})},\n                }\n\n    CATEGORY = \"image\"\n\n    RETURN_TYPES = (\"IMAGE\", \"MASK\")\n    FUNCTION = \"load_image\"\n    \n    def load_image(self, image):\n        width, height = 512, 512  # fixed placeholder canvas size\n\n        # White image: RGB values all set to 1.0\n        white_image = torch.ones((1, height, width, 3), dtype=torch.float32)\n\n        # Empty mask: all zeros (nothing masked)\n        white_mask = torch.zeros((1, height, width), dtype=torch.float32)\n\n        return (white_image, white_mask)\n        \n    def load_image_orig(self, image):\n        image_path = folder_paths.get_annotated_filepath(image)\n\n        img = node_helpers.pillow(Image.open, image_path)\n\n        output_images = []\n        output_masks = []\n        w, h = None, None\n\n        excluded_formats = ['MPO']\n\n        for i in ImageSequence.Iterator(img):\n            i = node_helpers.pillow(ImageOps.exif_transpose, i)\n\n            if i.mode == 'I':\n                i = i.point(lambda i: i * (1 / 255))\n            image = i.convert(\"RGB\")\n\n            if len(output_images) == 0:\n                w = image.size[0]\n                h = image.size[1]\n\n            if image.size[0] != w or image.size[1] != h:\n                continue\n\n            image = np.array(image).astype(np.float32) / 255.0\n            image = torch.from_numpy(image)[None,]\n            if 'A' in i.getbands():\n                mask = np.array(i.getchannel('A')).astype(np.float32) / 255.0\n                mask = 1. 
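- torch.from_numpy(mask)\n                # alpha is inverted to the ComfyUI mask convention (1 = masked area)\n                # i.e. mask = 1. 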
- torch.from_numpy(mask)\n            else:\n                mask = torch.zeros((64,64), dtype=torch.float32, device=\"cpu\")\n            output_images.append(image)\n            output_masks.append(mask.unsqueeze(0))\n\n        if len(output_images) > 1 and img.format not in excluded_formats:\n            output_image = torch.cat(output_images, dim=0)\n            output_mask = torch.cat(output_masks, dim=0)\n        else:\n            output_image = output_images[0]\n            output_mask = output_masks[0]\n\n        return (output_image, output_mask)\n\n    @classmethod\n    def IS_CHANGED(s, image):\n        image_path = folder_paths.get_annotated_filepath(image)\n        m = hashlib.sha256()\n        with open(image_path, 'rb') as f:\n            m.update(f.read())\n        return m.digest().hex()\n\n    @classmethod\n    def VALIDATE_INPUTS(s, image):\n        if not folder_paths.exists_annotated_filepath(image):\n            return \"Invalid image file: {}\".format(image)\n\n        return True\n\n\n\n# based on https://github.com/cubiq/ComfyUI_essentials/blob/main/mask.py\nimport math\nimport torch\nimport torch.nn.functional as F\nimport torchvision.transforms.v2 as T\nimport numpy as np\nfrom scipy.ndimage import distance_transform_edt\nimport comfy.utils  # for common_upscale below\n\nclass MaskBoundingBoxAspectRatio:\n    @classmethod\n    def INPUT_TYPES(s):\n        return {\n            \"required\": {\n                \"padding\":      (\"INT\",   { \"default\": 0, \"min\": 0,   \"max\": 4096, \"step\": 1 }),\n                \"blur\":         (\"INT\",   { \"default\": 0, \"min\": 0,   \"max\": 256,  \"step\": 1 }),\n                \"aspect_ratio\": (\"FLOAT\", { \"default\": 1.0, \"min\": 0.01,\"max\": 10.0, \"step\": 0.01 }),\n                \"transpose\":    (\"BOOLEAN\",{\"default\": False}),\n            },\n            \"optional\": {\n                \"image\": (\"IMAGE\",),\n                \"mask\":  (\"MASK\",),\n            },\n        }\n\n    RETURN_TYPES = (\"IMAGE\",\"MASK\",\"MASK\",\"INT\",\"INT\",\"INT\",\"INT\")\n    RETURN_NAMES = (\"image\",\"mask\",\"mask_blurred\",\"x\",\"y\",\"width\",\"height\")\n    FUNCTION = \"execute\"\n    CATEGORY = \"essentials/mask\"\n\n    def execute(self, mask, padding, blur, aspect_ratio, transpose, image=None):\n        if mask.dim() == 2:\n            mask = mask.unsqueeze(0)\n        B, H, W = mask.shape\n        hard     = mask.clone()\n\n        # build outward-only “blurred” mask via distance transform\n        if blur > 0:\n            m_bool = hard[0].cpu().numpy().astype(bool)\n            d_out  = distance_transform_edt(~m_bool)\n            d_in   = distance_transform_edt( m_bool)\n            alpha  = np.zeros_like(d_out, np.float32)\n            alpha[d_in>0] = 1.0\n            ramp = np.clip(1.0 - (d_out / blur), 0.0, 1.0)\n            alpha[d_out>0] = ramp[d_out>0]\n            mask_blur_full = torch.from_numpy(alpha)[None,...].to(hard.device)\n        else:\n            mask_blur_full = hard.clone()\n\n        # calc tight bbox + padding on the \"hard\" mask\n        ys, xs = torch.where(hard[0] > 0)\n        x1 = max(0, int(xs.min()) - padding)\n        x2 = min(W, int(xs.max()) + 1 + padding)\n        y1 = max(0, int(ys.min()) - padding)\n        y2 = min(H, int(ys.max()) + 1 + padding)\n        w0 = x2 - x1\n        h0 = y2 - y1\n\n        if image is None:\n            img_full = hard.unsqueeze(-1).repeat(1,1,1,3).to(torch.float32)\n        else:\n            img_full = image\n            \n        if img_full.shape[1:3] != (H, W):\n          
  img_full = comfy.utils.common_upscale(\n                img_full.permute(0,3,1,2),\n                W, H, upscale_method=\"bicubic\", crop=\"center\"\n            ).permute(0,2,3,1)\n\n        ar = aspect_ratio\n        req_w = math.ceil(h0 * ar)   # how wide we'd need to be to hit AR at h0\n        req_h = math.floor(w0 / ar)  # how tall we'd need to be to hit AR at w0\n\n        new_x1, new_x2 = x1, x2\n        new_y1, new_y2 = y1, y2\n\n        flush_left  = (x1 == 0)\n        flush_right = (x2 == W)\n        flush_top   = (y1 == 0)\n        flush_bot   = (y2 == H)\n\n        if not transpose:\n            if req_w > w0: # widen?\n                target_w = min(W, req_w)\n                delta    = target_w - w0\n                if flush_right:\n                    new_x1, new_x2 = W - target_w, W\n                elif flush_left:\n                    new_x1, new_x2 = 0, target_w\n                else:\n                    off = delta // 2\n                    new_x1 = max(0, x1 - off)\n                    new_x2 = new_x1 + target_w\n                    if new_x2 > W:\n                        new_x2 = W\n                        new_x1 = W - target_w\n\n            elif req_h > h0: # vertical bloater?\n                target_h = min(H, req_h)\n                delta    = target_h - h0\n                if flush_bot:\n                    new_y1, new_y2 = H - target_h, H\n                elif flush_top:\n                    new_y1, new_y2 = 0, target_h\n                else:\n                    off = delta // 2\n                    new_y1 = max(0, y1 - off)\n                    new_y2 = new_y1 + target_h\n                    if new_y2 > H:\n                        new_y2 = H\n                        new_y1 = H - target_h\n\n        else:\n            if req_h > h0:\n                target_h = min(H, req_h)\n                delta    = target_h - h0\n                if flush_bot:\n                    new_y1, new_y2 = H - target_h, H\n                elif flush_top:\n                    new_y1, new_y2 = 0, target_h\n                else:\n                    off = delta // 2\n                    new_y1 = max(0, y1 - off)\n                    new_y2 = new_y1 + target_h\n                    if new_y2 > H:\n                        new_y2 = H\n                        new_y1 = H - target_h\n\n            elif req_w > w0:\n                target_w = min(W, req_w)\n                delta    = target_w - w0\n                if flush_right:\n                    new_x1, new_x2 = W - target_w, W\n                elif flush_left:\n                    new_x1, new_x2 = 0, target_w\n                else:\n                    off = delta // 2\n                    new_x1 = max(0, x1 - off)\n                    new_x2 = new_x1 + target_w\n                    if new_x2 > W:\n                        new_x2 = W\n                        new_x1 = W - target_w\n\n        final_w = new_x2 - new_x1\n        final_h = new_y2 - new_y1\n\n        # done... crop image & masks\n        img_crop      = img_full[:,    new_y1:new_y2, new_x1:new_x2, :]\n        mask_crop     = hard[:,       new_y1:new_y2, new_x1:new_x2   ]\n        mask_blurred  = mask_blur_full[:, new_y1:new_y2, new_x1:new_x2]\n\n        return (\n            img_crop,\n            mask_crop,\n            mask_blurred,\n            new_x1,\n            new_y1,\n            final_w,\n            final_h,\n        )\n\n\n"
  },
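A minimal usage sketch for the mask-cleanup helpers in the file above. The two boolean masks, their sizes, and the threshold are synthetic stand-ins for the per-swatch masks produced by the color-splitting step, and it assumes `_remove_small_components` and `cleanup_and_fill_masks` are in scope (the module's path is not shown in this section):

```python
import numpy as np
import torch

# two synthetic per-swatch masks: a large square with a 4-pixel speck (first),
# and a solid bottom band (second)
a = np.zeros((64, 64), dtype=bool)
a[8:40, 8:40] = True      # 1024-pixel component: kept
a[60:62, 60:62] = True    # 4-pixel component (< 1% of 1024): pruned
b = np.zeros((64, 64), dtype=bool)
b[40:64, :] = True

cleaned = cleanup_and_fill_masks([torch.from_numpy(a), torch.from_numpy(b)],
                                 rel_thresh=0.01)

# after pruning, every still-unclaimed pixel (including background) is handed to
# the mask with the most 8-connected neighbors, so the cleaned masks tile the image
print([m.sum().item() for m in cleaned])
```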
  {
    "path": "latent_images.py",
    "content": "import comfy.samplers\r\nimport comfy.sample\r\nimport comfy.sampler_helpers\r\nimport comfy.utils\r\n    \r\nimport itertools\r\n\r\nimport torch\r\nimport math\r\nimport re\r\n\r\nfrom .beta.noise_classes import *\r\n\r\ndef initialize_or_scale(tensor, value, steps):\r\n    if tensor is None:\r\n        return torch.full((steps,), value)\r\n    else:\r\n        return value * tensor\r\n    \r\ndef latent_normalize_channels(x):\r\n    mean = x.mean(dim=(-2, -1), keepdim=True)\r\n    std  = x.std (dim=(-2, -1), keepdim=True)\r\n    return  (x - mean) / std\r\n\r\ndef latent_stdize_channels(x):\r\n    std  = x.std (dim=(-2, -1), keepdim=True)\r\n    return  x / std\r\n\r\ndef latent_meancenter_channels(x):\r\n    mean = x.mean(dim=(-2, -1), keepdim=True)\r\n    return  x - mean\r\n\r\n\r\nclass latent_channelwise_match:\r\n    def __init__(self):\r\n        pass\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                    \"model\": (\"MODEL\",),\r\n                    \"latent_target\": (\"LATENT\", ),      \r\n                    \"latent_source\": (\"LATENT\", ),      \r\n                     },\r\n            \"optional\": {\r\n                    \"mask_target\": (\"MASK\", ),      \r\n                    \"mask_source\": (\"MASK\", ),   \r\n                    \"extra_options\": (\"STRING\", {\"default\": \"\", \"multiline\": True}),   \r\n            }\r\n                }\r\n\r\n    RETURN_TYPES = (\"LATENT\",)\r\n    RETURN_NAMES = (\"latent_matched\",)\r\n    CATEGORY = \"RES4LYF/latents\"\r\n\r\n    FUNCTION = \"main\"\r\n\r\n    def main(self, model, latent_target, mask_target, latent_source, mask_source, extra_options):\r\n        \r\n        dtype = latent_target['samples'].dtype\r\n        \r\n        exclude_channels_match = re.search(r\"exclude_channels=([\\d,]+)\", extra_options)\r\n        exclude_channels = []\r\n        if exclude_channels_match:\r\n            exclude_channels = [int(ch.strip()) for ch in exclude_channels_match.group(1).split(\",\")]\r\n        \r\n        if re.search(r\"\\bdisable_process_latent\\b\", extra_options): \r\n            x_target = latent_target['samples'].clone()\r\n            x_source = latent_source['samples'].clone()\r\n        else:\r\n            #x_target = model.inner_model.inner_model.process_latent_in(latent_target['samples']).clone() \r\n            #x_source = model.inner_model.inner_model.process_latent_in(latent_source['samples']).clone()\r\n            x_target = model.model.process_latent_in(latent_target['samples']).clone().to(torch.float64)\r\n            x_source = model.model.process_latent_in(latent_source['samples']).clone().to(torch.float64)\r\n        \r\n        if mask_target is None:\r\n            mask_target = torch.ones_like(x_target)\r\n        else:\r\n            mask_target = mask_target.unsqueeze(1)\r\n            mask_target = mask_target.repeat(1, x_target.shape[1], 1, 1) \r\n            mask_target = F.interpolate(mask_target, size=(x_target.shape[2], x_target.shape[3]), mode='bilinear', align_corners=False)\r\n            mask_target = mask_target.to(x_target.dtype).to(x_target.device)\r\n        \r\n        if mask_source is None:\r\n            mask_source = torch.ones_like(x_target)\r\n        else:\r\n            mask_source = mask_source.unsqueeze(1)\r\n            mask_source = mask_source.repeat(1, x_target.shape[1], 1, 1) \r\n            mask_source = F.interpolate(mask_source, size=(x_target.shape[2], 
x_target.shape[3]), mode='bilinear', align_corners=False)\r\n            mask_source = mask_source.to(x_target.dtype).to(x_target.device)\r\n        \r\n        x_target_masked     = x_target * ((mask_target==1)*mask_target)\r\n        x_target_masked_inv = x_target - x_target_masked\r\n        #x_source_masked     = x_source * ((mask_source==1)*mask_source)\r\n        \r\n        x_matched = torch.zeros_like(x_target)\r\n        for n in range(x_matched.shape[1]):\r\n            if n in exclude_channels: \r\n                x_matched[0][n] = x_target[0][n] \r\n                continue\r\n            \r\n            x_target_masked_values = x_target[0][n][mask_target[0][n] == 1]\r\n            x_source_masked_values = x_source[0][n][mask_source[0][n] == 1]\r\n            \r\n            x_target_masked_values_mean = x_target_masked_values.mean()\r\n            x_target_masked_values_std = x_target_masked_values.std()\r\n            x_target_masked_source_mean = x_source_masked_values.mean()\r\n            x_target_masked_source_std = x_source_masked_values.std()\r\n            \r\n            x_target_mean = x_target.mean()\r\n            x_target_std = x_target.std()\r\n            x_source_mean = x_source.mean()\r\n            x_source_std = x_source.std()\r\n            \r\n            if re.search(r\"\\benable_std\\b\", extra_options) == None:\r\n                x_target_std = x_target_masked_values_std = x_target_masked_source_std = 1\r\n                \r\n            if re.search(r\"\\bdisable_mean\\b\", extra_options):\r\n                x_target_mean = x_target_masked_values_mean = x_target_masked_source_mean = 1\r\n            \r\n            if re.search(r\"\\bdisable_masks\\b\", extra_options):\r\n                x_matched[0][n] = (x_target[0][n] - x_target_mean) / x_target_std\r\n                x_matched[0][n] = (x_matched[0][n] * x_source_std) + x_source_mean\r\n            else:\r\n                x_matched[0][n] = (x_target_masked[0][n] - x_target_masked_values_mean) / x_target_masked_values_std\r\n                x_matched[0][n] = (x_matched[0][n] * x_target_masked_source_std) + x_target_masked_source_mean\r\n                x_matched[0][n] = x_target_masked_inv[0][n] + x_matched[0][n] * ((mask_target[0][n]==1)*mask_target[0][n])\r\n        \r\n        if re.search(r\"\\bdisable_process_latent\\b\", extra_options) == None: \r\n            x_matched = model.model.process_latent_out(x_matched).clone()\r\n            \r\n        \r\n        return ({\"samples\": x_matched.to(dtype)}, )\r\n                \r\n                \r\n    "
  },
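The `extra_options` string parsed by `latent_channelwise_match` above is a lightweight flag convention: bare word flags are matched with `\b ... \b`, and `exclude_channels` takes a comma-separated list. A standalone sketch of that parsing, with a made-up options string:

```python
import re

# made-up options string in the same format the node expects
extra_options = "enable_std exclude_channels=0,3 disable_process_latent"

m = re.search(r"exclude_channels=([\d,]+)", extra_options)
exclude_channels = [int(ch.strip()) for ch in m.group(1).split(",")] if m else []

enable_std             = re.search(r"\benable_std\b", extra_options) is not None
disable_process_latent = re.search(r"\bdisable_process_latent\b", extra_options) is not None

print(exclude_channels, enable_std, disable_process_latent)  # [0, 3] True True
```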
  {
    "path": "latents.py",
    "content": "import torch\r\nimport torch.nn.functional as F\r\nfrom typing import Tuple, List, Union\r\nimport math\r\n\r\n\r\n# TENSOR PROJECTION OPS\r\n\r\ndef get_cosine_similarity_manual(a, b):\r\n    return (a * b).sum() / (torch.norm(a) * torch.norm(b))\r\n\r\ndef get_cosine_similarity(a, b, mask=None, dim=0):\r\n    if a.ndim == 5 and b.ndim == 5 and b.shape[2] == 1:\r\n        b = b.expand(-1, -1, a.shape[2], -1, -1)\r\n        \r\n    if mask is not None:\r\n        return F.cosine_similarity((mask * a).flatten(), (mask * b).flatten(), dim=dim)\r\n    else:\r\n        return F.cosine_similarity(a.flatten(), b.flatten(), dim=dim)\r\n    \r\ndef get_pearson_similarity(a, b, mask=None, dim=0, norm_dim=None):\r\n    if a.ndim == 5 and b.ndim == 5 and b.shape[2] == 1:\r\n        b = b.expand(-1, -1, a.shape[2], -1, -1)\r\n    \r\n    if norm_dim is None:\r\n        if   a.ndim == 4:\r\n            norm_dim=(-2,-1)\r\n        elif a.ndim == 5:\r\n            norm_dim=(-4,-2,-1)\r\n    \r\n    a = a - a.mean(dim=norm_dim, keepdim=True)\r\n    b = b - b.mean(dim=norm_dim, keepdim=True)\r\n    \r\n    if mask is not None:\r\n        return F.cosine_similarity((mask * a).flatten(), (mask * b).flatten(), dim=dim)\r\n    else:\r\n        return F.cosine_similarity(a.flatten(), b.flatten(), dim=dim)\r\n    \r\n    \r\n    \r\ndef get_collinear(x, y):\r\n    return get_collinear_flat(x, y).reshape_as(x)\r\n\r\ndef get_orthogonal(x, y):\r\n    x_flat = x.reshape(x.size(0), -1).clone()\r\n    x_ortho_y = x_flat - get_collinear_flat(x, y)  \r\n    return x_ortho_y.view_as(x)\r\n\r\ndef get_collinear_flat(x, y):\r\n\r\n    y_flat = y.reshape(y.size(0), -1).clone()\r\n    x_flat = x.reshape(x.size(0), -1).clone()\r\n\r\n    y_flat /= y_flat.norm(dim=-1, keepdim=True)\r\n    x_proj_y = torch.sum(x_flat * y_flat, dim=-1, keepdim=True) * y_flat\r\n\r\n    return x_proj_y\r\n\r\n\r\n\r\ndef get_orthogonal_noise_from_channelwise(*refs, max_iter=500, max_score=1e-15):\r\n    noise, *refs = refs\r\n    noise_tmp = noise.clone()\r\n    #b,c,h,w = noise.shape\r\n    if (noise.ndim == 4):\r\n        b,ch,h,w = noise.shape\r\n    elif (noise.ndim == 5):\r\n        b,ch,t,h,w = noise.shape\r\n    \r\n    for i in range(max_iter):\r\n        noise_tmp = gram_schmidt_channels_optimized(noise_tmp, *refs)\r\n        \r\n        cossim_scores = []\r\n        for ref in refs:\r\n            #for c in range(noise.shape[-3]):\r\n            for c in range(ch):\r\n                cossim_scores.append(get_cosine_similarity(noise_tmp[0][c], ref[0][c]).abs())\r\n            cossim_scores.append(get_cosine_similarity(noise_tmp[0], ref[0]).abs())\r\n            \r\n        if max(cossim_scores) < max_score:\r\n            break\r\n    \r\n    return noise_tmp\r\n\r\n\r\n\r\ndef gram_schmidt_channels_optimized(A, *refs):\r\n    if (A.ndim == 4):\r\n        b,c,h,w = A.shape\r\n    elif (A.ndim == 5):\r\n        b,c,t,h,w = A.shape\r\n\r\n    A_flat = A.view(b, c, -1)  \r\n    \r\n    for ref in refs:\r\n        ref_flat = ref.view(b, c, -1).clone()  \r\n\r\n        ref_flat /= ref_flat.norm(dim=-1, keepdim=True) \r\n\r\n        proj_coeff = torch.sum(A_flat * ref_flat, dim=-1, keepdim=True)  \r\n        projection = proj_coeff * ref_flat \r\n\r\n        A_flat -= projection\r\n\r\n    return A_flat.view_as(A)\r\n\r\n\r\n\r\n# Efficient implementation equivalent to the following:\r\ndef attention_weights(\r\n    query, \r\n    key, \r\n    attn_mask=None\r\n) -> torch.Tensor:\r\n    L, S = query.size(-2), key.size(-2)\r\n  
  scale_factor = 1 / math.sqrt(query.size(-1))\r\n    attn_bias = torch.zeros(L, S, dtype=query.dtype).to(query.device)\r\n\r\n    if attn_mask is not None:\r\n        if attn_mask.dtype == torch.bool:\r\n            attn_bias.masked_fill_(attn_mask.logical_not(), float(\"-inf\"))\r\n        else:\r\n            attn_bias += attn_mask\r\n\r\n    attn_weight = query @ key.transpose(-2, -1) * scale_factor\r\n    attn_weight += attn_bias\r\n    attn_weight = torch.softmax(attn_weight, dim=-1)\r\n\r\n    return attn_weight\r\n\r\n\r\ndef attention_weights_orig(q, k):\r\n    # implementation of in-place softmax to reduce memory req\r\n    scores = torch.matmul(q, k.transpose(-2, -1))\r\n    scores.div_(math.sqrt(q.size(-1)))\r\n    torch.exp(scores, out=scores)\r\n    summed = torch.sum(scores, dim=-1, keepdim=True)\r\n    scores /= summed\r\n    return scores.nan_to_num_(0.0, 65504., -65504.)\r\n\r\n\r\n# calculate slerp ratio needed to hit a target cosine similarity score\r\ndef get_slerp_weight_for_cossim(cos_sim, target_cos):\r\n    # assumes unit vector matrices used for cossim\r\n    import math\r\n    c = cos_sim\r\n    T = target_cos\r\n    K = 1 - c\r\n\r\n    A = K**2 - 2 * T**2 * K\r\n    B = 2 * (1 - c) * (c + T**2)\r\n    C = c**2 - T**2\r\n\r\n    if abs(A) < 1e-8: # nearly collinear\r\n        return 0.5  # just mix 50:50\r\n\r\n    disc = B**2 - 4*A*C\r\n    if disc < 0:\r\n        return None  # no valid solution... blow up somewhere to get user's attention\r\n\r\n    sqrt_disc = math.sqrt(disc)\r\n    w1 = (-B + sqrt_disc) / (2 * A)\r\n    w2 = (-B - sqrt_disc) / (2 * A)\r\n\r\n    candidates = [w for w in [w1, w2] if 0 <= w <= 1]\r\n    if candidates:\r\n        return candidates[0]\r\n    else:\r\n        return max(0.0, min(1.0, w1))\r\n\r\n\r\n\r\ndef get_slerp_ratio(cos_sim_A, cos_sim_B, target_cos):\r\n    import math\r\n    alpha = math.acos(cos_sim_A)\r\n    beta  = math.acos(cos_sim_B)\r\n    delta = math.acos(target_cos)\r\n    \r\n    if abs(beta - alpha) < 1e-6:\r\n        return 0.5\r\n    \r\n    t = (delta - alpha) / (beta - alpha)\r\n    t = max(0.0, min(1.0, t))\r\n    return t\r\n\r\ndef find_slerp_ratio_grid(A: torch.Tensor, B: torch.Tensor, D: torch.Tensor, E: torch.Tensor,\r\n                            target_ratio: float = 1.0, num_samples: int = 100) -> float:\r\n    \"\"\"\r\n    Finds the interpolation parameter t (in [0,1]) for which:\r\n       f(t) = cos(slerp(t, A, B), D) - target_ratio * cos(slerp(t, A, B), E)\r\n    is minimized in absolute value.\r\n    \r\n    Instead of requiring a sign change for bisection, we sample t values uniformly and pick the one that minimizes |f(t)|.\r\n    \"\"\"\r\n    ts = torch.linspace(0.0, 1.0, steps=num_samples, device=A.device, dtype=A.dtype)\r\n    best_t   = 0.0\r\n    best_val = float('inf')\r\n    for t_val in ts:\r\n        t_tensor = torch.tensor(t_val, dtype=A.dtype, device=A.device)\r\n        C        = slerp_tensor(t_tensor, A, B)\r\n        diff     = get_pearson_similarity(C, D) - target_ratio * get_pearson_similarity(C, E)\r\n        if abs(diff) < best_val:\r\n            best_val = abs(diff)\r\n            best_t   = t_val\r\n    return best_t\r\n\r\n\r\n\r\ndef compute_slerp_ratio_for_target(A: torch.Tensor, B: torch.Tensor, D: torch.Tensor, target: float) -> float:\r\n    \"\"\"\r\n    Given three unit vectors A, B, and D (all assumed to be coplanar)\r\n    and a target cosine similarity (target) for the slerp result C with D,\r\n    compute the interpolation parameter t such that:\r\n        
C = slerp(t, A, B)\r\n        and cos(C, D) ≈ target.\r\n\r\n    Args:\r\n        A: Tensor of shape (D,), starting vector.\r\n        B: Tensor of shape (D,), ending vector.\r\n        D: Tensor of shape (D,), the reference vector.\r\n        target: Desired cosine similarity between C and D.\r\n\r\n    Returns:\r\n        t: A float between 0 and 1.\r\n    \"\"\"\r\n    A = A / (A.norm() + 1e-8)\r\n    B = B / (B.norm() + 1e-8)\r\n    D = D / (D.norm() + 1e-8)\r\n    \r\n    alpha = math.acos(max(-1.0, min(1.0, float(torch.dot(D, A))))) # angle between D and A\r\n    beta  = math.acos(max(-1.0, min(1.0, float(torch.dot(D, B))))) # angle between D and B\r\n    \r\n    delta = math.acos(max(-1.0, min(1.0, target))) # target cosine similarity... angle etc...\r\n    \r\n    if abs(beta - alpha) < 1e-6:\r\n        return 0.5\r\n    \r\n    t = (delta - alpha) / (beta - alpha)\r\n    t = max(0.0, min(1.0, t))\r\n    return t\r\n\r\n\r\n\r\n# TENSOR NORMALIZATION OPS\r\n\r\ndef normalize_zscore(x, channelwise=False, inplace=False):\r\n    if inplace:\r\n        if channelwise:\r\n            return x.sub_(x.mean(dim=(-2,-1), keepdim=True)).div_(x.std(dim=(-2,-1), keepdim=True))\r\n        else:\r\n            return x.sub_(x.mean()).div_(x.std())\r\n    else:\r\n        if channelwise:\r\n            return (x - x.mean(dim=(-2,-1), keepdim=True)) / x.std(dim=(-2,-1), keepdim=True)\r\n        else:\r\n            return (x - x.mean()) / x.std()\r\n\r\ndef latent_normalize_channels(x):\r\n    mean = x.mean(dim=(-2, -1), keepdim=True)\r\n    std  = x.std (dim=(-2, -1), keepdim=True)\r\n    return  (x - mean) / std\r\n\r\ndef latent_stdize_channels(x):\r\n    std  = x.std (dim=(-2, -1), keepdim=True)\r\n    return  x / std\r\n\r\ndef latent_meancenter_channels(x):\r\n    mean = x.mean(dim=(-2, -1), keepdim=True)\r\n    return  x - mean\r\n\r\n\r\n\r\n# TENSOR INTERPOLATION OPS\r\n\r\ndef lagrange_interpolation(x_values, y_values, x_new):\r\n\r\n    if not isinstance(x_values, torch.Tensor):\r\n        x_values = torch.tensor(x_values, dtype=torch.get_default_dtype())\r\n    if x_values.ndim != 1:\r\n        raise ValueError(\"x_values must be a 1D tensor or a list of scalars.\")\r\n\r\n    if not isinstance(x_new, torch.Tensor):\r\n        x_new = torch.tensor(x_new, dtype=x_values.dtype, device=x_values.device)\r\n    if x_new.ndim == 0:\r\n        x_new = x_new.unsqueeze(0)\r\n\r\n    if isinstance(y_values, list):\r\n        y_values = torch.stack(y_values, dim=0)\r\n    if y_values.ndim < 1:\r\n        raise ValueError(\"y_values must have at least one dimension (the sample dimension).\")\r\n\r\n    n = x_values.shape[0]\r\n    if y_values.shape[0] != n:\r\n        raise ValueError(f\"Mismatch: x_values has length {n} but y_values has {y_values.shape[0]} samples.\")\r\n\r\n    m = x_new.shape[0]\r\n    result_shape = (m,) + y_values.shape[1:]\r\n    result = torch.zeros(result_shape, dtype=y_values.dtype, device=y_values.device)\r\n\r\n    for i in range(n):\r\n        Li = torch.ones_like(x_new, dtype=y_values.dtype, device=y_values.device)\r\n        xi = x_values[i]\r\n        for j in range(n):\r\n            if i == j:\r\n                continue\r\n            xj = x_values[j]\r\n            Li = Li * ((x_new - xj) / (xi - xj))\r\n        extra_dims = (1,) * (y_values.ndim - 1)\r\n        Li = Li.view(m, *extra_dims)\r\n        result = result + Li * y_values[i]\r\n\r\n    return result\r\n\r\ndef line_intersection(a: torch.Tensor, d1: torch.Tensor, b: torch.Tensor, d2: torch.Tensor, 
eps=1e-8) -> torch.Tensor:\r\n    \"\"\"\r\n    Computes the intersection (or closest point average) of two lines in R^D.\r\n    \r\n    The first line is defined by:  L1: x = a + t * d1\r\n    The second line is defined by: L2: x = b + s * d2\r\n    \r\n    If the lines do not exactly intersect, this function returns the average of the closest points.\r\n    \r\n    a, d1, b, d2: Tensors of shape (D,) or with an extra batch dimension (B, D).\r\n    Returns: Tensor of shape (D,) or (B, D) representing the intersection (or midpoint of closest approach).\r\n    \"\"\"\r\n    # Compute dot products\r\n    d1d1 = (d1 * d1).sum(dim=-1, keepdim=True)  # shape (B,1) or (1,)\r\n    d2d2 = (d2 * d2).sum(dim=-1, keepdim=True)\r\n    d1d2 = (d1 * d2).sum(dim=-1, keepdim=True)\r\n    \r\n    r = b - a  # shape (B, D) or (D,)\r\n    r_d1 = (r * d1).sum(dim=-1, keepdim=True)\r\n    r_d2 = (r * d2).sum(dim=-1, keepdim=True)\r\n    \r\n    # Solve for t and s:\r\n    # t * d1d1 - s * d1d2 = r_d1\r\n    # t * d1d2 - s * d2d2 = r_d2\r\n    # Solve using determinants:\r\n    denom = d1d1 * d2d2 - d1d2 * d1d2\r\n    # Avoid division by zero\r\n    denom = torch.where(denom.abs() < eps, torch.full_like(denom, eps), denom)\r\n    t = (r_d1 * d2d2 - r_d2 * d1d2) / denom\r\n    s = (r_d1 * d1d2 - r_d2 * d1d1) / denom\r\n    \r\n    point1 = a + t * d1\r\n    point2 = b + s * d2\r\n    # If they intersect exactly, point1 and point2 are identical.\r\n    # Otherwise, return the midpoint of the closest points.\r\n    return (point1 + point2) / 2\r\n\r\ndef slerp_direction(t: float, u0: torch.Tensor, u1: torch.Tensor, DOT_THRESHOLD=0.9995) -> torch.Tensor:\r\n    dot = (u0 * u1).sum(-1).clamp(-1.0, 1.0) #u0, u1 are unit vectors... should not be affected by clamp\r\n    if dot.item() > DOT_THRESHOLD: # u0, u1 nearly aligned, fallback to lerp\r\n        return torch.lerp(u0, u1, t)\r\n    theta_0     = torch.acos(dot)\r\n    sin_theta_0 = torch.sin(theta_0)\r\n    theta_t     = theta_0 * t\r\n    sin_theta_t = torch.sin(theta_t)\r\n    s0          = torch.sin(theta_0 - theta_t) / sin_theta_0\r\n    s1          = sin_theta_t / sin_theta_0\r\n    return s0 * u0 + s1 * u1\r\n\r\ndef magnitude_aware_interpolation(t: float, v0: torch.Tensor, v1: torch.Tensor) -> torch.Tensor:\r\n\r\n    m0 = v0.norm(dim=-1, keepdim=True)\r\n    m1 = v1.norm(dim=-1, keepdim=True)\r\n\r\n    u0 = v0 / (m0 + 1e-8)\r\n    u1 = v1 / (m1 + 1e-8)\r\n    \r\n    u = slerp_direction(t, u0, u1)\r\n    \r\n    m = (1 - t) * m0 + t * m1 # interpolate magnitudes linearly\r\n    return m * u\r\n\r\n\r\ndef slerp_tensor(val: torch.Tensor, low: torch.Tensor, high: torch.Tensor, dim=-3) -> torch.Tensor:\r\n    #dim = (2,3)\r\n    if low.ndim == 4 and low.shape[-3] > 1:\r\n        dim=-3\r\n    elif low.ndim == 5 and low.shape[-3] > 1:\r\n        dim=-4\r\n    elif low.ndim == 2:\r\n        dim=(-2,-1)\r\n        \r\n    if type(val) == float:\r\n        val = torch.Tensor([val]).expand_as(low).to(low.dtype).to(low.device)\r\n        \r\n    if val.shape != low.shape:\r\n        val = val.expand_as(low)\r\n        \r\n    low_norm = low / (torch.norm(low, dim=dim, keepdim=True))\r\n    high_norm = high / (torch.norm(high, dim=dim, keepdim=True))\r\n    \r\n    dot = (low_norm * high_norm).sum(dim=dim, keepdim=True).clamp(-1.0, 1.0)\r\n    \r\n    #near = ~(-0.9995 < dot < 0.9995) #dot > 0.9995 or dot < -0.9995\r\n    near = dot > 0.9995\r\n    opposite = dot < -0.9995\r\n\r\n    condition = torch.logical_or(near, opposite)\r\n    \r\n    omega = 
torch.acos(dot)\r\n    so = torch.sin(omega)\r\n\r\n    if val.ndim < low.ndim:\r\n        val = val.unsqueeze(dim)\r\n    \r\n    factor_low = torch.sin((1 - val) * omega) / so\r\n    factor_high = torch.sin(val * omega) / so\r\n\r\n    res = factor_low * low + factor_high * high\r\n    res = torch.where(condition, low * (1 - val) + high * val, res)\r\n    return res\r\n\r\n\r\n\r\n\r\n# pytorch slerp implementation from https://gist.github.com/Birch-san/230ac46f99ec411ed5907b0a3d728efa\r\nfrom torch import FloatTensor, LongTensor, Tensor, Size, lerp, zeros_like\r\nfrom torch.linalg import norm\r\n\r\n# adapted to PyTorch from:\r\n# https://gist.github.com/dvschultz/3af50c40df002da3b751efab1daddf2c\r\n# most of the extra complexity is to support:\r\n# - many-dimensional vectors\r\n# - v0 or v1 with last dim all zeroes, or v0 ~colinear with v1\r\n#   - falls back to lerp()\r\n#   - conditional logic implemented with parallelism rather than Python loops\r\n# - many-dimensional tensor for t\r\n#   - you can ask for batches of slerp outputs by making t more-dimensional than the vectors\r\n#   -   slerp(\r\n#         v0:   torch.Size([2,3]),\r\n#         v1:   torch.Size([2,3]),\r\n#         t:  torch.Size([4,1,1]), \r\n#       )\r\n#   - this makes it interface-compatible with lerp()\r\n\r\ndef slerp(v0: FloatTensor, v1: FloatTensor, t: float|FloatTensor, DOT_THRESHOLD=0.9995):\r\n    '''\r\n    Spherical linear interpolation\r\n    Args:\r\n        v0: Starting vector\r\n        v1: Final vector\r\n        t: Float value between 0.0 and 1.0\r\n        DOT_THRESHOLD: Threshold for considering the two vectors as\r\n        colinear. Not recommended to alter this.\r\n    Returns:\r\n        Interpolation vector between v0 and v1\r\n    '''\r\n    assert v0.shape == v1.shape, \"shapes of v0 and v1 must match\"\r\n    \r\n    # Normalize the vectors to get the directions and angles\r\n    v0_norm: FloatTensor = norm(v0, dim=-1)\r\n    v1_norm: FloatTensor = norm(v1, dim=-1)\r\n    \r\n    v0_normed: FloatTensor = v0 / v0_norm.unsqueeze(-1)\r\n    v1_normed: FloatTensor = v1 / v1_norm.unsqueeze(-1)\r\n    \r\n    # Dot product with the normalized vectors\r\n    dot: FloatTensor = (v0_normed * v1_normed).sum(-1)\r\n    dot_mag: FloatTensor = dot.abs()\r\n    \r\n    # if dp is NaN, it's because the v0 or v1 row was filled with 0s\r\n    # If absolute value of dot product is almost 1, vectors are ~colinear, so use lerp\r\n    gotta_lerp: LongTensor = dot_mag.isnan() | (dot_mag > DOT_THRESHOLD)\r\n    can_slerp: LongTensor = ~gotta_lerp\r\n    \r\n    t_batch_dim_count: int = max(0, t.ndim-v0.ndim) if isinstance(t, Tensor) else 0\r\n    t_batch_dims: Size = t.shape[:t_batch_dim_count] if isinstance(t, Tensor) else Size([])\r\n    out: FloatTensor = zeros_like(v0.expand(*t_batch_dims, *[-1]*v0.ndim))\r\n    \r\n    # if no elements are lerpable, our vectors become 0-dimensional, preventing broadcasting\r\n    if gotta_lerp.any():\r\n        lerped: FloatTensor = lerp(v0, v1, t)\r\n    \r\n        out: FloatTensor = lerped.where(gotta_lerp.unsqueeze(-1), out)\r\n    \r\n    # if no elements are slerpable, our vectors become 0-dimensional, preventing broadcasting\r\n    if can_slerp.any():\r\n    \r\n        # Calculate initial angle between v0 and v1\r\n        theta_0: FloatTensor = dot.arccos().unsqueeze(-1)\r\n        sin_theta_0: FloatTensor = theta_0.sin()\r\n        # Angle at timestep t\r\n        theta_t: FloatTensor = theta_0 * t\r\n        sin_theta_t: FloatTensor = theta_t.sin()\r\n        # 
Finish the slerp algorithm\r\n        s0: FloatTensor = (theta_0 - theta_t).sin() / sin_theta_0\r\n        s1: FloatTensor = sin_theta_t / sin_theta_0\r\n        slerped: FloatTensor = s0 * v0 + s1 * v1\r\n    \r\n        out: FloatTensor = slerped.where(can_slerp.unsqueeze(-1), out)\r\n    \r\n    return out\r\n\r\n\r\n\r\n# this is silly...\r\ndef normalize_latent(target, source=None, mean=True, std=True, set_mean=None, set_std=None, channelwise=True):\r\n    target = target.clone()\r\n    source = source.clone() if source is not None else None\r\n    def normalize_single_latent(single_target, single_source=None):\r\n        y = torch.zeros_like(single_target)\r\n        for b in range(y.shape[0]):\r\n            if channelwise:\r\n                for c in range(y.shape[1]):\r\n                    single_source_mean = single_source[b][c].mean() if set_mean is None else set_mean\r\n                    single_source_std  = single_source[b][c].std()  if set_std  is None else set_std\r\n                    \r\n                    if mean and std:\r\n                        y[b][c] = (single_target[b][c] - single_target[b][c].mean()) / single_target[b][c].std()\r\n                        if single_source is not None:\r\n                            y[b][c] = y[b][c] * single_source_std + single_source_mean\r\n                    elif mean:\r\n                        y[b][c] = single_target[b][c] - single_target[b][c].mean()\r\n                        if single_source is not None:\r\n                            y[b][c] = y[b][c] + single_source_mean\r\n                    elif std:\r\n                        y[b][c] = single_target[b][c] / single_target[b][c].std()\r\n                        if single_source is not None:\r\n                            y[b][c] = y[b][c] * single_source_std\r\n            else:\r\n                single_source_mean = single_source[b].mean() if set_mean is None else set_mean\r\n                single_source_std  = single_source[b].std()  if set_std  is None else set_std\r\n                \r\n                if mean and std:\r\n                    y[b] = (single_target[b] - single_target[b].mean()) / single_target[b].std()\r\n                    if single_source is not None:\r\n                        y[b] = y[b] * single_source_std + single_source_mean\r\n                elif mean:\r\n                    y[b] = single_target[b] - single_target[b].mean()\r\n                    if single_source is not None:\r\n                        y[b] = y[b] + single_source_mean\r\n                elif std:\r\n                    y[b] = single_target[b] / single_target[b].std()\r\n                    if single_source is not None:\r\n                        y[b] = y[b] * single_source_std\r\n        return y\r\n\r\n    if isinstance(target, (list, tuple)):\r\n        if source is not None:\r\n            assert isinstance(source, (list, tuple)) and len(source) == len(target), \\\r\n                \"If target is a list/tuple, source must be a list/tuple of the same length.\"\r\n            return [normalize_single_latent(t, s) for t, s in zip(target, source)]\r\n        else:\r\n            return [normalize_single_latent(t) for t in target]\r\n    else:\r\n        return normalize_single_latent(target, source)\r\n\r\n\r\n\r\ndef hard_light_blend(base_latent, blend_latent):\r\n    if base_latent.sum() == 0 and base_latent.std() == 0:\r\n        return base_latent\r\n    \r\n    blend_latent = (blend_latent - blend_latent.min()) / (blend_latent.max() - blend_latent.min())\r\n\r\n 
   positive_mask = base_latent >= 0\r\n    negative_mask = base_latent < 0\r\n    \r\n    positive_latent = base_latent * positive_mask.float()\r\n    negative_latent = base_latent * negative_mask.float()\r\n\r\n    positive_result = torch.where(blend_latent < 0.5,\r\n                                  2 * positive_latent * blend_latent,\r\n                                  1 - 2 * (1 - positive_latent) * (1 - blend_latent))\r\n\r\n    negative_result = torch.where(blend_latent < 0.5,\r\n                                  2 * negative_latent.abs() * blend_latent,\r\n                                  1 - 2 * (1 - negative_latent.abs()) * (1 - blend_latent))\r\n    \r\n    negative_result = -negative_result\r\n\r\n    combined_result = positive_result * positive_mask.float() + negative_result * negative_mask.float()\r\n\r\n    #combined_result *= base_latent.max()\r\n    \r\n    ks  = combined_result\r\n    ks2 = torch.zeros_like(base_latent)\r\n    for n in range(base_latent.shape[1]):\r\n        ks2[0][n] = (ks[0][n]) / ks[0][n].std()\r\n        ks2[0][n] = (ks2[0][n] * base_latent[0][n].std())\r\n    combined_result = ks2\r\n    \r\n    return combined_result\r\n\r\n\r\n\r\n\r\ndef make_checkerboard(tile_size: int, num_tiles: int, dtype=torch.float16, device=\"cpu\"):\r\n    pattern = torch.tensor([[0, 1], [1, 0]], dtype=dtype, device=device)\r\n    board = pattern.repeat(num_tiles // 2 + 1, num_tiles // 2 + 1)[:num_tiles, :num_tiles]\r\n    board_expanded = board.repeat_interleave(tile_size, dim=0).repeat_interleave(tile_size, dim=1)\r\n    return board_expanded\r\n\r\n\r\n\r\ndef get_edge_mask_slug(mask: torch.Tensor, dilation: int = 3) -> torch.Tensor:\r\n\r\n    mask = mask.float()\r\n    \r\n    eroded = -F.max_pool2d(-mask.unsqueeze(0).unsqueeze(0), kernel_size=3, stride=1, padding=1)\r\n    eroded = eroded.squeeze(0).squeeze(0)\r\n    \r\n    edge = mask - eroded\r\n    edge = (edge > 0).float()\r\n    \r\n    dilated_edge = F.max_pool2d(edge.unsqueeze(0).unsqueeze(0), kernel_size=dilation, stride=1, padding=dilation//2)\r\n    dilated_edge = dilated_edge.squeeze(0).squeeze(0)\r\n    \r\n    return dilated_edge\r\n\r\n\r\n\r\ndef get_edge_mask(mask: torch.Tensor, dilation: int = 3) -> torch.Tensor:\r\n    if dilation == 0:                                                         # safeguard for zero kernel size...\r\n        return mask\r\n    mask_tmp = mask.squeeze().to('cuda')\r\n    mask_tmp = mask_tmp.float()\r\n    \r\n    eroded = -F.max_pool2d(-mask_tmp.unsqueeze(0).unsqueeze(0), kernel_size=3, stride=1, padding=1)\r\n    eroded = eroded.squeeze(0).squeeze(0)\r\n    \r\n    edge = mask_tmp - eroded\r\n    edge = (edge > 0).float()\r\n    \r\n    dilated_edge = F.max_pool2d(edge.unsqueeze(0).unsqueeze(0), kernel_size=dilation, stride=1, padding=dilation//2)\r\n    dilated_edge = dilated_edge.squeeze(0).squeeze(0)\r\n    \r\n    return dilated_edge[...,:mask.shape[-2], :mask.shape[-1]].view_as(mask).to(mask.device)\r\n\r\n\r\n\r\ndef checkerboard_variable(widths, dtype=torch.float16, device='cpu'):\r\n    total = sum(widths)\r\n    mask = torch.zeros((total, total), dtype=dtype, device=device)\r\n\r\n    x_start = 0\r\n    for i, w_x in enumerate(widths):\r\n        y_start = 0\r\n        for j, w_y in enumerate(widths):\r\n            if (i + j) % 2 == 0:  # checkerboard logic\r\n                mask[x_start:x_start+w_x, y_start:y_start+w_y] = 1.0\r\n            y_start += w_y\r\n        x_start += w_x\r\n\r\n    return mask\r\n\r\n\r\n\r\n\r\n\r\ndef interpolate_spd(cov1, 
cov2, t, eps=1e-5):\r\n    \"\"\"\r\n    Geodesic interpolation on the SPD manifold between cov1 and cov2.\r\n\r\n    Args:\r\n      cov1, cov2: [D×D] symmetric positive-definite covariances (torch.Tensor).\r\n      t:         interpolation factor in [0,1].\r\n      eps:       jitter added to diagonal for numerical stability.\r\n\r\n    Returns:\r\n      cov_t:     the SPD matrix at fraction t along the geodesic from cov1 to cov2.\r\n    \"\"\"\r\n    cov1 = cov1.double()\r\n    cov2 = cov2.double()\r\n\r\n    M1 = cov1.clone()\r\n    M1.diagonal().add_(eps)\r\n    M2 = cov2.clone()\r\n    M2.diagonal().add_(eps)\r\n\r\n    S1, U1 = torch.linalg.eigh(M1)\r\n    S1_clamped = S1.clamp(min=eps)\r\n    inv_sqrt_S1 = S1_clamped.rsqrt()\r\n    M1_inv_sqrt = U1 @ torch.diag(inv_sqrt_S1) @ U1.T\r\n\r\n    middle = M1_inv_sqrt @ M2 @ M1_inv_sqrt\r\n\r\n    Sm, Um = torch.linalg.eigh(middle)\r\n    Sm_clamped = Sm.clamp(min=eps)\r\n\r\n    Sm_t = Sm_clamped.pow(t)\r\n\r\n    middle_t = Um @ torch.diag(Sm_t) @ Um.T\r\n\r\n    sqrt_S1 = S1_clamped.sqrt()\r\n    M1_sqrt = U1 @ torch.diag(sqrt_S1) @ U1.T\r\n\r\n    cov_t = M1_sqrt @ middle_t @ M1_sqrt\r\n\r\n    return cov_t.to(cov1.dtype) \r\n\r\n\r\n\r\n\r\n\r\ndef tile_latent(latent: torch.Tensor,\r\n                tile_size: Tuple[int,int]\r\n                ) -> Tuple[torch.Tensor,\r\n                           Tuple[int,...],\r\n                           Tuple[int,int],\r\n                           Tuple[List[int],List[int]]]:\r\n    \"\"\"\r\n    Split `latent` into spatial tiles of shape (t_h, t_w).\r\n    Works on either:\r\n       - 4D [B,C,H,W]\r\n       - 5D [B,C,T,H,W]\r\n    Returns:\r\n        tiles:      [B*rows*cols, C, (T,), t_h, t_w]\r\n        orig_shape: the full shape of `latent`\r\n        tile_hw:    (t_h, t_w)\r\n        positions:  (pos_h, pos_w) lists of start y and x positions\r\n    \"\"\"\r\n    *lead, H, W = latent.shape\r\n    B, C = lead[0], lead[1]\r\n    has_time = (latent.ndim == 5)\r\n    if has_time:\r\n        T = lead[2]\r\n    t_h, t_w = tile_size\r\n\r\n    rows = (H + t_h - 1) // t_h\r\n    cols = (W + t_w - 1) // t_w\r\n\r\n    if rows == 1:\r\n        pos_h = [0]\r\n    else:\r\n        pos_h = [round(i*(H - t_h)/(rows-1)) for i in range(rows)]\r\n    if cols == 1:\r\n        pos_w = [0]\r\n    else:\r\n        pos_w = [round(j*(W - t_w)/(cols-1)) for j in range(cols)]\r\n\r\n    tiles = []\r\n    for y in pos_h:\r\n        for x in pos_w:\r\n            if has_time:\r\n                tile = latent[:, :, :, y:y+t_h, x:x+t_w]\r\n            else:\r\n                tile = latent[:, :,    y:y+t_h, x:x+t_w]\r\n            tiles.append(tile)\r\n\r\n    tiles = torch.cat(tiles, dim=0)\r\n    orig_shape = tuple(latent.shape)\r\n    return tiles, orig_shape, (t_h, t_w), (pos_h, pos_w)\r\n\r\n\r\ndef untile_latent(tiles: torch.Tensor,\r\n                  orig_shape: Tuple[int,...],\r\n                  tile_hw: Tuple[int,int],\r\n                  positions: Tuple[List[int],List[int]]\r\n                  ) -> torch.Tensor:\r\n    \"\"\"\r\n    Reconstruct latent from tiles + their start positions.\r\n    Works on either 4D or 5D original.\r\n    Args:\r\n      tiles:      [B*rows*cols, C, (T,), t_h, t_w]\r\n      orig_shape: shape of original latent (B,C,H,W) or (B,C,T,H,W)\r\n      tile_hw:    (t_h, t_w)\r\n      positions:  (pos_h, pos_w)\r\n    Returns:\r\n      reconstructed latent of shape `orig_shape`\r\n    \"\"\"\r\n    *lead, H, W = orig_shape\r\n    B, C = lead[0], lead[1]\r\n    has_time = 
(len(orig_shape) == 5)\r\n    if has_time:\r\n        T = lead[2]\r\n    t_h, t_w = tile_hw\r\n    pos_h, pos_w = positions\r\n    rows, cols = len(pos_h), len(pos_w)\r\n\r\n    if has_time:\r\n        out = torch.zeros(B, C, T, H, W, device=tiles.device, dtype=tiles.dtype)\r\n        count = torch.zeros_like(out)\r\n        # tile_latent concatenates tile-major along dim 0, so the layout is (row, col, batch)\r\n        tiles = tiles.view(rows, cols, B, C, T, t_h, t_w)\r\n        for bi in range(B):\r\n            for i, y in enumerate(pos_h):\r\n                for j, x in enumerate(pos_w):\r\n                    tile = tiles[i, j, bi]\r\n                    out[bi, :, :, y:y+t_h, x:x+t_w] += tile\r\n                    count[bi, :, :, y:y+t_h, x:x+t_w] += 1\r\n    else:\r\n        out = torch.zeros(B, C, H, W, device=tiles.device, dtype=tiles.dtype)\r\n        count = torch.zeros_like(out)\r\n        # same tile-major layout as above\r\n        tiles = tiles.view(rows, cols, B, C, t_h, t_w)\r\n        for bi in range(B):\r\n            for i, y in enumerate(pos_h):\r\n                for j, x in enumerate(pos_w):\r\n                    tile = tiles[i, j, bi]\r\n                    out[bi, :, y:y+t_h, x:x+t_w] += tile\r\n                    count[bi, :, y:y+t_h, x:x+t_w] += 1\r\n\r\n    valid = count > 0\r\n    out[valid] = out[valid] / count[valid]\r\n    return out\r\n\r\n\r\n\r\ndef upscale_to_match_spatial(tensor_5d, ref_4d, mode='bicubic'):\r\n    \"\"\"\r\n    Upscales a 5D tensor [B, C, T, H1, W1] to match the spatial size of a 4D tensor [1, C, H2, W2].\r\n    \r\n    Args:\r\n        tensor_5d: Tensor of shape [B, C, T, H1, W1]\r\n        ref_4d: Tensor of shape [1, C, H2, W2] — used as spatial reference\r\n        mode: Interpolation mode ('bilinear' or 'bicubic')\r\n    \r\n    Returns:\r\n        Resized tensor of shape [B, C, T, H2, W2]\r\n    \"\"\"\r\n    b, c, t, _, _ = tensor_5d.shape\r\n    _, _, h_target, w_target = ref_4d.shape\r\n\r\n    tensor_reshaped = tensor_5d.reshape(b * c, t, tensor_5d.shape[-2], tensor_5d.shape[-1])\r\n    upscaled = F.interpolate(tensor_reshaped, size=(h_target, w_target), mode=mode, align_corners=False)\r\n    return upscaled.view(b, c, t, h_target, w_target)\r\n\r\n\r\n\r\n\r\n\r\ndef gaussian_blur_2d(img: torch.Tensor, sigma: float, kernel_size: int = None) -> torch.Tensor:\r\n    B, C, H, W = img.shape\r\n    dtype = img.dtype\r\n    device = img.device\r\n\r\n    if kernel_size is None:\r\n        kernel_size = int(2 * math.ceil(3 * sigma) + 1)\r\n\r\n    if kernel_size % 2 == 0:\r\n        kernel_size += 1\r\n\r\n    coords = torch.arange(kernel_size, dtype=torch.float64) - kernel_size // 2\r\n    g = torch.exp(-0.5 * (coords / sigma) ** 2)\r\n    g = g / g.sum()\r\n\r\n    kernel_2d = g[:, None] * g[None, :]\r\n    kernel_2d = kernel_2d.to(dtype=dtype, device=device)\r\n\r\n    kernel = kernel_2d.expand(C, 1, kernel_size, kernel_size)\r\n\r\n    pad = kernel_size // 2\r\n    img_padded = F.pad(img, (pad, pad, pad, pad), mode='reflect')\r\n\r\n    return F.conv2d(img_padded, kernel, groups=C)\r\n\r\n\r\ndef median_blur_2d(img: torch.Tensor, kernel_size: int = 3) -> torch.Tensor:\r\n    if kernel_size % 2 == 0:\r\n        kernel_size += 1\r\n    pad = kernel_size // 2\r\n\r\n    B, C, H, W = img.shape\r\n    img_padded = F.pad(img, (pad, pad, pad, pad), mode='reflect')\r\n\r\n    unfolded = img_padded.unfold(2, kernel_size, 1).unfold(3, kernel_size, 1)\r\n    # unfolded: [B, C, H, W, kH, kW] → flatten to patches\r\n    patches = unfolded.contiguous().view(B, C, H, W, -1)\r\n    median = patches.median(dim=-1).values\r\n    return median\r\n\r\ndef 
apply_to_state_info_tensors(obj, ref_shape, modify_func, *args, **kwargs):\r\n    \"\"\"\r\n    Recursively traverse obj and apply modify_func to tensors whose last 5 dimensions\r\n    match ref_shape's last 5 dimensions.\r\n    Used to apply function to all relevant tensors in latent state_info.\r\n    \r\n    Args:\r\n        obj: The object to traverse (dict, list, tuple, tensor, etc.)\r\n        ref_shape: Reference tensor shape to match against\r\n        modify_func: Function to apply to matching tensors. Should accept (tensor, *args, **kwargs)\r\n        *args, **kwargs: Additional arguments passed to modify_func\r\n    \r\n    Returns:\r\n        Modified structure with applicable tensors transformed\r\n    \"\"\"\r\n    import torch\r\n\r\n    if isinstance(obj, torch.Tensor):\r\n        if obj.ndim >= 5:\r\n            # Check if last 5 dims match reference\r\n            obj_last5 = obj.shape[-5:]\r\n            ref_last5 = ref_shape[-5:] if len(ref_shape) >= 5 else ref_shape\r\n            if obj_last5 == ref_last5:\r\n                return modify_func(obj, *args, **kwargs)\r\n        return obj\r\n\r\n    if isinstance(obj, dict):\r\n        changed = False\r\n        out = {}\r\n        for k, v in obj.items():\r\n            nv = apply_to_state_info_tensors(v, ref_shape, modify_func, *args, **kwargs)\r\n            changed |= (nv is not v)\r\n            out[k] = nv\r\n        return out if changed else obj\r\n\r\n    if isinstance(obj, list):\r\n        changed = False\r\n        out = []\r\n        for v in obj:\r\n            nv = apply_to_state_info_tensors(v, ref_shape, modify_func, *args, **kwargs)\r\n            changed |= (nv is not v)\r\n            out.append(nv)\r\n        return out if changed else obj\r\n\r\n    if isinstance(obj, tuple):\r\n        new_t = tuple(apply_to_state_info_tensors(v, ref_shape, modify_func, *args, **kwargs) for v in obj)\r\n        if all(ov is nv for ov, nv in zip(obj, new_t)):\r\n            return obj\r\n        return new_t\r\n\r\n    return obj\r\n\r\n"
  },
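A quick round-trip sanity check for `tile_latent` / `untile_latent` above; the latent size and tile size are arbitrary, and the two functions are assumed to be in scope. Overlapping tiles carry identical values here, so averaging on reconstruction restores the input exactly:

```python
import torch

latent = torch.randn(1, 4, 96, 96)
tiles, orig_shape, tile_hw, positions = tile_latent(latent, tile_size=(64, 64))
# 96x96 split into 64x64 tiles -> 2x2 tiles, start positions spread evenly
print(tiles.shape, positions)  # torch.Size([4, 4, 64, 64]) ([0, 32], [0, 32])

recon = untile_latent(tiles, orig_shape, tile_hw, positions)
print(torch.allclose(recon, latent))  # True: overlaps average back to the original
```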
  {
    "path": "legacy/__init__.py",
    "content": "from . import legacy_samplers\r\nfrom . import legacy_sampler_rk\r\n\r\nfrom . import rk_sampler\r\nfrom . import samplers\r\nfrom . import samplers_extensions\r\nfrom . import samplers_tiled\r\n\r\n\r\n\r\ndef add_legacy(NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS, extra_samplers):\r\n    \r\n    NODE_CLASS_MAPPINGS.update({\r\n        \"Legacy_ClownSampler\"                 : legacy_samplers.Legacy_SamplerRK,\r\n        \"Legacy_SharkSampler\"                 : legacy_samplers.Legacy_SharkSampler,\r\n        \"Legacy_ClownsharKSampler\"            : legacy_samplers.Legacy_ClownsharKSampler,\r\n        \"Legacy_ClownsharKSamplerGuides\"      : legacy_samplers.Legacy_ClownsharKSamplerGuides,\r\n        \r\n        \"ClownSampler\"                        : samplers.ClownSampler,\r\n        \"ClownSamplerAdvanced\"                : samplers.ClownSamplerAdvanced,\r\n        \"ClownsharKSampler\"                   : samplers.ClownsharKSampler,\r\n        \r\n        \r\n        \"ClownsharKSamplerGuides\"             : samplers_extensions.ClownsharKSamplerGuides,\r\n        \"ClownsharKSamplerGuide\"              : samplers_extensions.ClownsharKSamplerGuide,\r\n    \r\n        \"ClownOptions_SDE_Noise\"              : samplers_extensions.ClownOptions_SDE_Noise,\r\n        \"ClownOptions_FrameWeights\"           : samplers_extensions.ClownOptions_FrameWeights,\r\n    \r\n        \"ClownInpaint\"                        : samplers_extensions.ClownInpaint,\r\n        \"ClownInpaintSimple\"                  : samplers_extensions.ClownInpaintSimple,\r\n\r\n        \"ClownsharKSamplerOptions\"            : samplers_extensions.ClownsharKSamplerOptions,\r\n\r\n        \"ClownsharKSamplerAutomation\"         : samplers_extensions.ClownsharKSamplerAutomation,\r\n        \"ClownsharKSamplerAutomation_Advanced\": samplers_extensions.ClownsharKSamplerAutomation_Advanced,\r\n        \"SamplerOptions_TimestepScaling\"      : samplers_extensions.SamplerOptions_TimestepScaling,\r\n        \"SamplerOptions_GarbageCollection\"    : samplers_extensions.SamplerOptions_GarbageCollection,\r\n\r\n        \"UltraSharkSampler\"                   : samplers.UltraSharkSampler,\r\n\r\n        \"UltraSharkSampler Tiled\"             : samplers_tiled.UltraSharkSampler_Tiled,\r\n    })\r\n    \r\n    NODE_DISPLAY_NAME_MAPPINGS.update({\r\n        \"Legacy_SamplerRK\"                            : \"Legacy_ClownSampler\",\r\n        \"Legacy_SharkSampler\"                         : \"Legacy_SharkSampler\",\r\n        \"Legacy_ClownsharKSampler\"                    : \"Legacy_ClownsharKSampler\",\r\n        \"Legacy_ClownsharKSamplerGuides\"              : \"Legacy_ClownsharKSamplerGuides\",\r\n        \r\n        \"ClownSampler\"                                : \"Legacy2_ClownSampler\",\r\n        \"ClownSamplerAdvanced\"                        : \"Legacy2_ClownSamplerAdvanced\",\r\n        \"ClownsharKSampler\"                           : \"Legacy2_ClownsharKSampler\",\r\n        \r\n        \"ClownsharKSamplerGuides\"                     : \"Legacy2_ClownsharKSamplerGuides\",\r\n        \"ClownsharKSamplerGuide\"                      : \"Legacy2_ClownsharKSamplerGuide\",\r\n\r\n        \"ClownOptions_SDE_Noise\"                      : \"Legacy2_ClownOptions_SDE_Noise\",\r\n        \"ClownOptions_FrameWeights\"                   : \"Legacy2_ClownOptions_FrameWeights\",\r\n\r\n        \"ClownInpaint\"                                : \"Legacy2_ClownInpaint\",\r\n        \"ClownInpaintSimple\"                
          : \"Legacy2_ClownInpaintSimple\",\r\n\r\n        \"ClownsharKSamplerOptions\"                    : \"Legacy2_ClownsharKSamplerOptions\",\r\n\r\n        \"ClownsharKSamplerAutomation\"                 : \"Legacy2_ClownsharKSamplerAutomation\",\r\n        \"ClownsharKSamplerAutomation_Advanced\"        : \"Legacy2_ClownsharKSamplerAutomation_Advanced\",\r\n        \"SamplerOptions_TimestepScaling\"              : \"Legacy2_SamplerOptions_TimestepScaling\",\r\n        \"SamplerOptions_GarbageCollection\"            : \"Legacy2_SamplerOptions_GarbageCollection\",\r\n\r\n        \"UltraSharkSampler\"                           : \"Legacy2_UltraSharkSampler\",\r\n        \"UltraSharkSampler_Tiled\"                     : \"Legacy2_UltraSharkSampler Tiled\",\r\n    })\r\n\r\n\r\n\r\n    extra_samplers.update({\r\n        #\"res_2m\"     : rk_sampler.sample_res_2m,\r\n        #\"res_2s\"     : rk_sampler.sample_res_2s,\r\n        #\"res_3s\"     : rk_sampler.sample_res_3s,\r\n        #\"res_5s\"     : rk_sampler.sample_res_5s,\r\n        #\"res_6s\"     : rk_sampler.sample_res_6s,\r\n        #\"res_2m_sde\" : rk_sampler.sample_res_2m_sde,\r\n        #\"res_2s_sde\" : rk_sampler.sample_res_2s_sde,\r\n        #\"res_3s_sde\" : rk_sampler.sample_res_3s_sde,\r\n        #\"res_5s_sde\" : rk_sampler.sample_res_5s_sde,\r\n        #\"res_6s_sde\" : rk_sampler.sample_res_6s_sde,\r\n        #\"deis_2m\"    : rk_sampler.sample_deis_2m,\r\n        #\"deis_3m\"    : rk_sampler.sample_deis_3m,\r\n        #\"deis_4m\"    : rk_sampler.sample_deis_4m,\r\n        #\"deis_2m_sde\": rk_sampler.sample_deis_2m_sde,\r\n        #\"deis_3m_sde\": rk_sampler.sample_deis_3m_sde,\r\n        #\"deis_4m_sde\": rk_sampler.sample_deis_4m_sde,\r\n        \"rk\"         : rk_sampler.sample_rk,\r\n        \r\n        \"legacy_rk\"  : legacy_sampler_rk.legacy_sample_rk,\r\n    })\r\n    \r\n    return NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS, extra_samplers\r\n\r\n"
  },
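For context, `add_legacy` above is meant to be called from the pack's top-level `__init__.py`, merging the legacy node classes, display names, and extra samplers into the dictionaries ComfyUI collects at import time. A sketch of that call site (the surrounding `__init__.py` is not shown in this section, and the empty starting dictionaries here are illustrative):

```python
from .legacy import add_legacy

NODE_CLASS_MAPPINGS = {}
NODE_DISPLAY_NAME_MAPPINGS = {}
extra_samplers = {}

NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS, extra_samplers = add_legacy(
    NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS, extra_samplers
)
```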
  {
    "path": "legacy/conditioning.py",
    "content": "import torch\r\nimport base64\r\nimport pickle # used strictly for serializing conditioning in the ConditioningToBase64 and Base64ToConditioning nodes for API use. (Offloading T5 processing to another machine to avoid model shuffling.)\r\n\r\nimport comfy.samplers\r\nimport comfy.sample\r\nimport comfy.sampler_helpers\r\nimport node_helpers\r\n\r\nimport functools\r\nfrom .noise_classes import precision_tool\r\nfrom copy import deepcopy\r\n\r\nfrom .helper import initialize_or_scale\r\nimport torch.nn.functional as F\r\nimport copy\r\n\r\nfrom .helper import get_orthogonal, get_collinear\r\nfrom ..res4lyf import RESplain\r\n\r\n\r\ndef multiply_nested_tensors(structure, scalar):\r\n    if isinstance(structure, torch.Tensor):\r\n        return structure * scalar\r\n    elif isinstance(structure, list):\r\n        return [multiply_nested_tensors(item, scalar) for item in structure]\r\n    elif isinstance(structure, dict):\r\n        return {key: multiply_nested_tensors(value, scalar) for key, value in structure.items()}\r\n    else:\r\n        return structure\r\n\r\n\r\n\r\n\r\nclass ConditioningOrthoCollin:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": {\r\n            \"conditioning_0\": (\"CONDITIONING\", ), \r\n            \"conditioning_1\": (\"CONDITIONING\", ),\r\n            \"t5_strength\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01}),\r\n            \"clip_strength\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01}),\r\n            }}\r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    FUNCTION = \"combine\"\r\n\r\n    CATEGORY = \"RES4LYF/conditioning\"\r\n\r\n    def combine(self, conditioning_0, conditioning_1, t5_strength, clip_strength):\r\n\r\n        t5_0_1_collin = get_collinear (conditioning_0[0][0], conditioning_1[0][0])\r\n        t5_1_0_ortho  = get_orthogonal(conditioning_1[0][0], conditioning_0[0][0])\r\n\r\n        t5_combined = t5_0_1_collin + t5_1_0_ortho\r\n        \r\n        t5_1_0_collin = get_collinear (conditioning_1[0][0], conditioning_0[0][0])\r\n        t5_0_1_ortho  = get_orthogonal(conditioning_0[0][0], conditioning_1[0][0])\r\n\r\n        t5_B_combined = t5_1_0_collin + t5_0_1_ortho\r\n\r\n        pooled_0_1_collin = get_collinear (conditioning_0[0][1]['pooled_output'].unsqueeze(0), conditioning_1[0][1]['pooled_output'].unsqueeze(0)).squeeze(0)\r\n        pooled_1_0_ortho  = get_orthogonal(conditioning_1[0][1]['pooled_output'].unsqueeze(0), conditioning_0[0][1]['pooled_output'].unsqueeze(0)).squeeze(0)\r\n\r\n        pooled_combined = pooled_0_1_collin + pooled_1_0_ortho\r\n        \r\n        #conditioning_0[0][0] = conditioning_0[0][0] + t5_strength * (t5_combined - conditioning_0[0][0])\r\n        \r\n        #conditioning_0[0][0] = t5_strength * t5_combined + (1-t5_strength) * t5_B_combined\r\n        \r\n        conditioning_0[0][0] = t5_strength * t5_0_1_collin + (1-t5_strength) * t5_1_0_collin\r\n        \r\n        conditioning_0[0][1]['pooled_output'] = conditioning_0[0][1]['pooled_output'] + clip_strength * (pooled_combined - conditioning_0[0][1]['pooled_output'])\r\n\r\n        return (conditioning_0, )\r\n\r\n\r\n\r\nclass CLIPTextEncodeFluxUnguided:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": {\r\n            \"clip\": (\"CLIP\", ),\r\n            \"clip_l\": (\"STRING\", {\"multiline\": True, \"dynamicPrompts\": True}),\r\n            \"t5xxl\": (\"STRING\", {\"multiline\": True, 
\"dynamicPrompts\": True}),\r\n            }}\r\n    RETURN_NAMES = (\"conditioning\", \"clip_l_end\", \"t5xxl_end\",)\r\n    RETURN_TYPES = (\"CONDITIONING\",\"INT\",\"INT\",)\r\n    FUNCTION = \"encode\"\r\n\r\n    CATEGORY = \"RES4LYF/conditioning\"\r\n\r\n    def encode(self, clip, clip_l, t5xxl):\r\n        tokens = clip.tokenize(clip_l)\r\n        tokens[\"t5xxl\"] = clip.tokenize(t5xxl)[\"t5xxl\"]\r\n\r\n        clip_l_end=0\r\n        for i in range(len(tokens['l'][0])):\r\n            if tokens['l'][0][i][0] == 49407:\r\n                clip_l_end=i\r\n                break\r\n        t5xxl_end=0\r\n        for i in range(len(tokens['l'][0])):   # bug? should this be t5xxl?\r\n            if tokens['t5xxl'][0][i][0] == 1:\r\n                t5xxl_end=i\r\n                break\r\n\r\n        output = clip.encode_from_tokens(tokens, return_pooled=True, return_dict=True)\r\n        cond = output.pop(\"cond\")\r\n        conditioning = [[cond, output]]\r\n        conditioning[0][1]['clip_l_end'] = clip_l_end\r\n        conditioning[0][1]['t5xxl_end'] = t5xxl_end\r\n        return (conditioning, clip_l_end, t5xxl_end,)\r\n\r\n\r\nclass StyleModelApplyAdvanced: \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": {\"conditioning\": (\"CONDITIONING\", ),\r\n                             \"style_model\": (\"STYLE_MODEL\", ),\r\n                             \"clip_vision_output\": (\"CLIP_VISION_OUTPUT\", ),\r\n                             \"strength\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10.0, \"max\": 10.0, \"step\": 0.001}),\r\n                             }}\r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    FUNCTION = \"main\"\r\n    CATEGORY = \"RES4LYF/conditioning\"\r\n    DESCRIPTION = \"Use with Flux Redux.\"\r\n\r\n    def main(self, clip_vision_output, style_model, conditioning, strength=1.0):\r\n        cond = style_model.get_cond(clip_vision_output).flatten(start_dim=0, end_dim=1).unsqueeze(dim=0)\r\n        cond = strength * cond\r\n        c = []\r\n        for t in conditioning:\r\n            n = [torch.cat((t[0], cond), dim=1), t[1].copy()]\r\n            c.append(n)\r\n        return (c, )\r\n\r\n\r\nclass ConditioningZeroAndTruncate: \r\n    # needs updating to ensure dims are correct for arbitrary models without hardcoding. \r\n    # vanilla ConditioningZeroOut node doesn't truncate and SD3.5M degrades badly with large embeddings, even if zeroed out, as the negative conditioning\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return { \"required\": {\"conditioning\": (\"CONDITIONING\", )}}\r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    FUNCTION = \"zero_out\"\r\n\r\n    CATEGORY = \"RES4LYF/conditioning\"\r\n    DESCRIPTION = \"Use for negative conditioning with SD3.5. 
ConditioningZeroOut does not truncate the embedding, \\\r\n                   which results in severe degradation of image quality with SD3.5 when the token limit is exceeded.\"\r\n\r\n    def zero_out(self, conditioning):\r\n        c = []\r\n        for t in conditioning:\r\n            d = t[1].copy()\r\n            pooled_output = d.get(\"pooled_output\", None)\r\n            if pooled_output is not None:\r\n                d[\"pooled_output\"] = torch.zeros((1,2048), dtype=t[0].dtype, device=t[0].device)\r\n                n = [torch.zeros((1,154,4096), dtype=t[0].dtype, device=t[0].device), d]\r\n            c.append(n)\r\n        return (c, )\r\n\r\n\r\nclass ConditioningTruncate: \r\n    # needs updating to ensure dims are correct for arbitrary models without hardcoding. \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return { \"required\": {\"conditioning\": (\"CONDITIONING\", )}}\r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    FUNCTION = \"zero_out\"\r\n\r\n    CATEGORY = \"RES4LYF/conditioning\"\r\n    DESCRIPTION = \"Use for positive conditioning with SD3.5. Tokens beyond 77 result in degradation of image quality.\"\r\n\r\n    def zero_out(self, conditioning):\r\n        c = []\r\n        for t in conditioning:\r\n            d = t[1].copy()\r\n            pooled_output = d.get(\"pooled_output\", None)\r\n            if pooled_output is not None:\r\n                d[\"pooled_output\"] = d[\"pooled_output\"][:, :2048]\r\n                n = [t[0][:, :154, :4096], d]\r\n            c.append(n)\r\n        return (c, )\r\n\r\n\r\nclass ConditioningMultiply:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": {\"conditioning\": (\"CONDITIONING\", ), \r\n                              \"multiplier\": (\"FLOAT\", {\"default\": 1.0, \"min\": -1000000000.0, \"max\": 1000000000.0, \"step\": 0.01})\r\n                             }}\r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    FUNCTION = \"main\"\r\n\r\n    CATEGORY = \"RES4LYF/conditioning\"\r\n\r\n    def main(self, conditioning, multiplier):\r\n        c = multiply_nested_tensors(conditioning, multiplier)\r\n        return (c,)\r\n\r\n\r\n\r\nclass ConditioningAdd:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": {\"conditioning_1\": (\"CONDITIONING\", ), \r\n                             \"conditioning_2\": (\"CONDITIONING\", ), \r\n                              \"multiplier\": (\"FLOAT\", {\"default\": 1.0, \"min\": -1000000000.0, \"max\": 1000000000.0, \"step\": 0.01})\r\n                             }}\r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    FUNCTION = \"main\"\r\n\r\n    CATEGORY = \"RES4LYF/conditioning\"\r\n\r\n    def main(self, conditioning_1, conditioning_2, multiplier):\r\n        \r\n        conditioning_1[0][0] += multiplier * conditioning_2[0][0]\r\n        conditioning_1[0][1]['pooled_output'] += multiplier * conditioning_2[0][1]['pooled_output'] \r\n        \r\n        return (conditioning_1,)\r\n\r\n\r\n\r\n\r\nclass ConditioningCombine:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": {\"conditioning_1\": (\"CONDITIONING\", ), \"conditioning_2\": (\"CONDITIONING\", )}}\r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    FUNCTION = \"combine\"\r\n\r\n    CATEGORY = \"RES4LYF/conditioning\"\r\n\r\n    def combine(self, conditioning_1, conditioning_2):\r\n        return (conditioning_1 + conditioning_2, )\r\n\r\n\r\n\r\nclass ConditioningAverage :\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return 
{\"required\": {\"conditioning_to\": (\"CONDITIONING\", ), \"conditioning_from\": (\"CONDITIONING\", ),\r\n                              \"conditioning_to_strength\": (\"FLOAT\", {\"default\": 1.0, \"min\": 0.0, \"max\": 1.0, \"step\": 0.01})\r\n                             }}\r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    FUNCTION = \"addWeighted\"\r\n\r\n    CATEGORY = \"RES4LYF/conditioning\"\r\n\r\n    def addWeighted(self, conditioning_to, conditioning_from, conditioning_to_strength):\r\n        out = []\r\n\r\n        if len(conditioning_from) > 1:\r\n            RESplain(\"Warning: ConditioningAverage conditioning_from contains more than 1 cond, only the first one will actually be applied to conditioning_to.\")\r\n\r\n        cond_from = conditioning_from[0][0]\r\n        pooled_output_from = conditioning_from[0][1].get(\"pooled_output\", None)\r\n\r\n        for i in range(len(conditioning_to)):\r\n            t1 = conditioning_to[i][0]\r\n            pooled_output_to = conditioning_to[i][1].get(\"pooled_output\", pooled_output_from)\r\n            t0 = cond_from[:,:t1.shape[1]]\r\n            if t0.shape[1] < t1.shape[1]:\r\n                t0 = torch.cat([t0] + [torch.zeros((1, (t1.shape[1] - t0.shape[1]), t1.shape[2]))], dim=1)\r\n\r\n            tw = torch.mul(t1, conditioning_to_strength) + torch.mul(t0, (1.0 - conditioning_to_strength))\r\n            t_to = conditioning_to[i][1].copy()\r\n            if pooled_output_from is not None and pooled_output_to is not None:\r\n                t_to[\"pooled_output\"] = torch.mul(pooled_output_to, conditioning_to_strength) + torch.mul(pooled_output_from, (1.0 - conditioning_to_strength))\r\n            elif pooled_output_from is not None:\r\n                t_to[\"pooled_output\"] = pooled_output_from\r\n\r\n            n = [tw, t_to]\r\n            out.append(n)\r\n        return (out, )\r\n    \r\nclass ConditioningSetTimestepRange:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": {\"conditioning\": (\"CONDITIONING\", ),\r\n                             \"start\": (\"FLOAT\", {\"default\": 0.0, \"min\": 0.0, \"max\": 1.0, \"step\": 0.001}),\r\n                             \"end\": (\"FLOAT\", {\"default\": 1.0, \"min\": 0.0, \"max\": 1.0, \"step\": 0.001})\r\n                             }}\r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    FUNCTION = \"set_range\"\r\n\r\n    CATEGORY = \"RES4LYF/conditioning\"\r\n\r\n    def set_range(self, conditioning, start, end):\r\n        c = node_helpers.conditioning_set_values(conditioning, {\"start_percent\": start,\r\n                                                                \"end_percent\": end})\r\n        return (c, )\r\n\r\nclass ConditioningAverageScheduler: # don't think this is implemented correctly. 
needs to be reworked\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n                \"required\": {\r\n                    \"conditioning_0\": (\"CONDITIONING\", ), \r\n                    \"conditioning_1\": (\"CONDITIONING\", ),\r\n                    \"ratio\": (\"SIGMAS\", ),\r\n                    }\r\n            }\r\n    \r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    FUNCTION = \"main\"\r\n\r\n    CATEGORY = \"RES4LYF/conditioning\"\r\n\r\n    @staticmethod\r\n    def addWeighted(conditioning_to, conditioning_from, conditioning_to_strength): #this function borrowed from comfyui\r\n        out = []\r\n\r\n        if len(conditioning_from) > 1:\r\n            RESplain(\"Warning: ConditioningAverage conditioning_from contains more than 1 cond, only the first one will actually be applied to conditioning_to.\")\r\n\r\n        cond_from = conditioning_from[0][0]\r\n        pooled_output_from = conditioning_from[0][1].get(\"pooled_output\", None)\r\n\r\n        for i in range(len(conditioning_to)):\r\n            t1 = conditioning_to[i][0]\r\n            pooled_output_to = conditioning_to[i][1].get(\"pooled_output\", pooled_output_from)\r\n            t0 = cond_from[:,:t1.shape[1]]\r\n            if t0.shape[1] < t1.shape[1]:\r\n                t0 = torch.cat([t0] + [torch.zeros((1, (t1.shape[1] - t0.shape[1]), t1.shape[2]))], dim=1)\r\n\r\n            tw = torch.mul(t1, conditioning_to_strength) + torch.mul(t0, (1.0 - conditioning_to_strength))\r\n            t_to = conditioning_to[i][1].copy()\r\n            if pooled_output_from is not None and pooled_output_to is not None:\r\n                t_to[\"pooled_output\"] = torch.mul(pooled_output_to, conditioning_to_strength) + torch.mul(pooled_output_from, (1.0 - conditioning_to_strength))\r\n            elif pooled_output_from is not None:\r\n                t_to[\"pooled_output\"] = pooled_output_from\r\n\r\n            n = [tw, t_to]\r\n            out.append(n)\r\n        return out\r\n\r\n    @staticmethod\r\n    def create_percent_array(steps):\r\n        step_size = 1.0 / steps\r\n        return [{\"start_percent\": i * step_size, \"end_percent\": (i + 1) * step_size} for i in range(steps)]\r\n\r\n    def main(self, conditioning_0, conditioning_1, ratio):\r\n        steps = len(ratio)\r\n\r\n        percents = self.create_percent_array(steps)\r\n\r\n        cond = []\r\n        for i in range(steps):\r\n            average = self.addWeighted(conditioning_0, conditioning_1, ratio[i].item())\r\n            cond += node_helpers.conditioning_set_values(average, {\"start_percent\": percents[i][\"start_percent\"], \"end_percent\": percents[i][\"end_percent\"]})\r\n\r\n        return (cond,)\r\n\r\n\r\n\r\nclass StableCascade_StageB_Conditioning64:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": { \"conditioning\": (\"CONDITIONING\",),\r\n                              \"stage_c\": (\"LATENT\",),\r\n                             }}\r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n\r\n    FUNCTION = \"set_prior\"\r\n\r\n    CATEGORY = \"RES4LYF/conditioning\"\r\n\r\n    @precision_tool.cast_tensor\r\n    def set_prior(self, conditioning, stage_c):\r\n        c = []\r\n        for t in conditioning:\r\n            d = t[1].copy()\r\n            d['stable_cascade_prior'] = stage_c['samples']\r\n            n = [t[0], d]\r\n            c.append(n)\r\n        return (c, )\r\n\r\n\r\n\r\nclass Conditioning_Recast64:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": 
{ \"cond_0\": (\"CONDITIONING\",),\r\n                             },\r\n                \"optional\": { \"cond_1\": (\"CONDITIONING\",),}\r\n                }\r\n    RETURN_TYPES = (\"CONDITIONING\",\"CONDITIONING\",)\r\n    RETURN_NAMES = (\"cond_0_recast\",\"cond_1_recast\",)\r\n\r\n    FUNCTION = \"main\"\r\n\r\n    CATEGORY = \"RES4LYF/precision\"\r\n\r\n    @precision_tool.cast_tensor\r\n    def main(self, cond_0, cond_1 = None):\r\n        cond_0[0][0] = cond_0[0][0].to(torch.float64)\r\n        cond_0[0][1][\"pooled_output\"] = cond_0[0][1][\"pooled_output\"].to(torch.float64)\r\n        \r\n        if cond_1 is not None:\r\n            cond_1[0][0] = cond_1[0][0].to(torch.float64)\r\n            cond_1[0][1][\"pooled_output\"] = cond_1[0][1][\"pooled_output\"].to(torch.float64)\r\n\r\n        return (cond_0, cond_1,)\r\n\r\n\r\nclass ConditioningToBase64:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"conditioning\": (\"CONDITIONING\",),\r\n            },\r\n            \"hidden\": {\r\n                \"unique_id\": \"UNIQUE_ID\",\r\n                \"extra_pnginfo\": \"EXTRA_PNGINFO\",\r\n            },\r\n        }\r\n\r\n    RETURN_TYPES = (\"STRING\",)\r\n    FUNCTION = \"notify\"\r\n    OUTPUT_NODE = True\r\n    OUTPUT_IS_LIST = (True,)\r\n\r\n    CATEGORY = \"RES4LYF/utilities\"\r\n\r\n    def notify(self, unique_id=None, extra_pnginfo=None, conditioning=None):\r\n        conditioning_pickle = pickle.dumps(conditioning)\r\n        conditioning_base64 = base64.b64encode(conditioning_pickle).decode('utf-8')\r\n        text = [conditioning_base64]\r\n        \r\n        if unique_id is not None and extra_pnginfo is not None:\r\n            if not isinstance(extra_pnginfo, list):\r\n                RESplain(\"Error: extra_pnginfo is not a list\")\r\n            elif (\r\n                not isinstance(extra_pnginfo[0], dict)\r\n                or \"workflow\" not in extra_pnginfo[0]\r\n            ):\r\n                RESplain(\"Error: extra_pnginfo[0] is not a dict or missing 'workflow' key\")\r\n            else:\r\n                workflow = extra_pnginfo[0][\"workflow\"]\r\n                node = next(\r\n                    (x for x in workflow[\"nodes\"] if str(x[\"id\"]) == str(unique_id[0])),\r\n                    None,\r\n                )\r\n                if node:\r\n                    node[\"widgets_values\"] = [text]\r\n\r\n        return {\"ui\": {\"text\": text}, \"result\": (text,)}\r\n\r\nclass Base64ToConditioning:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"data\": (\"STRING\", {\"default\": \"\"}),\r\n            }\r\n        }\r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    RETURN_NAMES = (\"conditioning\",)\r\n    FUNCTION = \"main\"\r\n\r\n    CATEGORY = \"RES4LYF/utilities\"\r\n\r\n    def main(self, data):\r\n        conditioning_pickle = base64.b64decode(data)\r\n        conditioning = pickle.loads(conditioning_pickle)\r\n        return (conditioning,)\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nclass RegionalMask(torch.nn.Module):\r\n    def __init__(self, mask: torch.Tensor, conditioning: torch.Tensor, conditioning_regional: torch.Tensor, latent:torch.Tensor, start_percent: float, end_percent: float, mask_type: str, img_len: int, text_len: int) -> None:\r\n        super().__init__()\r\n        #self.register_buffer('mask', mask)\r\n        self.mask = mask.clone().to('cuda')\r\n        self.conditioning = 
copy.deepcopy(conditioning)\r\n        self.conditioning_regional = copy.deepcopy(conditioning_regional)\r\n        self.latent = latent.clone()\r\n        self.start_percent = start_percent\r\n        self.end_percent   = end_percent\r\n        self.mask_type = mask_type\r\n        self.img_len = img_len\r\n        self.text_len = text_len\r\n\r\n    def __call__(self, transformer_options, weight=0, dtype=torch.bfloat16, *args, **kwargs):\r\n        sigma = transformer_options['sigmas'][0]\r\n        if self.start_percent <= 1 - sigma < self.end_percent:\r\n            if self.mask_type == \"gradient\":\r\n                #mask = self.gen_mask(weight)\r\n                return self.mask.clone().to(sigma.device) * weight\r\n\r\n\r\n    \"\"\"def gen_mask(self, weight):             #FOR REGENERATION OF SELF-ATTN MASK\r\n        b, c, h, w = self.latent.shape\r\n        h //= 2  # 16x16 PE\r\n        w //= 2\r\n        img_len = h * w\r\n\r\n        cond_r = torch.cat([cond_reg['cond'] for cond_reg in self.conditioning_regional], dim=1)\r\n        \r\n        if self.conditioning is not None:\r\n            text_len = 256 + cond_r.shape[1]  # 256 = main prompt tokens... half of t5, comfy issue\r\n            conditioning_regional = [\r\n                {\r\n                    'mask': torch.ones((1,   h,    w), dtype=torch.bfloat16),\r\n                    'cond': torch.ones((1, 256, 4096), dtype=torch.bfloat16),\r\n                },\r\n                *self.conditioning_regional,\r\n            ]\r\n        else:\r\n            text_len = cond_r.shape[1]  # 256 = main prompt tokens... half of t5, comfy issue\r\n            conditioning_regional = self.conditioning_regional\r\n        \r\n        all_attn_mask       = torch.zeros((text_len+img_len, text_len+img_len), dtype=torch.bfloat16)\r\n        self_attn_mask     = torch.zeros((          img_len,          img_len), dtype=torch.bfloat16)\r\n        self_attn_mask_bkg = torch.zeros((          img_len,          img_len), dtype=torch.bfloat16)\r\n        \r\n        prev_len = 0\r\n        for cond_reg_dict in conditioning_regional:         #FOR REGENERATION OF SELF-ATTN MASK\r\n            cond_reg         = cond_reg_dict['cond']\r\n            region_mask_ = 1 - cond_reg_dict['mask'][0]\r\n            \r\n            region_mask_sq = cond_reg_dict['mask'][0].to(torch.bfloat16)\r\n\r\n            \r\n            img2txt_mask = torch.nn.functional.interpolate(region_mask_sq[None, None, :, :], (h, w), mode='nearest-exact').flatten().unsqueeze(1).repeat(1, cond_reg.size(1))\r\n            txt2img_mask = img2txt_mask.transpose(-1, -2)\r\n            \r\n            img2txt_mask_sq = torch.nn.functional.interpolate(region_mask_sq[None, None, :, :], (h, w), mode='nearest-exact').flatten().unsqueeze(1).repeat(1, self.img_len)\r\n            #img2txt_mask_sq = img2txt_mask[:, :1].repeat(1, img_len)\r\n            txt2img_mask_sq = img2txt_mask_sq.transpose(-1, -2)\r\n\r\n            curr_len = prev_len + cond_reg.shape[1]         #FOR REGENERATION OF SELF-ATTN MASK\r\n            \r\n            all_attn_mask[prev_len:curr_len, prev_len:curr_len] = 1.0           # self             TXT 2 TXT\r\n            all_attn_mask[prev_len:curr_len, text_len:        ] = txt2img_mask  # cross            TXT 2 regional IMG\r\n            all_attn_mask[text_len:        , prev_len:curr_len] = img2txt_mask  # cross   regional IMG 2 TXT\r\n            \r\n            #all_attn_mask[text_len:, text_len:] = fp_or(all_attn_mask[text_len:, text_len:]    , fp_and(  
img2txt_mask_sq,   txt2img_mask_sq))\r\n            \r\n            self_attn_mask     = fp_or(self_attn_mask    , fp_and(                      img2txt_mask_sq,                       txt2img_mask_sq))\r\n            self_attn_mask_bkg = fp_or(self_attn_mask_bkg, fp_and(img2txt_mask_sq.max()-img2txt_mask_sq, txt2img_mask_sq.max()-txt2img_mask_sq))\r\n            #self_attn_mask_bkg = fp_or(self_attn_mask_bkg, fp_and(1-img2txt_mask_sq, 1-txt2img_mask_sq))\r\n            \r\n            prev_len = curr_len\r\n\r\n        all_attn_mask[text_len:, text_len:] = fp_or(self_attn_mask, self_attn_mask_bkg) #combine foreground/background self-attn\r\n\r\n        return all_attn_mask\r\n    \"\"\"\r\n    \r\n    \r\nclass RegionalConditioning(torch.nn.Module):\r\n    def __init__(self, conditioning: torch.Tensor, region_cond: torch.Tensor, start_percent: float, end_percent: float) -> None:\r\n        super().__init__()\r\n        #self.register_buffer('region_cond', region_cond)\r\n        self.conditioning = conditioning\r\n        self.region_cond = region_cond.clone().to('cuda')\r\n        self.start_percent = start_percent\r\n        self.end_percent   = end_percent\r\n\r\n    def __call__(self, transformer_options, dtype=torch.bfloat16, *args,  **kwargs):\r\n        sigma = transformer_options['sigmas'][0]\r\n        if self.start_percent <= 1 - sigma < self.end_percent:\r\n            return self.region_cond.clone().to(sigma.device).to(dtype)\r\n        return None\r\n    \r\n    def concat_cond(self, context, transformer_options, dtype=torch.bfloat16, *args,  **kwargs):\r\n        sigma = transformer_options['sigmas'][0]\r\n        if self.start_percent <= 1 - sigma < self.end_percent:\r\n            region_cond = self.region_cond.clone().to(sigma.device).to(dtype)\r\n            if self.conditioning is None:\r\n                return self.region_cond.clone().to(sigma.device).to(dtype)\r\n            else:\r\n                return torch.cat([context, region_cond.clone().to(torch.bfloat16)], dim=1)\r\n        return None\r\n\r\n\r\n\r\nclass FluxRegionalPrompt:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": { \r\n            \"cond\": (\"CONDITIONING\",),\r\n        }, \"optional\": {\r\n            \"cond_regional\": (\"CONDITIONING_REGIONAL\",),\r\n            \"mask\": (\"MASK\",),\r\n        }}\r\n\r\n    RETURN_TYPES = (\"CONDITIONING_REGIONAL\",\"MASK\",)\r\n    RETURN_NAMES = (\"cond_regional\",\"mask_inv\")\r\n    FUNCTION = \"main\"\r\n\r\n    CATEGORY = \"RES4LYF/conditioning\"\r\n\r\n    def main(self, cond, mask, cond_regional=[]):\r\n        cond_regional = [*cond_regional]\r\n        cond_regional.append({'mask': mask, 'cond': cond[0][0]})\r\n        mask_inv = 1-mask\r\n        return (cond_regional,mask_inv,)\r\n\r\ndef fp_not(tensor):\r\n    return 1 - tensor\r\n\r\ndef fp_or(tensor1, tensor2):\r\n    return torch.maximum(tensor1, tensor2)\r\n\r\ndef fp_and(tensor1, tensor2):\r\n    return torch.minimum(tensor1, tensor2)\r\n\r\nclass RegionalGenerateConditioningsAndMasks:\r\n    def __init__(self, conditioning, conditioning_regional, weight, start_percent, end_percent, mask_type):\r\n        self.conditioning          = conditioning\r\n        self.conditioning_regional = conditioning_regional\r\n        self.weight                = weight\r\n        self.start_percent         = start_percent\r\n        self.end_percent           = end_percent\r\n        self.mask_type             = mask_type\r\n\r\n    def __call__(self, latent):\r\n        b, 
c, h, w = latent.shape\r\n        h //= 2  # 16x16 PE\r\n        w //= 2\r\n        img_len = h * w\r\n\r\n        cond_r = torch.cat([cond_reg['cond'] for cond_reg in self.conditioning_regional], dim=1)\r\n        \r\n        if self.conditioning is not None:\r\n            text_len = 256 + cond_r.shape[1]  # 256 = main prompt tokens... half of t5, comfy issue\r\n            conditioning_regional = [\r\n                {\r\n                    'mask': torch.ones((1,   h,    w), dtype=torch.bfloat16),\r\n                    'cond': torch.ones((1, 256, 4096), dtype=torch.bfloat16),\r\n                },\r\n                *self.conditioning_regional,\r\n            ]\r\n        else:\r\n            text_len = cond_r.shape[1]  # 256 = main prompt tokens... half of t5, comfy issue\r\n            conditioning_regional = self.conditioning_regional\r\n        \r\n        all_attn_mask      = torch.zeros((text_len+img_len, text_len+img_len), dtype=torch.bfloat16)\r\n        self_attn_mask     = torch.zeros((         img_len,          img_len), dtype=torch.bfloat16)\r\n        self_attn_mask_bkg = torch.zeros((         img_len,          img_len), dtype=torch.bfloat16)\r\n        \r\n        prev_len = 0\r\n        for cond_reg_dict in conditioning_regional:\r\n            cond_reg    = cond_reg_dict['cond']\r\n            region_mask = cond_reg_dict['mask'][0]\r\n            \r\n            img2txt_mask    = torch.nn.functional.interpolate(region_mask[None, None, :, :], (h, w), mode='nearest-exact').flatten().unsqueeze(1).repeat(1, cond_reg.size(1))\r\n            txt2img_mask    = img2txt_mask   .transpose(-1, -2)\r\n            \r\n            img2txt_mask_sq = torch.nn.functional.interpolate(region_mask[None, None, :, :], (h, w), mode='nearest-exact').flatten().unsqueeze(1).repeat(1, img_len)\r\n            txt2img_mask_sq = img2txt_mask_sq.transpose(-1, -2)\r\n\r\n            curr_len = prev_len + cond_reg.shape[1]\r\n            \r\n            all_attn_mask[prev_len:curr_len, prev_len:curr_len] = 1.0           # self             TXT 2 TXT\r\n            all_attn_mask[prev_len:curr_len, text_len:        ] = txt2img_mask  # cross            TXT 2 regional IMG\r\n            all_attn_mask[text_len:        , prev_len:curr_len] = img2txt_mask  # cross   regional IMG 2 TXT\r\n            \r\n            self_attn_mask     = fp_or(self_attn_mask    , fp_and(                      img2txt_mask_sq,                       txt2img_mask_sq))\r\n            self_attn_mask_bkg = fp_or(self_attn_mask_bkg, fp_and(img2txt_mask_sq.max()-img2txt_mask_sq, txt2img_mask_sq.max()-txt2img_mask_sq))\r\n            \r\n            prev_len = curr_len\r\n\r\n        all_attn_mask[text_len:, text_len:] = fp_or(self_attn_mask, self_attn_mask_bkg) #combine foreground/background self-attn\r\n\r\n        all_attn_mask         = RegionalMask(all_attn_mask, self.conditioning, self.conditioning_regional, latent, self.start_percent, self.end_percent, self.mask_type, img_len, text_len)\r\n        regional_conditioning = RegionalConditioning(self.conditioning, cond_r, self.start_percent, self.end_percent)\r\n\r\n        return regional_conditioning, all_attn_mask\r\n\r\n\r\nclass FluxRegionalConditioning:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": { \r\n            \"mask_weight\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n            \"self_attn_floor\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n      
      \"start_percent\": (\"FLOAT\", {\"default\": 0,   \"min\": 0.0, \"max\": 1.0, \"step\": 0.01}),\r\n            \"end_percent\":   (\"FLOAT\", {\"default\": 1.0, \"min\": 0.0, \"max\": 1.0, \"step\": 0.01}),\r\n            \"mask_type\": ([\"gradient\"], {\"default\": \"gradient\"}),\r\n        }, \r\n            \"optional\": {\r\n                \"conditioning\": (\"CONDITIONING\",),\r\n                \"conditioning_regional\": (\"CONDITIONING_REGIONAL\",),\r\n                \"mask_weights\": (\"SIGMAS\", ),\r\n                \"self_attn_floors\": (\"SIGMAS\", ),\r\n\r\n        }}\r\n\r\n    RETURN_TYPES = (\"CONDITIONING\",)\r\n    RETURN_NAMES = (\"conditioning\",)\r\n    FUNCTION = \"main\"\r\n\r\n    CATEGORY = \"RES4LYF/conditioning\"\r\n\r\n    def main(self, conditioning_regional, mask_weight=1.0, start_percent=0.0, end_percent=1.0, start_step=0, end_step=10000, conditioning=None, mask_weights=None, self_attn_floors=None, self_attn_floor=0.0, mask_type=\"gradient\", latent=None):\r\n        weight, weights = mask_weight, mask_weights\r\n        floor, floors = self_attn_floor, self_attn_floors\r\n        default_dtype = torch.float64\r\n        max_steps = 10000\r\n        weights = initialize_or_scale(weights, weight, max_steps).to(default_dtype)\r\n        weights = F.pad(weights, (0, max_steps), value=0.0)\r\n        \r\n        floors = initialize_or_scale(floors, floor, max_steps).to(default_dtype)\r\n        floors = F.pad(floors, (0, max_steps), value=0.0)\r\n\r\n        regional_generate_conditionings_and_masks_fn = RegionalGenerateConditioningsAndMasks(conditioning, conditioning_regional, weight, start_percent, end_percent, mask_type)\r\n\r\n        if conditioning is None:\r\n            conditioning = [\r\n                                [\r\n                                    torch.zeros_like(conditioning_regional[0]['cond']),\r\n                                    {'pooled_output':\r\n                                        torch.zeros((1,768), dtype=conditioning_regional[0]['cond'].dtype, device=conditioning_regional[0]['cond'].device),\r\n                                    }\r\n                                ],\r\n            ]\r\n\r\n        conditioning[0][1]['regional_generate_conditionings_and_masks_fn'] = regional_generate_conditionings_and_masks_fn\r\n        conditioning[0][1]['regional_conditioning_weights'] = weights\r\n        conditioning[0][1]['regional_conditioning_floors'] = floors\r\n        return (copy.deepcopy(conditioning),)\r\n\r\n\r\n\"\"\"\r\nfrom .models import ReFluxPatcher\r\n\r\nclass ClownRegionalConditioningFlux:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": { \r\n            \"regional_model\": ([\"auto\", \"deactivate\"], {\"default\": \"auto\"}),\r\n            \"mask_weight\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n            \"region_bleed\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n            \"start_percent\": (\"FLOAT\", {\"default\": 0,   \"min\": 0.0, \"max\": 1.0, \"step\": 0.01}),\r\n            \"end_percent\":   (\"FLOAT\", {\"default\": 1.0, \"min\": 0.0, \"max\": 1.0, \"step\": 0.01}),\r\n            \"mask_type\": ([\"gradient\"], {\"default\": \"gradient\"}),\r\n            \"invert_mask\": (\"BOOLEAN\", {\"default\": False}),\r\n        }, \r\n            \"optional\": {\r\n                \"model\":             (\"MODEL\", ),\r\n                \"positive_masked\":  
(\"CONDITIONING\", ),\r\n                \"positive_unmasked\":      (\"CONDITIONING\", ),\r\n                \"mask\":              (\"MASK\", ),\r\n                \"mask_weights\": (\"SIGMAS\", ),\r\n                \"region_bleeds\": (\"SIGMAS\", ),\r\n        }}\r\n\r\n    RETURN_TYPES = (\"MODEL\", \"CONDITIONING\",)\r\n    RETURN_NAMES = (\"model\", \"positive\",)\r\n    FUNCTION = \"main\"\r\n\r\n    CATEGORY = \"RES4LYF/conditioning\"\r\n\r\n    def main(self, model, regional_model, mask_weight=1.0, start_percent=0.0, end_percent=1.0, positive_masked=None, positive_unmasked=None, mask_weights=None, region_bleeds=None, region_bleed=0.0, mask_type=\"gradient\", mask=None, invert_mask=False):\r\n        if regional_model == \"auto\":\r\n            reflux_enable = True\r\n        else:\r\n            model, = ReFluxPatcher().main(model, enable=False)\r\n            return (model, positive_masked,)\r\n            \r\n        if invert_mask and mask is not None:\r\n            mask = 1-mask\r\n\r\n        weight, weights = mask_weight, mask_weights\r\n        floor, floors = region_bleed, region_bleeds\r\n        default_dtype = torch.float64\r\n        max_steps = 10000\r\n        weights = initialize_or_scale(weights, weight, max_steps).to(default_dtype)\r\n        weights = F.pad(weights, (0, max_steps), value=0.0)\r\n        \r\n        floors = initialize_or_scale(floors, floor, max_steps).to(default_dtype)\r\n        floors = F.pad(floors, (0, max_steps), value=0.0)\r\n\r\n        if (positive_masked is None) and (positive_unmasked is None):\r\n            positive = None\r\n            reflux_enable = False\r\n        elif mask is not None:\r\n            if regional_model == \"auto\":\r\n                reflux_enable = True\r\n            else:\r\n                reflux_enable = False\r\n            \r\n            if positive_unmasked is None:\r\n                if positive_unmasked is None:\r\n                    positive_unmasked = [[\r\n                        torch.zeros((1, 256, 4096)),\r\n                        {'pooled_output': torch.zeros((1, 768))}\r\n                        ]]\r\n            cond_regional, mask_inv     = FluxRegionalPrompt().main(cond=positive_masked,                                    mask=mask)\r\n            cond_regional, mask_inv_inv = FluxRegionalPrompt().main(cond=positive_unmasked    , cond_regional=cond_regional, mask=mask_inv)\r\n            \r\n            positive, = FluxRegionalConditioning().main(conditioning_regional=cond_regional, self_attn_floor=floor, self_attn_floors=floors, mask_weight=weight, mask_weights=weights, start_percent=start_percent, end_percent=end_percent, mask_type=mask_type)\r\n        else:\r\n            positive = positive_masked\r\n            reflux_enable = False\r\n        \r\n        if not reflux_enable:\r\n            model, = ReFluxPatcher().main(model, enable=False)\r\n            return (model, positive_masked,)\r\n        else:\r\n            model, = ReFluxPatcher().main(model, enable=True)\r\n            return (model, positive,)\r\n\r\n\"\"\"\r\n"
  },
  {
    "path": "legacy/constants.py",
    "content": "MAX_STEPS = 10000\n\n\nIMPLICIT_TYPE_NAMES = [\n    \"predictor-corrector\",\n    \"rebound\",\n    \"retro-eta\",\n    \"bongmath\",\n]\n\n\n"
  },
  {
    "path": "legacy/deis_coefficients.py",
    "content": "# Adapted from: https://github.com/zju-pi/diff-sampler/blob/main/gits-main/solver_utils.py\n# fixed the calcs for \"rhoab\" which suffered from an off-by-one error and made some other minor corrections\n\nimport torch\nimport numpy as np\n\n# A pytorch reimplementation of DEIS (https://github.com/qsh-zh/deis).\n#############################\n### Utils for DEIS solver ###\n#############################\n#----------------------------------------------------------------------------\n# Transfer from the input time (sigma) used in EDM to that (t) used in DEIS.\n\ndef edm2t(edm_steps, epsilon_s=1e-3, sigma_min=0.002, sigma_max=80):\n    vp_sigma = lambda beta_d, beta_min: lambda t: (np.e ** (0.5 * beta_d * (t ** 2) + beta_min * t) - 1) ** 0.5\n    vp_sigma_inv = lambda beta_d, beta_min: lambda sigma: ((beta_min ** 2 + 2 * beta_d * (sigma ** 2 + 1).log()).sqrt() - beta_min) / beta_d\n    vp_beta_d = 2 * (np.log(torch.tensor(sigma_min).cpu() ** 2 + 1) / epsilon_s - np.log(torch.tensor(sigma_max).cpu() ** 2 + 1)) / (epsilon_s - 1)\n    vp_beta_min = np.log(torch.tensor(sigma_max).cpu() ** 2 + 1) - 0.5 * vp_beta_d\n    t_steps = vp_sigma_inv(vp_beta_d.clone().detach().cpu(), vp_beta_min.clone().detach().cpu())(edm_steps.clone().detach().cpu())\n    return t_steps, vp_beta_min, vp_beta_d + vp_beta_min\n\n#----------------------------------------------------------------------------\n\ndef cal_poly(prev_t, j, taus):\n    poly = 1\n    for k in range(prev_t.shape[0]):\n        if k == j:\n            continue\n        poly *= (taus - prev_t[k]) / (prev_t[j] - prev_t[k])\n    return poly\n\n#----------------------------------------------------------------------------\n# Transfer from t to alpha_t.\n\ndef t2alpha_fn(beta_0, beta_1, t):\n    return torch.exp(-0.5 * t ** 2 * (beta_1 - beta_0) - t * beta_0)\n\n#----------------------------------------------------------------------------\n\ndef cal_integrand(beta_0, beta_1, taus):\n    with torch.inference_mode(mode=False):\n        taus = taus.clone()\n        beta_0 = beta_0.clone()\n        beta_1 = beta_1.clone()\n        with torch.enable_grad():\n            taus.requires_grad_(True)\n            alpha = t2alpha_fn(beta_0, beta_1, taus)\n            log_alpha = alpha.log()\n            log_alpha.sum().backward()\n            d_log_alpha_dtau = taus.grad\n    integrand = -0.5 * d_log_alpha_dtau / torch.sqrt(alpha * (1 - alpha))\n    return integrand\n\n#----------------------------------------------------------------------------\n\ndef get_deis_coeff_list(t_steps, max_order, N=10000, deis_mode='tab'):\n    \"\"\"\n    Get the coefficient list for DEIS sampling.\n\n    Args:\n        t_steps: A pytorch tensor. The time steps for sampling.\n        max_order: A `int`. Maximum order of the solver. 1 <= max_order <= 4\n        N: A `int`. Use how many points to perform the numerical integration when deis_mode=='tab'.\n        deis_mode: A `str`. Select between 'tab' and 'rhoab'. Type of DEIS.\n    Returns:\n        A pytorch tensor. 
A batch of generated samples or sampling trajectories if return_inters=True.\n    \"\"\"\n    if deis_mode == 'tab':\n        t_steps, beta_0, beta_1 = edm2t(t_steps)\n        C = []\n        for i, (t_cur, t_next) in enumerate(zip(t_steps[:-1], t_steps[1:])):\n            order = min(i+1, max_order)\n            if order == 1:\n                C.append([])\n            else:\n                taus = torch.linspace(t_cur, t_next, N)   # split the interval for integral approximation\n                dtau = (t_next - t_cur) / N\n                prev_t = t_steps[[i - k for k in range(order)]]\n                coeff_temp = []\n                integrand = cal_integrand(beta_0, beta_1, taus)\n                for j in range(order):\n                    poly = cal_poly(prev_t, j, taus)\n                    coeff_temp.append(torch.sum(integrand * poly) * dtau)\n                C.append(coeff_temp)\n\n    elif deis_mode == 'rhoab':\n        # Analytical solution, second order\n        def get_def_integral_2(a, b, start, end, c):\n            coeff = (end**3 - start**3) / 3 - (end**2 - start**2) * (a + b) / 2 + (end - start) * a * b\n            return coeff / ((c - a) * (c - b))\n\n        # Analytical solution, third order\n        def get_def_integral_3(a, b, c, start, end, d):\n            coeff = (end**4 - start**4) / 4 - (end**3 - start**3) * (a + b + c) / 3 \\\n                    + (end**2 - start**2) * (a*b + a*c + b*c) / 2 - (end - start) * a * b * c\n            return coeff / ((d - a) * (d - b) * (d - c))\n\n        C = []\n        for i, (t_cur, t_next) in enumerate(zip(t_steps[:-1], t_steps[1:])):\n            order = min(i+1, max_order) #fixed order calcs\n            if order == 1:\n                C.append([])\n            else:\n                prev_t = t_steps[[i - k for k in range(order+1)]]\n                if order == 2:\n                    coeff_cur = ((t_next - prev_t[1])**2 - (t_cur - prev_t[1])**2) / (2 * (t_cur - prev_t[1]))\n                    coeff_prev1 = (t_next - t_cur)**2 / (2 * (prev_t[1] - t_cur))\n                    coeff_temp = [coeff_cur, coeff_prev1]\n                elif order == 3:\n                    coeff_cur = get_def_integral_2(prev_t[1], prev_t[2], t_cur, t_next, t_cur)\n                    coeff_prev1 = get_def_integral_2(t_cur, prev_t[2], t_cur, t_next, prev_t[1])\n                    coeff_prev2 = get_def_integral_2(t_cur, prev_t[1], t_cur, t_next, prev_t[2])\n                    coeff_temp = [coeff_cur, coeff_prev1, coeff_prev2]\n                elif order == 4:\n                    coeff_cur = get_def_integral_3(prev_t[1], prev_t[2], prev_t[3], t_cur, t_next, t_cur)\n                    coeff_prev1 = get_def_integral_3(t_cur, prev_t[2], prev_t[3], t_cur, t_next, prev_t[1])\n                    coeff_prev2 = get_def_integral_3(t_cur, prev_t[1], prev_t[3], t_cur, t_next, prev_t[2])\n                    coeff_prev3 = get_def_integral_3(t_cur, prev_t[1], prev_t[2], t_cur, t_next, prev_t[3])\n                    coeff_temp = [coeff_cur, coeff_prev1, coeff_prev2, coeff_prev3]\n                C.append(coeff_temp)\n \n    return C\n\n"
  },
  {
    "path": "legacy/flux/controlnet.py",
    "content": "#Original code can be found on: https://github.com/XLabs-AI/x-flux/blob/main/src/flux/controlnet.py\n#modified to support different types of flux controlnets\n\nimport torch\nimport math\nfrom torch import Tensor, nn\nfrom einops import rearrange, repeat\n\nfrom .layers import (DoubleStreamBlock, EmbedND, LastLayer,\n                                 MLPEmbedder, SingleStreamBlock,\n                                 timestep_embedding)\n\nfrom .model import Flux\nimport comfy.ldm.common_dit\n\nclass MistolineCondDownsamplBlock(nn.Module):\n    def __init__(self, dtype=None, device=None, operations=None):\n        super().__init__()\n        self.encoder = nn.Sequential(\n            operations.Conv2d(3, 16, 3, padding=1, dtype=dtype, device=device),\n            nn.SiLU(),\n            operations.Conv2d(16, 16, 1, dtype=dtype, device=device),\n            nn.SiLU(),\n            operations.Conv2d(16, 16, 3, padding=1, dtype=dtype, device=device),\n            nn.SiLU(),\n            operations.Conv2d(16, 16, 3, padding=1, stride=2, dtype=dtype, device=device),\n            nn.SiLU(),\n            operations.Conv2d(16, 16, 3, padding=1, dtype=dtype, device=device),\n            nn.SiLU(),\n            operations.Conv2d(16, 16, 3, padding=1, stride=2, dtype=dtype, device=device),\n            nn.SiLU(),\n            operations.Conv2d(16, 16, 3, padding=1, dtype=dtype, device=device),\n            nn.SiLU(),\n            operations.Conv2d(16, 16, 3, padding=1, stride=2, dtype=dtype, device=device),\n            nn.SiLU(),\n            operations.Conv2d(16, 16, 1, dtype=dtype, device=device),\n            nn.SiLU(),\n            operations.Conv2d(16, 16, 3, padding=1, dtype=dtype, device=device)\n        )\n\n    def forward(self, x):\n        return self.encoder(x)\n\nclass MistolineControlnetBlock(nn.Module):\n    def __init__(self, hidden_size, dtype=None, device=None, operations=None):\n        super().__init__()\n        self.linear = operations.Linear(hidden_size, hidden_size, dtype=dtype, device=device)\n        self.act = nn.SiLU()\n\n    def forward(self, x):\n        return self.act(self.linear(x))\n\n\nclass ControlNetFlux(Flux):\n    def __init__(self, latent_input=False, num_union_modes=0, mistoline=False, control_latent_channels=None, image_model=None, dtype=None, device=None, operations=None, **kwargs):\n        super().__init__(final_layer=False, dtype=dtype, device=device, operations=operations, **kwargs)\n\n        self.main_model_double = 19\n        self.main_model_single = 38\n\n        self.mistoline = mistoline\n        # add ControlNet blocks\n        if self.mistoline:\n            control_block = lambda : MistolineControlnetBlock(self.hidden_size, dtype=dtype, device=device, operations=operations)\n        else:\n            control_block = lambda : operations.Linear(self.hidden_size, self.hidden_size, dtype=dtype, device=device)\n\n        self.controlnet_blocks = nn.ModuleList([])\n        for _ in range(self.params.depth):\n            self.controlnet_blocks.append(control_block())\n\n        self.controlnet_single_blocks = nn.ModuleList([])\n        for _ in range(self.params.depth_single_blocks):\n            self.controlnet_single_blocks.append(control_block())\n\n        self.num_union_modes = num_union_modes\n        self.controlnet_mode_embedder = None\n        if self.num_union_modes > 0:\n            self.controlnet_mode_embedder = operations.Embedding(self.num_union_modes, self.hidden_size, dtype=dtype, device=device)\n\n        
self.gradient_checkpointing = False\n        self.latent_input = latent_input\n        if control_latent_channels is None:\n            control_latent_channels = self.in_channels\n        else:\n            control_latent_channels *= 2 * 2 #patch size\n\n        self.pos_embed_input = operations.Linear(control_latent_channels, self.hidden_size, bias=True, dtype=dtype, device=device)\n        if not self.latent_input:\n            if self.mistoline:\n                self.input_cond_block = MistolineCondDownsamplBlock(dtype=dtype, device=device, operations=operations)\n            else:\n                self.input_hint_block = nn.Sequential(\n                    operations.Conv2d(3, 16, 3, padding=1, dtype=dtype, device=device),\n                    nn.SiLU(),\n                    operations.Conv2d(16, 16, 3, padding=1, dtype=dtype, device=device),\n                    nn.SiLU(),\n                    operations.Conv2d(16, 16, 3, padding=1, stride=2, dtype=dtype, device=device),\n                    nn.SiLU(),\n                    operations.Conv2d(16, 16, 3, padding=1, dtype=dtype, device=device),\n                    nn.SiLU(),\n                    operations.Conv2d(16, 16, 3, padding=1, stride=2, dtype=dtype, device=device),\n                    nn.SiLU(),\n                    operations.Conv2d(16, 16, 3, padding=1, dtype=dtype, device=device),\n                    nn.SiLU(),\n                    operations.Conv2d(16, 16, 3, padding=1, stride=2, dtype=dtype, device=device),\n                    nn.SiLU(),\n                    operations.Conv2d(16, 16, 3, padding=1, dtype=dtype, device=device)\n                )\n\n    def forward_orig(\n        self,\n        img: Tensor,\n        img_ids: Tensor,\n        controlnet_cond: Tensor,\n        txt: Tensor,\n        txt_ids: Tensor,\n        timesteps: Tensor,\n        y: Tensor,\n        guidance: Tensor = None,\n        control_type: Tensor = None,\n    ) -> Tensor:\n        if img.ndim != 3 or txt.ndim != 3:\n            raise ValueError(\"Input img and txt tensors must have 3 dimensions.\")\n\n        # running on sequences img\n        img = self.img_in(img)\n\n        controlnet_cond = self.pos_embed_input(controlnet_cond)\n        img = img + controlnet_cond\n        vec = self.time_in(timestep_embedding(timesteps, 256))\n        if self.params.guidance_embed:\n            vec = vec + self.guidance_in(timestep_embedding(guidance, 256))\n        vec = vec + self.vector_in(y)\n        txt = self.txt_in(txt)\n\n        if self.controlnet_mode_embedder is not None and len(control_type) > 0:\n            control_cond = self.controlnet_mode_embedder(torch.tensor(control_type, device=img.device), out_dtype=img.dtype).unsqueeze(0).repeat((txt.shape[0], 1, 1))\n            txt = torch.cat([control_cond, txt], dim=1)\n            txt_ids = torch.cat([txt_ids[:,:1], txt_ids], dim=1)\n\n        ids = torch.cat((txt_ids, img_ids), dim=1)\n        pe = self.pe_embedder(ids)\n\n        controlnet_double = ()\n\n        for i in range(len(self.double_blocks)):\n            img, txt = self.double_blocks[i](img=img, txt=txt, vec=vec, pe=pe)\n            controlnet_double = controlnet_double + (self.controlnet_blocks[i](img),)\n\n        img = torch.cat((txt, img), 1)\n\n        controlnet_single = ()\n\n        for i in range(len(self.single_blocks)):\n            img = self.single_blocks[i](img, vec=vec, pe=pe)\n            controlnet_single = controlnet_single + (self.controlnet_single_blocks[i](img[:, txt.shape[1] :, ...]),)\n\n        repeat = 
math.ceil(self.main_model_double / len(controlnet_double))\n        if self.latent_input:\n            out_input = ()\n            for x in controlnet_double:\n                    out_input += (x,) * repeat\n        else:\n            out_input = (controlnet_double * repeat)\n\n        out = {\"input\": out_input[:self.main_model_double]}\n        if len(controlnet_single) > 0:\n            repeat = math.ceil(self.main_model_single / len(controlnet_single))\n            out_output = ()\n            if self.latent_input:\n                for x in controlnet_single:\n                        out_output += (x,) * repeat\n            else:\n                out_output = (controlnet_single * repeat)\n            out[\"output\"] = out_output[:self.main_model_single]\n        return out\n\n    def forward(self, x, timesteps, context, y, guidance=None, hint=None, **kwargs):\n        patch_size = 2\n        if self.latent_input:\n            hint = comfy.ldm.common_dit.pad_to_patch_size(hint, (patch_size, patch_size))\n        elif self.mistoline:\n            hint = hint * 2.0 - 1.0\n            hint = self.input_cond_block(hint)\n        else:\n            hint = hint * 2.0 - 1.0\n            hint = self.input_hint_block(hint)\n\n        hint = rearrange(hint, \"b c (h ph) (w pw) -> b (h w) (c ph pw)\", ph=patch_size, pw=patch_size)\n\n        bs, c, h, w = x.shape\n        x = comfy.ldm.common_dit.pad_to_patch_size(x, (patch_size, patch_size))\n\n        img = rearrange(x, \"b c (h ph) (w pw) -> b (h w) (c ph pw)\", ph=patch_size, pw=patch_size)\n\n        h_len = ((h + (patch_size // 2)) // patch_size)\n        w_len = ((w + (patch_size // 2)) // patch_size)\n        img_ids = torch.zeros((h_len, w_len, 3), device=x.device, dtype=x.dtype)\n        img_ids[..., 1] = img_ids[..., 1] + torch.linspace(0, h_len - 1, steps=h_len, device=x.device, dtype=x.dtype)[:, None]\n        img_ids[..., 2] = img_ids[..., 2] + torch.linspace(0, w_len - 1, steps=w_len, device=x.device, dtype=x.dtype)[None, :]\n        img_ids = repeat(img_ids, \"h w c -> b (h w) c\", b=bs)\n\n        txt_ids = torch.zeros((bs, context.shape[1], 3), device=x.device, dtype=x.dtype)\n        return self.forward_orig(img, img_ids, hint, context, txt_ids, timesteps, y, guidance, control_type=kwargs.get(\"control_type\", []))\n"
  },
  {
    "path": "legacy/flux/layers.py",
    "content": "# Adapted from: https://github.com/black-forest-labs/flux\n\nimport math\nimport torch\nfrom torch import Tensor, nn\n\nimport torch.nn.functional as F\nfrom einops import rearrange\nfrom torch import Tensor\nfrom dataclasses import dataclass\n\nfrom .math import attention, rope, apply_rope\nimport comfy.ldm.common_dit\n\nclass EmbedND(nn.Module):\n    def __init__(self, dim: int, theta: int, axes_dim: list):\n        super().__init__()\n        self.dim = dim\n        self.theta = theta\n        self.axes_dim = axes_dim\n\n    def forward(self, ids: Tensor) -> Tensor:\n        n_axes = ids.shape[-1]\n        emb = torch.cat(\n            [rope(ids[..., i], self.axes_dim[i], self.theta) for i in range(n_axes)],\n            dim=-3,\n        )\n\n        return emb.unsqueeze(1)\n\ndef attention_weights(q, k):\n    # implementation of in-place softmax to reduce memory req\n    scores = torch.matmul(q, k.transpose(-2, -1))\n    scores.div_(math.sqrt(q.size(-1)))\n    torch.exp(scores, out=scores)\n    summed = torch.sum(scores, dim=-1, keepdim=True)\n    scores /= summed\n    return scores.nan_to_num_(0.0, 65504., -65504.)\n\ndef timestep_embedding(t: Tensor, dim, max_period=10000, time_factor: float = 1000.0):\n    \"\"\"\n    Create sinusoidal timestep embeddings.\n    :param t: a 1-D Tensor of N indices, one per batch element. \n                      These may be fractional.\n    :param dim: the dimension of the output.\n    :param max_period: controls the minimum frequency of the embeddings.\n    :return: an (N, D) Tensor of positional embeddings.\n    \"\"\"\n    t = time_factor * t\n    half = dim // 2\n    freqs = torch.exp(-math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32, device=t.device) / half)\n\n    args = t[:, None].float() * freqs[None]\n    embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)\n    if dim % 2:\n        embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1)\n    if torch.is_floating_point(t):\n        embedding = embedding.to(t)\n    return embedding\n\nclass MLPEmbedder(nn.Module):\n    def __init__(self, in_dim: int, hidden_dim: int, dtype=None, device=None, operations=None):\n        super().__init__()\n        self.in_layer  = operations.Linear(    in_dim, hidden_dim, bias=True, dtype=dtype, device=device)\n        self.silu = nn.SiLU()\n        self.out_layer = operations.Linear(hidden_dim, hidden_dim, bias=True, dtype=dtype, device=device)\n\n    def forward(self, x: Tensor) -> Tensor:\n        return self.out_layer(self.silu(self.in_layer(x)))\n\n\nclass RMSNorm(torch.nn.Module):\n    def __init__(self, dim: int, dtype=None, device=None, operations=None):\n        super().__init__()\n        self.scale = nn.Parameter(torch.empty((dim), dtype=dtype, device=device))    # self.scale.shape = 128\n\n    def forward(self, x: Tensor):\n        return comfy.ldm.common_dit.rms_norm(x, self.scale, 1e-6)\n\n\nclass QKNorm(torch.nn.Module):\n    def __init__(self, dim: int, dtype=None, device=None, operations=None):\n        super().__init__()\n        self.query_norm = RMSNorm(dim, dtype=dtype, device=device, operations=operations)\n        self.key_norm   = RMSNorm(dim, dtype=dtype, device=device, operations=operations)\n\n    def forward(self, q: Tensor, k: Tensor, v: Tensor) -> tuple:\n        q = self.query_norm(q)\n        k = self.key_norm(k)\n        return q.to(v), k.to(v)\n\n\nclass SelfAttention(nn.Module):\n    def __init__(self, dim: int, num_heads: int = 8, qkv_bias: bool = 
False, dtype=None, device=None, operations=None):\n        super().__init__()\n        self.num_heads = num_heads    # 24\n        head_dim = dim // num_heads   # 128 = 3072 / 24\n\n        self.qkv = operations.Linear(dim, dim * 3, bias=qkv_bias, dtype=dtype, device=device)\n        self.norm = QKNorm(head_dim, dtype=dtype, device=device, operations=operations)\n        self.proj = operations.Linear(dim, dim, dtype=dtype, device=device)    # dim is usually 3072\n\n\n@dataclass\nclass ModulationOut:\n    shift: Tensor\n    scale: Tensor\n    gate: Tensor\n\nclass Modulation(nn.Module):\n    def __init__(self, dim: int, double: bool, dtype=None, device=None, operations=None):\n        super().__init__()\n        self.is_double = double\n        self.multiplier = 6 if double else 3\n        self.lin = operations.Linear(dim, self.multiplier * dim, bias=True, dtype=dtype, device=device)\n\n    def forward(self, vec: Tensor) -> tuple:\n        out = self.lin(nn.functional.silu(vec))[:, None, :].chunk(self.multiplier, dim=-1)\n        return (ModulationOut(*out[:3]),    ModulationOut(*out[3:]) if self.is_double else None,)\n\n\nclass DoubleStreamBlock(nn.Module):\n    def __init__(self, hidden_size: int, num_heads: int, mlp_ratio: float, qkv_bias: bool = False, dtype=None, device=None, operations=None, idx=-1):\n        super().__init__()\n\n        self.idx = idx\n\n        mlp_hidden_dim   = int(hidden_size * mlp_ratio)\n        self.num_heads   = num_heads\n        self.hidden_size = hidden_size\n        \n        self.img_mod = Modulation(hidden_size, double=True, dtype=dtype, device=device, operations=operations) # in_features=3072, out_features=18432 (3072*6)\n        self.txt_mod = Modulation(hidden_size, double=True, dtype=dtype, device=device, operations=operations) # in_features=3072, out_features=18432 (3072*6)\n\n        self.img_attn = SelfAttention(dim=hidden_size, num_heads=num_heads, qkv_bias=qkv_bias, dtype=dtype, device=device, operations=operations) # .qkv: in_features=3072, out_features=9216   .proj: 3072,3072\n        self.txt_attn = SelfAttention(dim=hidden_size, num_heads=num_heads, qkv_bias=qkv_bias, dtype=dtype, device=device, operations=operations) # .qkv: in_features=3072, out_features=9216   .proj: 3072,3072\n\n        self.img_norm1 = operations.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6, dtype=dtype, device=device)\n        self.txt_norm1 = operations.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6, dtype=dtype, device=device)\n\n        self.img_norm2 = operations.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6, dtype=dtype, device=device)\n        self.txt_norm2 = operations.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6, dtype=dtype, device=device)\n\n        self.img_mlp = nn.Sequential(\n            operations.Linear(hidden_size, mlp_hidden_dim, bias=True, dtype=dtype, device=device),\n            nn.GELU(approximate=\"tanh\"),\n            operations.Linear(mlp_hidden_dim, hidden_size, bias=True, dtype=dtype, device=device),\n        ) # 3072->12288, 12288->3072  (3072*4)\n        self.txt_mlp = nn.Sequential(\n            operations.Linear(hidden_size, mlp_hidden_dim, bias=True, dtype=dtype, device=device),\n            nn.GELU(approximate=\"tanh\"),\n            operations.Linear(mlp_hidden_dim, hidden_size, bias=True, dtype=dtype, device=device),\n        ) # 3072->12288, 12288->3072  (3072*4)\n    \n    def img_attn_preproc(self, img, img_mod1):\n        img_modulated = self.img_norm1(img)\n        
img_modulated = (1 + img_mod1.scale) * img_modulated + img_mod1.shift\n        img_qkv = self.img_attn.qkv(img_modulated)\n        img_q, img_k, img_v = rearrange(img_qkv, \"B L (K H D) -> K B H L D\", K=3, H=self.num_heads)\n        img_q, img_k = self.img_attn.norm(img_q, img_k, img_v)\n        return img_q, img_k, img_v\n    \n    def txt_attn_preproc(self, txt, txt_mod1):\n        txt_modulated = self.txt_norm1(txt)\n        txt_modulated = (1 + txt_mod1.scale) * txt_modulated + txt_mod1.shift\n        txt_qkv = self.txt_attn.qkv(txt_modulated)\n        txt_q, txt_k, txt_v = rearrange(txt_qkv, \"B L (K H D) -> K B H L D\", K=3, H=self.num_heads)    # Batch SeqLen (9216==3*3072) -> 3*1 24 SeqLen 128\n        txt_q, txt_k = self.txt_attn.norm(txt_q, txt_k, txt_v)\n        return txt_q, txt_k, txt_v\n    \n    def forward(self, img: Tensor, txt: Tensor, vec: Tensor, pe: Tensor, timestep, transformer_options={}, mask=None, weight=1): # vec 1,3072\n\n        img_mod1, img_mod2 = self.img_mod(vec) # -> 3072, 3072\n        txt_mod1, txt_mod2 = self.txt_mod(vec)\n\n        img_q, img_k, img_v = self.img_attn_preproc(img, img_mod1)\n        txt_q, txt_k, txt_v = self.txt_attn_preproc(txt, txt_mod1)\n\n        q, k, v = torch.cat((txt_q, img_q), dim=2), torch.cat((txt_k, img_k), dim=2), torch.cat((txt_v, img_v), dim=2)\n        \n        \"\"\"if mask is None:\n            attn = attention(q, k, v, pe=pe)\n        else:\n            attn_false = attention(q, k, v, pe=pe)\n            attn = attention(q, k, v, pe=pe, mask=mask.to(torch.bool))\n            attn = attn_false + weight * (attn - attn_false)\"\"\"\n        \n        #I = torch.eye(q.shape[-2], q.shape[-2], dtype=q.dtype, device=q.device).expand((1,1) + (-1, -1))\n        #attn_map = attention_weights(q, k)\n        \"\"\"mask_resized = None\n        if mask is not None:\n            txt_a = txt[:,:,:]\n            txt_qa, txt_ka, txt_va = self.txt_attn_preproc(txt_a, txt_mod1)\n            \n            txt_q_rope, txt_k_rope = apply_rope(txt_q, txt_k, pe[:,:,:512,:,:])\n            img_q_rope, img_k_rope = apply_rope(img_q, img_k, pe[:,:,512:,:,:])\n\n            attn_weights = attention_weights(txt_q_rope, img_k_rope)\n            attn_weights = attn_weights.permute(0,1,3,2)\n            attn_weights_slice = attn_weights[:,:,:,:]\n            test = attn_weights_slice.mean(dim=1)\n            test2 = rearrange(test, \"b (h w) (c ph pw) -> b c (h ph) (w pw)\", h=64, w=64, ph=1, pw=1)\n            test3 = test2.mean(dim=1)\n            mask_resized = F.interpolate(test3[None,:,:,:], size=(1024,1024), mode='bilinear', align_corners=False).squeeze(1)\"\"\"\n            \n        attn = attention(q, k, v, pe=pe, mask=mask)\n        txt_attn = attn[:, :txt.shape[1]]                         # 1, 768,3072\n        img_attn = attn[:,  txt.shape[1]:]  \n        \n        img += img_mod1.gate * self.img_attn.proj(img_attn)\n        txt += txt_mod1.gate * self.txt_attn.proj(txt_attn)\n        \n        img += img_mod2.gate * self.img_mlp((1 + img_mod2.scale) * self.img_norm2(img) + img_mod2.shift)\n        txt += txt_mod2.gate * self.txt_mlp((1 + txt_mod2.scale) * self.txt_norm2(txt) + txt_mod2.shift)\n        \n        return img, txt #, mask_resized\n\n\n\nclass SingleStreamBlock(nn.Module):\n    \"\"\"\n    A DiT block with parallel linear layers as described in\n    https://arxiv.org/abs/2302.05442 and adapted modulation interface.\n    \"\"\"\n    def __init__(self, hidden_size: int,  num_heads: int, mlp_ratio: float = 4.0, qk_scale: float 
= None, dtype=None, device=None, operations=None, idx=-1):\n        super().__init__()\n        self.idx = idx\n        self.hidden_dim = hidden_size #3072\n        self.num_heads = num_heads    #24\n        head_dim = hidden_size // num_heads\n        self.scale = qk_scale or head_dim**-0.5   #0.08838834764831845\n\n        self.mlp_hidden_dim = int(hidden_size * mlp_ratio)    #12288 == 3072 * 4\n        # qkv and mlp_in\n        self.linear1 = operations.Linear(hidden_size, hidden_size * 3 + self.mlp_hidden_dim, dtype=dtype, device=device)\n        # proj and mlp_out\n        self.linear2 = operations.Linear(hidden_size + self.mlp_hidden_dim, hidden_size, dtype=dtype, device=device)\n\n        self.norm = QKNorm(head_dim, dtype=dtype, device=device, operations=operations)\n\n        self.hidden_size = hidden_size #3072\n        self.pre_norm = operations.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6, dtype=dtype, device=device)\n\n        self.mlp_act = nn.GELU(approximate=\"tanh\")\n        self.modulation = Modulation(hidden_size, double=False, dtype=dtype, device=device, operations=operations)\n        \n    def img_attn(self, img, mod, pe, mask, weight):\n        img_mod = (1 + mod.scale) * self.pre_norm(img) + mod.shift   # mod is derived from vec by self.modulation\n        qkv, mlp = torch.split(self.linear1(img_mod), [3 * self.hidden_size, self.mlp_hidden_dim], dim=-1)\n\n        q, k, v = rearrange(qkv, \"B L (K H D) -> K B H L D\", K=3, H=self.num_heads)\n        q, k = self.norm(q, k, v)\n\n        \"\"\"if mask is None:\n            attn = attention(q, k, v, pe=pe)\n        else:\n            attn_false = attention(q, k, v, pe=pe)\n            attn = attention(q, k, v, pe=pe, mask=mask.to(torch.bool))\n            attn = attn_false + weight * (attn - attn_false)\"\"\"\n\n        attn = attention(q, k, v, pe=pe, mask=mask)\n        return attn, mlp\n\n    # vec 1,3072    x 1,9984,3072\n    def forward(self, img: Tensor, vec: Tensor, pe: Tensor, timestep, transformer_options={}, mask=None, weight=1) -> Tensor:   # x 1,9984,3072 if 2 reg embeds, 1,9472,3072 if none    # 9216x4096 = 16x1536x1536\n        mod, _ = self.modulation(vec)\n        attn, mlp = self.img_attn(img, mod, pe, mask, weight)\n        # fuse the attention output and the activated MLP branch back to hidden_size in a single projection\n        output = self.linear2(torch.cat((attn, self.mlp_act(mlp)), 2))\n\n        img += mod.gate * output\n        return img\n\n\n\nclass LastLayer(nn.Module):\n    def __init__(self, hidden_size: int, patch_size: int, out_channels: int, dtype=None, device=None, operations=None):\n        super().__init__()\n        self.norm_final = operations.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6, dtype=dtype, device=device)\n        self.linear = operations.Linear(hidden_size, patch_size * patch_size * out_channels, bias=True, dtype=dtype, device=device)\n        self.adaLN_modulation = nn.Sequential(nn.SiLU(), operations.Linear(hidden_size, 2 * hidden_size, bias=True, dtype=dtype, device=device))\n\n    def forward(self, x: Tensor, vec: Tensor) -> Tensor:\n        shift, scale = self.adaLN_modulation(vec).chunk(2, dim=1)\n        x = (1 + scale[:, None, :]) * self.norm_final(x) + shift[:, None, :]\n        x = self.linear(x)\n        return x\n
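\n\nif __name__ == \"__main__\":\n    # Minimal, self-contained sketch (not part of the original model code; plain\n    # torch with illustrative sizes). Every block above shares the same adaLN-style\n    # pattern: modulate a parameter-free LayerNorm with a shift/scale derived from\n    # vec, then apply a gated residual update.\n    hidden = 3072\n    x     = torch.randn(1, 8, hidden)\n    shift = torch.randn(1, 1, hidden)\n    scale = torch.randn(1, 1, hidden)\n    gate  = torch.randn(1, 1, hidden)\n    norm  = nn.LayerNorm(hidden, elementwise_affine=False, eps=1e-6)\n    modulated = (1 + scale) * norm(x) + shift   # pre-attention/MLP modulation\n    out = x + gate * modulated                  # gated residual, as in the blocks above\n    print(out.shape)                            # torch.Size([1, 8, 3072])\n"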
  },
  {
    "path": "legacy/flux/math.py",
    "content": "import torch\nfrom einops import rearrange\nfrom torch import Tensor\nfrom comfy.ldm.modules.attention import optimized_attention\nimport comfy.model_management\n\ndef attention(q: Tensor, k: Tensor, v: Tensor, pe: Tensor, mask=None) -> Tensor:\n    q, k = apply_rope(q, k, pe)\n\n    heads = q.shape[1]\n    x = optimized_attention(q, k, v, heads, skip_reshape=True, mask=mask)\n    return x\n\n\ndef rope(pos: Tensor, dim: int, theta: int) -> Tensor:\n    assert dim % 2 == 0\n    if comfy.model_management.is_device_mps(pos.device) or comfy.model_management.is_intel_xpu():\n        device = torch.device(\"cpu\")\n    else:\n        device = pos.device\n\n    scale = torch.linspace(0, (dim - 2) / dim, steps=dim//2, dtype=torch.float64, device=device)\n    omega = 1.0 / (theta**scale)\n    out = torch.einsum(\"...n,d->...nd\", pos.to(dtype=torch.float32, device=device), omega)\n    out = torch.stack([torch.cos(out), -torch.sin(out), torch.sin(out), torch.cos(out)], dim=-1)\n    out = rearrange(out, \"b n d (i j) -> b n d i j\", i=2, j=2)\n    return out.to(dtype=torch.float32, device=pos.device)\n\n\ndef apply_rope(xq: Tensor, xk: Tensor, freqs_cis: Tensor):\n    xq_ = xq.float().reshape(*xq.shape[:-1], -1, 1, 2)\n    xk_ = xk.float().reshape(*xk.shape[:-1], -1, 1, 2)\n    xq_out = freqs_cis[..., 0] * xq_[..., 0] + freqs_cis[..., 1] * xq_[..., 1]\n    xk_out = freqs_cis[..., 0] * xk_[..., 0] + freqs_cis[..., 1] * xk_[..., 1]\n    return xq_out.reshape(*xq.shape).type_as(xq), xk_out.reshape(*xk.shape).type_as(xk)\n\n\n"
  },
  {
    "path": "legacy/flux/model.py",
    "content": "# Adapted from: https://github.com/black-forest-labs/flux\n\nimport torch\nfrom torch import Tensor, nn\nfrom dataclasses import dataclass\nimport copy\n\nfrom .layers import (\n    DoubleStreamBlock,\n    EmbedND,\n    LastLayer,\n    MLPEmbedder,\n    SingleStreamBlock,\n    timestep_embedding,\n)\n\nfrom comfy.ldm.flux.layers import timestep_embedding\nfrom comfy.ldm.flux.model import Flux as Flux\n\nfrom einops import rearrange, repeat\nimport comfy.ldm.common_dit\n\n@dataclass\nclass FluxParams:\n    in_channels: int\n    out_channels: int\n    vec_in_dim: int\n    context_in_dim: int\n    hidden_size: int\n    mlp_ratio: float\n    num_heads: int\n    depth: int\n    depth_single_blocks: int\n    axes_dim: list\n    theta: int\n    patch_size: int\n    qkv_bias: bool\n    guidance_embed: bool\n\nclass ReFlux(Flux):\n    def __init__(self, image_model=None, final_layer=True, dtype=None, device=None, operations=None, **kwargs):\n        super().__init__()\n        self.dtype = dtype\n        self.timestep = -1.0\n        self.threshold_inv = False\n        params = FluxParams(**kwargs)\n        \n        self.params = params #self.params FluxParams(in_channels=16, out_channels=16, vec_in_dim=768, context_in_dim=4096, hidden_size=3072, mlp_ratio=4.0, num_heads=24, depth=19, depth_single_blocks=38, axes_dim=[16, 56, 56], theta=10000, patch_size=2, qkv_bias=True, guidance_embed=False)\n        self.patch_size = params.patch_size\n        self.in_channels  = params.in_channels  * params.patch_size * params.patch_size    # in_channels 64\n        self.out_channels = params.out_channels * params.patch_size * params.patch_size    # out_channels 64\n        \n        if params.hidden_size % params.num_heads != 0:\n            raise ValueError(f\"Hidden size {params.hidden_size} must be divisible by num_heads {params.num_heads}\")\n        pe_dim = params.hidden_size // params.num_heads\n        if sum(params.axes_dim) != pe_dim:\n            raise ValueError(f\"Got {params.axes_dim} but expected positional dim {pe_dim}\")\n        \n        self.hidden_size = params.hidden_size  # 3072\n        self.num_heads   = params.num_heads    # 24\n        self.pe_embedder = EmbedND(dim=pe_dim, theta=params.theta, axes_dim=params.axes_dim)\n        \n        self.img_in = operations.Linear(     self.in_channels, self.hidden_size, bias=True, dtype=dtype, device=device)   # in_features=  64, out_features=3072\n        self.txt_in = operations.Linear(params.context_in_dim, self.hidden_size,            dtype=dtype, device=device)   # in_features=4096, out_features=3072, bias=True\n\n        self.time_in      = MLPEmbedder(           in_dim=256, hidden_dim=self.hidden_size, dtype=dtype, device=device, operations=operations)\n        self.vector_in    = MLPEmbedder(params.vec_in_dim,                self.hidden_size, dtype=dtype, device=device, operations=operations) # in_features=768, out_features=3072 (first layer) second layer 3072,3072\n        self.guidance_in = (MLPEmbedder(           in_dim=256, hidden_dim=self.hidden_size, dtype=dtype, device=device, operations=operations) if params.guidance_embed else nn.Identity())\n\n        self.double_blocks = nn.ModuleList([DoubleStreamBlock(self.hidden_size, self.num_heads, mlp_ratio=params.mlp_ratio, qkv_bias=params.qkv_bias, dtype=dtype, device=device, operations=operations, idx=_) for _ in range(params.depth)])\n        self.single_blocks = nn.ModuleList([SingleStreamBlock(self.hidden_size, self.num_heads, mlp_ratio=params.mlp_ratio,           
                dtype=dtype, device=device, operations=operations, idx=_) for _ in range(params.depth_single_blocks)])\n\n        if final_layer:\n            self.final_layer = LastLayer(self.hidden_size, 1, self.out_channels, dtype=dtype, device=device, operations=operations)\n\n\n    \n    \n    def forward_blocks(self, img: Tensor, img_ids: Tensor, txt: Tensor, txt_ids: Tensor, timesteps: Tensor, y: Tensor, guidance: Tensor = None, control=None, transformer_options = {},) -> Tensor:\n        if img.ndim != 3 or txt.ndim != 3:\n            raise ValueError(\"Input img and txt tensors must have 3 dimensions.\")\n\n        # running on sequences img\n        img = self.img_in(img)    # 1,9216,64  == 768x192       # 1,9216,64   ==   1,16,128,256 + 1,16,64,64    # 1,8192,64 with uncond/cond   #:,:,64 -> :,:,3072\n        vec = self.time_in(timestep_embedding(timesteps, 256).to(img.dtype)) # 1 -> 1,3072\n        if self.params.guidance_embed:\n            if guidance is None:\n                print(\"Guidance strength is none, not using distilled guidance.\")\n            else:\n                vec = vec + self.guidance_in(timestep_embedding(guidance, 256).to(img.dtype))\n\n        vec = vec + self.vector_in(y)  #y.shape=1,768  y==all 0s\n        txt = self.txt_in(txt)         #\n\n        ids = torch.cat((txt_ids, img_ids), dim=1) # img_ids.shape=1,8192,3    txt_ids.shape=1,512,3    #ids.shape=1,8704,3\n        pe = self.pe_embedder(ids)                 # pe.shape 1,1,8704,64,2,2\n        \n        weight = transformer_options['reg_cond_weight'] if 'reg_cond_weight' in transformer_options else 0.0\n        floor  = transformer_options['reg_cond_floor']  if 'reg_cond_floor'  in transformer_options else 0.0\n        mask_orig, mask_self = None, None\n        mask_obj = transformer_options.get('patches', {}).get('regional_conditioning_mask', None)\n        if mask_obj is not None and weight >= 0:\n            mask_orig = mask_obj[0](transformer_options, weight.item())\n            mask_self = mask_orig.clone()\n            mask_self[mask_obj[0].text_len:,   mask_obj[0].text_len:] = mask_self.max()\n\n        mask_resized_list = []\n        mask = None\n        mask_obj = transformer_options.get('patches', {}).get('regional_conditioning_mask', None)\n        if mask_obj is not None and weight >= 0:\n            mask = mask_obj[0](transformer_options, weight.item())\n            text_len = mask_obj[0].text_len\n            mask[text_len:,text_len:] = torch.clamp(mask[text_len:,text_len:], min=floor.to(mask.device))\n\n\n\n        for i, block in enumerate(self.double_blocks):\n            #img, txt, mask_resized = block(img=img, txt=txt, vec=vec, pe=pe, timestep=timesteps, transformer_options=transformer_options, mask=mask, weight=weight) #, mask=mask)\n            img, txt = block(img=img, txt=txt, vec=vec, pe=pe, timestep=timesteps, transformer_options=transformer_options, mask=mask, weight=weight) #, mask=mask)\n            #if mask is not None:\n            #    mask_resized_list.append(mask_resized)\n\n            if control is not None: # Controlnet\n                control_i = control.get(\"input\")\n                if i < len(control_i):\n                    add = control_i[i]\n                    if add is not None:\n                        img[:1] += add\n\n\n\n        img = torch.cat((txt, img), 1)   #first 256 is txt embed\n        for i, block in enumerate(self.single_blocks):\n            img = block(img, vec=vec, pe=pe, timestep=timesteps, transformer_options=transformer_options, 
mask=mask, weight=weight)\n\n            if control is not None: # Controlnet\n                control_o = control.get(\"output\")\n                if i < len(control_o):\n                    add = control_o[i]\n                    if add is not None:\n                        img[:1, txt.shape[1] :, ...] += add\n                        \n                        \n                        \n        img = img[:, txt.shape[1] :, ...]\n        img = self.final_layer(img, vec)  # (N, T, patch_size ** 2 * out_channels)     1,8192,3072 -> 1,8192,64 \n        return img\n    \n    \n    \n    def _get_img_ids(self, x, bs, h_len, w_len, h_start, h_end, w_start, w_end):\n        img_ids = torch.zeros((h_len, w_len, 3), device=x.device, dtype=x.dtype)\n        img_ids[..., 1] = img_ids[..., 1] + torch.linspace(h_start, h_end - 1, steps=h_len, device=x.device, dtype=x.dtype)[:, None]\n        img_ids[..., 2] = img_ids[..., 2] + torch.linspace(w_start, w_end - 1, steps=w_len, device=x.device, dtype=x.dtype)[None, :]\n        img_ids = repeat(img_ids, \"h w c -> b (h w) c\", b=bs)\n        return img_ids\n\n\n\n    def forward(self, x, timestep, context, y, guidance, control=None, transformer_options={}, **kwargs):\n\n        out_list = []\n        for i in range(len(transformer_options['cond_or_uncond'])):\n            UNCOND = transformer_options['cond_or_uncond'][i] == 1\n\n            bs, c, h, w = x.shape\n            transformer_options['original_shape'] = x.shape\n            patch_size = 2\n            x = comfy.ldm.common_dit.pad_to_patch_size(x, (patch_size, patch_size))    # 1,16,192,192\n            transformer_options['patch_size'] = patch_size\n            \n            #if 'regional_conditioning_weight' not in transformer_options:    # this breaks the graph\n            #    transformer_options['regional_conditioning_weight'] = timestep[0] / 1.5\n                \n            h_len = ((h + (patch_size // 2)) // patch_size) # h_len 96\n            w_len = ((w + (patch_size // 2)) // patch_size) # w_len 96\n\n            img = rearrange(x, \"b c (h ph) (w pw) -> b (h w) (c ph pw)\", ph=patch_size, pw=patch_size) # img 1,9216,64\n\n            if UNCOND:\n                transformer_options['reg_cond_weight'] = -1\n                context_tmp = context[i][None,...].clone()\n            elif UNCOND == False:\n                transformer_options['reg_cond_weight'] = transformer_options['regional_conditioning_weight']\n                transformer_options['reg_cond_floor'] = transformer_options['regional_conditioning_floor'] #if \"regional_conditioning_floor\" in transformer_options else 0.0\n                regional_conditioning_positive = transformer_options.get('patches', {}).get('regional_conditioning_positive', None)\n                context_tmp = regional_conditioning_positive[0].concat_cond(context[i][None,...], transformer_options)\n                \n            txt_ids      = torch.zeros((bs, context_tmp.shape[1], 3), device=x.device, dtype=x.dtype)      # txt_ids        1, 256,3\n            img_ids_orig = self._get_img_ids(x, bs, h_len, w_len, 0, h_len, 0, w_len)                  # img_ids_orig = 1,9216,3\n\n            out_tmp = self.forward_blocks(img       [i][None,...].clone(), \n                                        img_ids_orig[i][None,...].clone(), \n                                        context_tmp,\n                                        txt_ids     [i][None,...].clone(), \n                                        timestep    [i][None,...].clone(), \n                       
                 y           [i][None,...].clone(),\n                                        guidance    [i][None,...].clone(),\n                                        control, transformer_options=transformer_options)  # context 1,256,4096   y 1,768\n            out_list.append(out_tmp)\n            \n        out = torch.stack(out_list, dim=0).squeeze(dim=1)\n        \n        # unpatchify back to (b, c, h, w) and crop off any padding added by pad_to_patch_size\n        return rearrange(out, \"b (h w) (c ph pw) -> b c (h ph) (w pw)\", h=h_len, w=w_len, ph=2, pw=2)[:,:,:h,:w]\n
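\n\nif __name__ == \"__main__\":\n    # Hedged round-trip sketch (plain torch/einops, illustrative sizes; this\n    # module is normally imported as part of the package, not run directly):\n    # the patchify in forward() and the unpatchify in its return above are\n    # exact inverses for patch-aligned inputs.\n    x = torch.randn(1, 16, 96, 96)                                            # (b, c, h, w)\n    img = rearrange(x, \"b c (h ph) (w pw) -> b (h w) (c ph pw)\", ph=2, pw=2)  # (1, 2304, 64)\n    x2 = rearrange(img, \"b (h w) (c ph pw) -> b c (h ph) (w pw)\", h=48, w=48, ph=2, pw=2)\n    assert torch.equal(x, x2)\n"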
  },
  {
    "path": "legacy/flux/redux.py",
    "content": "import torch\nimport comfy.ops\n\nops = comfy.ops.manual_cast\n\nclass ReduxImageEncoder(torch.nn.Module):\n    def __init__(\n        self,\n        redux_dim: int = 1152,\n        txt_in_features: int = 4096,\n        device=None,\n        dtype=None,\n    ) -> None:\n        super().__init__()\n\n        self.redux_dim = redux_dim\n        self.device = device\n        self.dtype = dtype\n\n        self.redux_up = ops.Linear(redux_dim, txt_in_features * 3, dtype=dtype)\n        self.redux_down = ops.Linear(txt_in_features * 3, txt_in_features, dtype=dtype)\n\n    def forward(self, sigclip_embeds) -> torch.Tensor:\n        projected_x = self.redux_down(torch.nn.functional.silu(self.redux_up(sigclip_embeds)))\n        return projected_x\n"
  },
  {
    "path": "legacy/helper.py",
    "content": "import re\nimport torch\nfrom comfy.samplers import SCHEDULER_NAMES\nimport torch.nn.functional as F\nfrom ..res4lyf import RESplain\n\n\ndef get_extra_options_kv(key, default, extra_options):\n\n    match = re.search(rf\"{key}\\s*=\\s*([a-zA-Z0-9_.+-]+)\", extra_options)\n    if match:\n        value = match.group(1)\n    else:\n        value = default\n    return value\n\ndef get_extra_options_list(key, default, extra_options):\n\n    match = re.search(rf\"{key}\\s*=\\s*([a-zA-Z0-9_.,+-]+)\", extra_options)\n    if match:\n        value = match.group(1)\n    else:\n        value = default\n    return value\n\ndef extra_options_flag(flag, extra_options):\n    return bool(re.search(rf\"{flag}\", extra_options))\n\ndef safe_get_nested(d, keys, default=None):\n    for key in keys:\n        if isinstance(d, dict):\n            d = d.get(key, default)\n        else:\n            return default\n    return d\n\ndef is_video_model(model):\n    is_video_model = False\n    try :\n        is_video_model = 'video'  in model.inner_model.inner_model.model_config.unet_config['image_model'] or \\\n                         'cosmos' in model.inner_model.inner_model.model_config.unet_config['image_model']\n    except:\n        pass\n    return is_video_model\n\ndef is_RF_model(model):\n    from comfy import model_sampling\n    modelsampling = model.inner_model.inner_model.model_sampling\n    return isinstance(modelsampling, model_sampling.CONST)\n\n\n\ndef lagrange_interpolation(x_values, y_values, x_new):\n\n    if not isinstance(x_values, torch.Tensor):\n        x_values = torch.tensor(x_values, dtype=torch.get_default_dtype())\n    if x_values.ndim != 1:\n        raise ValueError(\"x_values must be a 1D tensor or a list of scalars.\")\n\n    if not isinstance(x_new, torch.Tensor):\n        x_new = torch.tensor(x_new, dtype=x_values.dtype, device=x_values.device)\n    if x_new.ndim == 0:\n        x_new = x_new.unsqueeze(0)\n\n    if isinstance(y_values, list):\n        y_values = torch.stack(y_values, dim=0)\n    if y_values.ndim < 1:\n        raise ValueError(\"y_values must have at least one dimension (the sample dimension).\")\n\n    n = x_values.shape[0]\n    if y_values.shape[0] != n:\n        raise ValueError(f\"Mismatch: x_values has length {n} but y_values has {y_values.shape[0]} samples.\")\n\n    m = x_new.shape[0]\n    result_shape = (m,) + y_values.shape[1:]\n    result = torch.zeros(result_shape, dtype=y_values.dtype, device=y_values.device)\n\n    for i in range(n):\n        Li = torch.ones_like(x_new, dtype=y_values.dtype, device=y_values.device)\n        xi = x_values[i]\n        for j in range(n):\n            if i == j:\n                continue\n            xj = x_values[j]\n            Li = Li * ((x_new - xj) / (xi - xj))\n        extra_dims = (1,) * (y_values.ndim - 1)\n        Li = Li.view(m, *extra_dims)\n        result = result + Li * y_values[i]\n\n    return result\n\n\ndef get_cosine_similarity_manual(a, b):\n    return (a * b).sum() / (torch.norm(a) * torch.norm(b))\n\n\n\ndef get_cosine_similarity(a, b):\n    if a.dim() == 5 and b.dim() == 5 and b.shape[2] == 1:\n        b = b.expand(-1, -1, a.shape[2], -1, -1)\n    return F.cosine_similarity(a.flatten(), b.flatten(), dim=0)\n\n\ndef get_pearson_similarity(a, b):\n    a = a.mean(dim=(-2,-1))\n    b = b.mean(dim=(-2,-1))\n    if a.dim() == 5 and b.dim() == 5 and b.shape[2] == 1:\n        b = b.expand(-1, -1, a.shape[2], -1, -1)\n    return F.cosine_similarity(a.flatten(), b.flatten(), dim=0)\n\n\n\ndef 
initialize_or_scale(tensor, value, steps):\n    if tensor is None:\n        return torch.full((steps,), value)\n    else:\n        return value * tensor\n\n\ndef has_nested_attr(obj, attr_path):\n    attrs = attr_path.split('.')\n    for attr in attrs:\n        if not hasattr(obj, attr):\n            return False\n        obj = getattr(obj, attr)\n    return True\n\ndef get_res4lyf_scheduler_list():\n    scheduler_names = SCHEDULER_NAMES.copy()\n    if \"beta57\" not in scheduler_names:\n        scheduler_names.append(\"beta57\")\n    return scheduler_names\n\ndef conditioning_set_values(conditioning, values={}):\n    c = []\n    for t in conditioning:\n        n = [t[0], t[1].copy()]\n        for k in values:\n            n[1][k] = values[k]\n        c.append(n)\n\n    return c\n\n\ndef get_collinear_alt(x, y):\n\n    y_flat = y.view(y.size(0), -1).clone()\n    x_flat = x.view(x.size(0), -1).clone()\n\n    y_flat /= y_flat.norm(dim=-1, keepdim=True)\n    x_proj_y = torch.sum(x_flat * y_flat, dim=-1, keepdim=True) * y_flat\n\n    return x_proj_y.view_as(x)\n\n\ndef get_collinear(x, y):\n\n    y_flat = y.view(y.size(0), -1).clone()\n    x_flat = x.view(x.size(0), -1).clone()\n\n    y_flat /= y_flat.norm(dim=-1, keepdim=True)\n    x_proj_y = torch.sum(x_flat * y_flat, dim=-1, keepdim=True) * y_flat\n\n    return x_proj_y.view_as(x)\n\n\ndef get_orthogonal(x, y):\n\n    y_flat = y.view(y.size(0), -1).clone()\n    x_flat = x.view(x.size(0), -1).clone()\n\n    y_flat /= y_flat.norm(dim=-1, keepdim=True)\n    x_proj_y = torch.sum(x_flat * y_flat, dim=-1, keepdim=True) * y_flat\n    \n    x_ortho_y = x_flat - x_proj_y \n\n    return x_ortho_y.view_as(x)\n\n\n# pytorch slerp implementation from https://gist.github.com/Birch-san/230ac46f99ec411ed5907b0a3d728efa\nfrom torch import FloatTensor, LongTensor, Tensor, Size, lerp, zeros_like\nfrom torch.linalg import norm\n\n# adapted to PyTorch from:\n# https://gist.github.com/dvschultz/3af50c40df002da3b751efab1daddf2c\n# most of the extra complexity is to support:\n# - many-dimensional vectors\n# - v0 or v1 with last dim all zeroes, or v0 ~colinear with v1\n#   - falls back to lerp()\n#   - conditional logic implemented with parallelism rather than Python loops\n# - many-dimensional tensor for t\n#   - you can ask for batches of slerp outputs by making t more-dimensional than the vectors\n#   -   slerp(\n#         v0:   torch.Size([2,3]),\n#         v1:   torch.Size([2,3]),\n#         t:  torch.Size([4,1,1]), \n#       )\n#   - this makes it interface-compatible with lerp()\ndef slerp(v0: FloatTensor, v1: FloatTensor, t: float|FloatTensor, DOT_THRESHOLD=0.9995):\n  '''\n  Spherical linear interpolation\n  Args:\n    v0: Starting vector\n    v1: Final vector\n    t: Float value between 0.0 and 1.0\n    DOT_THRESHOLD: Threshold for considering the two vectors as\n                            colinear. 
Not recommended to alter this.\n  Returns:\n      Interpolation vector between v0 and v1\n  '''\n  assert v0.shape == v1.shape, \"shapes of v0 and v1 must match\"\n\n  # Normalize the vectors to get the directions and angles\n  v0_norm: FloatTensor = norm(v0, dim=-1)\n  v1_norm: FloatTensor = norm(v1, dim=-1)\n\n  v0_normed: FloatTensor = v0 / v0_norm.unsqueeze(-1)\n  v1_normed: FloatTensor = v1 / v1_norm.unsqueeze(-1)\n\n  # Dot product with the normalized vectors\n  dot: FloatTensor = (v0_normed * v1_normed).sum(-1)\n  dot_mag: FloatTensor = dot.abs()\n\n  # if dp is NaN, it's because the v0 or v1 row was filled with 0s\n  # If absolute value of dot product is almost 1, vectors are ~colinear, so use lerp\n  gotta_lerp: LongTensor = dot_mag.isnan() | (dot_mag > DOT_THRESHOLD)\n  can_slerp: LongTensor = ~gotta_lerp\n\n  t_batch_dim_count: int = max(0, t.dim()-v0.dim()) if isinstance(t, Tensor) else 0\n  t_batch_dims: Size = t.shape[:t_batch_dim_count] if isinstance(t, Tensor) else Size([])\n  out: FloatTensor = zeros_like(v0.expand(*t_batch_dims, *[-1]*v0.dim()))\n\n  # if no elements are lerpable, our vectors become 0-dimensional, preventing broadcasting\n  if gotta_lerp.any():\n    lerped: FloatTensor = lerp(v0, v1, t)\n\n    out: FloatTensor = lerped.where(gotta_lerp.unsqueeze(-1), out)\n\n  # if no elements are slerpable, our vectors become 0-dimensional, preventing broadcasting\n  if can_slerp.any():\n\n    # Calculate initial angle between v0 and v1\n    theta_0: FloatTensor = dot.arccos().unsqueeze(-1)\n    sin_theta_0: FloatTensor = theta_0.sin()\n    # Angle at timestep t\n    theta_t: FloatTensor = theta_0 * t\n    sin_theta_t: FloatTensor = theta_t.sin()\n    # Finish the slerp algorithm\n    s0: FloatTensor = (theta_0 - theta_t).sin() / sin_theta_0\n    s1: FloatTensor = sin_theta_t / sin_theta_0\n    slerped: FloatTensor = s0 * v0 + s1 * v1\n\n    out: FloatTensor = slerped.where(can_slerp.unsqueeze(-1), out)\n  \n  return out\n\n\n\n\nclass OptionsManager:\n    APPEND_OPTIONS = {\"extra_options\"}\n\n    def __init__(self, options_inputs=None):\n        self.options_list = options_inputs or []\n        self._merged_dict = None\n\n    def add_option(self, option):\n        \"\"\"Add a single options dictionary\"\"\"\n        if option is not None:\n            self.options_list.append(option)\n            self._merged_dict = None # invalidate cached merged options\n\n    @property\n    def merged(self):\n        \"\"\"Get merged options with proper priority handling\"\"\"\n        if self._merged_dict is None:\n            self._merged_dict = {}\n\n            special_string_options = {\n                key: [] for key in self.APPEND_OPTIONS\n            }\n\n            for options_dict in self.options_list:\n                if options_dict is not None:\n                    for key, value in options_dict.items():\n                        if key in self.APPEND_OPTIONS and value:\n                            special_string_options[key].append(value)\n                        elif isinstance(value, dict):\n                            # Deep merge dictionaries\n                            if key not in self._merged_dict:\n                                self._merged_dict[key] = {}\n\n                            if isinstance(self._merged_dict[key], dict):\n                                self._deep_update(self._merged_dict[key], value)\n                            else:\n                                self._merged_dict[key] = value.copy()\n                        else:\n          
                  self._merged_dict[key] = value\n\n            # append special case string options (e.g. extra_options)\n            for key, value in special_string_options.items():\n                if value:\n                    self._merged_dict[key] = \"\\n\".join(value)\n\n        return self._merged_dict\n\n    def get(self, key, default=None):\n        return self.merged.get(key, default)\n\n    def _deep_update(self, target_dict, source_dict):\n        \"\"\"Deep-merge source_dict into target_dict: nested dicts merge, scalars overwrite\"\"\"\n        for key, value in source_dict.items():\n            if isinstance(value, dict) and key in target_dict and isinstance(target_dict[key], dict):\n                # recursive dict update\n                self._deep_update(target_dict[key], value)\n            else:\n                target_dict[key] = value\n\n    def __getitem__(self, key):\n        \"\"\"Allow dictionary-like access to options\"\"\"\n        return self.merged[key]\n\n    def __contains__(self, key):\n        \"\"\"Allow 'in' operator for options\"\"\"\n        return key in self.merged\n\n    def as_dict(self):\n        \"\"\"Return the merged options as a dictionary\"\"\"\n        return self.merged.copy()\n\n    def __bool__(self):\n        \"\"\"Return True if there are any options\"\"\"\n        return len(self.options_list) > 0 and any(opt is not None for opt in self.options_list)\n\n    def debug_print_options(self):\n        for i, options_dict in enumerate(self.options_list):\n            RESplain(f\"Options {i}:\", debug=True)\n            if options_dict is not None:\n                for key, value in options_dict.items():\n                    RESplain(f\"  {key}: {value}\", debug=True)\n            else:\n                RESplain(\"  None\", \"\\n\", debug=True)\n
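\n\nif __name__ == \"__main__\":\n    # Hedged usage sketch (illustrative keys only; in practice this module is\n    # imported as part of the package): scalar keys take the last value seen,\n    # nested dicts deep-merge, and \"extra_options\" strings are appended.\n    om = OptionsManager([\n        {\"noise_type\": \"gaussian\", \"flags\": {\"a\": 1}, \"extra_options\": \"eta=0.5\"},\n        {\"noise_type\": \"brownian\", \"flags\": {\"b\": 2}, \"extra_options\": \"s_noise=1.1\"},\n    ])\n    print(om[\"noise_type\"])         # brownian\n    print(om.get(\"flags\"))          # {'a': 1, 'b': 2}\n    print(om.get(\"extra_options\"))  # eta=0.5\\ns_noise=1.1\n"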
  },
  {
    "path": "legacy/latents.py",
    "content": "\r\nimport torch\r\nimport torch.nn.functional as F\r\nimport math\r\nimport itertools\r\n\r\nimport comfy.samplers\r\nimport comfy.sample\r\nimport comfy.sampler_helpers\r\nimport comfy.utils\r\n\r\nfrom .noise_classes import NOISE_GENERATOR_NAMES, NOISE_GENERATOR_CLASSES, precision_tool, prepare_noise\r\n\r\n\r\n\r\ndef initialize_or_scale(tensor, value, steps):\r\n    if tensor is None:\r\n        return torch.full((steps,), value)\r\n    else:\r\n        return value * tensor\r\n    \r\ndef latent_normalize_channels(x):\r\n    mean = x.mean(dim=(2, 3), keepdim=True)\r\n    std  = x.std (dim=(2, 3), keepdim=True)\r\n    return  (x - mean) / std\r\n\r\ndef latent_stdize_channels(x):\r\n    std  = x.std (dim=(2, 3), keepdim=True)\r\n    return  x / std\r\n\r\ndef latent_meancenter_channels(x):\r\n    mean = x.mean(dim=(2, 3), keepdim=True)\r\n    return  x - mean\r\n\r\n\r\ndef initialize_or_scale(tensor, value, steps):\r\n    if tensor is None:\r\n        return torch.full((steps,), value)\r\n    else:\r\n        return value * tensor\r\n\r\n\r\ndef normalize_latent(target, source=None, mean=True, std=True, set_mean=None, set_std=None, channelwise=True):\r\n    target = target.clone()\r\n    source = source.clone() if source is not None else None\r\n    def normalize_single_latent(single_target, single_source=None):\r\n        y = torch.zeros_like(single_target)\r\n        for b in range(y.shape[0]):\r\n            if channelwise:\r\n                for c in range(y.shape[1]):\r\n                    single_source_mean = single_source[b][c].mean() if set_mean is None else set_mean\r\n                    single_source_std  = single_source[b][c].std()  if set_std  is None else set_std\r\n                    \r\n                    if mean and std:\r\n                        y[b][c] = (single_target[b][c] - single_target[b][c].mean()) / single_target[b][c].std()\r\n                        if single_source is not None:\r\n                            y[b][c] = y[b][c] * single_source_std + single_source_mean\r\n                    elif mean:\r\n                        y[b][c] = single_target[b][c] - single_target[b][c].mean()\r\n                        if single_source is not None:\r\n                            y[b][c] = y[b][c] + single_source_mean\r\n                    elif std:\r\n                        y[b][c] = single_target[b][c] / single_target[b][c].std()\r\n                        if single_source is not None:\r\n                            y[b][c] = y[b][c] * single_source_std\r\n            else:\r\n                single_source_mean = single_source[b].mean() if set_mean is None else set_mean\r\n                single_source_std  = single_source[b].std()  if set_std  is None else set_std\r\n                \r\n                if mean and std:\r\n                    y[b] = (single_target[b] - single_target[b].mean()) / single_target[b].std()\r\n                    if single_source is not None:\r\n                        y[b] = y[b] * single_source_std + single_source_mean\r\n                elif mean:\r\n                    y[b] = single_target[b] - single_target[b].mean()\r\n                    if single_source is not None:\r\n                        y[b] = y[b] + single_source_mean\r\n                elif std:\r\n                    y[b] = single_target[b] / single_target[b].std()\r\n                    if single_source is not None:\r\n                        y[b] = y[b] * single_source_std\r\n        return y\r\n\r\n    if isinstance(target, (list, tuple)):\r\n  
      if source is not None:\r\n            assert isinstance(source, (list, tuple)) and len(source) == len(target), \\\r\n                \"If target is a list/tuple, source must be a list/tuple of the same length.\"\r\n            return [normalize_single_latent(t, s) for t, s in zip(target, source)]\r\n        else:\r\n            return [normalize_single_latent(t) for t in target]\r\n    else:\r\n        return normalize_single_latent(target, source)\r\n\r\n\r\n\r\n\r\nclass AdvancedNoise:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\":{\r\n                \"alpha\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":0.1, \"round\": 0.01}),\r\n                \"k\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":2.0, \"round\": 0.01}),\r\n                \"noise_seed\": (\"INT\", {\"default\": 0, \"min\": 0, \"max\": 0xffffffffffffffff}),\r\n                \"noise_type\": (NOISE_GENERATOR_NAMES, ),\r\n            },\r\n        }\r\n\r\n    RETURN_TYPES = (\"NOISE\",)\r\n    FUNCTION = \"get_noise\"\r\n    CATEGORY = \"RES4LYF/noise\"\r\n\r\n    def get_noise(self, noise_seed, noise_type, alpha, k):\r\n        return (Noise_RandomNoise(noise_seed, noise_type, alpha, k),)\r\n\r\nclass Noise_RandomNoise:\r\n    def __init__(self, seed, noise_type, alpha, k):\r\n        self.seed = seed\r\n        self.noise_type = noise_type\r\n        self.alpha = alpha\r\n        self.k = k\r\n\r\n    def generate_noise(self, input_latent):\r\n        latent_image = input_latent[\"samples\"]\r\n        batch_inds = input_latent[\"batch_index\"] if \"batch_index\" in input_latent else None\r\n        return prepare_noise(latent_image, self.seed, self.noise_type, batch_inds, self.alpha, self.k)\r\n\r\n\r\nclass LatentNoised:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\":\r\n                    {\r\n                    \"add_noise\": (\"BOOLEAN\", {\"default\": True}),\r\n                    \"noise_is_latent\": (\"BOOLEAN\", {\"default\": False}),\r\n                    \"noise_type\": (NOISE_GENERATOR_NAMES, ),\r\n                    \"alpha\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":0.1, \"round\": 0.01}),\r\n                    \"k\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":2.0, \"round\": 0.01}),\r\n                    \"noise_seed\": (\"INT\", {\"default\": 0, \"min\": 0, \"max\": 0xffffffffffffffff}),\r\n                    \"latent_image\": (\"LATENT\", ),\r\n                    \"noise_strength\": (\"FLOAT\", {\"default\": 1.0, \"min\": -20.0, \"max\": 20.0, \"step\": 0.01, \"round\": 0.01}),\r\n                    \"normalize\": ([\"false\", \"true\"], {\"default\": \"false\"}),\r\n                     },\r\n                \"optional\": \r\n                    {\r\n                    \"latent_noise\": (\"LATENT\", ),\r\n                    \"mask\": (\"MASK\", ),\r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (\"LATENT\",)\r\n    RETURN_NAMES = (\"latent_noised\",)\r\n\r\n    FUNCTION = \"main\"\r\n\r\n    CATEGORY = \"RES4LYF/noise\"\r\n    \r\n    def main(self, add_noise, noise_is_latent, noise_type, noise_seed, alpha, k, latent_image, noise_strength, normalize, latent_noise=None, mask=None):\r\n            latent_out = latent_image.copy()\r\n            samples = latent_out[\"samples\"].clone()\r\n\r\n            torch.manual_seed(noise_seed)\r\n\r\n         
   if not add_noise:\r\n                noise = torch.zeros(samples.size(), dtype=samples.dtype, layout=samples.layout, device=\"cpu\")\r\n            elif latent_noise is None:\r\n                batch_inds = latent_out[\"batch_index\"] if \"batch_index\" in latent_out else None\r\n                noise = prepare_noise(samples, noise_seed, noise_type, batch_inds, alpha, k)\r\n            else:\r\n                noise = latent_noise[\"samples\"]\r\n\r\n            if normalize == \"true\":\r\n                latent_mean = samples.mean()\r\n                latent_std = samples.std()\r\n                noise = noise * latent_std + latent_mean\r\n\r\n            if noise_is_latent:\r\n                noise += samples.cpu()\r\n                noise.sub_(noise.mean()).div_(noise.std())\r\n            \r\n            noise = noise * noise_strength\r\n\r\n            if mask is not None:\r\n                mask = F.interpolate(mask.reshape((-1, 1, mask.shape[-2], mask.shape[-1])), \r\n                                    size=(samples.shape[2], samples.shape[3]), \r\n                                    mode=\"bilinear\")\r\n                mask = mask.expand((-1, samples.shape[1], -1, -1)).to(samples.device)\r\n                if mask.shape[0] < samples.shape[0]:\r\n                    mask = mask.repeat((samples.shape[0] - 1) // mask.shape[0] + 1, 1, 1, 1)[:samples.shape[0]]\r\n                elif mask.shape[0] > samples.shape[0]:\r\n                    mask = mask[:samples.shape[0]]\r\n                \r\n                noise = mask * noise + (1 - mask) * torch.zeros_like(noise)\r\n\r\n            latent_out[\"samples\"] = samples.cpu() + noise\r\n\r\n            return (latent_out,)\r\n\r\n\r\n\r\n\r\nclass MaskToggle:\r\n    def __init__(self):\r\n        pass\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                    \"enable\": (\"BOOLEAN\", {\"default\": True}),    \r\n                    \"mask\": (\"MASK\", ),\r\n                     },\r\n                }\r\n\r\n    RETURN_TYPES = (\"MASK\",)\r\n    RETURN_NAMES = (\"mask\",)\r\n    CATEGORY = \"RES4LYF/masks\"\r\n\r\n    FUNCTION = \"main\"\r\n\r\n    def main(self, enable=True, mask=None):\r\n        if enable == False:\r\n            mask = None\r\n        return (mask, )\r\n\r\n\r\n\r\nclass set_precision:\r\n    def __init__(self):\r\n        pass\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                    \"latent_image\": (\"LATENT\", ),      \r\n                    \"precision\": ([\"16\", \"32\", \"64\"], ),\r\n                    \"set_default\": (\"BOOLEAN\", {\"default\": False})\r\n                     },\r\n                }\r\n\r\n    RETURN_TYPES = (\"LATENT\",)\r\n    RETURN_NAMES = (\"passthrough\",)\r\n    CATEGORY = \"RES4LYF/precision\"\r\n\r\n    FUNCTION = \"main\"\r\n\r\n    def main(self, precision=\"32\", latent_image=None, set_default=False):\r\n        match precision:\r\n            case \"16\":\r\n                if set_default is True:\r\n                    torch.set_default_dtype(torch.float16)\r\n                x = latent_image[\"samples\"].to(torch.float16)\r\n            case \"32\":\r\n                if set_default is True:\r\n                    torch.set_default_dtype(torch.float32)\r\n                x = latent_image[\"samples\"].to(torch.float32)\r\n            case \"64\":\r\n                if set_default is True:\r\n                    
torch.set_default_dtype(torch.float64)\r\n                x = latent_image[\"samples\"].to(torch.float64)\r\n        return ({\"samples\": x}, )\r\n    \r\n\r\nclass set_precision_universal:\r\n    def __init__(self):\r\n        pass\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                    \"precision\": ([\"bf16\", \"fp16\", \"fp32\", \"fp64\", \"passthrough\"], {\"default\": \"fp32\"}),\r\n                    \"set_default\": (\"BOOLEAN\", {\"default\": False})\r\n                    },\r\n            \"optional\": {\r\n                    \"cond_pos\": (\"CONDITIONING\",),\r\n                    \"cond_neg\": (\"CONDITIONING\",),\r\n                    \"sigmas\": (\"SIGMAS\", ),\r\n                    \"latent_image\": (\"LATENT\", ),\r\n                    },\r\n                }\r\n\r\n    RETURN_TYPES = (\"CONDITIONING\", \"CONDITIONING\", \"SIGMAS\", \"LATENT\",)\r\n    RETURN_NAMES = (\"cond_pos\",\"cond_neg\",\"sigmas\",\"latent_image\",)\r\n    CATEGORY = \"RES4LYF/precision\"\r\n\r\n    FUNCTION = \"main\"\r\n\r\n    def main(self, precision=\"fp32\", cond_pos=None, cond_neg=None, sigmas=None, latent_image=None, set_default=False):\r\n        dtype = None\r\n        match precision:\r\n            case \"bf16\":\r\n                dtype = torch.bfloat16\r\n            case \"fp16\":\r\n                dtype = torch.float16\r\n            case \"fp32\":\r\n                dtype = torch.float32\r\n            case \"fp64\":\r\n                dtype = torch.float64\r\n            case \"passthrough\":\r\n                return (cond_pos, cond_neg, sigmas, latent_image, )\r\n        \r\n        if cond_pos is not None:\r\n            cond_pos[0][0] = cond_pos[0][0].clone().to(dtype)\r\n            cond_pos[0][1][\"pooled_output\"] = cond_pos[0][1][\"pooled_output\"].clone().to(dtype)\r\n        \r\n        if cond_neg is not None:\r\n            cond_neg[0][0] = cond_neg[0][0].clone().to(dtype)\r\n            cond_neg[0][1][\"pooled_output\"] = cond_neg[0][1][\"pooled_output\"].clone().to(dtype)\r\n            \r\n        if sigmas is not None:\r\n            sigmas = sigmas.clone().to(dtype)\r\n        \r\n        if latent_image is not None:\r\n            x = latent_image[\"samples\"].clone().to(dtype)    \r\n            latent_image = {\"samples\": x}\r\n\r\n        if set_default is True:\r\n            torch.set_default_dtype(dtype)\r\n        \r\n        return (cond_pos, cond_neg, sigmas, latent_image, )\r\n    \r\n    \r\nclass set_precision_advanced:\r\n    def __init__(self):\r\n        pass\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                    \"latent_image\": (\"LATENT\", ),      \r\n                    \"global_precision\": ([\"64\", \"32\", \"16\"], ),\r\n                    \"shark_precision\": ([\"64\", \"32\", \"16\"], ),\r\n                     },\r\n                }\r\n\r\n    RETURN_TYPES = (\"LATENT\",\"LATENT\",\"LATENT\",\"LATENT\",\"LATENT\",)\r\n    RETURN_NAMES = (\"PASSTHROUGH\",\"LATENT_CAST_TO_GLOBAL\",\"LATENT_16\",\"LATENT_32\",\"LATENT_64\",)\r\n    CATEGORY = \"RES4LYF/precision\"\r\n\r\n    FUNCTION = \"main\"\r\n\r\n    def main(self, global_precision=\"32\", shark_precision=\"64\", latent_image=None):\r\n        dtype_map = {\r\n            \"16\": torch.float16,\r\n            \"32\": torch.float32,\r\n            \"64\": torch.float64\r\n        }\r\n        precision_map = {\r\n            \"16\": 'fp16',\r\n    
        \"32\": 'fp32',\r\n            \"64\": 'fp64'\r\n        }\r\n\r\n        torch.set_default_dtype(dtype_map[global_precision])\r\n        precision_tool.set_cast_type(precision_map[shark_precision])\r\n\r\n        latent_passthrough = latent_image[\"samples\"]\r\n\r\n        latent_out16 = latent_image[\"samples\"].to(torch.float16)\r\n        latent_out32 = latent_image[\"samples\"].to(torch.float32)\r\n        latent_out64 = latent_image[\"samples\"].to(torch.float64)\r\n\r\n        target_dtype = dtype_map[global_precision]\r\n        if latent_image[\"samples\"].dtype != target_dtype:\r\n            latent_image[\"samples\"] = latent_image[\"samples\"].to(target_dtype)\r\n\r\n        latent_cast_to_global = latent_image[\"samples\"]\r\n\r\n        return ({\"samples\": latent_passthrough}, {\"samples\": latent_cast_to_global}, {\"samples\": latent_out16}, {\"samples\": latent_out32}, {\"samples\": latent_out64})\r\n    \r\nclass latent_to_cuda:\r\n    def __init__(self):\r\n        pass\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                    \"latent\": (\"LATENT\", ),      \r\n                    \"to_cuda\": (\"BOOLEAN\", {\"default\": True}),\r\n                     },\r\n                }\r\n\r\n    RETURN_TYPES = (\"LATENT\",)\r\n    RETURN_NAMES = (\"passthrough\",)\r\n    CATEGORY = \"RES4LYF/latents\"\r\n\r\n    FUNCTION = \"main\"\r\n\r\n    def main(self, latent, to_cuda):\r\n        match to_cuda:\r\n            case \"True\":\r\n                latent = latent.to('cuda')\r\n            case \"False\":\r\n                latent = latent.to('cpu')\r\n        return (latent,)\r\n\r\nclass latent_batch:\r\n    def __init__(self):\r\n        pass\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                    \"latent\": (\"LATENT\", ),      \r\n                    \"batch_size\": (\"INT\", {\"default\": 0, \"min\": -10000, \"max\": 10000}),\r\n                     },\r\n                }\r\n\r\n    RETURN_TYPES = (\"LATENT\",)\r\n    RETURN_NAMES = (\"latent_batch\",)\r\n    CATEGORY = \"RES4LYF/latents\"\r\n\r\n    FUNCTION = \"main\"\r\n\r\n    def main(self, latent, batch_size):\r\n        latent = latent[\"samples\"]\r\n        b, c, h, w = latent.shape\r\n        batch_latents = torch.zeros([batch_size, 4, h, w], device=latent.device)\r\n        for i in range(batch_size):\r\n            batch_latents[i] = latent\r\n        return ({\"samples\": batch_latents}, )\r\n\r\nclass LatentPhaseMagnitude:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"latent_0_batch\": (\"LATENT\",),\r\n                \"latent_1_batch\": (\"LATENT\",),\r\n\r\n                \"phase_mix_power\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"magnitude_mix_power\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n\r\n                \"phase_luminosity\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"phase_cyan_red\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"phase_lime_purple\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"phase_pattern_structure\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 
10000.0, \"step\": 0.001}),\r\n\r\n                \"magnitude_luminosity\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"magnitude_cyan_red\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"magnitude_lime_purple\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"magnitude_pattern_structure\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n\r\n                \"latent_0_normal\": (\"BOOLEAN\", {\"default\": True}),\r\n                \"latent_1_normal\": (\"BOOLEAN\", {\"default\": True}),\r\n                \"latent_out_normal\": (\"BOOLEAN\", {\"default\": True}),\r\n                \"latent_0_stdize\": (\"BOOLEAN\", {\"default\": True}),\r\n                \"latent_1_stdize\": (\"BOOLEAN\", {\"default\": True}),\r\n                \"latent_out_stdize\": (\"BOOLEAN\", {\"default\": True}),\r\n                \"latent_0_meancenter\": (\"BOOLEAN\", {\"default\": True}),\r\n                \"latent_1_meancenter\": (\"BOOLEAN\", {\"default\": True}),\r\n                \"latent_out_meancenter\": (\"BOOLEAN\", {\"default\": True}),\r\n            },\r\n            \"optional\": {\r\n                \"phase_mix_powers\": (\"SIGMAS\", ),\r\n                \"magnitude_mix_powers\": (\"SIGMAS\", ),\r\n\r\n                \"phase_luminositys\": (\"SIGMAS\", ),\r\n                \"phase_cyan_reds\": (\"SIGMAS\", ),\r\n                \"phase_lime_purples\": (\"SIGMAS\", ),\r\n                \"phase_pattern_structures\": (\"SIGMAS\", ),\r\n\r\n                \"magnitude_luminositys\": (\"SIGMAS\", ),\r\n                \"magnitude_cyan_reds\": (\"SIGMAS\", ),\r\n                \"magnitude_lime_purples\": (\"SIGMAS\", ),\r\n                \"magnitude_pattern_structures\": (\"SIGMAS\", ),\r\n            }\r\n        }\r\n\r\n    RETURN_TYPES = (\"LATENT\",)\r\n    FUNCTION = \"main\"\r\n    CATEGORY = \"RES4LYF/latents\"\r\n    \r\n    @staticmethod\r\n    def latent_repeat(latent, batch_size):\r\n        b, c, h, w = latent.shape\r\n        batch_latents = torch.zeros((batch_size, c, h, w), dtype=latent.dtype, layout=latent.layout, device=latent.device)\r\n        for i in range(batch_size):\r\n            batch_latents[i] = latent\r\n        return batch_latents\r\n\r\n    @staticmethod\r\n    def mix_latent_phase_magnitude(latent_0, latent_1, power_phase, power_magnitude,\r\n                                    phase_luminosity, phase_cyan_red, phase_lime_purple, phase_pattern_structure,\r\n                                    magnitude_luminosity, magnitude_cyan_red, magnitude_lime_purple, magnitude_pattern_structure\r\n                                    ):\r\n        dtype = torch.promote_types(latent_0.dtype, latent_1.dtype)\r\n        # big accuracy problems with fp32 FFT! let's avoid that\r\n        latent_0 = latent_0.double()\r\n        latent_1 = latent_1.double()\r\n\r\n        latent_0_fft = torch.fft.fft2(latent_0)\r\n        latent_1_fft = torch.fft.fft2(latent_1)\r\n\r\n        latent_0_phase = torch.angle(latent_0_fft)\r\n        latent_1_phase = torch.angle(latent_1_fft)\r\n        latent_0_magnitude = torch.abs(latent_0_fft)\r\n        latent_1_magnitude = torch.abs(latent_1_fft)\r\n\r\n        # DC corruption...? 
handle separately??\r\n        #dc_index = (0, 0)\r\n        #dc_0 = latent_0_fft[:, :, dc_index[0], dc_index[1]]\r\n        #dc_1 = latent_1_fft[:, :, dc_index[0], dc_index[1]]\r\n        #mixed_dc = dc_0 * 0.5 + dc_1 * 0.5\r\n        #mixed_dc = dc_0 * (1 - phase_weight) + dc_1 * phase_weight\r\n\r\n        # create complex FFT using a weighted mix of phases\r\n        chan_weights_phase     = [w for w in [phase_luminosity,     phase_cyan_red,     phase_lime_purple,     phase_pattern_structure    ]]\r\n        chan_weights_magnitude = [w for w in [magnitude_luminosity, magnitude_cyan_red, magnitude_lime_purple, magnitude_pattern_structure]]\r\n        mixed_phase     = torch.zeros_like(latent_0, dtype=latent_0.dtype, layout=latent_0.layout, device=latent_0.device)\r\n        mixed_magnitude = torch.zeros_like(latent_0, dtype=latent_0.dtype, layout=latent_0.layout, device=latent_0.device)\r\n\r\n        for i in range(4):\r\n            mixed_phase[:, i]     = ( (latent_0_phase[:,i] * (1-chan_weights_phase[i])) ** power_phase + (latent_1_phase[:,i] * chan_weights_phase[i]) ** power_phase) ** (1/power_phase)\r\n            mixed_magnitude[:, i]     = ( (latent_0_magnitude[:,i] * (1-chan_weights_magnitude[i])) ** power_magnitude + (latent_1_magnitude[:,i] * chan_weights_magnitude[i]) ** power_magnitude) ** (1/power_magnitude)\r\n\r\n        new_fft = mixed_magnitude * torch.exp(1j * mixed_phase)\r\n\r\n        #new_fft[:, :, dc_index[0], dc_index[1]] = mixed_dc\r\n\r\n        # inverse FFT to convert back to spatial domain\r\n        mixed_phase_magnitude = torch.fft.ifft2(new_fft).real\r\n\r\n        return mixed_phase_magnitude.to(dtype)\r\n    \r\n    def main(self, #batch_size, latent_1_repeat,\r\n             latent_0_batch,  latent_1_batch, latent_0_normal, latent_1_normal, latent_out_normal,\r\n             latent_0_stdize, latent_1_stdize, latent_out_stdize, \r\n             latent_0_meancenter, latent_1_meancenter, latent_out_meancenter, \r\n             phase_mix_power, magnitude_mix_power, \r\n             phase_luminosity,           phase_cyan_red,           phase_lime_purple,           phase_pattern_structure, \r\n             magnitude_luminosity,       magnitude_cyan_red,       magnitude_lime_purple,       magnitude_pattern_structure, \r\n             phase_mix_powers=None,      magnitude_mix_powers=None,\r\n             phase_luminositys=None,     phase_cyan_reds=None,     phase_lime_purples=None,     phase_pattern_structures=None,\r\n             magnitude_luminositys=None, magnitude_cyan_reds=None, magnitude_lime_purples=None, magnitude_pattern_structures=None\r\n             ):\r\n        latent_0_batch = latent_0_batch[\"samples\"].double()\r\n        latent_1_batch = latent_1_batch[\"samples\"].double().to(latent_0_batch.device)\r\n\r\n        #if batch_size == 0:\r\n        batch_size = latent_0_batch.shape[0]\r\n        if latent_1_batch.shape[0] == 1:\r\n            latent_1_batch = self.latent_repeat(latent_1_batch, batch_size)\r\n\r\n\r\n        magnitude_mix_powers         = initialize_or_scale(magnitude_mix_powers,         magnitude_mix_power,         batch_size)\r\n        phase_mix_powers             = initialize_or_scale(phase_mix_powers,             phase_mix_power,            batch_size)\r\n\r\n        phase_luminositys            = initialize_or_scale(phase_luminositys,            phase_luminosity,            batch_size)\r\n        phase_cyan_reds              = initialize_or_scale(phase_cyan_reds,              phase_cyan_red,              batch_size)\r\n   
     phase_lime_purples           = initialize_or_scale(phase_lime_purples,           phase_lime_purple,           batch_size)\r\n        phase_pattern_structures     = initialize_or_scale(phase_pattern_structures,     phase_pattern_structure,     batch_size)\r\n\r\n        magnitude_luminositys        = initialize_or_scale(magnitude_luminositys,        magnitude_luminosity,        batch_size)\r\n        magnitude_cyan_reds          = initialize_or_scale(magnitude_cyan_reds,          magnitude_cyan_red,          batch_size)\r\n        magnitude_lime_purples       = initialize_or_scale(magnitude_lime_purples,       magnitude_lime_purple,       batch_size)\r\n        magnitude_pattern_structures = initialize_or_scale(magnitude_pattern_structures, magnitude_pattern_structure, batch_size)    \r\n\r\n        mixed_phase_magnitude_batch = torch.zeros(latent_0_batch.shape, device=latent_0_batch.device)\r\n\r\n        if latent_0_normal == True:\r\n            latent_0_batch = latent_normalize_channels(latent_0_batch)\r\n        if latent_1_normal == True:\r\n            latent_1_batch = latent_normalize_channels(latent_1_batch)\r\n        if latent_0_meancenter == True:\r\n            latent_0_batch = latent_meancenter_channels(latent_0_batch)\r\n        if latent_1_meancenter == True:\r\n            latent_1_batch = latent_meancenter_channels(latent_1_batch)\r\n        if latent_0_stdize == True:\r\n            latent_0_batch = latent_stdize_channels(latent_0_batch)\r\n        if latent_1_stdize == True:\r\n            latent_1_batch = latent_stdize_channels(latent_1_batch)\r\n \r\n        for i in range(batch_size):\r\n            mixed_phase_magnitude = self.mix_latent_phase_magnitude(latent_0_batch[i:i+1], latent_1_batch[i:i+1], phase_mix_powers[i].item(), magnitude_mix_powers[i].item(),\r\n                                                    phase_luminositys[i].item(), phase_cyan_reds[i].item(),phase_lime_purples[i].item(),phase_pattern_structures[i].item(),\r\n                                                    magnitude_luminositys[i].item(), magnitude_cyan_reds[i].item(),magnitude_lime_purples[i].item(),magnitude_pattern_structures[i].item()\r\n                                                    )\r\n            if latent_out_normal == True:\r\n                mixed_phase_magnitude = latent_normalize_channels(mixed_phase_magnitude)\r\n            if latent_out_stdize == True:\r\n                mixed_phase_magnitude = latent_stdize_channels(mixed_phase_magnitude)\r\n            if latent_out_meancenter == True:\r\n                mixed_phase_magnitude = latent_meancenter_channels(mixed_phase_magnitude)                                \r\n\r\n            mixed_phase_magnitude_batch[i, :, :, :] = mixed_phase_magnitude\r\n\r\n        return ({\"samples\": mixed_phase_magnitude_batch}, )\r\n\r\nclass LatentPhaseMagnitudeMultiply:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"latent_0_batch\": (\"LATENT\",),\r\n\r\n                \"phase_luminosity\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"phase_cyan_red\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"phase_lime_purple\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"phase_pattern_structure\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 
0.001}),\r\n\r\n                \"magnitude_luminosity\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"magnitude_cyan_red\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"magnitude_lime_purple\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"magnitude_pattern_structure\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n\r\n                \"latent_0_normal\": (\"BOOLEAN\", {\"default\": False}),\r\n                \"latent_out_normal\": (\"BOOLEAN\", {\"default\": False}),\r\n            },\r\n            \"optional\": {\r\n                \"phase_luminositys\": (\"SIGMAS\", ),\r\n                \"phase_cyan_reds\": (\"SIGMAS\", ),\r\n                \"phase_lime_purples\": (\"SIGMAS\", ),\r\n                \"phase_pattern_structures\": (\"SIGMAS\", ),\r\n\r\n                \"magnitude_luminositys\": (\"SIGMAS\", ),\r\n                \"magnitude_cyan_reds\": (\"SIGMAS\", ),\r\n                \"magnitude_lime_purples\": (\"SIGMAS\", ),\r\n                \"magnitude_pattern_structures\": (\"SIGMAS\", ),\r\n            }\r\n        }\r\n\r\n    RETURN_TYPES = (\"LATENT\",)\r\n    FUNCTION = \"main\"\r\n    CATEGORY = \"RES4LYF/latents\"\r\n\r\n    @staticmethod\r\n    def latent_repeat(latent, batch_size):\r\n        b, c, h, w = latent.shape\r\n        batch_latents = torch.zeros((batch_size, c, h, w), dtype=latent.dtype, layout=latent.layout, device=latent.device)\r\n        for i in range(batch_size):\r\n            batch_latents[i] = latent\r\n        return batch_latents\r\n\r\n    @staticmethod\r\n    def mix_latent_phase_magnitude(latent_0,  \r\n                                    phase_luminosity, phase_cyan_red, phase_lime_purple, phase_pattern_structure,\r\n                                    magnitude_luminosity, magnitude_cyan_red, magnitude_lime_purple, magnitude_pattern_structure\r\n                                    ):\r\n        dtype = latent_0.dtype\r\n        # avoid big accuracy problems with fp32 FFT!\r\n        latent_0 = latent_0.double()\r\n\r\n        latent_0_fft = torch.fft.fft2(latent_0)\r\n\r\n        latent_0_phase = torch.angle(latent_0_fft)\r\n        latent_0_magnitude = torch.abs(latent_0_fft)\r\n\r\n        # create new complex FFT using weighted mix of phases\r\n        chan_weights_phase     = [w for w in [phase_luminosity,     phase_cyan_red,     phase_lime_purple,     phase_pattern_structure    ]]\r\n        chan_weights_magnitude = [ w for w in [magnitude_luminosity, magnitude_cyan_red, magnitude_lime_purple, magnitude_pattern_structure]]\r\n        mixed_phase     = torch.zeros_like(latent_0, dtype=latent_0.dtype, layout=latent_0.layout, device=latent_0.device)\r\n        mixed_magnitude = torch.zeros_like(latent_0, dtype=latent_0.dtype, layout=latent_0.layout, device=latent_0.device)\r\n\r\n        for i in range(4):\r\n            mixed_phase[:, i]     = latent_0_phase[:,i]     * chan_weights_phase[i]\r\n            mixed_magnitude[:, i] = latent_0_magnitude[:,i] * chan_weights_magnitude[i]\r\n\r\n        new_fft = mixed_magnitude * torch.exp(1j * mixed_phase)\r\n        \r\n        # inverse FFT to convert back to spatial domain\r\n        mixed_phase_magnitude = torch.fft.ifft2(new_fft).real\r\n\r\n        return mixed_phase_magnitude.to(dtype)\r\n    \r\n    def main(self,\r\n             latent_0_batch, 
latent_0_normal, latent_out_normal,\r\n             phase_luminosity,           phase_cyan_red,           phase_lime_purple,           phase_pattern_structure, \r\n             magnitude_luminosity,       magnitude_cyan_red,       magnitude_lime_purple,       magnitude_pattern_structure, \r\n             phase_luminositys=None,     phase_cyan_reds=None,     phase_lime_purples=None,     phase_pattern_structures=None,\r\n             magnitude_luminositys=None, magnitude_cyan_reds=None, magnitude_lime_purples=None, magnitude_pattern_structures=None\r\n             ):\r\n        latent_0_batch = latent_0_batch[\"samples\"].double()\r\n\r\n        batch_size = latent_0_batch.shape[0]\r\n\r\n        phase_luminositys            = initialize_or_scale(phase_luminositys,            phase_luminosity,            batch_size)\r\n        phase_cyan_reds              = initialize_or_scale(phase_cyan_reds,              phase_cyan_red,              batch_size)\r\n        phase_lime_purples           = initialize_or_scale(phase_lime_purples,           phase_lime_purple,           batch_size)\r\n        phase_pattern_structures     = initialize_or_scale(phase_pattern_structures,     phase_pattern_structure,     batch_size)\r\n\r\n        magnitude_luminositys        = initialize_or_scale(magnitude_luminositys,        magnitude_luminosity,        batch_size)\r\n        magnitude_cyan_reds          = initialize_or_scale(magnitude_cyan_reds,          magnitude_cyan_red,          batch_size)\r\n        magnitude_lime_purples       = initialize_or_scale(magnitude_lime_purples,       magnitude_lime_purple,       batch_size)\r\n        magnitude_pattern_structures = initialize_or_scale(magnitude_pattern_structures, magnitude_pattern_structure, batch_size)    \r\n\r\n        mixed_phase_magnitude_batch = torch.zeros(latent_0_batch.shape, device=latent_0_batch.device)\r\n\r\n        if latent_0_normal == True:\r\n            latent_0_batch = latent_normalize_channels(latent_0_batch)\r\n \r\n        for i in range(batch_size):\r\n            mixed_phase_magnitude = self.mix_latent_phase_magnitude(latent_0_batch[i:i+1],\r\n                                                    phase_luminositys[i].item(), phase_cyan_reds[i].item(),phase_lime_purples[i].item(),phase_pattern_structures[i].item(),\r\n                                                    magnitude_luminositys[i].item(), magnitude_cyan_reds[i].item(),magnitude_lime_purples[i].item(),magnitude_pattern_structures[i].item()\r\n                                                    )\r\n            if latent_out_normal == True:\r\n                mixed_phase_magnitude = latent_normalize_channels(mixed_phase_magnitude)\r\n\r\n            mixed_phase_magnitude_batch[i, :, :, :] = mixed_phase_magnitude\r\n\r\n        return ({\"samples\": mixed_phase_magnitude_batch}, )\r\n\r\n\r\n\r\nclass LatentPhaseMagnitudeOffset:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"latent_0_batch\": (\"LATENT\",),\r\n\r\n                \"phase_luminosity\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"phase_cyan_red\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"phase_lime_purple\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"phase_pattern_structure\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 
0.001}),\r\n\r\n                \"magnitude_luminosity\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"magnitude_cyan_red\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"magnitude_lime_purple\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"magnitude_pattern_structure\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n\r\n                \"latent_0_normal\": (\"BOOLEAN\", {\"default\": False}),\r\n                \"latent_out_normal\": (\"BOOLEAN\", {\"default\": False}),\r\n            },\r\n            \"optional\": {\r\n                \"phase_luminositys\": (\"SIGMAS\", ),\r\n                \"phase_cyan_reds\": (\"SIGMAS\", ),\r\n                \"phase_lime_purples\": (\"SIGMAS\", ),\r\n                \"phase_pattern_structures\": (\"SIGMAS\", ),\r\n\r\n                \"magnitude_luminositys\": (\"SIGMAS\", ),\r\n                \"magnitude_cyan_reds\": (\"SIGMAS\", ),\r\n                \"magnitude_lime_purples\": (\"SIGMAS\", ),\r\n                \"magnitude_pattern_structures\": (\"SIGMAS\", ),\r\n            }\r\n        }\r\n\r\n    RETURN_TYPES = (\"LATENT\",)\r\n    FUNCTION = \"main\"\r\n    CATEGORY = \"RES4LYF/latents\"\r\n    \r\n    @staticmethod\r\n    def latent_repeat(latent, batch_size):\r\n        b, c, h, w = latent.shape\r\n        batch_latents = torch.zeros((batch_size, c, h, w), dtype=latent.dtype, layout=latent.layout, device=latent.device)\r\n        for i in range(batch_size):\r\n            batch_latents[i] = latent\r\n        return batch_latents\r\n\r\n    @staticmethod\r\n    def mix_latent_phase_magnitude(latent_0,  \r\n                                    phase_luminosity, phase_cyan_red, phase_lime_purple, phase_pattern_structure,\r\n                                    magnitude_luminosity, magnitude_cyan_red, magnitude_lime_purple, magnitude_pattern_structure\r\n                                    ):\r\n        dtype = latent_0.dtype\r\n        # avoid big accuracy problems with fp32 FFT!\r\n        latent_0 = latent_0.double()\r\n\r\n        latent_0_fft = torch.fft.fft2(latent_0)\r\n\r\n        latent_0_phase = torch.angle(latent_0_fft)\r\n        latent_0_magnitude = torch.abs(latent_0_fft)\r\n\r\n        # create new complex FFT using a weighted mix of phases\r\n        chan_weights_phase     = [w for w in [phase_luminosity,     phase_cyan_red,     phase_lime_purple,     phase_pattern_structure    ]]\r\n        chan_weights_magnitude = [ w for w in [magnitude_luminosity, magnitude_cyan_red, magnitude_lime_purple, magnitude_pattern_structure]]\r\n        mixed_phase     = torch.zeros_like(latent_0, dtype=latent_0.dtype, layout=latent_0.layout, device=latent_0.device)\r\n        mixed_magnitude = torch.zeros_like(latent_0, dtype=latent_0.dtype, layout=latent_0.layout, device=latent_0.device)\r\n\r\n        for i in range(4):\r\n            mixed_phase[:, i]     = latent_0_phase[:,i]     + chan_weights_phase[i]\r\n            mixed_magnitude[:, i] = latent_0_magnitude[:,i] + chan_weights_magnitude[i]\r\n\r\n        new_fft = mixed_magnitude * torch.exp(1j * mixed_phase)\r\n        \r\n        # inverse FFT to convert back to spatial domain\r\n        mixed_phase_magnitude = torch.fft.ifft2(new_fft).real\r\n\r\n        return mixed_phase_magnitude.to(dtype)\r\n    \r\n    def main(self,\r\n             latent_0_batch, 
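# note: this variant ADDS the per-channel values to the FFT phase and magnitude (an offset), unlike the Multiply variant above which scales them\r\n             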
latent_0_normal, latent_out_normal,\r\n             phase_luminosity,           phase_cyan_red,           phase_lime_purple,           phase_pattern_structure, \r\n             magnitude_luminosity,       magnitude_cyan_red,       magnitude_lime_purple,       magnitude_pattern_structure, \r\n             phase_luminositys=None,     phase_cyan_reds=None,     phase_lime_purples=None,     phase_pattern_structures=None,\r\n             magnitude_luminositys=None, magnitude_cyan_reds=None, magnitude_lime_purples=None, magnitude_pattern_structures=None\r\n             ):\r\n        latent_0_batch = latent_0_batch[\"samples\"].double()\r\n\r\n        batch_size = latent_0_batch.shape[0]\r\n\r\n        phase_luminositys            = initialize_or_scale(phase_luminositys,            phase_luminosity,            batch_size)\r\n        phase_cyan_reds              = initialize_or_scale(phase_cyan_reds,              phase_cyan_red,              batch_size)\r\n        phase_lime_purples           = initialize_or_scale(phase_lime_purples,           phase_lime_purple,           batch_size)\r\n        phase_pattern_structures     = initialize_or_scale(phase_pattern_structures,     phase_pattern_structure,     batch_size)\r\n\r\n        magnitude_luminositys        = initialize_or_scale(magnitude_luminositys,        magnitude_luminosity,        batch_size)\r\n        magnitude_cyan_reds          = initialize_or_scale(magnitude_cyan_reds,          magnitude_cyan_red,          batch_size)\r\n        magnitude_lime_purples       = initialize_or_scale(magnitude_lime_purples,       magnitude_lime_purple,       batch_size)\r\n        magnitude_pattern_structures = initialize_or_scale(magnitude_pattern_structures, magnitude_pattern_structure, batch_size)    \r\n\r\n        mixed_phase_magnitude_batch = torch.zeros(latent_0_batch.shape, device=latent_0_batch.device)\r\n\r\n        if latent_0_normal == True:\r\n            latent_0_batch = latent_normalize_channels(latent_0_batch)\r\n \r\n        for i in range(batch_size):\r\n            mixed_phase_magnitude = self.mix_latent_phase_magnitude(latent_0_batch[i:i+1],\r\n                                                    phase_luminositys[i].item(), phase_cyan_reds[i].item(),phase_lime_purples[i].item(),phase_pattern_structures[i].item(),\r\n                                                    magnitude_luminositys[i].item(), magnitude_cyan_reds[i].item(),magnitude_lime_purples[i].item(),magnitude_pattern_structures[i].item()\r\n                                                    )\r\n            if latent_out_normal == True:\r\n                mixed_phase_magnitude = latent_normalize_channels(mixed_phase_magnitude)\r\n\r\n            mixed_phase_magnitude_batch[i, :, :, :] = mixed_phase_magnitude\r\n\r\n        return ({\"samples\": mixed_phase_magnitude_batch}, )\r\n\r\n\r\n\r\nclass LatentPhaseMagnitudePower:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"latent_0_batch\": (\"LATENT\",),\r\n\r\n                \"phase_luminosity\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"phase_cyan_red\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"phase_lime_purple\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"phase_pattern_structure\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 
0.001}),\r\n\r\n                \"magnitude_luminosity\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"magnitude_cyan_red\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"magnitude_lime_purple\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"magnitude_pattern_structure\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n\r\n                \"latent_0_normal\": (\"BOOLEAN\", {\"default\": False}),\r\n                \"latent_out_normal\": (\"BOOLEAN\", {\"default\": False}),\r\n            },\r\n            \"optional\": {\r\n                \"phase_luminositys\": (\"SIGMAS\", ),\r\n                \"phase_cyan_reds\": (\"SIGMAS\", ),\r\n                \"phase_lime_purples\": (\"SIGMAS\", ),\r\n                \"phase_pattern_structures\": (\"SIGMAS\", ),\r\n\r\n                \"magnitude_luminositys\": (\"SIGMAS\", ),\r\n                \"magnitude_cyan_reds\": (\"SIGMAS\", ),\r\n                \"magnitude_lime_purples\": (\"SIGMAS\", ),\r\n                \"magnitude_pattern_structures\": (\"SIGMAS\", ),\r\n            }\r\n        }\r\n\r\n    RETURN_TYPES = (\"LATENT\",)\r\n    FUNCTION = \"main\"\r\n    CATEGORY = \"RES4LYF/latents\"\r\n    \r\n    @staticmethod\r\n    def latent_repeat(latent, batch_size):\r\n        b, c, h, w = latent.shape\r\n        batch_latents = torch.zeros((batch_size, c, h, w), dtype=latent.dtype, layout=latent.layout, device=latent.device)\r\n        for i in range(batch_size):\r\n            batch_latents[i] = latent\r\n        return batch_latents\r\n\r\n    @staticmethod\r\n    def mix_latent_phase_magnitude(latent_0,  \r\n                                    phase_luminosity, phase_cyan_red, phase_lime_purple, phase_pattern_structure,\r\n                                    magnitude_luminosity, magnitude_cyan_red, magnitude_lime_purple, magnitude_pattern_structure\r\n                                    ):\r\n        dtype = latent_0.dtype\r\n        # avoid big accuracy problems with fp32 FFT!\r\n        latent_0 = latent_0.double()\r\n\r\n        latent_0_fft = torch.fft.fft2(latent_0)\r\n\r\n        latent_0_phase = torch.angle(latent_0_fft)\r\n        latent_0_magnitude = torch.abs(latent_0_fft)\r\n\r\n        # create new complex FFT using a weighted mix of phases\r\n        chan_weights_phase     = [w for w in [phase_luminosity,     phase_cyan_red,     phase_lime_purple,     phase_pattern_structure    ]]\r\n        chan_weights_magnitude = [ w for w in [magnitude_luminosity, magnitude_cyan_red, magnitude_lime_purple, magnitude_pattern_structure]]\r\n        mixed_phase     = torch.zeros_like(latent_0, dtype=latent_0.dtype, layout=latent_0.layout, device=latent_0.device)\r\n        mixed_magnitude = torch.zeros_like(latent_0, dtype=latent_0.dtype, layout=latent_0.layout, device=latent_0.device)\r\n\r\n        for i in range(4):\r\n            mixed_phase[:, i]     = latent_0_phase[:,i]     ** chan_weights_phase[i]\r\n            mixed_magnitude[:, i] = latent_0_magnitude[:,i] ** chan_weights_magnitude[i]\r\n\r\n        new_fft = mixed_magnitude * torch.exp(1j * mixed_phase)\r\n        \r\n        # inverse FFT to convert back to spatial domain\r\n        mixed_phase_magnitude = torch.fft.ifft2(new_fft).real\r\n\r\n        return mixed_phase_magnitude.to(dtype)\r\n    \r\n    def main(self,\r\n             latent_0_batch, 
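# note: this variant raises the FFT phase and magnitude to the given per-channel powers; torch.pow with a negative base (phase spans -pi..pi) and a non-integer exponent yields NaN, so extreme settings can degenerate\r\n             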
latent_0_normal, latent_out_normal,\r\n             phase_luminosity,           phase_cyan_red,           phase_lime_purple,           phase_pattern_structure, \r\n             magnitude_luminosity,       magnitude_cyan_red,       magnitude_lime_purple,       magnitude_pattern_structure, \r\n             phase_luminositys=None,     phase_cyan_reds=None,     phase_lime_purples=None,     phase_pattern_structures=None,\r\n             magnitude_luminositys=None, magnitude_cyan_reds=None, magnitude_lime_purples=None, magnitude_pattern_structures=None\r\n             ):\r\n        latent_0_batch = latent_0_batch[\"samples\"].double()\r\n\r\n        batch_size = latent_0_batch.shape[0]\r\n\r\n        phase_luminositys            = initialize_or_scale(phase_luminositys,            phase_luminosity,            batch_size)\r\n        phase_cyan_reds              = initialize_or_scale(phase_cyan_reds,              phase_cyan_red,              batch_size)\r\n        phase_lime_purples           = initialize_or_scale(phase_lime_purples,           phase_lime_purple,           batch_size)\r\n        phase_pattern_structures     = initialize_or_scale(phase_pattern_structures,     phase_pattern_structure,     batch_size)\r\n\r\n        magnitude_luminositys        = initialize_or_scale(magnitude_luminositys,        magnitude_luminosity,        batch_size)\r\n        magnitude_cyan_reds          = initialize_or_scale(magnitude_cyan_reds,          magnitude_cyan_red,          batch_size)\r\n        magnitude_lime_purples       = initialize_or_scale(magnitude_lime_purples,       magnitude_lime_purple,       batch_size)\r\n        magnitude_pattern_structures = initialize_or_scale(magnitude_pattern_structures, magnitude_pattern_structure, batch_size)    \r\n\r\n        mixed_phase_magnitude_batch = torch.zeros(latent_0_batch.shape, device=latent_0_batch.device)\r\n\r\n        if latent_0_normal == True:\r\n            latent_0_batch = latent_normalize_channels(latent_0_batch)\r\n \r\n        for i in range(batch_size):\r\n            mixed_phase_magnitude = self.mix_latent_phase_magnitude(latent_0_batch[i:i+1],\r\n                                                    phase_luminositys[i].item(), phase_cyan_reds[i].item(),phase_lime_purples[i].item(),phase_pattern_structures[i].item(),\r\n                                                    magnitude_luminositys[i].item(), magnitude_cyan_reds[i].item(),magnitude_lime_purples[i].item(),magnitude_pattern_structures[i].item()\r\n                                                    )\r\n            if latent_out_normal == True:\r\n                mixed_phase_magnitude = latent_normalize_channels(mixed_phase_magnitude)\r\n\r\n            mixed_phase_magnitude_batch[i, :, :, :] = mixed_phase_magnitude\r\n\r\n        return ({\"samples\": mixed_phase_magnitude_batch}, )\r\n\r\n\r\n\r\nclass StableCascade_StageC_VAEEncode_Exact:\r\n    def __init__(self, device=\"cpu\"):\r\n        self.device = device\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": {\r\n            \"image\": (\"IMAGE\",),\r\n            \"vae\": (\"VAE\", ),\r\n            \"width\": (\"INT\", {\"default\": 24, \"min\": 1, \"max\": 1024, \"step\": 1}),\r\n            \"height\": (\"INT\", {\"default\": 24, \"min\": 1, \"max\": 1024, \"step\": 1}),\r\n        }}\r\n    RETURN_TYPES = (\"LATENT\",)\r\n    RETURN_NAMES = (\"stage_c\",)\r\n    FUNCTION = \"generate\"\r\n\r\n    CATEGORY = \"RES4LYF/vae\"\r\n    \r\n    def generate(self, image, vae, width, height):\r\n    
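    # width/height are latent-space sizes: the image is resampled to width*downscale_ratio x height*downscale_ratio pixels before encoding\r\n    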
    out_width = (width) * vae.downscale_ratio #downscale_ratio = 32\r\n        out_height = (height) * vae.downscale_ratio\r\n        #movedim(-1,1) goes from 1,1024,1024,3 to 1,3,1024,1024\r\n        s = comfy.utils.common_upscale(image.movedim(-1,1), out_width, out_height, \"lanczos\", \"center\").movedim(1,-1)\r\n\r\n        c_latent = vae.encode(s[:,:,:,:3]) #to slice off alpha channel?\r\n        return ({\r\n            \"samples\": c_latent,\r\n        },)\r\n        \r\n\r\n\r\nclass StableCascade_StageC_VAEEncode_Exact_Tiled:\r\n    def __init__(self, device=\"cpu\"):\r\n        self.device = device\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": {\r\n            \"image\": (\"IMAGE\",),\r\n            \"vae\": (\"VAE\", ),\r\n            \"tile_size\": (\"INT\", {\"default\": 512, \"min\": 320, \"max\": 4096, \"step\": 64}),\r\n            \"overlap\": (\"INT\", {\"default\": 16, \"min\": 8, \"max\": 128, \"step\": 8}),\r\n        }}\r\n    RETURN_TYPES = (\"LATENT\",)\r\n    RETURN_NAMES = (\"stage_c\",)\r\n    FUNCTION = \"generate\"\r\n\r\n    CATEGORY = \"RES4LYF/vae\"\r\n\r\n    def generate(self, image, vae, tile_size, overlap):\r\n        img_width = image.shape[-2]\r\n        img_height = image.shape[-3]\r\n        upscale_amount = vae.downscale_ratio  # downscale_ratio = 32\r\n\r\n        image = image.movedim(-1, 1)  # bhwc -> bchw \r\n\r\n        encode_fn = lambda img: vae.encode(img.to(vae.device)).to(\"cpu\")\r\n\r\n        c_latent = tiled_scale_multidim(\r\n            image, encode_fn,\r\n            tile=(tile_size // 8, tile_size // 8),\r\n            overlap=overlap,\r\n            upscale_amount=upscale_amount,\r\n            out_channels=16, \r\n            output_device=self.device\r\n        )\r\n\r\n        return ({\r\n            \"samples\": c_latent,\r\n        },)\r\n\r\n@torch.inference_mode()\r\ndef tiled_scale_multidim(samples, function, tile=(64, 64), overlap=8, upscale_amount=4, out_channels=3, output_device=\"cpu\", pbar=None):\r\n    dims = len(tile)\r\n    output_shape = [samples.shape[0], out_channels] + list(map(lambda a: round(a * upscale_amount), samples.shape[2:]))\r\n    output = torch.zeros(output_shape, device=output_device)\r\n\r\n    for b in range(samples.shape[0]):\r\n        for it in itertools.product(*map(lambda a: range(0, a[0], a[1] - overlap), zip(samples.shape[2:], tile))):\r\n            s_in = samples[b:b+1]\r\n            upscaled = []\r\n\r\n            for d in range(dims):\r\n                pos = max(0, min(s_in.shape[d + 2] - overlap, it[d]))\r\n                l = min(tile[d], s_in.shape[d + 2] - pos)\r\n                s_in = s_in.narrow(d + 2, pos, l)\r\n                upscaled.append(round(pos * upscale_amount))\r\n\r\n            ps = function(s_in).to(output_device)\r\n            mask = torch.ones_like(ps)\r\n            feather = round(overlap * upscale_amount)\r\n\r\n            for t in range(feather):\r\n                for d in range(2, dims + 2):\r\n                    mask.narrow(d, t, 1).mul_((1.0 / feather) * (t + 1))\r\n                    mask.narrow(d, mask.shape[d] - 1 - t, 1).mul_((1.0 / feather) * (t + 1))\r\n\r\n            o = output[b:b+1]\r\n            for d in range(dims):\r\n                o = o.narrow(d + 2, upscaled[d], mask.shape[d + 2])\r\n\r\n            o.add_(ps * mask)\r\n\r\n            if pbar is not None:\r\n                pbar.update(1)\r\n\r\n    return output\r\n\r\n    \r\n\r\n\r\nclass EmptyLatentImageCustom:\r\n    def 
__init__(self):\r\n        self.device = comfy.model_management.intermediate_device()\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": { \r\n            \"width\": (\"INT\", {\"default\": 24, \"min\": 1, \"max\": MAX_RESOLUTION, \"step\": 1}),\r\n            \"height\": (\"INT\", {\"default\": 24, \"min\": 1, \"max\": MAX_RESOLUTION, \"step\": 1}),\r\n            \"batch_size\": (\"INT\", {\"default\": 1, \"min\": 1, \"max\": 4096}),\r\n\r\n            \"channels\": (['4', '16'], {\"default\": '4'}),\r\n            \"mode\": (['sdxl', 'cascade_b', 'cascade_c', 'exact'], {\"default\": 'sdxl'}),\r\n            \"compression\": (\"INT\", {\"default\": 42, \"min\": 4, \"max\": 128, \"step\": 1}),\r\n            \"precision\": (['fp16', 'fp32', 'fp64'], {\"default\": 'fp32'}),\r\n            \r\n        }}\r\n    \r\n    RETURN_TYPES = (\"LATENT\",)\r\n    FUNCTION = \"generate\"\r\n\r\n    CATEGORY = \"RES4LYF/latents\"\r\n\r\n    def generate(self, width, height, batch_size, channels, mode, compression, precision):\r\n        c = int(channels)\r\n\r\n        ratio = 1\r\n        match mode:\r\n            case \"sdxl\":\r\n                ratio = 8\r\n            case \"cascade_b\":\r\n                ratio = 4\r\n            case \"cascade_c\":\r\n                ratio = compression\r\n            case \"exact\":\r\n                ratio = 1\r\n\r\n        dtype=torch.float32\r\n        match precision:\r\n            case \"fp16\":\r\n                dtype=torch.float16\r\n            case \"fp32\":\r\n                dtype=torch.float32\r\n            case \"fp64\":\r\n                dtype=torch.float64\r\n\r\n        latent = torch.zeros([batch_size, c, height // ratio, width // ratio], dtype=dtype, device=self.device)\r\n        return ({\"samples\":latent}, )\r\n\r\nclass EmptyLatentImage64:\r\n    def __init__(self):\r\n        self.device = comfy.model_management.intermediate_device()\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": { \"width\": (\"INT\", {\"default\": 1024, \"min\": 16, \"max\": MAX_RESOLUTION, \"step\": 8}),\r\n                              \"height\": (\"INT\", {\"default\": 1024, \"min\": 16, \"max\": MAX_RESOLUTION, \"step\": 8}),\r\n                              \"batch_size\": (\"INT\", {\"default\": 1, \"min\": 1, \"max\": 4096})}}\r\n    RETURN_TYPES = (\"LATENT\",)\r\n    FUNCTION = \"generate\"\r\n\r\n    CATEGORY = \"RES4LYF/latents\"\r\n\r\n    def generate(self, width, height, batch_size=1):\r\n        latent = torch.zeros([batch_size, 4, height // 8, width // 8], dtype=torch.float64, device=self.device)\r\n        return ({\"samples\":latent}, )\r\n\r\n\"\"\"class CheckpointLoader32:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": { \"config_name\": (folder_paths.get_filename_list(\"configs\"), ),\r\n                              \"ckpt_name\": (folder_paths.get_filename_list(\"checkpoints\"), )}}\r\n    RETURN_TYPES = (\"MODEL\", \"CLIP\", \"VAE\")\r\n    FUNCTION = \"load_checkpoint\"\r\n\r\n    CATEGORY = \"advanced/loaders\"\r\n\r\n    def load_checkpoint(self, config_name, ckpt_name, output_vae=True, output_clip=True):\r\n        #torch.set_default_dtype(torch.float64)\r\n        config_path = folder_paths.get_full_path(\"configs\", config_name)\r\n        ckpt_path = folder_paths.get_full_path(\"checkpoints\", ckpt_name)\r\n        return comfy.sd.load_checkpoint(config_path, ckpt_path, output_vae=True, output_clip=True, 
embedding_directory=folder_paths.get_folder_paths(\"embeddings\"))\"\"\"\r\n\r\nMAX_RESOLUTION=8192\r\n\r\nclass LatentNoiseBatch_perlin:\r\n    def __init__(self):\r\n        pass\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s): \r\n        return {\"required\": {\r\n            \"seed\": (\"INT\", {\"default\": 0, \"min\": 0, \"max\": 0xffffffffffffffff}),\r\n            \"width\": (\"INT\", {\"default\": 1024, \"min\": 8, \"max\": MAX_RESOLUTION, \"step\": 8}),\r\n            \"height\": (\"INT\", {\"default\": 1024, \"min\": 8, \"max\": MAX_RESOLUTION, \"step\": 8}),\r\n            \"batch_size\": (\"INT\", {\"default\": 1, \"min\": 1, \"max\": 256}),\r\n            \"detail_level\": (\"FLOAT\", {\"default\": 0, \"min\": -1, \"max\": 1.0, \"step\": 0.1}),\r\n            },\r\n            \"optional\": {\r\n                \"details\": (\"SIGMAS\", ),\r\n            }\r\n        }\r\n    RETURN_TYPES = (\"LATENT\",)\r\n    FUNCTION = \"create_noisy_latents_perlin\"\r\n    CATEGORY = \"RES4LYF/noise\"\r\n\r\n    # found at https://gist.github.com/vadimkantorov/ac1b097753f217c5c11bc2ff396e0a57\r\n    # which was ported from https://github.com/pvigier/perlin-numpy/blob/master/perlin2d.py\r\n    def rand_perlin_2d(self, shape, res, fade = lambda t: 6*t**5 - 15*t**4 + 10*t**3):\r\n        delta = (res[0] / shape[0], res[1] / shape[1])\r\n        d = (shape[0] // res[0], shape[1] // res[1])\r\n        \r\n        grid = torch.stack(torch.meshgrid(torch.arange(0, res[0], delta[0]), torch.arange(0, res[1], delta[1])), dim = -1) % 1\r\n        angles = 2*math.pi*torch.rand(res[0]+1, res[1]+1)\r\n        gradients = torch.stack((torch.cos(angles), torch.sin(angles)), dim = -1)\r\n        \r\n        tile_grads = lambda slice1, slice2: gradients[slice1[0]:slice1[1], slice2[0]:slice2[1]].repeat_interleave(d[0], 0).repeat_interleave(d[1], 1)\r\n        dot = lambda grad, shift: (torch.stack((grid[:shape[0],:shape[1],0] + shift[0], grid[:shape[0],:shape[1], 1] + shift[1]  ), dim = -1) * grad[:shape[0], :shape[1]]).sum(dim = -1)\r\n        \r\n        n00 = dot(tile_grads([0, -1], [0, -1]), [0,  0])\r\n        n10 = dot(tile_grads([1, None], [0, -1]), [-1, 0])\r\n        n01 = dot(tile_grads([0, -1],[1, None]), [0, -1])\r\n        n11 = dot(tile_grads([1, None], [1, None]), [-1,-1])\r\n        t = fade(grid[:shape[0], :shape[1]])\r\n        return math.sqrt(2) * torch.lerp(torch.lerp(n00, n10, t[..., 0]), torch.lerp(n01, n11, t[..., 0]), t[..., 1])\r\n\r\n    def rand_perlin_2d_octaves(self, shape, res, octaves=1, persistence=0.5):\r\n        noise = torch.zeros(shape)\r\n        frequency = 1\r\n        amplitude = 1\r\n        for _ in range(octaves):\r\n            noise += amplitude * self.rand_perlin_2d(shape, (frequency*res[0], frequency*res[1]))\r\n            frequency *= 2\r\n            amplitude *= persistence\r\n        noise = torch.remainder(torch.abs(noise)*1000000,11)/11\r\n        # noise = (torch.sin(torch.remainder(noise*1000000,83))+1)/2\r\n        return noise\r\n    \r\n    def scale_tensor(self, x):\r\n        min_value = x.min()\r\n        max_value = x.max()\r\n        x = (x - min_value) / (max_value - min_value)\r\n        return x\r\n\r\n    def create_noisy_latents_perlin(self, seed, width, height, batch_size, detail_level, details=None):\r\n        if details is None:\r\n             details = torch.full((10000,), detail_level)\r\n        else:\r\n            details = detail_level * details\r\n        torch.manual_seed(seed)\r\n        noise = torch.zeros((batch_size, 
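# shape: (batch, 4 SD-style latent channels, h//8, w//8); one Perlin field is drawn per channel below\r\n            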
4, height // 8, width // 8), dtype=torch.float32, device=\"cpu\").cpu()\r\n        for i in range(batch_size):\r\n            for j in range(4):\r\n                noise_values = self.rand_perlin_2d_octaves((height // 8, width // 8), (1,1), 1, 1)\r\n                result = (1+details[i]/10)*torch.erfinv(2 * noise_values - 1) * (2 ** 0.5)\r\n                result = torch.clamp(result,-5,5)\r\n                noise[i, j, :, :] = result\r\n        return ({\"samples\": noise},)\r\n\r\n\r\nclass LatentNoiseBatch_gaussian_channels:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"latent\": (\"LATENT\",),\r\n                \"mean\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"mean_luminosity\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"mean_cyan_red\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"mean_lime_purple\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"mean_pattern_structure\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"std\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"steps\": (\"INT\", {\"default\": 0, \"min\": -10000, \"max\": 10000}),\r\n                \"seed\": (\"INT\", {\"default\": 0, \"min\": 0, \"max\": 0xffffffffffffffff}),\r\n            },\r\n            \"optional\": {\r\n                \"means\": (\"SIGMAS\", ),\r\n                \"mean_luminositys\": (\"SIGMAS\", ),\r\n                \"mean_cyan_reds\": (\"SIGMAS\", ),\r\n                \"mean_lime_purples\": (\"SIGMAS\", ),\r\n                \"mean_pattern_structures\": (\"SIGMAS\", ),\r\n                \"stds\": (\"SIGMAS\", ),\r\n            }\r\n        }\r\n\r\n    RETURN_TYPES = (\"LATENT\",)\r\n    FUNCTION = \"main\"\r\n\r\n    CATEGORY = \"RES4LYF/noise\"\r\n\r\n    \"\"\"    @staticmethod\r\n    def gaussian_noise_channels_like(x, mean=0.0, mean_luminosity = -0.1, mean_cyan_red = 0.0, mean_lime_purple=0.0, mean_pattern_structure=0.0, std_dev=1.0, seed=42):\r\n        x = x.squeeze(0)\r\n\r\n        noise = torch.randn_like(x) * std_dev + mean\r\n\r\n        luminosity = noise[0:1] + mean_luminosity\r\n        cyan_red = noise[1:2] + mean_cyan_red\r\n        lime_purple = noise[2:3] + mean_lime_purple\r\n        pattern_structure = noise[3:4] + mean_pattern_structure\r\n\r\n        noise = torch.unsqueeze(torch.cat([luminosity, cyan_red, lime_purple, pattern_structure]), 0)\r\n\r\n        return noise.to(x.device)\"\"\"\r\n    \r\n    @staticmethod\r\n    def gaussian_noise_channels(x, mean_luminosity = -0.1, mean_cyan_red = 0.0, mean_lime_purple=0.0, mean_pattern_structure=0.0):\r\n        x = x.squeeze(0)\r\n\r\n        luminosity = x[0:1] + mean_luminosity\r\n        cyan_red = x[1:2] + mean_cyan_red\r\n        lime_purple = x[2:3] + mean_lime_purple\r\n        pattern_structure = x[3:4] + mean_pattern_structure\r\n\r\n        x = torch.unsqueeze(torch.cat([luminosity, cyan_red, lime_purple, pattern_structure]), 0)\r\n\r\n        return x\r\n\r\n    def main(self, latent, steps, seed, \r\n              mean, mean_luminosity, mean_cyan_red, mean_lime_purple, mean_pattern_structure, std,\r\n              means=None, mean_luminositys=None, 
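# optional SIGMAS inputs supply per-step values; the scalar widgets are broadcast across all steps otherwise\r\n              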
mean_cyan_reds=None, mean_lime_purples=None, mean_pattern_structures=None, stds=None):\r\n        if steps == 0:\r\n            # 'means' is optional: default to a single step when it is not connected\r\n            steps = len(means) if means is not None else 1\r\n\r\n        x = latent[\"samples\"]\r\n        b, c, h, w = x.shape  \r\n\r\n        noise_latents = torch.zeros([steps, 4, h, w], dtype=x.dtype, layout=x.layout, device=x.device)\r\n\r\n        noise_sampler = NOISE_GENERATOR_CLASSES.get('gaussian')(x=x, seed = seed)\r\n\r\n        means = initialize_or_scale(means, mean, steps)\r\n        mean_luminositys = initialize_or_scale(mean_luminositys, mean_luminosity, steps)\r\n        mean_cyan_reds = initialize_or_scale(mean_cyan_reds, mean_cyan_red, steps)\r\n        mean_lime_purples = initialize_or_scale(mean_lime_purples, mean_lime_purple, steps)\r\n        mean_pattern_structures = initialize_or_scale(mean_pattern_structures, mean_pattern_structure, steps)\r\n\r\n        stds = initialize_or_scale(stds, std, steps)\r\n\r\n        for i in range(steps):\r\n            noise = noise_sampler(mean=means[i].item(), std=stds[i].item())\r\n            noise = self.gaussian_noise_channels(noise, mean_luminositys[i].item(), mean_cyan_reds[i].item(), mean_lime_purples[i].item(), mean_pattern_structures[i].item())\r\n            noise_latents[i] = x + noise\r\n\r\n        return ({\"samples\": noise_latents}, )\r\n\r\nclass LatentNoiseBatch_gaussian:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"latent\": (\"LATENT\",),\r\n                \"mean\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"std\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"steps\": (\"INT\", {\"default\": 0, \"min\": -10000, \"max\": 10000}),\r\n                \"seed\": (\"INT\", {\"default\": 0, \"min\": 0, \"max\": 0xffffffffffffffff}),\r\n            },\r\n            \"optional\": {\r\n                \"means\": (\"SIGMAS\", ),\r\n                \"stds\": (\"SIGMAS\", ),\r\n                \"steps_\": (\"SIGMAS\", ),\r\n            }\r\n        }\r\n\r\n    RETURN_TYPES = (\"LATENT\",)\r\n    FUNCTION = \"main\"\r\n\r\n    CATEGORY = \"RES4LYF/noise\"\r\n\r\n    def main(self, latent, mean, std, steps, seed, means=None, stds=None, steps_=None):\r\n        if steps_ is not None:\r\n            steps = len(steps_)\r\n\r\n        means = initialize_or_scale(means, mean, steps)\r\n        stds = initialize_or_scale(stds, std, steps)    \r\n\r\n        latent_samples = latent[\"samples\"]\r\n        b, c, h, w = latent_samples.shape  \r\n\r\n        noise_latents = torch.zeros([steps, c, h, w], dtype=latent_samples.dtype, layout=latent_samples.layout, device=latent_samples.device)\r\n\r\n        noise_sampler = NOISE_GENERATOR_CLASSES.get('gaussian')(x=latent_samples, seed = seed)\r\n\r\n        for i in range(steps):\r\n            noise_latents[i] = noise_sampler(mean=means[i].item(), std=stds[i].item())\r\n        return ({\"samples\": noise_latents}, )\r\n\r\nclass LatentNoiseBatch_fractal:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"latent\": (\"LATENT\",),\r\n                \"alpha\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"k_flip\": (\"BOOLEAN\", {\"default\": False}),\r\n                \"steps\": (\"INT\", {\"default\": 0, \"min\": -10000, \"max\": 10000}),\r\n                
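# when the optional steps_ input is connected, its length overrides this value\r\n                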
\"seed\": (\"INT\", {\"default\": 0, \"min\": 0, \"max\": 0xffffffffffffffff}),\r\n            },\r\n            \"optional\": {\r\n                \"alphas\": (\"SIGMAS\", ),\r\n                \"ks\": (\"SIGMAS\", ),\r\n                \"steps_\": (\"SIGMAS\", ),\r\n            }\r\n        }\r\n\r\n    RETURN_TYPES = (\"LATENT\",)\r\n    FUNCTION = \"main\"\r\n\r\n    CATEGORY = \"RES4LYF/noise\"\r\n\r\n    def main(self, latent, alpha, k_flip, steps, seed=42, alphas=None, ks=None, sigmas_=None, steps_=None):\r\n        if steps_ is not None:\r\n            steps = len(steps_)\r\n\r\n        alphas = initialize_or_scale(alphas, alpha, steps)\r\n        k_flip = -1 if k_flip else 1\r\n        ks = initialize_or_scale(ks, k_flip, steps)\r\n\r\n        latent_samples = latent[\"samples\"]\r\n        b, c, h, w = latent_samples.shape  \r\n        noise_latents = torch.zeros([steps, c, h, w], dtype=latent_samples.dtype, layout=latent_samples.layout, device=latent_samples.device)\r\n\r\n        noise_sampler = NOISE_GENERATOR_CLASSES.get('fractal')(x=latent_samples, seed = seed)\r\n\r\n        for i in range(steps):\r\n            noise_latents[i] = noise_sampler(alpha=alphas[i].item(), k=ks[i].item(), scale=0.1)\r\n\r\n        return ({\"samples\": noise_latents}, )\r\n\r\nclass LatentNoiseList:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"latent\": (\"LATENT\",),\r\n                \"alpha\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\r\n                \"k_flip\": (\"BOOLEAN\", {\"default\": False}),\r\n                \"steps\": (\"INT\", {\"default\": 0, \"min\": -10000, \"max\": 10000}),\r\n                \"seed\": (\"INT\", {\"default\": 0, \"min\": 0, \"max\": 0xffffffffffffffff}),\r\n            },\r\n            \"optional\": {\r\n                \"alphas\": (\"SIGMAS\", ),\r\n                \"ks\": (\"SIGMAS\", ),\r\n            }\r\n        }\r\n\r\n    RETURN_TYPES = (\"LATENT\",)\r\n    OUTPUT_IS_LIST = (True,)\r\n    FUNCTION = \"main\"\r\n    \r\n    CATEGORY = \"RES4LYF/noise\"\r\n\r\n    def main(self, seed, latent, alpha, k_flip, steps, alphas=None, ks=None):\r\n        alphas = initialize_or_scale(alphas, alpha, steps)\r\n        k_flip = -1 if k_flip else 1\r\n        ks = initialize_or_scale(ks, k_flip, steps)    \r\n\r\n        latent_samples = latent[\"samples\"]\r\n        latents = []\r\n        size = latent_samples.shape\r\n\r\n        steps = len(alphas) if steps == 0 else steps\r\n\r\n        noise_sampler = NOISE_GENERATOR_CLASSES.get('fractal')(x=latent_samples, seed=seed)\r\n\r\n        for i in range(steps):\r\n            noise = noise_sampler(alpha=alphas[i].item(), k=ks[i].item(), scale=0.1)\r\n            noisy_latent = latent_samples + noise\r\n            new_latent = {\"samples\": noisy_latent}\r\n            latents.append(new_latent)\r\n\r\n        return (latents, )\r\n    \r\nclass LatentBatch_channels:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"latent\": (\"LATENT\",),\r\n                \"mode\": ([\"offset\", \"multiply\", \"power\"],),\r\n                \"luminosity\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"cyan_red\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"lime_purple\": (\"FLOAT\", {\"default\": 0.0, \"min\": 
-10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"pattern_structure\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n            },\r\n            \"optional\": {\r\n                \"luminositys\": (\"SIGMAS\", ),\r\n                \"cyan_reds\": (\"SIGMAS\", ),\r\n                \"lime_purples\": (\"SIGMAS\", ),\r\n                \"pattern_structures\": (\"SIGMAS\", ),\r\n            }\r\n        }\r\n\r\n    RETURN_TYPES = (\"LATENT\",)\r\n    FUNCTION = \"main\"\r\n\r\n    CATEGORY = \"RES4LYF/latents\"\r\n    \r\n    @staticmethod\r\n    def latent_channels_multiply(x, luminosity = -0.1, cyan_red = 0.0, lime_purple=0.0, pattern_structure=0.0):\r\n        luminosity = x[0:1] * luminosity\r\n        cyan_red = x[1:2] * cyan_red\r\n        lime_purple = x[2:3] * lime_purple\r\n        pattern_structure = x[3:4] * pattern_structure\r\n\r\n        x = torch.unsqueeze(torch.cat([luminosity, cyan_red, lime_purple, pattern_structure]), 0)\r\n        return x\r\n\r\n    @staticmethod\r\n    def latent_channels_offset(x, luminosity = -0.1, cyan_red = 0.0, lime_purple=0.0, pattern_structure=0.0):\r\n        luminosity = x[0:1] + luminosity\r\n        cyan_red = x[1:2] + cyan_red\r\n        lime_purple = x[2:3] + lime_purple\r\n        pattern_structure = x[3:4] + pattern_structure\r\n\r\n        x = torch.unsqueeze(torch.cat([luminosity, cyan_red, lime_purple, pattern_structure]), 0)\r\n        return x\r\n    \r\n    @staticmethod\r\n    def latent_channels_power(x, luminosity = -0.1, cyan_red = 0.0, lime_purple=0.0, pattern_structure=0.0):\r\n        luminosity = x[0:1] ** luminosity\r\n        cyan_red = x[1:2] ** cyan_red\r\n        lime_purple = x[2:3] ** lime_purple\r\n        pattern_structure = x[3:4] ** pattern_structure\r\n\r\n        x = torch.unsqueeze(torch.cat([luminosity, cyan_red, lime_purple, pattern_structure]), 0)\r\n        return x\r\n\r\n    def main(self, latent, mode,\r\n              luminosity, cyan_red, lime_purple, pattern_structure, \r\n              luminositys=None, cyan_reds=None, lime_purples=None, pattern_structures=None):\r\n        \r\n        x = latent[\"samples\"]\r\n        b, c, h, w = x.shape  \r\n\r\n        noise_latents = torch.zeros([b, c, h, w], dtype=x.dtype, layout=x.layout, device=x.device)\r\n\r\n        luminositys = initialize_or_scale(luminositys, luminosity, b)\r\n        cyan_reds = initialize_or_scale(cyan_reds, cyan_red, b)\r\n        lime_purples = initialize_or_scale(lime_purples, lime_purple, b)\r\n        pattern_structures = initialize_or_scale(pattern_structures, pattern_structure, b)\r\n\r\n        for i in range(b):\r\n            if mode == \"offset\":\r\n                noise = self.latent_channels_offset(x[i], luminositys[i].item(), cyan_reds[i].item(), lime_purples[i].item(), pattern_structures[i].item())\r\n            elif mode == \"multiply\":  \r\n                noise = self.latent_channels_multiply(x[i], luminositys[i].item(), cyan_reds[i].item(), lime_purples[i].item(), pattern_structures[i].item())\r\n            elif mode == \"power\":  \r\n                noise = self.latent_channels_power(x[i], luminositys[i].item(), cyan_reds[i].item(), lime_purples[i].item(), pattern_structures[i].item())\r\n            noise_latents[i] = noise\r\n\r\n        return ({\"samples\": noise_latents}, )\r\n    \r\n\r\nclass LatentBatch_channels_16:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                
\"latent\": (\"LATENT\",),\r\n                \"mode\": ([\"offset\", \"multiply\", \"power\"],),\r\n                \"chan_1\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"chan_2\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"chan_3\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"chan_4\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"chan_5\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"chan_6\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"chan_7\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"chan_8\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"chan_9\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"chan_10\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"chan_11\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"chan_12\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"chan_13\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"chan_14\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"chan_15\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"chan_16\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n            },\r\n            \"optional\": {\r\n                \"chan_1s\": (\"SIGMAS\", ),\r\n                \"chan_2s\": (\"SIGMAS\", ),\r\n                \"chan_3s\": (\"SIGMAS\", ),\r\n                \"chan_4s\": (\"SIGMAS\", ),\r\n                \"chan_5s\": (\"SIGMAS\", ),\r\n                \"chan_6s\": (\"SIGMAS\", ),\r\n                \"chan_7s\": (\"SIGMAS\", ),\r\n                \"chan_8s\": (\"SIGMAS\", ),\r\n                \"chan_9s\": (\"SIGMAS\", ),\r\n                \"chan_10s\": (\"SIGMAS\", ),\r\n                \"chan_11s\": (\"SIGMAS\", ),\r\n                \"chan_12s\": (\"SIGMAS\", ),\r\n                \"chan_13s\": (\"SIGMAS\", ),\r\n                \"chan_14s\": (\"SIGMAS\", ),\r\n                \"chan_15s\": (\"SIGMAS\", ),\r\n                \"chan_16s\": (\"SIGMAS\", ),\r\n\r\n            }\r\n        }\r\n\r\n    RETURN_TYPES = (\"LATENT\",)\r\n    FUNCTION = \"main\"\r\n\r\n    CATEGORY = \"RES4LYF/latents\"\r\n    \r\n    @staticmethod\r\n    def latent_channels_multiply(x, chan_1 = 0.0, chan_2 = 0.0, chan_3 = 0.0, chan_4 = 0.0, chan_5 = 0.0, chan_6 = 0.0, chan_7 = 0.0, chan_8 = 0.0, chan_9 = 0.0, chan_10 = 0.0, chan_11 = 0.0, chan_12 = 0.0, chan_13 = 0.0, chan_14 = 0.0, chan_15 = 0.0, chan_16 = 0.0):\r\n        chan_1 = x[0:1] * chan_1\r\n        chan_2 = x[1:2] * chan_2\r\n        chan_3 = x[2:3] * chan_3\r\n        chan_4 = x[3:4] * chan_4\r\n        chan_5 = x[4:5] * chan_5\r\n        chan_6 = x[5:6] * chan_6\r\n        chan_7 = x[6:7] * chan_7\r\n        chan_8 = x[7:8] * chan_8\r\n        chan_9 = 
x[8:9] * chan_9\r\n        chan_10 = x[9:10] * chan_10\r\n        chan_11 = x[10:11] * chan_11\r\n        chan_12 = x[11:12] * chan_12\r\n        chan_13 = x[12:13] * chan_13\r\n        chan_14 = x[13:14] * chan_14\r\n        chan_15 = x[14:15] * chan_15\r\n        chan_16 = x[15:16] * chan_16\r\n\r\n        x = torch.unsqueeze(torch.cat([chan_1, chan_2, chan_3, chan_4, chan_5, chan_6, chan_7, chan_8, chan_9, chan_10, chan_11, chan_12, chan_13, chan_14, chan_15, chan_16]), 0)\r\n        return x\r\n\r\n    @staticmethod\r\n    def latent_channels_offset(x, chan_1 = 0.0, chan_2 = 0.0, chan_3 = 0.0, chan_4 = 0.0, chan_5 = 0.0, chan_6 = 0.0, chan_7 = 0.0, chan_8 = 0.0, chan_9 = 0.0, chan_10 = 0.0, chan_11 = 0.0, chan_12 = 0.0, chan_13 = 0.0, chan_14 = 0.0, chan_15 = 0.0, chan_16 = 0.0):\r\n        chan_1 = x[0:1] + chan_1\r\n        chan_2 = x[1:2] + chan_2\r\n        chan_3 = x[2:3] + chan_3\r\n        chan_4 = x[3:4] + chan_4\r\n        chan_5 = x[4:5] + chan_5\r\n        chan_6 = x[5:6] + chan_6\r\n        chan_7 = x[6:7] + chan_7\r\n        chan_8 = x[7:8] + chan_8\r\n        chan_9 = x[8:9] + chan_9\r\n        chan_10 = x[9:10] + chan_10\r\n        chan_11 = x[10:11] + chan_11\r\n        chan_12 = x[11:12] + chan_12\r\n        chan_13 = x[12:13] + chan_13\r\n        chan_14 = x[13:14] + chan_14\r\n        chan_15 = x[14:15] + chan_15\r\n        chan_16 = x[15:16] + chan_16\r\n\r\n        x = torch.unsqueeze(torch.cat([chan_1, chan_2, chan_3, chan_4, chan_5, chan_6, chan_7, chan_8, chan_9, chan_10, chan_11, chan_12, chan_13, chan_14, chan_15, chan_16]), 0)\r\n        return x\r\n\r\n    @staticmethod\r\n    def latent_channels_power(x, chan_1 = 0.0, chan_2 = 0.0, chan_3 = 0.0, chan_4 = 0.0, chan_5 = 0.0, chan_6 = 0.0, chan_7 = 0.0, chan_8 = 0.0, chan_9 = 0.0, chan_10 = 0.0, chan_11 = 0.0, chan_12 = 0.0, chan_13 = 0.0, chan_14 = 0.0, chan_15 = 0.0, chan_16 = 0.0):\r\n        chan_1 = x[0:1] ** chan_1\r\n        chan_2 = x[1:2] ** chan_2\r\n        chan_3 = x[2:3] ** chan_3\r\n        chan_4 = x[3:4] ** chan_4\r\n        chan_5 = x[4:5] ** chan_5\r\n        chan_6 = x[5:6] ** chan_6\r\n        chan_7 = x[6:7] ** chan_7\r\n        chan_8 = x[7:8] ** chan_8\r\n        chan_9 = x[8:9] ** chan_9\r\n        chan_10 = x[9:10] ** chan_10\r\n        chan_11 = x[10:11] ** chan_11\r\n        chan_12 = x[11:12] ** chan_12\r\n        chan_13 = x[12:13] ** chan_13\r\n        chan_14 = x[13:14] ** chan_14\r\n        chan_15 = x[14:15] ** chan_15\r\n        chan_16 = x[15:16] ** chan_16\r\n\r\n        x = torch.unsqueeze(torch.cat([chan_1, chan_2, chan_3, chan_4, chan_5, chan_6, chan_7, chan_8, chan_9, chan_10, chan_11, chan_12, chan_13, chan_14, chan_15, chan_16]), 0)\r\n        return x\r\n\r\n    def main(self, latent, mode,\r\n              chan_1, chan_2, chan_3, chan_4, chan_5, chan_6, chan_7, chan_8, chan_9, chan_10, chan_11, chan_12, chan_13, chan_14, chan_15, chan_16,\r\n              chan_1s=None, chan_2s=None, chan_3s=None, chan_4s=None, chan_5s=None, chan_6s=None, chan_7s=None, chan_8s=None, chan_9s=None, chan_10s=None, chan_11s=None, chan_12s=None, chan_13s=None, chan_14s=None, chan_15s=None, chan_16s=None):\r\n        \r\n        x = latent[\"samples\"]\r\n        b, c, h, w = x.shape  \r\n\r\n        noise_latents = torch.zeros([b, c, h, w], dtype=x.dtype, layout=x.layout, device=x.device)\r\n        chan_1s = initialize_or_scale(chan_1s, chan_1, b)\r\n        chan_2s = initialize_or_scale(chan_2s, chan_2, b)\r\n        chan_3s = initialize_or_scale(chan_3s, chan_3, b)\r\n        chan_4s = 
initialize_or_scale(chan_4s, chan_4, b)\r\n        chan_5s = initialize_or_scale(chan_5s, chan_5, b)\r\n        chan_6s = initialize_or_scale(chan_6s, chan_6, b)\r\n        chan_7s = initialize_or_scale(chan_7s, chan_7, b)\r\n        chan_8s = initialize_or_scale(chan_8s, chan_8, b)\r\n        chan_9s = initialize_or_scale(chan_9s, chan_9, b)\r\n        chan_10s = initialize_or_scale(chan_10s, chan_10, b)\r\n        chan_11s = initialize_or_scale(chan_11s, chan_11, b)\r\n        chan_12s = initialize_or_scale(chan_12s, chan_12, b)\r\n        chan_13s = initialize_or_scale(chan_13s, chan_13, b)\r\n        chan_14s = initialize_or_scale(chan_14s, chan_14, b)\r\n        chan_15s = initialize_or_scale(chan_15s, chan_15, b)\r\n        chan_16s = initialize_or_scale(chan_16s, chan_16, b)\r\n\r\n        for i in range(b):\r\n            if mode == \"offset\":\r\n                noise = self.latent_channels_offset(x[i], chan_1s[i].item(), chan_2s[i].item(), chan_3s[i].item(), chan_4s[i].item(), chan_5s[i].item(), chan_6s[i].item(), chan_7s[i].item(), chan_8s[i].item(), chan_9s[i].item(), chan_10s[i].item(), chan_11s[i].item(), chan_12s[i].item(), chan_13s[i].item(), chan_14s[i].item(), chan_15s[i].item(), chan_16s[i].item())\r\n            elif mode == \"multiply\":  \r\n                noise = self.latent_channels_multiply(x[i], chan_1s[i].item(), chan_2s[i].item(), chan_3s[i].item(), chan_4s[i].item(), chan_5s[i].item(), chan_6s[i].item(), chan_7s[i].item(), chan_8s[i].item(), chan_9s[i].item(), chan_10s[i].item(), chan_11s[i].item(), chan_12s[i].item(), chan_13s[i].item(), chan_14s[i].item(), chan_15s[i].item(), chan_16s[i].item())\r\n            elif mode == \"power\":  \r\n                noise = self.latent_channels_power(x[i], chan_1s[i].item(), chan_2s[i].item(), chan_3s[i].item(), chan_4s[i].item(), chan_5s[i].item(), chan_6s[i].item(), chan_7s[i].item(), chan_8s[i].item(), chan_9s[i].item(), chan_10s[i].item(), chan_11s[i].item(), chan_12s[i].item(), chan_13s[i].item(), chan_14s[i].item(), chan_15s[i].item(), chan_16s[i].item())\r\n            noise_latents[i] = noise\r\n\r\n        return ({\"samples\": noise_latents}, )\r\n    \r\nclass latent_normalize_channels:\r\n    def __init__(self):\r\n        pass\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                    \"latent\": (\"LATENT\", ),     \r\n                    \"mode\": ([\"full\", \"channels\"],), \r\n                    \"operation\": ([\"normalize\", \"center\", \"standardize\"],), \r\n                     },\r\n                }\r\n\r\n    RETURN_TYPES = (\"LATENT\",)\r\n    RETURN_NAMES = (\"passthrough\",)\r\n    CATEGORY = \"RES4LYF/latents\"\r\n\r\n    FUNCTION = \"main\"\r\n\r\n    def main(self, latent, mode, operation):\r\n        x = latent[\"samples\"]\r\n        b, c, h, w = x.shape\r\n\r\n        if mode == \"full\":\r\n            if operation == \"normalize\":\r\n                x = (x - x.mean()) / x.std()\r\n            elif operation == \"center\":\r\n                x = x - x.mean()\r\n            elif operation == \"standardize\":\r\n                x = x / x.std()\r\n\r\n        elif mode == \"channels\":\r\n            if operation == \"normalize\":\r\n                for i in range(b):\r\n                    for j in range(c):\r\n                        x[i, j] = (x[i, j] - x[i, j].mean()) / x[i, j].std()\r\n            elif operation == \"center\":\r\n                for i in range(b):\r\n                    for j in range(c):\r\n            
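            # center: subtract this channel's spatial mean\r\n            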
            x[i, j] = x[i, j] - x[i, j].mean()\r\n            elif operation == \"standardize\":\r\n                for i in range(b):\r\n                    for j in range(c):\r\n                        x[i, j] = x[i, j] / x[i, j].std()\r\n\r\n        return ({\"samples\": x},)\r\n\r\n\r\n\r\n\r\ndef hard_light_blend(base_latent, blend_latent):\r\n    if base_latent.sum() == 0 and base_latent.std() == 0:\r\n        return base_latent\r\n    \r\n    blend_latent = (blend_latent - blend_latent.min()) / (blend_latent.max() - blend_latent.min())\r\n\r\n    positive_mask = base_latent >= 0\r\n    negative_mask = base_latent < 0\r\n    \r\n    positive_latent = base_latent * positive_mask.float()\r\n    negative_latent = base_latent * negative_mask.float()\r\n\r\n    positive_result = torch.where(blend_latent < 0.5,\r\n                                  2 * positive_latent * blend_latent,\r\n                                  1 - 2 * (1 - positive_latent) * (1 - blend_latent))\r\n\r\n    negative_result = torch.where(blend_latent < 0.5,\r\n                                  2 * negative_latent.abs() * blend_latent,\r\n                                  1 - 2 * (1 - negative_latent.abs()) * (1 - blend_latent))\r\n    negative_result = -negative_result\r\n\r\n    combined_result = positive_result * positive_mask.float() + negative_result * negative_mask.float()\r\n\r\n    #combined_result *= base_latent.max()\r\n    \r\n    ks = combined_result\r\n    ks2 = torch.zeros_like(base_latent)\r\n    for n in range(base_latent.shape[1]):\r\n        ks2[0][n] = (ks[0][n]) / ks[0][n].std()\r\n        ks2[0][n] = (ks2[0][n] * base_latent[0][n].std())\r\n    combined_result = ks2\r\n    \r\n    return combined_result\r\n\r\n\r\n\r\n"
  },
  {
    "path": "legacy/legacy_sampler_rk.py",
    "content": "import torch\r\nimport torch.nn.functional as F\r\n\r\n\r\nfrom tqdm.auto import trange\r\nimport math\r\nimport copy\r\nimport gc\r\n\r\nimport comfy.model_patcher\r\n\r\nfrom .noise_classes import NOISE_GENERATOR_CLASSES_SIMPLE, NOISE_GENERATOR_CLASSES\r\nfrom .deis_coefficients import get_deis_coeff_list\r\nfrom .latents import hard_light_blend\r\n\r\nfrom .noise_sigmas_timesteps_scaling import get_res4lyf_step_with_model, get_res4lyf_half_step3\r\n\r\n\r\ndef get_epsilon(model, x, sigma, **extra_args):\r\n    s_in = x.new_ones([x.shape[0]])\r\n    x0 = model(x, sigma * s_in, **extra_args)\r\n    eps = (x - x0) / (sigma * s_in) \r\n    return eps\r\n\r\ndef get_denoised(model, x, sigma, **extra_args):\r\n    s_in = x.new_ones([x.shape[0]])\r\n    x0 = model(x, sigma * s_in, **extra_args)\r\n    return x0\r\n\r\n\r\n# Remainder solution\r\ndef __phi(j, neg_h):\r\n  remainder = torch.zeros_like(neg_h)\r\n  \r\n  for k in range(j): \r\n    remainder += (neg_h)**k / math.factorial(k)\r\n  phi_j_h = ((neg_h).exp() - remainder) / (neg_h)**j\r\n  \r\n  return phi_j_h\r\n  \r\n  \r\ndef calculate_gamma(c2, c3):\r\n    return (3*(c3**3) - 2*c3) / (c2*(2 - 3*c2))\r\n\r\n\r\n\r\n\r\nfrom typing import Optional\r\n\r\n\r\ndef _gamma(n: int,) -> int:\r\n  \"\"\"\r\n  https://en.wikipedia.org/wiki/Gamma_function\r\n  for every positive integer n,\r\n  Γ(n) = (n-1)!\r\n  \"\"\"\r\n  return math.factorial(n-1)\r\n\r\ndef _incomplete_gamma(s: int, x: float, gamma_s: Optional[int] = None) -> float:\r\n  \"\"\"\r\n  https://en.wikipedia.org/wiki/Incomplete_gamma_function#Special_values\r\n  if s is a positive integer,\r\n  Γ(s, x) = (s-1)!*∑{k=0..s-1}(x^k/k!)\r\n  \"\"\"\r\n  if gamma_s is None:\r\n    gamma_s = _gamma(s)\r\n\r\n  sum_: float = 0\r\n  # {k=0..s-1} inclusive\r\n  for k in range(s):\r\n    numerator: float = x**k\r\n    denom: int = math.factorial(k)\r\n    quotient: float = numerator/denom\r\n    sum_ += quotient\r\n  incomplete_gamma_: float = sum_ * math.exp(-x) * gamma_s\r\n  return incomplete_gamma_\r\n\r\n\r\n# Exact analytic solution originally calculated by Clybius. 
https://github.com/Clybius/ComfyUI-Extra-Samplers/tree/main\r\ndef phi(j: int, neg_h: float, ):\r\n  \"\"\"\r\n  For j={1,2,3}: you could alternatively use Kat's phi_1, phi_2, phi_3 which perform fewer steps\r\n\r\n  Lemma 1\r\n  https://arxiv.org/abs/2308.02157\r\n  ϕj(-h) = 1/h^j*∫{0..h}(e^(τ-h)*(τ^(j-1))/((j-1)!)dτ)\r\n\r\n  https://www.wolframalpha.com/input?i=integrate+e%5E%28%CF%84-h%29*%28%CF%84%5E%28j-1%29%2F%28j-1%29%21%29d%CF%84\r\n  = 1/h^j*[(e^(-h)*(-τ)^(-j)*τ(j))/((j-1)!)]{0..h}\r\n  https://www.wolframalpha.com/input?i=integrate+e%5E%28%CF%84-h%29*%28%CF%84%5E%28j-1%29%2F%28j-1%29%21%29d%CF%84+between+0+and+h\r\n  = 1/h^j*((e^(-h)*(-h)^(-j)*h^j*(Γ(j)-Γ(j,-h)))/(j-1)!)\r\n  = (e^(-h)*(-h)^(-j)*h^j*(Γ(j)-Γ(j,-h))/((j-1)!*h^j)\r\n  = (e^(-h)*(-h)^(-j)*(Γ(j)-Γ(j,-h))/(j-1)!\r\n  = (e^(-h)*(-h)^(-j)*(Γ(j)-Γ(j,-h))/Γ(j)\r\n  = (e^(-h)*(-h)^(-j)*(1-Γ(j,-h)/Γ(j))\r\n\r\n  requires j>0\r\n  \"\"\"\r\n  assert j > 0\r\n  gamma_: float = _gamma(j)\r\n  incomp_gamma_: float = _incomplete_gamma(j, neg_h, gamma_s=gamma_)\r\n  phi_: float = math.exp(neg_h) * neg_h**-j * (1-incomp_gamma_/gamma_)\r\n  return phi_\r\n\r\n\r\nrk_coeff = {\r\n    \"gauss-legendre_5s\": (\r\n    [\r\n        [4563950663 / 32115191526, \r\n         (310937500000000 / 2597974476091533 + 45156250000 * (739**0.5) / 8747388808389), \r\n         (310937500000000 / 2597974476091533 - 45156250000 * (739**0.5) / 8747388808389),\r\n         (5236016175 / 88357462711 + 709703235 * (739**0.5) / 353429850844),\r\n         (5236016175 / 88357462711 - 709703235 * (739**0.5) / 353429850844)],\r\n         \r\n        [(4563950663 / 32115191526 - 38339103 * (739**0.5) / 6250000000),\r\n         (310937500000000 / 2597974476091533 + 9557056475401 * (739**0.5) / 3498955523355600000),\r\n         (310937500000000 / 2597974476091533 - 14074198220719489 * (739**0.5) / 3498955523355600000),\r\n         (5236016175 / 88357462711 + 5601362553163918341 * (739**0.5) / 2208936567775000000000),\r\n         (5236016175 / 88357462711 - 5040458465159165409 * (739**0.5) / 2208936567775000000000)],\r\n         \r\n        [(4563950663 / 32115191526 + 38339103 * (739**0.5) / 6250000000),\r\n         (310937500000000 / 2597974476091533 + 14074198220719489 * (739**0.5) / 3498955523355600000),\r\n         (310937500000000 / 2597974476091533 - 9557056475401 * (739**0.5) / 3498955523355600000),\r\n         (5236016175 / 88357462711 + 5040458465159165409 * (739**0.5) / 2208936567775000000000),\r\n         (5236016175 / 88357462711 - 5601362553163918341 * (739**0.5) / 2208936567775000000000)],\r\n         \r\n        [(4563950663 / 32115191526 - 38209 * (739**0.5) / 7938810),\r\n         (310937500000000 / 2597974476091533 - 359369071093750 * (739**0.5) / 70145310854471391),\r\n         (310937500000000 / 2597974476091533 - 323282178906250 * (739**0.5) / 70145310854471391),\r\n         (5236016175 / 88357462711 - 470139 * (739**0.5) / 1413719403376),\r\n         (5236016175 / 88357462711 - 44986764863 * (739**0.5) / 21205791050640)],\r\n         \r\n        [(4563950663 / 32115191526 + 38209 * (739**0.5) / 7938810),\r\n         (310937500000000 / 2597974476091533 + 359369071093750 * (739**0.5) / 70145310854471391),\r\n         (310937500000000 / 2597974476091533 + 323282178906250 * (739**0.5) / 70145310854471391),\r\n         (5236016175 / 88357462711 + 44986764863 * (739**0.5) / 21205791050640),\r\n         (5236016175 / 88357462711 + 470139 * (739**0.5) / 1413719403376)],\r\n        \r\n        [4563950663 / 16057595763,\r\n         621875000000000 / 
2597974476091533,\r\n         621875000000000 / 2597974476091533,\r\n         10472032350 / 88357462711,\r\n         10472032350 / 88357462711]\r\n    ],\r\n    [\r\n        1 / 2,\r\n        1 / 2 - 99 * (739**0.5) / 10000,\r\n        1 / 2 + 99 * (739**0.5) / 10000,\r\n        1 / 2 - (739**0.5) / 60,\r\n        1 / 2 + (739**0.5) / 60\r\n    ]\r\n    ),\r\n    \"gauss-legendre_4s\": (\r\n        [\r\n            [1/4, 1/4 - 15**0.5 / 6, 1/4 + 15**0.5 / 6, 1/4],            \r\n            [1/4 + 15**0.5 / 6, 1/4, 1/4 - 15**0.5 / 6, 1/4],          \r\n            [1/4, 1/4 + 15**0.5 / 6, 1/4, 1/4 - 15**0.5 / 6],            \r\n            [1/4 - 15**0.5 / 6, 1/4, 1/4 + 15**0.5 / 6, 1/4],           \r\n            [1/8, 3/8, 3/8, 1/8]                                        \r\n        ],\r\n        [\r\n            1/2 - 15**0.5 / 10,                                     \r\n            1/2 + 15**0.5 / 10,                                         \r\n            1/2 + 15**0.5 / 10,                                        \r\n            1/2 - 15**0.5 / 10                                         \r\n        ]\r\n    ),\r\n    \"gauss-legendre_3s\": (\r\n        [\r\n            [5/36, 2/9 - 15**0.5 / 15, 5/36 - 15**0.5 / 30],\r\n            [5/36 + 15**0.5 / 24, 2/9, 5/36 - 15**0.5 / 24],\r\n            [5/36 + 15**0.5 / 30, 2/9 + 15**0.5 / 15, 5/36],\r\n            [5/18, 4/9, 5/18]\r\n        ],\r\n        [1/2 - 15**0.5 / 10, 1/2, 1/2 + 15**0.5 / 10]\r\n    ),\r\n    \"gauss-legendre_2s\": (\r\n        [\r\n            [1/4, 1/4 - 3**0.5 / 6],\r\n            [1/4 + 3**0.5 / 6, 1/4],\r\n            [1/2, 1/2],\r\n        ],\r\n        [1/2 - 3**0.5 / 6, 1/2 + 3**0.5 / 6]\r\n    ),\r\n    \"radau_iia_3s\": (\r\n        [    \r\n            [11/45 - 7*6**0.5 / 360, 37/225 - 169*6**0.5 / 1800, -2/225 + 6**0.5 / 75],\r\n            [37/225 + 169*6**0.5 / 1800, 11/45 + 7*6**0.5 / 360, -2/225 - 6**0.5 / 75],\r\n            [4/9 - 6**0.5 / 36, 4/9 + 6**0.5 / 36, 1/9],\r\n            [4/9 - 6**0.5 / 36, 4/9 + 6**0.5 / 36, 1/9],\r\n        ],\r\n        [2/5 - 6**0.5 / 10, 2/5 + 6**0.5 / 10, 1.]\r\n    ),\r\n    \"radau_iia_2s\": (\r\n        [    \r\n            [5/12, -1/12],\r\n            [3/4, 1/4],\r\n            [3/4, 1/4],\r\n        ],\r\n        [1/3, 1]\r\n    ),\r\n    \"lobatto_iiic_3s\": (\r\n        [    \r\n            [1/6, -1/3, 1/6],\r\n            [1/6, 5/12, -1/12],\r\n            [1/6, 2/3, 1/6],\r\n            [1/6, 2/3, 1/6],\r\n        ],\r\n        [0, 1/2, 1]\r\n    ),\r\n    \"lobatto_iiic_2s\": (\r\n        [    \r\n            [1/2, -1/2],\r\n            [1/2, 1/2],\r\n            [1/2, 1/2],\r\n        ],\r\n        [0, 1]\r\n    ),\r\n    \"dormand-prince_13s\": (\r\n        [\r\n            [1/18, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\r\n            [1/48, 1/16, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\r\n            [1/32, 0, 3/32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\r\n            [5/16, 0, -75/64, 75/64, 0, 0, 0, 0, 0, 0, 0, 0, 0],\r\n            [3/80, 0, 0, 3/16, 3/20, 0, 0, 0, 0, 0, 0, 0, 0],\r\n            [29443841/614563906, 0, 0, 77736538/692538347, -28693883/1125000000, 23124283/1800000000, 0, 0, 0, 0, 0, 0, 0],\r\n            [16016141/946692911, 0, 0, 61564180/158732637, 22789713/633445777, 545815736/2771057229, -180193667/1043307555, 0, 0, 0, 0, 0, 0],\r\n            [39632708/573591083, 0, 0, -433636366/683701615, -421739975/2616292301, 100302831/723423059, 790204164/839813087, 800635310/3783071287, 0, 0, 0, 0, 0],\r\n            [246121993/1340847787, 0, 0, 
-37695042795/15268766246, -309121744/1061227803, -12992083/490766935, 6005943493/2108947869, 393006217/1396673457, 123872331/1001029789, 0, 0, 0, 0],\r\n            [-1028468189/846180014, 0, 0, 8478235783/508512852, 1311729495/1432422823, -10304129995/1701304382, -48777925059/3047939560, 15336726248/1032824649, -45442868181/3398467696, 3065993473/597172653, 0, 0, 0],\r\n            [185892177/718116043, 0, 0, -3185094517/667107341, -477755414/1098053517, -703635378/230739211, 5731566787/1027545527, 5232866602/850066563, -4093664535/808688257, 3962137247/1805957418, 65686358/487910083, 0, 0],\r\n            [403863854/491063109, 0, 0, -5068492393/434740067, -411421997/543043805, 652783627/914296604, 11173962825/925320556, -13158990841/6184727034, 3936647629/1978049680, -160528059/685178525, 248638103/1413531060, 0, 0],\r\n            [14005451/335480064, 0, 0, 0, 0, -59238493/1068277825, 181606767/758867731, 561292985/797845732, -1041891430/1371343529, 760417239/1151165299, 118820643/751138087, -528747749/2220607170, 1/4]\r\n        ],\r\n        [0, 1/18, 1/12, 1/8, 5/16, 3/8, 59/400, 93/200, 5490023248 / 9719169821, 13/20, 1201146811 / 1299019798, 1, 1],\r\n    ),\r\n    \"dormand-prince_6s\": (\r\n        [\r\n            [1/5, 0, 0, 0, 0, 0, 0],\r\n            [3/40, 9/40, 0, 0, 0, 0, 0],\r\n            [44/45, -56/15, 32/9, 0, 0, 0, 0],\r\n            [19372/6561, -25360/2187, 64448/6561, -212/729, 0, 0, 0],\r\n            [9017/3168, -355/33, 46732/5247, 49/176, -5103/18656, 0],\r\n            [35/384, 0, 500/1113, 125/192, -2187/6784, 11/84, 0],\r\n        ],\r\n        [0, 1/5, 3/10, 4/5, 8/9, 1],\r\n    ),\r\n    \"dormand-prince_6s_alt\": (\r\n        [\r\n            [1/5, 0, 0, 0, 0, 0, 0],\r\n            [3/40, 9/40, 0, 0, 0, 0, 0],\r\n            [44/45, -56/15, 32/9, 0, 0, 0, 0],\r\n            [19372/6561, -25360/2187, 64448/6561, -212/729, 0, 0, 0],\r\n            [9017/3168, -355/33, 46732/5247, 49/176, -5103/18656, 0],\r\n            [35/384, 0, 500/1113, 125/192, -2187/6784, 11/84, 0],\r\n        ],\r\n        [0, 1/5, 3/10, 4/5, 8/9, 1],\r\n    ),\r\n    \"dormand-prince_7s\": (\r\n        [\r\n            [1/5, 0, 0, 0, 0, 0, 0],\r\n            [3/40, 9/40, 0, 0, 0, 0, 0],\r\n            [44/45, -56/15, 32/9, 0, 0, 0, 0],\r\n            [19372/6561, -25360/2187, 64448/6561, -212/729, 0, 0, 0],\r\n            [9017/3168, -355/33, 46732/5247, 49/176, -5103/18656, 0],\r\n            [35/384, 0, 500/1113, 125/192, -2187/6784, 11/84, 0],\r\n        ],\r\n        [0, 1/5, 3/10, 4/5, 8/9, 1],\r\n    ),\r\n    \"bogacki-shampine_7s\": ( #5th order\r\n        [\r\n            [1/6, 0, 0, 0, 0, 0, 0],\r\n            [2/27, 4/27, 0, 0, 0, 0, 0],\r\n            [183/1372, -162/343, 1053/1372, 0, 0, 0, 0],\r\n            [68/297, -4/11, 42/143, 1960/3861, 0, 0, 0],\r\n            [597/22528, 81/352, 63099/585728, 58653/366080, 4617/20480, 0, 0],\r\n            [174197/959244, -30942/79937, 8152137/19744439, 666106/1039181, -29421/29068, 482048/414219, 0],\r\n            [587/8064, 0, 4440339/15491840, 24353/124800, 387/44800, 2152/5985, 7267/94080]\r\n        ],\r\n        [0, 1/6, 2/9, 3/7, 2/3, 3/4, 1] \r\n    ),\r\n    \"rk4_4s\": (\r\n        [\r\n            [1/2, 0, 0, 0],\r\n            [0, 1/2, 0, 0],\r\n            [0, 0, 1, 0],\r\n            [1/6, 1/3, 1/3, 1/6]\r\n        ],\r\n        [0, 1/2, 1/2, 1],\r\n    ),\r\n    \"rk38_4s\": (\r\n        [\r\n            [1/3, 0, 0, 0],\r\n            [-1/3, 1, 0, 0],\r\n            [1, -1, 1, 0],\r\n            [1/8, 3/8, 3/8, 
1/8]\r\n        ],\r\n        [0, 1/3, 2/3, 1],\r\n    ),\r\n    \"ralston_4s\": (\r\n        [\r\n            [2/5, 0, 0, 0],\r\n            [(-2889+1428 * 5**0.5)/1024,   (3785-1620 * 5**0.5)/1024,  0, 0],\r\n            [(-3365+2094 * 5**0.5)/6040,   (-975-3046 * 5**0.5)/2552,  (467040+203968*5**0.5)/240845, 0],\r\n            [(263+24*5**0.5)/1812, (125-1000*5**0.5)/3828, (3426304+1661952*5**0.5)/5924787, (30-4*5**0.5)/123]\r\n        ],\r\n        [0, 2/5, (14-3 * 5**0.5)/16, 1],\r\n    ),\r\n    \"heun_3s\": (\r\n        [\r\n            [1/3, 0, 0],\r\n            [0, 2/3, 0],\r\n            [1/4, 0, 3/4]\r\n        ],\r\n        [0, 1/3, 2/3],\r\n    ),\r\n    \"kutta_3s\": (\r\n        [\r\n            [1/2, 0, 0],\r\n            [-1, 2, 0],\r\n            [1/6, 2/3, 1/6]\r\n        ],\r\n        [0, 1/2, 1],\r\n    ),\r\n    \"ralston_3s\": (\r\n        [\r\n            [1/2, 0, 0],\r\n            [0, 3/4, 0],\r\n            [2/9, 1/3, 4/9]\r\n        ],\r\n        [0, 1/2, 3/4],\r\n    ),\r\n    \"houwen-wray_3s\": (\r\n        [\r\n            [8/15, 0, 0],\r\n            [1/4, 5/12, 0],\r\n            [1/4, 0, 3/4]\r\n        ],\r\n        [0, 8/15, 2/3],\r\n    ),\r\n    \"ssprk3_3s\": (\r\n        [\r\n            [1, 0, 0],\r\n            [1/4, 1/4, 0],\r\n            [1/6, 1/6, 2/3]\r\n        ],\r\n        [0, 1, 1/2],\r\n    ),\r\n    \"midpoint_2s\": (\r\n        [\r\n            [1/2, 0],\r\n            [0, 1]\r\n        ],\r\n        [0, 1/2],\r\n    ),\r\n    \"heun_2s\": (\r\n        [\r\n            [1, 0],\r\n            [1/2, 1/2]\r\n        ],\r\n        [0, 1],\r\n    ),\r\n    \"ralston_2s\": (\r\n        [\r\n            [2/3, 0],\r\n            [1/4, 3/4]\r\n        ],\r\n        [0, 2/3],\r\n    ),\r\n    \"buehler\": (\r\n        [\r\n            [1],\r\n        ],\r\n        [0],\r\n    ),\r\n}\r\n\r\n\r\ndef get_rk_methods(rk_type, h, c1=0.0, c2=0.5, c3=1.0, h_prev=None, h_prev2=None, stepcount=0, sigmas=None):\r\n    FSAL = False\r\n    multistep_stages = 0\r\n    \r\n    if rk_type[:4] == \"deis\": \r\n        order = int(rk_type[-2])\r\n        if stepcount < order:\r\n            if order == 4:\r\n                rk_type = \"res_3s\"\r\n                order = 3\r\n            elif order == 3:\r\n                rk_type = \"res_3s\"\r\n            elif order == 2:\r\n                rk_type = \"res_2s\"\r\n        else:\r\n            rk_type = \"deis\"\r\n            multistep_stages = order-1\r\n\r\n    \r\n    if rk_type[-2:] == \"2m\": #multistep method\r\n        if h_prev is not None: \r\n            multistep_stages = 1\r\n            c2 = -h_prev / h\r\n            rk_type = rk_type[:-2] + \"2s\"\r\n        else:\r\n            rk_type = rk_type[:-2] + \"2s\"\r\n            \r\n    if rk_type[-2:] == \"3m\": #multistep method\r\n        if h_prev2 is not None: \r\n            multistep_stages = 2\r\n            c2 = -h_prev2 / h_prev\r\n            c3 = -h_prev / h\r\n            rk_type = rk_type[:-2] + \"3s\"\r\n        else:\r\n            rk_type = rk_type[:-2] + \"3s\"\r\n    \r\n    if rk_type in rk_coeff:\r\n        ab, ci = copy.deepcopy(rk_coeff[rk_type])\r\n        ci = ci[:]\r\n        ci.append(1)\r\n        \r\n        alpha_fn = lambda h: 1\r\n        t_fn = lambda sigma: sigma\r\n        sigma_fn = lambda t: t\r\n        h_fn = lambda sigma_down, sigma: sigma_down - sigma\r\n        model_call = get_denoised\r\n        EPS_PRED = False\r\n\r\n    else:\r\n        alpha_fn = lambda neg_h: torch.exp(neg_h)\r\n        t_fn = 
lambda sigma: sigma.log().neg()\r\n        sigma_fn = lambda t: t.neg().exp()\r\n        h_fn = lambda sigma_down, sigma: -torch.log(sigma_down/sigma)\r\n        model_call = get_denoised\r\n        EPS_PRED = False\r\n    \r\n    match rk_type:\r\n        case \"deis\": \r\n            alpha_fn = lambda neg_h: torch.exp(neg_h)\r\n            t_fn = lambda sigma: sigma.log().neg()\r\n            sigma_fn = lambda t: t.neg().exp()\r\n            h_fn = lambda sigma_down, sigma: -torch.log(sigma_down/sigma)\r\n            model_call = get_epsilon\r\n            EPS_PRED = True\r\n\r\n            coeff_list = get_deis_coeff_list(sigmas, multistep_stages+1, deis_mode=\"rhoab\")\r\n            coeff_list = [[elem / h for elem in inner_list] for inner_list in coeff_list]\r\n            if multistep_stages == 1:\r\n                b1, b2 = coeff_list[stepcount]\r\n                ab = [\r\n                        [0, 0],\r\n                        [b1, b2],\r\n                ]\r\n                ci = [0, 0, 1]\r\n            if multistep_stages == 2:\r\n                b1, b2, b3 = coeff_list[stepcount]\r\n                ab = [\r\n                        [0, 0, 0],\r\n                        [0, 0, 0],\r\n                        [b1, b2, b3],\r\n                ]\r\n                ci = [0, 0, 0, 1]\r\n            if multistep_stages == 3:\r\n                b1, b2, b3, b4 = coeff_list[stepcount]\r\n                ab = [\r\n                        [0, 0, 0, 0],\r\n                        [0, 0, 0, 0],\r\n                        [0, 0, 0, 0],\r\n                        [b1, b2, b3, b4],\r\n                ]\r\n                ci = [0, 0, 0, 0, 1]\r\n\r\n        case \"dormand-prince_6s\":\r\n            FSAL = True # first-same-as-last: the final stage of each step is reused as the first stage of the next\r\n\r\n        case \"ddim\":\r\n            b1 = phi(1, -h)\r\n            ab = [\r\n                    [b1],\r\n            ]\r\n            ci = [0, 1]\r\n\r\n        case \"res_2s\":\r\n            a2_1 = c2 * phi(1, -h*c2)\r\n            b1 =        phi(1, -h) - phi(2, -h)/c2\r\n            b2 =        phi(2, -h)/c2\r\n            \r\n            a2_1 /= (1 - torch.exp(-h*c2)) / h\r\n            b1 /= phi(1, -h)\r\n            b2 /= phi(1, -h)\r\n\r\n            ab = [\r\n                    [a2_1, 0],\r\n                    [b1, b2],\r\n            ]\r\n            ci = [0, c2, 1]\r\n\r\n        case \"res_3s\":\r\n            gamma = calculate_gamma(c2, c3)\r\n            a2_1 = c2 * phi(1, -h*c2)\r\n            a3_2 = gamma * c2 * phi(2, -h*c2) + (c3 ** 2 / c2) * phi(2, -h*c3) #phi_2_c3_h  # a32 from k2 to k3\r\n            a3_1 = c3 * phi(1, -h*c3) - a3_2 # a31 from k1 to k3\r\n            b3 = (1 / (gamma * c2 + c3)) * phi(2, -h)      \r\n            b2 = gamma * b3  #simplified version of: b2 = (gamma / (gamma * c2 + c3)) * phi_2_h  \r\n            b1 = phi(1, -h) - b2 - b3     \r\n            \r\n            a3_2 /= (1 - torch.exp(-h*c3)) / h\r\n            a3_1 /= (1 - torch.exp(-h*c3)) / h\r\n            b1 /= phi(1, -h)\r\n            b2 /= phi(1, -h)\r\n            b3 /= phi(1, -h)\r\n            \r\n            ab = [\r\n                    [a2_1, 0, 0],\r\n                    [a3_1, a3_2, 0],\r\n                    [b1, b2, b3],\r\n            ]\r\n            ci = [c1, c2, c3, 1]\r\n            #ci = [0, c2, c3, 1]\r\n\r\n        case \"dpmpp_2s\":\r\n            #c2 = 0.5\r\n            a2_1 =         c2   * phi(1, -h*c2)\r\n            b1 = (1 - 1/(2*c2)) * phi(1, -h)\r\n            b2 =     (1/(2*c2)) * phi(1, -h)\r\n            \r\n            a2_1 /= (1 - 
torch.exp(-h*c2)) / h\r\n            b1 /= phi(1, -h)\r\n            b2 /= phi(1, -h)\r\n            \r\n            ab = [\r\n                    [a2_1, 0],\r\n                    [b1, b2],\r\n            ]\r\n            ci = [0, c2, 1]\r\n            \r\n        case \"dpmpp_sde_2s\":\r\n            c2 = 1.0 #hardcoded to 1.0 to more closely emulate the configuration for k-diffusion's implementation\r\n            a2_1 =         c2   * phi(1, -h*c2)\r\n            b1 = (1 - 1/(2*c2)) * phi(1, -h)\r\n            b2 =     (1/(2*c2)) * phi(1, -h)\r\n            \r\n            a2_1 /= (1 - torch.exp(-h*c2)) / h\r\n            b1 /= phi(1, -h)\r\n            b2 /= phi(1, -h)\r\n            \r\n            ab = [\r\n                    [a2_1, 0],\r\n                    [b1, b2],\r\n            ]\r\n            ci = [0, c2, 1]\r\n\r\n        case \"dpmpp_3s\":\r\n            a2_1 = c2 * phi(1, -h*c2)\r\n            a3_2 = (c3**2 / c2) * phi(2, -h*c3)\r\n            a3_1 = c3 * phi(1, -h*c3) - a3_2\r\n            b2 = 0\r\n            b3 = (1/c3) * phi(2, -h)\r\n            b1 = phi(1, -h) - b2 - b3\r\n            \r\n            a2_1 /= (1 - torch.exp(-h*c2)) / h\r\n            a3_2 /= (1 - torch.exp(-h*c3)) / h\r\n            a3_1 /= (1 - torch.exp(-h*c3)) / h\r\n            b1 /= phi(1, -h)\r\n            b2 /= phi(1, -h)\r\n            b3 /= phi(1, -h)\r\n            \r\n            ab = [\r\n                    [a2_1, 0, 0],\r\n                    [a3_1, a3_2, 0],\r\n                    [b1, b2, b3],\r\n            ]\r\n            ci = [0, c2, c3, 1]\r\n            \r\n        case \"rk_exp_5s\":\r\n                \r\n            c1, c2, c3, c4, c5 = 0., 0.5, 0.5, 1., 0.5\r\n            \r\n            a2_1 = 0.5 * phi(1, -h * c2)\r\n            \r\n            a3_1 = 0.5 * phi(1, -h * c3) - phi(2, -h * c3)\r\n            a3_2 = phi(2, -h * c3)\r\n            \r\n            a4_1 = phi(1, -h * c4) - 2 * phi(2, -h * c4)\r\n            a4_2 = a4_3 = phi(2, -h * c4)\r\n            \r\n            a5_2 = a5_3 = 0.5 * phi(2, -h * c5) - phi(3, -h * c4) + 0.25 * phi(2, -h * c4) - 0.5 * phi(3, -h * c5)\r\n            a5_4 = 0.25 * phi(2, -h * c5) - a5_2\r\n            a5_1 = 0.5 * phi(1, -h * c5) - 2 * a5_2 - a5_4\r\n                    \r\n            b1 = phi(1, -h) - 3 * phi(2, -h) + 4 * phi(3, -h)\r\n            b2 = b3 = 0\r\n            b4 = -phi(2, -h) + 4*phi(3, -h)\r\n            b5 = 4 * phi(2, -h) - 8 * phi(3, -h)\r\n            \r\n            a2_1 /= (1 - torch.exp(-h*c2)) / h\r\n            \r\n            a3_1 /= (1 - torch.exp(-h*c3)) / h\r\n            a3_2 /= (1 - torch.exp(-h*c3)) / h\r\n            \r\n            a4_1 /= (1 - torch.exp(-h*c4)) / h\r\n            a4_2 /= (1 - torch.exp(-h*c4)) / h\r\n            a4_3 /= (1 - torch.exp(-h*c4)) / h\r\n            \r\n            a5_1 /= (1 - torch.exp(-h*c5)) / h\r\n            a5_2 /= (1 - torch.exp(-h*c5)) / h\r\n            a5_3 /= (1 - torch.exp(-h*c5)) / h\r\n            a5_4 /= (1 - torch.exp(-h*c5)) / h\r\n            \r\n            b1 /= phi(1, -h)\r\n            b2 /= phi(1, -h)\r\n            b3 /= phi(1, -h)\r\n            b4 /= phi(1, -h)\r\n            b5 /= phi(1, -h)\r\n            \r\n            ab = [\r\n                    [a2_1, 0, 0, 0, 0],\r\n                    [a3_1, a3_2, 0, 0, 0],\r\n                    [a4_1, a4_2, a4_3, 0, 0],\r\n                    [a5_1, a5_2, a5_3, a5_4, 0],\r\n                    [b1, b2, b3, b4, b5],\r\n            ]\r\n            ci = [0., 0.5, 0.5, 1., 0.5, 1]\r\n\r\n\r\n   
 return ab, ci, multistep_stages, model_call, alpha_fn, t_fn, sigma_fn, h_fn, FSAL, EPS_PRED\r\n\r\ndef get_rk_methods_order(rk_type):\r\n    ab, ci, multistep_stages, model_call, alpha_fn, t_fn, sigma_fn, h_fn, FSAL, EPS_PRED = get_rk_methods(rk_type, torch.tensor(1.0).to('cuda').to(torch.float64), c1=0.0, c2=0.5, c3=1.0)\r\n    return len(ci)-1\r\n\r\ndef get_rk_methods_order_and_fn(rk_type, h=None, c1=None, c2=None, c3=None, h_prev=None, h_prev2=None, stepcount=0, sigmas=None):\r\n    if h == None:\r\n        ab, ci, multistep_stages, model_call, alpha_fn, t_fn, sigma_fn, h_fn, FSAL, EPS_PRED = get_rk_methods(rk_type, torch.tensor(1.0).to('cuda').to(torch.float64), c1=0.0, c2=0.5, c3=1.0)\r\n    else:\r\n        ab, ci, multistep_stages, model_call, alpha_fn, t_fn, sigma_fn, h_fn, FSAL, EPS_PRED = get_rk_methods(rk_type, h, c1, c2, c3, h_prev, h_prev2, stepcount, sigmas)\r\n    return len(ci)-1, model_call, alpha_fn, t_fn, sigma_fn, h_fn, FSAL, EPS_PRED\r\n\r\ndef get_rk_methods_coeff(rk_type, h, c1, c2, c3, h_prev=None, h_prev2=None, stepcount=0, sigmas=None):\r\n    ab, ci, multistep_stages, model_call, alpha_fn, t_fn, sigma_fn, h_fn, FSAL, EPS_PRED = get_rk_methods(rk_type, h, c1, c2, c3, h_prev, h_prev2, stepcount, sigmas)\r\n    return ab, ci, multistep_stages, EPS_PRED\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n@torch.no_grad()\r\ndef legacy_sample_rk(model, x, sigmas, extra_args=None, callback=None, disable=None, noise_sampler=None, noise_sampler_type=\"brownian\", noise_mode=\"hard\", noise_seed=-1, rk_type=\"res_2m\", implicit_sampler_name=\"default\",\r\n              sigma_fn_formula=\"\", t_fn_formula=\"\",\r\n                  eta=0.0, eta_var=0.0, s_noise=1., d_noise=1., alpha=-1.0, k=1.0, scale=0.1, c1=0.0, c2=0.5, c3=1.0, MULTISTEP=False, cfgpp=0.0, implicit_steps=0, reverse_weight=0.0, exp_mode=False,\r\n                  latent_guide=None, latent_guide_inv=None, latent_guide_weight=0.0, latent_guide_weights=None, guide_mode=\"blend\",\r\n                  GARBAGE_COLLECT=False, mask=None, LGW_MASK_RESCALE_MIN=True, sigmas_override=None, t_is=None,\r\n                  ):\r\n    extra_args = {} if extra_args is None else extra_args\r\n    \r\n    if sigmas_override is not None:\r\n        sigmas = sigmas_override.clone()\r\n    sigmas = sigmas.clone() * d_noise\r\n    sigmin = model.inner_model.inner_model.model_sampling.sigma_min \r\n    sigmax = model.inner_model.inner_model.model_sampling.sigma_max \r\n    \r\n    UNSAMPLE = False\r\n    if sigmas[0] == 0.0:      #remove padding used to avoid need for model patch with noise inversion\r\n        UNSAMPLE = True\r\n        sigmas = sigmas[1:-1]\r\n    \r\n    if mask is None:\r\n        mask = torch.ones_like(x)\r\n        LGW_MASK_RESCALE_MIN = False\r\n    else:\r\n        mask = mask.unsqueeze(1)\r\n        mask = mask.repeat(1, x.shape[1], 1, 1) \r\n        mask = F.interpolate(mask, size=(x.shape[2], x.shape[3]), mode='bilinear', align_corners=False)\r\n        mask = mask.to(x.dtype).to(x.device)\r\n        \r\n    y0, y0_inv = torch.zeros_like(x), torch.zeros_like(x)\r\n    if latent_guide is not None:\r\n        if sigmas[0] > sigmas[1]:\r\n            y0 = latent_guide = model.inner_model.inner_model.process_latent_in(latent_guide['samples']).clone().to(x.device)\r\n        else:\r\n            x = model.inner_model.inner_model.process_latent_in(latent_guide['samples']).clone().to(x.device)\r\n\r\n    if latent_guide_inv is not None:\r\n        if sigmas[0] > sigmas[1]:\r\n            y0_inv = latent_guide_inv 
= model.inner_model.inner_model.process_latent_in(latent_guide_inv['samples']).clone().to(x.device)\r\n        elif UNSAMPLE and mask is not None:\r\n            x = mask * x + (1-mask) * model.inner_model.inner_model.process_latent_in(latent_guide_inv['samples']).clone().to(x.device)\r\n\r\n    uncond = [torch.full_like(x, 0.0)]\r\n    if cfgpp != 0.0:\r\n        def post_cfg_function(args):\r\n            uncond[0] = args[\"uncond_denoised\"]\r\n            return args[\"denoised\"]\r\n        model_options = extra_args.get(\"model_options\", {}).copy()\r\n        extra_args[\"model_options\"] = comfy.model_patcher.set_model_options_post_cfg_function(model_options, post_cfg_function, disable_cfg1_optimization=True)\r\n\r\n    if noise_seed == -1:\r\n        seed = torch.initial_seed() + 1\r\n    else:\r\n        seed = noise_seed\r\n\r\n    if noise_sampler_type == \"fractal\":\r\n        noise_sampler = NOISE_GENERATOR_CLASSES.get(noise_sampler_type)(x=x, seed=seed, sigma_min=sigmin, sigma_max=sigmax)\r\n        noise_sampler.alpha = alpha\r\n        noise_sampler.k = k\r\n        noise_sampler.scale = scale\r\n    else:\r\n        noise_sampler = NOISE_GENERATOR_CLASSES_SIMPLE.get(noise_sampler_type)(x=x, seed=seed, sigma_min=sigmin, sigma_max=sigmax)\r\n\r\n    if UNSAMPLE and sigmas[0] < sigmas[1]: #sigma_next > sigma:\r\n        y0 = noise_sampler(sigma=sigmax, sigma_next=sigmin)\r\n        y0 = (y0 - y0.mean()) / y0.std()\r\n        y0_inv = noise_sampler(sigma=sigmax, sigma_next=sigmin)\r\n        y0_inv = (y0_inv - y0_inv.mean()) / y0_inv.std()\r\n        \r\n    order, model_call, alpha_fn, t_fn, sigma_fn, h_fn, FSAL, EPS_PRED = get_rk_methods_order_and_fn(rk_type)\r\n\r\n    if exp_mode:\r\n        model_call = get_denoised\r\n        alpha_fn = lambda neg_h: torch.exp(neg_h)\r\n        t_fn     = lambda sigma: sigma.log().neg()\r\n        sigma_fn = lambda t: t.neg().exp() \r\n    \r\n    xi, ki, ki_u = [torch.zeros_like(x)]*(order+2), [torch.zeros_like(x)]*(order+1), [torch.zeros_like(x)]*(order+1)\r\n    h, h_prev, h_prev2 = None, None, None\r\n        \r\n    xi[0] = x\r\n\r\n    for _ in trange(len(sigmas)-1, disable=disable):\r\n        sigma, sigma_next = sigmas[_], sigmas[_+1]\r\n        \r\n        if sigma_next == 0.0:\r\n            rk_type = \"buehler\"\r\n            eta, eta_var = 0, 0\r\n\r\n        order, model_call, alpha_fn, t_fn, sigma_fn, h_fn, FSAL, EPS_PRED = get_rk_methods_order_and_fn(rk_type)\r\n        \r\n        #sigma_up, sigma, sigma_down, alpha_ratio = get_res4lyf_step_with_model(model, sigma, sigma_next, eta, eta_var, noise_mode, h_fn(sigma_next,sigma) )\r\n        sigma_up, sigma, sigma_down, alpha_ratio = get_res4lyf_step_with_model(model, sigma, sigma_next, eta, noise_mode)\r\n\r\n        t_down, t = t_fn(sigma_down), t_fn(sigma)\r\n        h = h_fn(sigma_down, sigma)\r\n        \r\n        c2, c3 = get_res4lyf_half_step3(sigma, sigma_down, c2, c3, t_fn=t_fn, sigma_fn=sigma_fn, t_fn_formula=t_fn_formula, sigma_fn_formula=sigma_fn_formula)\r\n        ab, ci, multistep_stages, EPS_PRED = get_rk_methods_coeff(rk_type, h, c1, c2, c3, h_prev, h_prev2, _, sigmas)\r\n        order = len(ci)-1\r\n        \r\n        if exp_mode:\r\n            for i in range(order):\r\n                for j in range(order):\r\n                    ab[i][j] = ab[i][j] * phi(1, -h * ci[i+1])\r\n        \r\n        if isinstance(model.inner_model.inner_model.model_sampling, comfy.model_sampling.CONST) == False and noise_mode == \"hard\" and sigma_next > 0.0:\r\n           
 noise = noise_sampler(sigma=sigmas[_], sigma_next=sigmas[_+1])\r\n            noise = torch.nan_to_num((noise - noise.mean()) / noise.std(), 0.0)\r\n            xi[0] = alpha_ratio * xi[0] + noise * s_noise * sigma_up\r\n\r\n        xi_0 = xi[0] # needed for implicit sampling\r\n\r\n        if (MULTISTEP == False and FSAL == False) or _ == 0:\r\n            ki[0]   = model_call(model, xi_0, sigma, **extra_args)\r\n            if EPS_PRED and rk_type.startswith(\"deis\"):\r\n                ki[0] = (xi_0 - ki[0]) / sigma\r\n                ki[0] = ki[0] * (sigma_down-sigma)/(sigma_next-sigma)\r\n            ki_u[0] = uncond[0]\r\n\r\n        if cfgpp != 0.0:\r\n            ki[0] = uncond[0] + cfgpp * (ki[0] - uncond[0])\r\n        ki_u[0] = uncond[0]\r\n\r\n        for iteration in range(implicit_steps+1):\r\n            for i in range(multistep_stages, order):\r\n                if implicit_steps > 0 and iteration > 0 and implicit_sampler_name != \"default\":\r\n                    ab, ci, multistep_stages, EPS_PRED = get_rk_methods_coeff(implicit_sampler_name, h, c1, c2, c3, h_prev, h_prev2, _, sigmas)\r\n                    order = len(ci)-1\r\n                    if len(ki) < order + 1:\r\n                        last_value_ki = ki[-1]\r\n                        last_value_ki_u = ki_u[-1]\r\n                        ki.extend(  [last_value_ki]   * ((order + 1) - len(ki)))\r\n                        ki_u.extend([last_value_ki_u] * ((order + 1) - len(ki_u)))\r\n                    if len(xi) < order + 2:\r\n                        xi.extend([torch.zeros_like(xi[0])] * ((order + 2) - len(xi)))\r\n                    \r\n                    ki[0]   = model_call(model, xi_0, sigma, **extra_args)\r\n                    ki_u[0] = uncond[0]\r\n                \r\n                sigma_mid = sigma_fn(t + h*ci[i+1])\r\n                alpha_t_1 = alpha_t_1_inv = torch.exp(torch.log(sigma_down/sigma) * ci[i+1] )\r\n                if sigma_next > sigma:\r\n                    alpha_t_1_inv = torch.nan_to_num(   torch.exp(torch.log((sigmax - sigma_down)/(sigmax - sigma)) * ci[i+1]),    1.)\r\n                \r\n                if LGW_MASK_RESCALE_MIN: \r\n                    lgw_mask = mask * (1 - latent_guide_weights[_]) + latent_guide_weights[_]\r\n                    lgw_mask_inv = (1-mask) * (1 - latent_guide_weights[_]) + latent_guide_weights[_]\r\n                else:\r\n                    lgw_mask = mask * latent_guide_weights[_]    \r\n                    lgw_mask_inv = (1-mask) * latent_guide_weights[_]   \r\n                    \r\n                ks, ks_u, ys, ys_inv = torch.zeros_like(x), torch.zeros_like(x), torch.zeros_like(x), torch.zeros_like(x)\r\n                for j in range(order):\r\n                    ks     += ab[i][j] * ki[j]\r\n                    ks_u   += ab[i][j] * ki_u[j]\r\n                    ys     += ab[i][j] * y0\r\n                    ys_inv += ab[i][j] * y0_inv\r\n                    \r\n                if EPS_PRED and rk_type.startswith(\"deis\"):\r\n                    epsilon = (h * ks) / (sigma_down - sigma)       #xi[(i+1)%order]  = xi_0 + h*ks\r\n                    ks = xi_0 - epsilon * sigma        # denoised\r\n                else:\r\n                    if implicit_sampler_name.startswith(\"lobatto\") == False:\r\n                        ks /= sum(ab[i])\r\n                    elif iteration == 0:\r\n                        ks /= sum(ab[i])\r\n                \r\n                if UNSAMPLE == False and latent_guide is not None and 
latent_guide_weights[_] > 0.0:\r\n                    if guide_mode == \"hard_light\":\r\n                        lg = latent_guide * sum(ab[i])\r\n                        if EPS_PRED:\r\n                            lg = (alpha_fn(-h*ci[i+1]) * xi[0] - latent_guide) / (sigma_fn(t + h*ci[i]) + 1e-8)\r\n                        hard_light_blend_1 = hard_light_blend(lg, ks)\r\n                        ks = (1 - lgw_mask) * ks   +   lgw_mask * hard_light_blend_1\r\n                        \r\n                    elif guide_mode == \"mean_std\":\r\n                        ks2 = torch.zeros_like(x)\r\n                        for n in range(latent_guide.shape[1]):\r\n                            ks2[0][n] = (ks[0][n] - ks[0][n].mean()) / ks[0][n].std()\r\n                            ks2[0][n] = (ks2[0][n] * latent_guide[0][n].std()) + latent_guide[0][n].mean()\r\n                        ks = (1 - lgw_mask) * ks   +   lgw_mask * ks2\r\n                        \r\n                    elif guide_mode == \"mean\":\r\n                        ks2 = torch.zeros_like(x)\r\n                        for n in range(latent_guide.shape[1]):\r\n                            ks2[0][n] = (ks[0][n] - ks[0][n].mean())\r\n                            ks2[0][n] = (ks2[0][n]) + latent_guide[0][n].mean()\r\n                        ks3 = torch.zeros_like(x)\r\n                        \r\n                        for n in range(latent_guide.shape[1]):\r\n                            ks3[0][n] = (ks[0][n] - ks[0][n].mean())\r\n                            ks3[0][n] = (ks3[0][n]) + latent_guide_inv[0][n].mean()\r\n                        ks = (1 - lgw_mask) * ks   +   lgw_mask * ks2\r\n                        ks = (1 - lgw_mask_inv) * ks   +   lgw_mask_inv * ks3\r\n                        \r\n                    elif guide_mode == \"std\":\r\n                        ks2 = torch.zeros_like(x)\r\n                        for n in range(latent_guide.shape[1]):\r\n                            ks2[0][n] = (ks[0][n]) / ks[0][n].std()\r\n                            ks2[0][n] = (ks2[0][n] * latent_guide[0][n].std())\r\n                        ks = (1 - lgw_mask) * ks   +   lgw_mask * ks2\r\n                        \r\n                    elif guide_mode == \"blend\": \r\n                        ks = (1 - lgw_mask)     * ks   +   lgw_mask     * ys   #+   (1-lgw_mask) * latent_guide_inv\r\n                        ks = (1 - lgw_mask_inv) * ks   +   lgw_mask_inv * ys_inv\r\n                        \r\n                    elif guide_mode == \"inversion\": \r\n                        UNSAMPLE = True\r\n\r\n                cfgpp_term = cfgpp*h*(ks - ks_u)\r\n                xi[(i+1)%order]  = (1-UNSAMPLE * lgw_mask) * (alpha_t_1     * (xi_0 + cfgpp_term)    +    (1 - alpha_t_1)     * ks )     \\\r\n                                    + UNSAMPLE * lgw_mask  * (alpha_t_1_inv * (xi_0 + cfgpp_term)    +    (1 - alpha_t_1_inv) * ys )\r\n                if UNSAMPLE:\r\n                    xi[(i+1)%order]  = (1-lgw_mask_inv) * xi[(i+1)%order]   + UNSAMPLE * lgw_mask_inv  * (alpha_t_1_inv * (xi_0 + cfgpp_term)    +      (1 - alpha_t_1_inv) * ys_inv )\r\n\r\n                if (i+1)%order > 0 and (i+1)%order > multistep_stages-1:\r\n                    if GARBAGE_COLLECT: gc.collect(); torch.cuda.empty_cache()\r\n                    ki[i+1]   = model_call(model, xi[i+1], sigma_fn(t + h*ci[i+1]), **extra_args)\r\n                    if EPS_PRED and rk_type.startswith(\"deis\"):\r\n                        ki[i+1] = (xi[i+1] - ki[i+1]) / sigma_fn(t + 
h*ci[i+1])\r\n                        ki[i+1] = ki[i+1] * (sigma_down-sigma)/(sigma_next-sigma)\r\n                    ki_u[i+1] = uncond[0]\r\n\r\n            if FSAL and _ > 0:\r\n                ki  [0] = ki[order-1]\r\n                ki_u[0] = ki_u[order-1]\r\n            if MULTISTEP and _ > 0:\r\n                ki  [0] = denoised\r\n                ki_u[0] = ki_u[order-1]\r\n            for ms in range(multistep_stages):\r\n                ki  [multistep_stages - ms] = ki  [multistep_stages - ms - 1]\r\n                ki_u[multistep_stages - ms] = ki_u[multistep_stages - ms - 1]\r\n            if iteration < implicit_steps and implicit_sampler_name == \"default\":\r\n                ki  [0] = model_call(model, xi[0], sigma_down, **extra_args)\r\n                ki_u[0] = uncond[0]\r\n            elif iteration == implicit_steps and implicit_sampler_name != \"default\" and implicit_steps > 0:\r\n                ks, ks_u, ys, ys_inv = torch.zeros_like(x), torch.zeros_like(x), torch.zeros_like(x), torch.zeros_like(x)\r\n                for j in range(order):\r\n                    ks     += ab[i+1][j] * ki[j]\r\n                    ks_u   += ab[i+1][j] * ki_u[j]\r\n                    ys     += ab[i+1][j] * y0\r\n                    ys_inv += ab[i+1][j] * y0_inv\r\n                ks /= sum(ab[i+1])\r\n                \r\n                cfgpp_term = cfgpp*h*(ks - ks_u)  #GUIDES NOT FULLY IMPLEMENTED HERE WITH IMPLICIT FINAL STEP\r\n                xi[(i+1)%order]  = (1-UNSAMPLE * lgw_mask) * (alpha_t_1     * (xi_0 + cfgpp_term)    +    (1 - alpha_t_1)     * ks )     \\\r\n                                    + UNSAMPLE * lgw_mask  * (alpha_t_1_inv * (xi_0 + cfgpp_term)    +    (1 - alpha_t_1_inv) * ys )\r\n                if UNSAMPLE:\r\n                    xi[(i+1)%order]  = (1-lgw_mask_inv) * xi[(i+1)%order]   + UNSAMPLE * lgw_mask_inv  * (alpha_t_1_inv * (xi_0 + cfgpp_term)    +      (1 - alpha_t_1_inv) * ys_inv )\r\n                \r\n\r\n            if EPS_PRED == True and exp_mode == False and not rk_type.startswith(\"deis\"):\r\n                denoised = alpha_fn(-h*ci[i+1]) * xi[0] - sigma * ks\r\n            elif EPS_PRED == True and rk_type.startswith(\"deis\"):\r\n                epsilon = (h * ks) / (sigma_down - sigma)\r\n                denoised = xi_0 - epsilon * sigma        # denoised\r\n            elif iteration == implicit_steps and implicit_sampler_name != \"default\" and implicit_steps > 0:\r\n                denoised = ks\r\n            else:\r\n                denoised = ks / sum(ab[i])\r\n            \r\n            \"\"\"if iteration < implicit_steps and implicit_sampler_name != \"default\":\r\n                for idx in range(len(ki)):\r\n                        ki[idx] = denoised\"\"\"\r\n\r\n        if callback is not None:\r\n            callback({'x': xi[0], 'i': _, 'sigma': sigma, 'sigma_next': sigma_next, 'denoised': denoised})\r\n            \r\n        if (isinstance(model.inner_model.inner_model.model_sampling, comfy.model_sampling.CONST) or noise_mode != \"hard\") and sigma_next > 0.0:\r\n            noise = noise_sampler(sigma=sigma, sigma_next=sigma_next)\r\n            noise = (noise - noise.mean()) / noise.std()\r\n            \r\n            if guide_mode == \"noise_mean\":\r\n                noise2 = torch.zeros_like(x)\r\n                for n in range(latent_guide.shape[1]):\r\n                    noise2[0][n] = (noise[0][n] - noise[0][n].mean())\r\n                    noise2[0][n] = (noise2[0][n]) + latent_guide[0][n].mean()\r\n         
       noise = (1 - lgw_mask) * noise   +   lgw_mask * noise2\r\n            \r\n            xi[0] = alpha_ratio * xi[0] + noise * s_noise * sigma_up\r\n            \r\n        h_prev2 = h_prev\r\n        h_prev = h\r\n        \r\n    return xi[0]\r\n\r\n"
  },
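  {
    "path": "examples/phi_consistency_check.py",
    "content": "# Illustrative sketch (not part of the original node pack): cross-checks the two\r\n# phi implementations in legacy/legacy_sampler_rk.py. __phi() evaluates\r\n# phi_j(-h) = (e^(-h) - sum_{k<j} (-h)^k/k!) / (-h)^j from the series remainder,\r\n# while phi() uses the closed form via the upper incomplete gamma function.\r\n# Both are re-implemented locally so this script runs standalone.\r\nimport math\r\n\r\n\r\ndef phi_remainder(j: int, neg_h: float) -> float:\r\n    # series-remainder form, mirroring __phi()\r\n    remainder = sum(neg_h**k / math.factorial(k) for k in range(j))\r\n    return (math.exp(neg_h) - remainder) / neg_h**j\r\n\r\n\r\ndef phi_gamma(j: int, neg_h: float) -> float:\r\n    # incomplete-gamma closed form, mirroring phi(): e^(-h) * (-h)^(-j) * (1 - Γ(j,-h)/Γ(j))\r\n    gamma_j = math.factorial(j - 1)\r\n    incomplete = math.exp(-neg_h) * gamma_j * sum(neg_h**k / math.factorial(k) for k in range(j))\r\n    return math.exp(neg_h) * neg_h**-j * (1 - incomplete / gamma_j)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n    for j in (1, 2, 3):\r\n        for neg_h in (-2.0, -0.5, 0.25, 1.5):\r\n            a, b = phi_remainder(j, neg_h), phi_gamma(j, neg_h)\r\n            assert math.isclose(a, b, rel_tol=1e-9), (j, neg_h, a, b)\r\n    print(\"remainder and incomplete-gamma forms of phi_j agree\")\r\n"
  },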
  {
    "path": "legacy/legacy_samplers.py",
    "content": "import torch\r\nimport torch.nn.functional as F\r\n\r\nimport comfy.samplers\r\nimport comfy.sample\r\nimport comfy.sampler_helpers\r\nimport comfy.model_sampling\r\nimport comfy.latent_formats\r\nimport comfy.sd\r\nfrom comfy_extras.nodes_model_advanced import ModelSamplingSD3, ModelSamplingFlux, ModelSamplingAuraFlow, ModelSamplingStableCascade\r\nimport comfy.supported_models\r\n\r\nimport latent_preview\r\n\r\nfrom .noise_classes import NOISE_GENERATOR_NAMES, NOISE_GENERATOR_NAMES_SIMPLE, NOISE_GENERATOR_CLASSES_SIMPLE, NOISE_GENERATOR_CLASSES\r\n\r\nfrom .sigmas import get_sigmas\r\nfrom .helper import get_res4lyf_scheduler_list\r\n\r\n\r\ndef initialize_or_scale(tensor, value, steps):\r\n    if tensor is None:\r\n        return torch.full((steps,), value)\r\n    else:\r\n        return value * tensor\r\n    \r\ndef move_to_same_device(*tensors):\r\n    if not tensors:\r\n        return tensors\r\n\r\n    device = tensors[0].device\r\n    return tuple(tensor.to(device) for tensor in tensors)\r\n\r\n\r\n\r\nRK_SAMPLER_NAMES = [\"res_2m\",\r\n                    \"res_3m\",\r\n                    \"res_2s\", \r\n                    \"res_3s\",\r\n                    \"rk_exp_5s\",\r\n\r\n                    \"deis_2m\",\r\n                    \"deis_3m\", \r\n                    \"deis_4m\",\r\n                    \r\n                    \"ralston_2s\",\r\n                    \"ralston_3s\",\r\n                    \"ralston_4s\", \r\n                    \r\n                    \"dpmpp_2m\",\r\n                    \"dpmpp_3m\",\r\n                    \"dpmpp_2s\",\r\n                    \"dpmpp_sde_2s\",\r\n                    \"dpmpp_3s\",\r\n                    \r\n                    \"midpoint_2s\",\r\n                    \"heun_2s\", \r\n                    \"heun_3s\", \r\n                    \r\n                    \"houwen-wray_3s\",\r\n                    \"kutta_3s\", \r\n                    \"ssprk3_3s\",\r\n                    \r\n                    \"rk38_4s\",\r\n                    \"rk4_4s\", \r\n\r\n                    \"dormand-prince_6s\", \r\n                    \"dormand-prince_13s\", \r\n                    \"bogacki-shampine_7s\",\r\n\r\n                    \"ddim\",\r\n                    \"buehler\",\r\n                    ]\r\n\r\n\r\nIRK_SAMPLER_NAMES = [\r\n                    \"gauss-legendre_2s\",\r\n                    \"gauss-legendre_3s\", \r\n                    \"gauss-legendre_4s\",\r\n                    \"gauss-legendre_5s\",\r\n                    \r\n                    \"radau_iia_2s\",\r\n                    \"radau_iia_3s\",\r\n                    \r\n                    \"lobatto_iiic_2s\",\r\n                    \"lobatto_iiic_3s\",\r\n                    \r\n                    \"crouzeix_2s\",\r\n                    \"crouzeix_3s\",\r\n                    \r\n                    \"irk_exp_diag_2s\",\r\n\r\n                    \"use_explicit\", \r\n                    ]\r\n\r\n\r\nclass Legacy_ClownsharKSampler:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\":\r\n                    {\"model\": (\"MODEL\",),\r\n                    #\"add_noise\": (\"BOOLEAN\", {\"default\": True}),\r\n                    \"noise_type_init\": (NOISE_GENERATOR_NAMES_SIMPLE, {\"default\": \"gaussian\"}),\r\n                    \"noise_type_sde\": (NOISE_GENERATOR_NAMES_SIMPLE, {\"default\": \"brownian\"}),\r\n                    \"noise_mode_sde\": ([\"hard\", \"hard_var\", \"hard_sq\", \"soft\", \"softer\", 
\"exp\"], {\"default\": 'hard', \"tooltip\": \"How noise scales with the sigma schedule. Hard is the most aggressive, the others start strong and drop rapidly.\"}),\r\n                    \"eta\": (\"FLOAT\", {\"default\": 0.25, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Calculated noise amount to be added, then removed, after each step.\"}),\r\n                    \"noise_seed\": (\"INT\", {\"default\": 0, \"min\": -1, \"max\": 0xffffffffffffffff}),\r\n                    #\"sampler_mode\": (['standard', 'unsample', 'resample'],),\r\n                    \"sampler_mode\": (['standard', 'unsample', 'resample',],),\r\n                    \"sampler_name\": (RK_SAMPLER_NAMES, {\"default\": \"res_2m\"}), \r\n                    \"implicit_sampler_name\": ([\"default\", \r\n                                               \"gauss-legendre_5s\",\r\n                                               \"gauss-legendre_4s\",\r\n                                               \"gauss-legendre_3s\", \r\n                                               \"gauss-legendre_2s\",\r\n                                               \"crouzeix_2s\",\r\n                                               \"radau_iia_3s\",\r\n                                               \"radau_iia_2s\",\r\n                                               \"lobatto_iiic_3s\",\r\n                                               \"lobatto_iiic_2s\",\r\n                                               ], {\"default\": \"default\"}), \r\n                    \"scheduler\": (get_res4lyf_scheduler_list(), {\"default\": \"beta57\"},),\r\n                    \"steps\": (\"INT\", {\"default\": 30, \"min\": 1, \"max\": 10000}),\r\n                    \"implicit_steps\": (\"INT\", {\"default\": 0, \"min\": 0, \"max\": 10000}),\r\n                    \"denoise\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01}),\r\n                    \"denoise_alt\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01}),\r\n                    \"cfg\": (\"FLOAT\", {\"default\": 5.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.1, \"round\": False, }),\r\n                    \"shift\": (\"FLOAT\", {\"default\": 3.0, \"min\": -1.0, \"max\": 100.0, \"step\":0.1, \"round\": False, }),\r\n                    \"base_shift\": (\"FLOAT\", {\"default\": 0.85, \"min\": -1.0, \"max\": 100.0, \"step\":0.1, \"round\": False, }),\r\n                    \"truncate_conditioning\": (['false', 'true'], {\"default\": \"true\"}),\r\n                     },\r\n                \"optional\": \r\n                    {\r\n                    \"positive\": (\"CONDITIONING\", ),\r\n                    \"negative\": (\"CONDITIONING\", ),\r\n                    \"sigmas\": (\"SIGMAS\", ),\r\n                    \"latent_image\": (\"LATENT\", ),     \r\n                    \"guides\": (\"GUIDES\", ),     \r\n                    \"options\": (\"OPTIONS\", ),   \r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (\"LATENT\",\"LATENT\", ) #\"LATENT\",\"LATENT\")\r\n    RETURN_NAMES = (\"output\", \"denoised\",) # \"output_fp64\", \"denoised_fp64\")\r\n\r\n    FUNCTION = \"main\"\r\n\r\n    CATEGORY = \"RES4LYF/legacy/samplers\"\r\n    DEPRECATED = True\r\n    \r\n    def main(self, model, cfg, truncate_conditioning, sampler_mode, scheduler, steps, denoise=1.0, denoise_alt=1.0,\r\n             noise_type_init=\"gaussian\", noise_type_sde=\"brownian\", noise_mode_sde=\"hard\", latent_image=None, 
\r\n             positive=None, negative=None, sigmas=None, latent_noise=None, latent_noise_match=None,\r\n             noise_stdev=1.0, noise_mean=0.0, noise_normalize=True, noise_is_latent=False, \r\n             eta=0.25, eta_var=0.0, d_noise=1.0, s_noise=1.0, alpha_init=-1.0, k_init=1.0, alpha_sde=-1.0, k_sde=1.0, cfgpp=0.0, c1=0.0, c2=0.5, c3=1.0, multistep=False, noise_seed=-1, sampler_name=\"res_2m\", implicit_sampler_name=\"default\",\r\n                    exp_mode=False, t_fn_formula=None, sigma_fn_formula=None, implicit_steps=0,\r\n                    latent_guide=None, latent_guide_inv=None, latent_guide_weight=0.0, guide_mode=\"blend\", latent_guide_weights=None, latent_guide_mask=None, rescale_floor=True, sigmas_override=None, unsampler_type=\"linear\",\r\n                    shift=3.0, base_shift=0.85, guides=None, options=None,\r\n                    ): \r\n            default_dtype = torch.float64\r\n            max_steps = 10000\r\n\r\n\r\n            if noise_seed == -1:\r\n                seed = torch.initial_seed() + 1\r\n            else:\r\n                seed = noise_seed\r\n                torch.manual_seed(noise_seed)\r\n            noise_seed_sde = seed + 1\r\n\r\n            if options is not None:\r\n                noise_stdev = options.get('noise_init_stdev', noise_stdev)\r\n                noise_mean = options.get('noise_init_mean', noise_mean)\r\n                noise_type_init = options.get('noise_type_init', noise_type_init)\r\n                noise_type_sde = options.get('noise_type_sde', noise_type_sde)\r\n                noise_mode_sde = options.get('noise_mode_sde', noise_mode_sde)\r\n                eta = options.get('eta', eta)\r\n                s_noise = options.get('s_noise', s_noise)\r\n                d_noise = options.get('d_noise', d_noise)\r\n                alpha_init = options.get('alpha_init', alpha_init)\r\n                k_init = options.get('k_init', k_init)\r\n                alpha_sde = options.get('alpha_sde', alpha_sde)\r\n                k_sde = options.get('k_sde', k_sde)\r\n                noise_seed_sde = options.get('noise_seed_sde', noise_seed+1)\r\n                c1 = options.get('c1', c1)\r\n                c2 = options.get('c2', c2)\r\n                c3 = options.get('c3', c3)\r\n                t_fn_formula = options.get('t_fn_formula', t_fn_formula)\r\n                sigma_fn_formula = options.get('sigma_fn_formula', sigma_fn_formula)\r\n                #unsampler_type = options.get('unsampler_type', unsampler_type)\r\n\r\n            if guides is not None:\r\n                guide_mode, rescale_floor, latent_guide_weight, latent_guide_weights, t_is, latent_guide, latent_guide_inv, latent_guide_mask, scheduler_, steps_, denoise_ = guides\r\n                \"\"\"if scheduler == \"constant\": \r\n                    latent_guide_weights = initialize_or_scale(latent_guide_weights, latent_guide_weight, max_steps).to(default_dtype)\r\n                    latent_guide_weights = F.pad(latent_guide_weights, (0, max_steps), value=0.0)\"\"\"\r\n                if scheduler_ != \"constant\":\r\n                    latent_guide_weights = get_sigmas(model, scheduler_, steps_, denoise_).to(default_dtype)\r\n            latent_guide_weights = initialize_or_scale(latent_guide_weights, latent_guide_weight, max_steps).to(default_dtype)\r\n            latent_guide_weights = F.pad(latent_guide_weights, (0, max_steps), value=0.0)\r\n            \r\n            if shift >= 0:\r\n                if isinstance(model.model.model_config, 
comfy.supported_models.SD3):\r\n                    model = ModelSamplingSD3().patch(model, shift)[0] \r\n                elif isinstance(model.model.model_config, comfy.supported_models.AuraFlow):\r\n                    model = ModelSamplingAuraFlow().patch_aura(model, shift)[0] \r\n                elif isinstance(model.model.model_config, comfy.supported_models.Stable_Cascade_C):\r\n                    model = ModelSamplingStableCascade().patch(model, shift)[0] \r\n            if shift >= 0 and base_shift >= 0:\r\n                if isinstance(model.model.model_config, comfy.supported_models.Flux) or isinstance(model.model.model_config, comfy.supported_models.FluxSchnell):\r\n                    model = ModelSamplingFlux().patch(model, shift, base_shift, latent_image['samples'].shape[3], latent_image['samples'].shape[2])[0] \r\n\r\n            latent = latent_image\r\n            latent_image_dtype = latent_image['samples'].dtype\r\n\r\n            if positive is None:\r\n                positive = [[\r\n                    torch.zeros((1, 154, 4096)),\r\n                    {'pooled_output': torch.zeros((1, 2048))}\r\n                    ]]\r\n            \r\n            if negative is None:\r\n                negative = [[\r\n                    torch.zeros((1, 154, 4096)),\r\n                    {'pooled_output': torch.zeros((1, 2048))}\r\n                    ]]\r\n                \r\n            if denoise_alt < 0:\r\n                d_noise = denoise_alt = -denoise_alt\r\n            if options is not None:\r\n                d_noise = options.get('d_noise', d_noise)\r\n            \r\n            if sigmas is not None:\r\n                sigmas = sigmas.clone().to(default_dtype)\r\n            else: \r\n                sigmas = get_sigmas(model, scheduler, steps, denoise).to(default_dtype)\r\n            sigmas *= denoise_alt\r\n            \r\n        \r\n            if sampler_mode.startswith(\"unsample\"): \r\n                null = torch.tensor([0.0], device=sigmas.device, dtype=sigmas.dtype)\r\n                sigmas = torch.flip(sigmas, dims=[0])\r\n                sigmas = torch.cat([sigmas, null])\r\n            elif sampler_mode.startswith(\"resample\"):\r\n                null = torch.tensor([0.0], device=sigmas.device, dtype=sigmas.dtype)\r\n                sigmas = torch.cat([null, sigmas])\r\n                sigmas = torch.cat([sigmas, null])\r\n            \r\n            if sampler_mode.startswith(\"unsample_\"):\r\n                unsampler_type = sampler_mode.split(\"_\", 1)[1]\r\n            elif sampler_mode.startswith(\"resample_\"):\r\n                unsampler_type = sampler_mode.split(\"_\", 1)[1]\r\n            else:\r\n                unsampler_type = \"\"\r\n                \r\n            x = latent_image[\"samples\"].clone().to(default_dtype) \r\n            if latent_image is not None:\r\n                if \"samples_fp64\" in latent_image:\r\n                    if latent_image['samples'].shape == latent_image['samples_fp64'].shape:\r\n                        if torch.norm(latent_image['samples'] - latent_image['samples_fp64']) < 0.01:\r\n                            x = latent_image[\"samples_fp64\"].clone()\r\n                \r\n            if latent_noise is not None:\r\n                latent_noise[\"samples\"] = latent_noise[\"samples\"].clone().to(default_dtype)  \r\n            if latent_noise_match is not None:\r\n                latent_noise_match[\"samples\"] = latent_noise_match[\"samples\"].clone().to(default_dtype)\r\n\r\n            if 
truncate_conditioning == \"true\" or truncate_conditioning == \"true_and_zero_neg\":\r\n                if positive is not None:\r\n                    positive[0][0] = positive[0][0].clone().to(default_dtype)\r\n                    positive[0][1][\"pooled_output\"] = positive[0][1][\"pooled_output\"].clone().to(default_dtype)\r\n                if negative is not None:\r\n                    negative[0][0] = negative[0][0].clone().to(default_dtype)\r\n                    negative[0][1][\"pooled_output\"] = negative[0][1][\"pooled_output\"].clone().to(default_dtype)\r\n                c = []\r\n                for t in positive:\r\n                    d = t[1].copy()\r\n                    pooled_output = d.get(\"pooled_output\", None)\r\n                    n = [t[0][:, :154, :4096], d] # d is truncated in place below when a pooled_output is present\r\n                    if pooled_output is not None:\r\n                        d[\"pooled_output\"] = d[\"pooled_output\"][:, :2048]\r\n                    c.append(n)\r\n                positive = c\r\n                \r\n                c = []\r\n                for t in negative:\r\n                    d = t[1].copy()\r\n                    pooled_output = d.get(\"pooled_output\", None)\r\n                    n = [t[0][:, :154, :4096], d] # default when pooled_output is absent; replaced below for true_and_zero_neg\r\n                    if pooled_output is not None:\r\n                        if truncate_conditioning == \"true_and_zero_neg\":\r\n                            d[\"pooled_output\"] = torch.zeros((1,2048), dtype=t[0].dtype, device=t[0].device)\r\n                            n = [torch.zeros((1,154,4096), dtype=t[0].dtype, device=t[0].device), d]\r\n                        else:\r\n                            d[\"pooled_output\"] = d[\"pooled_output\"][:, :2048]\r\n                    c.append(n)\r\n                negative = c\r\n            \r\n            sigmin = model.model.model_sampling.sigma_min\r\n            sigmax = model.model.model_sampling.sigma_max\r\n\r\n            if noise_type_init == \"none\":\r\n                noise = torch.zeros_like(x)\r\n            elif latent_noise is None:\r\n                noise_sampler_init = NOISE_GENERATOR_CLASSES_SIMPLE.get(noise_type_init)(x=x, seed=seed, sigma_min=sigmin, sigma_max=sigmax)\r\n            \r\n                if noise_type_init == \"fractal\":\r\n                    noise_sampler_init.alpha = alpha_init\r\n                    noise_sampler_init.k = k_init\r\n                    noise_sampler_init.scale = 0.1\r\n                noise = noise_sampler_init(sigma=sigmax, sigma_next=sigmin)\r\n            else:\r\n                noise = latent_noise[\"samples\"]\r\n\r\n            if noise_is_latent: #add noise and latent together and normalize --> noise\r\n                noise += x.cpu()\r\n                noise.sub_(noise.mean()).div_(noise.std())\r\n\r\n            if noise_normalize and noise.std() > 0:\r\n                noise.sub_(noise.mean()).div_(noise.std())\r\n            noise *= noise_stdev\r\n            noise = (noise - noise.mean()) + noise_mean\r\n            \r\n            if latent_noise_match:\r\n                for i in range(latent_noise_match[\"samples\"].shape[1]):\r\n                    noise[0][i] = (noise[0][i] - noise[0][i].mean())\r\n                    noise[0][i] = (noise[0][i]) + latent_noise_match[\"samples\"][0][i].mean()\r\n\r\n            noise_mask = latent[\"noise_mask\"] if \"noise_mask\" in latent else None\r\n\r\n            x0_output = {}\r\n\r\n            callback = latent_preview.prepare_callback(model, sigmas.shape[-1] - 1, x0_output)\r\n\r\n     
            disable_pbar = False\r\n            \r\n            if noise_type_sde == \"none\":\r\n                eta_var = eta = 0.0\r\n                noise_type_sde = \"gaussian\"\r\n            if noise_mode_sde == \"hard_var\":\r\n                eta_var = eta\r\n                eta = 0.0\r\n            \r\n            if cfg < 0:\r\n                cfgpp = -cfg\r\n                cfg = 1.0\r\n                \r\n            sampler = comfy.samplers.ksampler(\"legacy_rk\", {\"eta\": eta, \"eta_var\": eta_var, \"s_noise\": s_noise, \"d_noise\": d_noise, \"alpha\": alpha_sde, \"k\": k_sde, \"c1\": c1, \"c2\": c2, \"c3\": c3, \"cfgpp\": cfgpp, \"MULTISTEP\": multistep, \r\n                                                     \"noise_sampler_type\": noise_type_sde, \"noise_mode\": noise_mode_sde, \"noise_seed\": noise_seed_sde, \"rk_type\": sampler_name, \"implicit_sampler_name\": implicit_sampler_name,\r\n                                                            \"exp_mode\": exp_mode, \"t_fn_formula\": t_fn_formula, \"sigma_fn_formula\": sigma_fn_formula, \"implicit_steps\": implicit_steps,\r\n                                                            \"latent_guide\": latent_guide, \"latent_guide_inv\": latent_guide_inv, \"mask\": latent_guide_mask, \r\n                                                            \"latent_guide_weights\": latent_guide_weights, \"guide_mode\": guide_mode, #\"unsampler_type\": unsampler_type,\r\n                                                            \"LGW_MASK_RESCALE_MIN\": rescale_floor, \"sigmas_override\": sigmas_override})\r\n\r\n            samples = comfy.sample.sample_custom(model, noise, cfg, sampler, sigmas, positive, negative, x.clone(), \r\n                                                 noise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise_seed)\r\n\r\n            out = latent.copy()\r\n            out[\"samples\"] = samples\r\n            if \"x0\" in x0_output:\r\n                out_denoised = latent.copy()\r\n                out_denoised[\"samples\"] = model.model.process_latent_out(x0_output[\"x0\"].cpu())\r\n            else:\r\n                out_denoised = out\r\n            \r\n            out[\"samples_fp64\"] = out[\"samples\"].clone()\r\n            out[\"samples\"] = out[\"samples\"].to(latent_image_dtype)\r\n            \r\n            if out_denoised is not out: # when the two alias, \"samples_fp64\" was already set above; cloning again would store the downcast tensor\r\n                out_denoised[\"samples_fp64\"] = out_denoised[\"samples\"].clone()\r\n                out_denoised[\"samples\"] = out_denoised[\"samples\"].to(latent_image_dtype)\r\n\r\n            return ( out, out_denoised, )\r\n\r\n\r\n\r\n\r\n\r\nclass Legacy_SamplerRK:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\":\r\n                    {#\"momentum\": (\"FLOAT\", {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False}),\r\n                     \"eta\": (\"FLOAT\", {\"default\": 0.25, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Calculated noise amount to be added, then removed, after each step.\"}),\r\n                     \"eta_var\": (\"FLOAT\", {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Calculate variance-corrected noise amount (overrides eta/noise_mode settings). 
Cannot be used at very low sigma values; reverts to eta/noise_mode for final steps.\"}),\r\n                     \"s_noise\": (\"FLOAT\", {\"default\": 1.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Ratio of calculated noise amount actually added after each step. >1.0 will leave extra noise behind, <1.0 will remove more noise than it adds.\"}),\r\n                     \"d_noise\": (\"FLOAT\", {\"default\": 1.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Ratio of calculated noise amount actually added after each step. >1.0 will leave extra noise behind, <1.0 will remove more noise than it adds.\"}),\r\n\r\n                     \"noise_mode\": ([\"hard\", \"hard_sq\", \"soft\", \"softer\", \"exp\"], {\"default\": 'hard', \"tooltip\": \"How noise scales with the sigma schedule. Hard is the most aggressive, the others start strong and drop rapidly.\"}),\r\n                     \"noise_sampler_type\": (NOISE_GENERATOR_NAMES, {\"default\": \"brownian\"}),\r\n                     \"alpha\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":0.1, \"round\": False, \"tooltip\": \"Fractal noise mode: <0 = extra high frequency noise, >0 = extra low frequency noise, 0 = white noise.\"}),\r\n                     \"k\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":2.0, \"round\": False, \"tooltip\": \"Fractal noise mode: all that matters is positive vs. negative. Effect unclear.\"}),\r\n                     \"noise_seed\": (\"INT\", {\"default\": -1, \"min\": -1, \"max\": 0xffffffffffffffff, \"tooltip\": \"Seed for the SDE noise that is added after each step if eta or eta_var are non-zero. If set to -1, it will increment the seed most recently used by the workflow.\"}),\r\n                     \"rk_type\": (RK_SAMPLER_NAMES, {\"default\": \"res_2m\"}), \r\n                     \"exp_mode\": (\"BOOLEAN\", {\"default\": False, \"tooltip\": \"Convert linear RK methods to exponential form.\"}), \r\n                     \"multistep\": (\"BOOLEAN\", {\"default\": False, \"tooltip\": \"For samplers ending in S only. Reduces cost by one model call per step by reusing the previous step as the current predictor step.\"}),\r\n                     \"implicit_steps\": (\"INT\", {\"default\": 0, \"min\": 0, \"max\": 100, \"step\":1, \"tooltip\": \"Number of implicit Runge-Kutta refinement steps to run after each explicit step.\"}),\r\n                     \"cfgpp\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":0.01, \"round\": False, \"tooltip\": \"CFG++ scale. Use in place of, or with, CFG. Currently only working with RES, DPMPP, and DDIM samplers.\"}),\r\n                     \"latent_guide_weight\": (\"FLOAT\", {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False}),\r\n                     #\"guide_mode\": ([\"hard_light\", \"mean_std\", \"mean\", \"std\", \"noise_mean\", \"blend\", \"inversion\"], {\"default\": 'mean', \"tooltip\": \"The mode used. noise_mean and inversion are currently for test purposes only.\"}),\r\n                     \"guide_mode\": ([\"hard_light\", \"mean_std\", \"mean\", \"std\", \"blend\",], {\"default\": 'mean', \"tooltip\": \"The mode used.
\"}),\r\n                     #\"guide_mode\": ([\"hard_light\", \"blend\", \"mean_std\", \"mean\", \"std\"], {\"default\": 'mean', \"tooltip\": \"The mode used.\"}),\r\n                     \"rescale_floor\": (\"BOOLEAN\", {\"default\": True, \"tooltip\": \"Latent_guide_weight(s) control the minimum value for the latent_guide_mask. If false, they control the maximum value.\"}),\r\n                    },\r\n                    \"optional\":\r\n                    {\r\n                        \"latent_guide\": (\"LATENT\", ),\r\n                        \"latent_guide_inv\": (\"LATENT\", ),\r\n\r\n                        \"latent_guide_mask\": (\"MASK\", ),\r\n                        \"latent_guide_weights\": (\"SIGMAS\", ),\r\n                        \"sigmas_override\": (\"SIGMAS\", ),\r\n                    }  \r\n                    \r\n               }\r\n    RETURN_TYPES = (\"SAMPLER\",)\r\n    CATEGORY = \"RES4LYF/legacy/samplers\"\r\n\r\n    FUNCTION = \"get_sampler\"\r\n    DEPRECATED = True\r\n\r\n    def get_sampler(self, eta=0.25, eta_var=0.0, d_noise=1.0, s_noise=1.0, alpha=-1.0, k=1.0, cfgpp=0.0, multistep=False, noise_sampler_type=\"brownian\", noise_mode=\"hard\", noise_seed=-1, rk_type=\"dormand-prince\", \r\n                    exp_mode=False, t_fn_formula=None, sigma_fn_formula=None, implicit_steps=0,\r\n                    latent_guide=None, latent_guide_inv=None, latent_guide_weight=0.0, guide_mode=\"hard_light\", latent_guide_weights=None, latent_guide_mask=None, rescale_floor=True, sigmas_override=None,\r\n                    ):\r\n        sampler_name = \"legacy_rk\"\r\n\r\n        if latent_guide is None and latent_guide_inv is None:\r\n            latent_guide_weight = 0.0\r\n\r\n        steps = 10000\r\n        latent_guide_weights = initialize_or_scale(latent_guide_weights, latent_guide_weight, steps)\r\n            \r\n        latent_guide_weights = F.pad(latent_guide_weights, (0, 10000), value=0.0)\r\n\r\n        sampler = comfy.samplers.ksampler(sampler_name, {\"eta\": eta, \"eta_var\": eta_var, \"s_noise\": s_noise, \"d_noise\": d_noise, \"alpha\": alpha, \"k\": k, \"cfgpp\": cfgpp, \"MULTISTEP\": multistep, \"noise_sampler_type\": noise_sampler_type, \"noise_mode\": noise_mode, \"noise_seed\": noise_seed, \"rk_type\": rk_type, \r\n                                                         \"exp_mode\": exp_mode, \"t_fn_formula\": t_fn_formula, \"sigma_fn_formula\": sigma_fn_formula, \"implicit_steps\": implicit_steps,\r\n                                                         \"latent_guide\": latent_guide, \"latent_guide_inv\": latent_guide_inv, \"mask\": latent_guide_mask, \"latent_guide_weight\": latent_guide_weight, \"latent_guide_weights\": latent_guide_weights, \"guide_mode\": guide_mode,\r\n                                                         \"LGW_MASK_RESCALE_MIN\": rescale_floor, \"sigmas_override\": sigmas_override})\r\n        return (sampler, )\r\n\r\n\r\n\r\n\r\nclass Legacy_ClownsharKSamplerGuides:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\":\r\n                    {\"guide_mode\": ([\"hard_light\", \"mean_std\", \"mean\", \"std\", \"blend\"], {\"default\": 'blend', \"tooltip\": \"The mode used.\"}),\r\n                     \"latent_guide_weight\": (\"FLOAT\", {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False}),\r\n                    \"scheduler\": ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": 
\"beta57\"},),\r\n                    \"steps\": (\"INT\", {\"default\": 30, \"min\": 1, \"max\": 10000}),\r\n                    \"denoise\": (\"FLOAT\", {\"default\": 1.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False}),\r\n                     \"rescale_floor\": (\"BOOLEAN\", {\"default\": False, \"tooltip\": \"If true, latent_guide_weight(s) primarily affect the masked areas. If false, they control the unmasked areas.\"}),\r\n                    },\r\n                    \"optional\": \r\n                    {\r\n                        \"latent_guide\": (\"LATENT\", ),\r\n                        \"latent_guide_inv\": (\"LATENT\", ),\r\n                        \"latent_guide_mask\": (\"MASK\", ),\r\n                        \"latent_guide_weights\": (\"SIGMAS\", ),\r\n                    }  \r\n               }\r\n    RETURN_TYPES = (\"GUIDES\",)\r\n    CATEGORY = \"RES4LYF/legacy/samplers\"\r\n\r\n    FUNCTION = \"get_sampler\"\r\n    DEPRECATED = True\r\n\r\n    def get_sampler(self, model=None, scheduler=\"constant\", steps=30, denoise=1.0, latent_guide=None, latent_guide_inv=None, latent_guide_weight=0.0, guide_mode=\"blend\", latent_guide_weights=None, latent_guide_mask=None, rescale_floor=True, t_is=None,\r\n                    ):\r\n        default_dtype = torch.float64\r\n        \r\n        max_steps = 10000\r\n        \r\n        #if scheduler != \"constant\": \r\n        #    latent_guide_weights = get_sigmas(model, scheduler, steps, latent_guide_weight).to(default_dtype)\r\n            \r\n        if scheduler == \"constant\": \r\n            latent_guide_weights = initialize_or_scale(None, latent_guide_weight, steps).to(default_dtype)\r\n            latent_guide_weights = F.pad(latent_guide_weights, (0, max_steps), value=0.0)\r\n        \r\n        if latent_guide is not None:\r\n            x = latent_guide[\"samples\"].clone().to(default_dtype) \r\n        if latent_guide_inv is not None:\r\n            x = latent_guide_inv[\"samples\"].clone().to(default_dtype) \r\n\r\n        guides = (guide_mode, rescale_floor, latent_guide_weight, latent_guide_weights, t_is, latent_guide, latent_guide_inv, latent_guide_mask, scheduler, steps, denoise)\r\n        return (guides, )\r\n\r\n\r\n\r\n\r\n\r\nclass Legacy_SharkSampler:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\":\r\n                    {\"model\": (\"MODEL\",),\r\n                    \"add_noise\": (\"BOOLEAN\", {\"default\": True}),\r\n                    \"noise_normalize\": (\"BOOLEAN\", {\"default\": True}),\r\n                    \"noise_stdev\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":0.01, \"round\": False, }),\r\n                    \"noise_mean\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":0.01, \"round\": False, }),\r\n                    \"noise_is_latent\": (\"BOOLEAN\", {\"default\": False}),\r\n                    \"noise_type\": (NOISE_GENERATOR_NAMES, {\"default\": \"gaussian\"}),\r\n                    \"alpha\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":0.1, \"round\": False, }),\r\n                    \"k\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":2.0, \"round\": False, }),\r\n                    \"noise_seed\": (\"INT\", {\"default\": 0, \"min\": -1, \"max\": 0xffffffffffffffff}),\r\n                    \"sampler_mode\": (['standard', 'unsample', 'resample'],),\r\n                    #\"scheduler\": 
(comfy.samplers.SCHEDULER_NAMES, ),\r\n                    \"scheduler\": (get_res4lyf_scheduler_list(),),\r\n\r\n                    \"steps\": (\"INT\", {\"default\": 30, \"min\": 1, \"max\": 10000}),\r\n                    \"denoise\": (\"FLOAT\", {\"default\": 1.0, \"min\": 0.0, \"max\": 10000, \"step\":0.01}),\r\n                    \"cfg\": (\"FLOAT\", {\"default\": 5.0, \"min\": 0.0, \"max\": 100.0, \"step\":0.5, \"round\": False, }),\r\n                    \"truncate_conditioning\": (['false', 'true', 'true_and_zero_neg'], ),\r\n                    \r\n                    \"positive\": (\"CONDITIONING\", ),\r\n                    \"negative\": (\"CONDITIONING\", ),\r\n                    \"sampler\": (\"SAMPLER\", ),\r\n                    \"latent_image\": (\"LATENT\", ),               \r\n                     },\r\n                \"optional\": \r\n                    {\r\n                    \"sigmas\": (\"SIGMAS\", ),\r\n                    \"latent_noise\": (\"LATENT\", ),\r\n                    \"latent_noise_match\": (\"LATENT\",),\r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (\"LATENT\",\"LATENT\",\"LATENT\",\"LATENT\")\r\n    RETURN_NAMES = (\"output\", \"denoised\", \"output_fp64\", \"denoised_fp64\")\r\n\r\n    FUNCTION = \"main\"\r\n\r\n    CATEGORY = \"RES4LYF/legacy/samplers\"\r\n    DEPRECATED = True\r\n    \r\n    def main(self, model, add_noise, noise_stdev, noise_mean, noise_normalize, noise_is_latent, noise_type, noise_seed, cfg, truncate_conditioning, alpha, k, positive, negative, sampler,\r\n             latent_image, sampler_mode, scheduler, steps, denoise, sigmas=None, latent_noise=None, latent_noise_match=None,): \r\n\r\n            latent = latent_image\r\n            latent_image_dtype = latent_image['samples'].dtype\r\n            \r\n            default_dtype = torch.float64\r\n            \r\n            if positive is None:\r\n                positive = [[\r\n                    torch.zeros((1, 154, 4096)),  # positive[0][0]: a cond tensor of shape (1, 154, 4096)\r\n                    {'pooled_output': torch.zeros((1, 2048))}\r\n                    ]]\r\n            if negative is None:\r\n                negative = [[\r\n                    torch.zeros((1, 154, 4096)),  # negative[0][0]: a cond tensor of shape (1, 154, 4096)\r\n                    {'pooled_output': torch.zeros((1, 2048))}\r\n                    ]]\r\n                \r\n            if denoise < 0:\r\n                sampler.extra_options['d_noise'] = -denoise\r\n                denoise = 1.0\r\n            if sigmas is not None:\r\n                sigmas = sigmas.clone().to(default_dtype)\r\n            else: \r\n                sigmas = get_sigmas(model, scheduler, steps, denoise).to(default_dtype)\r\n                \r\n            #sigmas = sigmas.clone().to(torch.float64)\r\n        \r\n            if sampler_mode == \"unsample\": \r\n                null = torch.tensor([0.0], device=sigmas.device, dtype=sigmas.dtype)\r\n                sigmas = torch.flip(sigmas, dims=[0])\r\n                sigmas = torch.cat([sigmas, null])\r\n            elif sampler_mode == \"resample\":\r\n                null = torch.tensor([0.0], device=sigmas.device, dtype=sigmas.dtype)\r\n                sigmas = torch.cat([null, sigmas])\r\n                sigmas = torch.cat([sigmas, null])\r\n                \r\n            if latent_image is not None:\r\n                x = latent_image[\"samples\"].clone().to(default_dtype) \r\n                #x = {\"samples\": x}\r\n                \r\n
            if latent_noise is not None:\r\n                latent_noise[\"samples\"] = latent_noise[\"samples\"].clone().to(default_dtype)  \r\n            if latent_noise_match is not None:\r\n                latent_noise_match[\"samples\"] = latent_noise_match[\"samples\"].clone().to(default_dtype)\r\n\r\n            if truncate_conditioning == \"true\" or truncate_conditioning == \"true_and_zero_neg\":\r\n                if positive is not None:\r\n                    positive[0][0] = positive[0][0].clone().to(default_dtype)\r\n                    positive[0][1][\"pooled_output\"] = positive[0][1][\"pooled_output\"].clone().to(default_dtype)\r\n                c = []\r\n                for t in positive:\r\n                    d = t[1].copy()\r\n                    pooled_output = d.get(\"pooled_output\", None)\r\n                    if pooled_output is not None:\r\n                        d[\"pooled_output\"] = d[\"pooled_output\"][:, :2048]\r\n                        n = [t[0][:, :154, :4096], d]\r\n                    else:\r\n                        n = [t[0], d]\r\n                    c.append(n)\r\n                positive = c\r\n                \r\n                if negative is not None: # hoisted out of the loop below; it only touches negative[0] and was re-running every iteration\r\n                    negative[0][0] = negative[0][0].clone().to(default_dtype)\r\n                    negative[0][1][\"pooled_output\"] = negative[0][1][\"pooled_output\"].clone().to(default_dtype)\r\n                c = []\r\n                for t in negative:\r\n                    d = t[1].copy()\r\n                    pooled_output = d.get(\"pooled_output\", None)\r\n                    if pooled_output is not None:\r\n                        if truncate_conditioning == \"true_and_zero_neg\":\r\n                            d[\"pooled_output\"] = torch.zeros((1,2048), dtype=t[0].dtype, device=t[0].device)\r\n                            n = [torch.zeros((1,154,4096), dtype=t[0].dtype, device=t[0].device), d]\r\n                        else:\r\n                            d[\"pooled_output\"] = d[\"pooled_output\"][:, :2048]\r\n                            n = [t[0][:, :154, :4096], d]\r\n                    else:\r\n                        n = [t[0], d]\r\n                    c.append(n)\r\n                negative = c\r\n            \r\n            sigmin = model.model.model_sampling.sigma_min\r\n            sigmax = model.model.model_sampling.sigma_max\r\n\r\n            if noise_seed == -1:\r\n                seed = torch.initial_seed() + 1\r\n            else:\r\n                seed = noise_seed\r\n                torch.manual_seed(noise_seed)\r\n            \r\n            noise_sampler = NOISE_GENERATOR_CLASSES.get(noise_type)(x=x, seed=seed, sigma_min=sigmin, sigma_max=sigmax)\r\n            \r\n            if noise_type == \"fractal\":\r\n                noise_sampler.alpha = alpha\r\n                noise_sampler.k = k\r\n                noise_sampler.scale = 0.1\r\n        \r\n            if not add_noise:\r\n                noise = torch.zeros_like(x)\r\n            elif latent_noise is None:\r\n                noise = noise_sampler(sigma=sigmax, sigma_next=sigmin)\r\n            else:\r\n                noise = latent_noise[\"samples\"]\r\n\r\n            if noise_is_latent: # add noise and latent together and normalize --> noise\r\n                noise += x.cpu()\r\n                noise.sub_(noise.mean()).div_(noise.std())\r\n\r\n            if noise_normalize and noise.std() > 0:\r\n                noise.sub_(noise.mean()).div_(noise.std())\r\n            noise *= noise_stdev\r\n            noise = (noise - noise.mean()) + noise_mean\r\n            \r\n            if latent_noise_match:\r\n
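                # recenter each noise channel to the per-channel mean of the reference latent\r\n                for i in 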
range(latent_noise_match[\"samples\"].shape[1]):\r\n                    noise[0][i] = (noise[0][i] - noise[0][i].mean())\r\n                    noise[0][i] = (noise[0][i]) + latent_noise_match[\"samples\"][0][i].mean()\r\n\r\n                \r\n            noise_mask = latent[\"noise_mask\"] if \"noise_mask\" in latent else None\r\n\r\n            x0_output = {}\r\n\r\n            callback = latent_preview.prepare_callback(model, sigmas.shape[-1] - 1, x0_output)\r\n\r\n            disable_pbar = False\r\n \r\n            samples = comfy.sample.sample_custom(model, noise, cfg, sampler, sigmas, positive, negative, x, \r\n                                                 noise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise_seed)\r\n\r\n            out = latent.copy()\r\n            out[\"samples\"] = samples\r\n            if \"x0\" in x0_output:\r\n                out_denoised = latent.copy()\r\n                out_denoised[\"samples\"] = model.model.process_latent_out(x0_output[\"x0\"].cpu())\r\n            else:\r\n                out_denoised = out\r\n                \r\n            out_orig_dtype = out['samples'].clone().to(latent_image_dtype)\r\n            out_denoised_orig_dtype = out_denoised['samples'].clone().to(latent_image_dtype)\r\n                \r\n            return ( {'samples': out_orig_dtype}, {'samples': out_denoised_orig_dtype}, out, out_denoised,)\r\n            "
  },
  {
    "path": "legacy/models.py",
    "content": "# Code adapted from https://github.com/comfyanonymous/ComfyUI/\r\n\r\nimport comfy.samplers\r\nimport comfy.sample\r\nimport comfy.sampler_helpers\r\nimport comfy.utils\r\nfrom comfy.cli_args import args\r\nfrom comfy_extras.nodes_model_advanced import ModelSamplingSD3, ModelSamplingFlux, ModelSamplingAuraFlow, ModelSamplingStableCascade\r\n\r\n\r\nimport torch\r\n\r\nimport folder_paths\r\nimport os\r\nimport json\r\nimport math\r\n\r\nimport comfy.model_management\r\n    \r\nfrom .flux.model  import ReFlux\r\nfrom .flux.layers import SingleStreamBlock as ReSingleStreamBlock, DoubleStreamBlock as ReDoubleStreamBlock\r\n\r\nfrom comfy.ldm.flux.model import Flux\r\nfrom comfy.ldm.flux.layers import SingleStreamBlock, DoubleStreamBlock\r\n\r\nfrom .helper import get_orthogonal, get_cosine_similarity\r\nfrom ..res4lyf import RESplain\r\n\r\n\r\nclass ReFluxPatcher:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": { \r\n            \"model\": (\"MODEL\",),\r\n            \"enable\": (\"BOOLEAN\", {\"default\": True}),\r\n           }\r\n        }\r\n    RETURN_TYPES = (\"MODEL\",)\r\n    RETURN_NAMES = (\"model\",)\r\n\r\n    CATEGORY = \"RES4LYF/model_patches\"\r\n    FUNCTION = \"main\"\r\n\r\n    def main(self, model, enable=True):\r\n        m = model #.clone()\r\n        \r\n        if enable:\r\n            m.model.diffusion_model.__class__ = ReFlux\r\n            m.model.diffusion_model.threshold_inv = False\r\n            \r\n            for i, block in enumerate(m.model.diffusion_model.double_blocks):\r\n                block.__class__ = ReDoubleStreamBlock\r\n                block.idx = i\r\n\r\n            for i, block in enumerate(m.model.diffusion_model.single_blocks):\r\n                block.__class__ = ReSingleStreamBlock\r\n                block.idx = i\r\n        else:\r\n            m.model.diffusion_model.__class__ = Flux\r\n            \r\n            for i, block in enumerate(m.model.diffusion_model.double_blocks):\r\n                block.__class__ = DoubleStreamBlock\r\n                block.idx = i\r\n\r\n            for i, block in enumerate(m.model.diffusion_model.single_blocks):\r\n                block.__class__ = SingleStreamBlock\r\n                block.idx = i\r\n        \r\n        return (m,)\r\n    \r\nimport types\r\n\r\n\r\nclass FluxOrthoCFGPatcher:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": { \r\n            \"model\": (\"MODEL\",),\r\n            \"enable\": (\"BOOLEAN\", {\"default\": True}),\r\n            \"ortho_T5\": (\"BOOLEAN\", {\"default\": True}),\r\n            \"ortho_clip_L\": (\"BOOLEAN\", {\"default\": True}),\r\n            \"zero_clip_L\": (\"BOOLEAN\", {\"default\": True}),\r\n           }\r\n        }\r\n        \r\n    RETURN_TYPES = (\"MODEL\",)\r\n    RETURN_NAMES = (\"model\",)\r\n\r\n    CATEGORY = \"RES4LYF/model_patches\"\r\n    FUNCTION = \"main\"\r\n    \r\n    original_forward = Flux.forward\r\n\r\n    @staticmethod\r\n    def new_forward(self, x, timestep, context, y, guidance, control=None, transformer_options={}, **kwargs):\r\n\r\n        for _ in range(500):\r\n            if self.ortho_T5 and get_cosine_similarity(context[0], context[1]) != 0:\r\n                context[0] = get_orthogonal(context[0], context[1])\r\n            if self.ortho_clip_L and get_cosine_similarity(y[0], y[1]) != 0:\r\n                y[0] = get_orthogonal(y[0].unsqueeze(0), y[1].unsqueeze(0)).squeeze(0)\r\n                \r\n        RESplain(\"postcossim1: \", 
get_cosine_similarity(context[0], context[1]))\r\n        RESplain(\"postcossim2: \", get_cosine_similarity(y[0], y[1]))\r\n        \r\n        if self.zero_clip_L:\r\n            y[0] = torch.zeros_like(y[0])\r\n        \r\n        return FluxOrthoCFGPatcher.original_forward(self, x, timestep, context, y, guidance, control, transformer_options, **kwargs)\r\n\r\n    def main(self, model, enable=True, ortho_T5=True, ortho_clip_L=True, zero_clip_L=True):\r\n        m = model.clone()\r\n\r\n        if enable:\r\n            m.model.diffusion_model.ortho_T5 = ortho_T5\r\n            m.model.diffusion_model.ortho_clip_L = ortho_clip_L\r\n            m.model.diffusion_model.zero_clip_L = zero_clip_L\r\n            Flux.forward = types.MethodType(FluxOrthoCFGPatcher.new_forward, m.model.diffusion_model)\r\n        else:\r\n            Flux.forward = FluxOrthoCFGPatcher.original_forward\r\n\r\n        return (m,)\r\n    \r\n    \r\n    \r\n    \r\nclass FluxGuidanceDisable:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": { \r\n            \"model\": (\"MODEL\",),\r\n            \"disable\": (\"BOOLEAN\", {\"default\": True}),\r\n            \"zero_clip_L\": (\"BOOLEAN\", {\"default\": True}),\r\n            }\r\n        }\r\n        \r\n    RETURN_TYPES = (\"MODEL\",)\r\n    RETURN_NAMES = (\"model\",)\r\n\r\n    FUNCTION = \"main\"\r\n    CATEGORY = \"RES4LYF/model_patches\"\r\n\r\n    original_forward = Flux.forward\r\n\r\n    @staticmethod\r\n    def new_forward(self, x, timestep, context, y, guidance, control=None, transformer_options={}, **kwargs):\r\n\r\n        y = torch.zeros_like(y)\r\n        \r\n        return FluxGuidanceDisable.original_forward(self, x, timestep, context, y, guidance, control, transformer_options, **kwargs)\r\n\r\n    def main(self, model, disable=True, zero_clip_L=True):\r\n        m = model.clone()\r\n        if disable:\r\n            m.model.diffusion_model.params.guidance_embed = False\r\n        else:\r\n            m.model.diffusion_model.params.guidance_embed = True\r\n            \r\n        #m.model.diffusion_model.zero_clip_L = zero_clip_L\r\n        if zero_clip_L:\r\n            Flux.forward = types.MethodType(FluxGuidanceDisable.new_forward, m.model.diffusion_model)\r\n\r\n        return (m,)\r\n\r\n\r\n\r\ndef time_snr_shift_exponential(alpha, t):\r\n    return math.exp(alpha) / (math.exp(alpha) + (1 / t - 1) ** 1.0)\r\n\r\ndef time_snr_shift_linear(alpha, t):\r\n    if alpha == 1.0:\r\n        return t\r\n    return alpha * t / (1 + (alpha - 1) * t)\r\n\r\nclass ModelSamplingAdvanced:\r\n    # this is used to set the \"shift\" using either exponential scaling (default for SD3.5M and Flux) or linear scaling (default for SD3.5L and SD3 2B beta)\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": { \r\n                    \"model\": (\"MODEL\",),\r\n                    \"scaling\": ([\"exponential\", \"linear\"], {\"default\": 'exponential'}), \r\n                    \"shift\": (\"FLOAT\", {\"default\": 3.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False}),\r\n                    #\"base_shift\": (\"FLOAT\", {\"default\": 3.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False}),\r\n                    }\r\n               }\r\n    \r\n    RETURN_TYPES = (\"MODEL\",)\r\n    RETURN_NAMES = (\"model\",)\r\n    \r\n    FUNCTION = \"main\"\r\n    CATEGORY = \"RES4LYF/model_shift\"\r\n\r\n    def sigma_exponential(self, timestep):\r\n        return 
time_snr_shift_exponential(self.timestep_shift, timestep / self.multiplier)\r\n\r\n    def sigma_linear(self, timestep):\r\n        return time_snr_shift_linear(self.timestep_shift, timestep / self.multiplier)\r\n\r\n    def main(self, model, scaling, shift):\r\n        m = model.clone()\r\n        \r\n        self.timestep_shift = shift\r\n        self.multiplier = 1000\r\n        timesteps = 1000\r\n        sampling_base = None\r\n        \r\n        if isinstance(m.model.model_config, comfy.supported_models.Flux) or isinstance(m.model.model_config, comfy.supported_models.FluxSchnell):\r\n            self.multiplier = 1\r\n            timesteps = 10000\r\n            sampling_base = comfy.model_sampling.ModelSamplingFlux\r\n            sampling_type = comfy.model_sampling.CONST\r\n            \r\n        elif isinstance(m.model.model_config, comfy.supported_models.AuraFlow):\r\n            self.multiplier = 1\r\n            timesteps = 1000\r\n            sampling_base = comfy.model_sampling.ModelSamplingDiscreteFlow\r\n            sampling_type = comfy.model_sampling.CONST\r\n\r\n        elif isinstance(m.model.model_config, comfy.supported_models.HunyuanVideo):\r\n            self.multiplier = 1000\r\n            timesteps = 1000\r\n            sampling_base = comfy.model_sampling.ModelSamplingDiscreteFlow\r\n            sampling_type = comfy.model_sampling.CONST\r\n\r\n        elif isinstance(m.model.model_config, comfy.supported_models.CosmosT2V) or isinstance(m.model.model_config, comfy.supported_models.CosmosI2V):\r\n            self.multiplier = 1\r\n            timesteps = 1000\r\n            sampling_base = comfy.model_sampling.ModelSamplingContinuousEDM\r\n            sampling_type = comfy.model_sampling.CONST\r\n\r\n        elif isinstance(m.model.model_config, comfy.supported_models.LTXV):\r\n            self.multiplier = 1000\r\n            timesteps = 1000\r\n            sampling_base = comfy.model_sampling.ModelSamplingFlux\r\n            sampling_type = comfy.model_sampling.CONST\r\n            \r\n        elif isinstance(m.model.model_config, comfy.supported_models.SD3):\r\n            self.multiplier = 1000\r\n            timesteps = 1000\r\n            sampling_base = comfy.model_sampling.ModelSamplingDiscreteFlow\r\n            sampling_type = comfy.model_sampling.CONST\r\n\r\n        if sampling_base is None:\r\n            raise ValueError(\"Model not supported by ModelSamplingAdvanced\")\r\n\r\n        class ModelSamplingAdvanced(sampling_base, sampling_type):\r\n            pass\r\n\r\n        m.object_patches['model_sampling'] = m.model.model_sampling = ModelSamplingAdvanced(m.model.model_config)\r\n\r\n        m.model.model_sampling.__dict__['shift']      = self.timestep_shift\r\n        m.model.model_sampling.__dict__['multiplier'] = self.multiplier\r\n\r\n        s_range = torch.arange(1, timesteps + 1, 1).to(torch.float64)\r\n        if scaling == \"exponential\": \r\n            ts = self.sigma_exponential((s_range / timesteps) * self.multiplier)\r\n        elif scaling == \"linear\": \r\n            ts = self.sigma_linear((s_range / timesteps) * self.multiplier)\r\n\r\n        m.model.model_sampling.register_buffer('sigmas', ts)\r\n        m.object_patches['model_sampling'].sigmas = m.model.model_sampling.sigmas\r\n        \r\n        return (m,)\r\n\r\nclass ModelSamplingAdvancedResolution:\r\n    # this is used to set the \"shift\" using either exponential scaling (default for SD3.5M and Flux) or linear scaling (default for SD3.5L and SD3 2B beta)\r\n    
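# example: with shift=3 and exponential scaling, t=0.5 maps to exp(3)/(exp(3)+1) ≈ 0.953, i.e. mid-schedule timesteps are pushed toward high noise\r\n    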
@classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": { \r\n                    \"model\": (\"MODEL\",),\r\n                    \"scaling\": ([\"exponential\", \"linear\"], {\"default\": 'exponential'}), \r\n                    \"max_shift\": (\"FLOAT\", {\"default\": 1.35, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False}),\r\n                    \"base_shift\": (\"FLOAT\", {\"default\": 0.85, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False}),\r\n                    \"latent_image\": (\"LATENT\",),\r\n                }\r\n               }\r\n    \r\n    RETURN_TYPES = (\"MODEL\",)\r\n    FUNCTION = \"main\"\r\n    \r\n    CATEGORY = \"RES4LYF/model_shift\"\r\n\r\n    def sigma_exponential(self, timestep):\r\n        return time_snr_shift_exponential(self.timestep_shift, timestep / self.multiplier)\r\n\r\n    def sigma_linear(self, timestep):\r\n        return time_snr_shift_linear(self.timestep_shift, timestep / self.multiplier)\r\n\r\n    def main(self, model, scaling, max_shift, base_shift, latent_image):\r\n        m = model.clone()\r\n        height, width = latent_image['samples'].shape[2:]\r\n        \r\n        x1 = 256\r\n        x2 = 4096\r\n        mm = (max_shift - base_shift) / (x2 - x1)\r\n        b = base_shift - mm * x1\r\n        shift = ((width * 8) * (height * 8) / (8 * 8 * 2 * 2)) * mm + b # height/width are latent dims (pixels / 8); scale back to pixels so the 256..4096 patch-count range holds\r\n        \r\n        self.timestep_shift = shift\r\n        self.multiplier = 1000\r\n        timesteps = 1000\r\n        sampling_base = None\r\n        \r\n        if isinstance(m.model.model_config, comfy.supported_models.Flux) or isinstance(m.model.model_config, comfy.supported_models.FluxSchnell):\r\n            self.multiplier = 1\r\n            timesteps = 10000\r\n            sampling_base = comfy.model_sampling.ModelSamplingFlux\r\n            sampling_type = comfy.model_sampling.CONST\r\n\r\n        elif isinstance(m.model.model_config, comfy.supported_models.AuraFlow):\r\n            self.multiplier = 1\r\n            timesteps = 1000\r\n            sampling_base = comfy.model_sampling.ModelSamplingDiscreteFlow\r\n            sampling_type = comfy.model_sampling.CONST\r\n            \r\n        elif isinstance(m.model.model_config, comfy.supported_models.SD3):\r\n            self.multiplier = 1000\r\n            timesteps = 1000\r\n            sampling_base = comfy.model_sampling.ModelSamplingDiscreteFlow\r\n            sampling_type = comfy.model_sampling.CONST\r\n\r\n        if sampling_base is None:\r\n            raise ValueError(\"Model not supported by ModelSamplingAdvancedResolution\")\r\n\r\n        class ModelSamplingAdvanced(sampling_base, sampling_type):\r\n            pass\r\n\r\n        m.object_patches['model_sampling'] = m.model.model_sampling = ModelSamplingAdvanced(m.model.model_config)\r\n\r\n        m.model.model_sampling.__dict__['shift']      = self.timestep_shift\r\n        m.model.model_sampling.__dict__['multiplier'] = self.multiplier\r\n\r\n        s_range = torch.arange(1, timesteps + 1, 1).to(torch.float64)\r\n        if scaling == \"exponential\": \r\n            ts = self.sigma_exponential((s_range / timesteps) * self.multiplier)\r\n        elif scaling == \"linear\": \r\n            ts = self.sigma_linear((s_range / timesteps) * self.multiplier)\r\n\r\n        m.model.model_sampling.register_buffer('sigmas', ts)\r\n        m.object_patches['model_sampling'].sigmas = m.model.model_sampling.sigmas\r\n        \r\n        return (m,)\r\n    \r\n    \r\n
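# UNetSave writes a .safetensors file containing only the diffusion model weights (no CLIP or VAE), by calling save_checkpoint below with clip=None and vae=None\r\nclass UNetSave:\r\n    def __init__(self):\r\n        self.output_dir = folder_paths.get_output_directory()\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": { \"model\": 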
(\"MODEL\",),\r\n                              \"filename_prefix\": (\"STRING\", {\"default\": \"models/ComfyUI\"}),},\r\n                \"hidden\": {\"prompt\": \"PROMPT\", \"extra_pnginfo\": \"EXTRA_PNGINFO\"},}\r\n    RETURN_TYPES = ()\r\n    FUNCTION = \"save\"\r\n    OUTPUT_NODE = True\r\n\r\n    CATEGORY = \"RES4LYF/model_merging\"\r\n    DESCRIPTION = \"Save a .safetensors containing only the model data.\"\r\n\r\n    def save(self, model, filename_prefix, prompt=None, extra_pnginfo=None):\r\n        save_checkpoint(model, clip=None, vae=None, filename_prefix=filename_prefix, output_dir=self.output_dir, prompt=prompt, extra_pnginfo=extra_pnginfo)\r\n        return {}\r\n\r\n\r\ndef save_checkpoint(model, clip=None, vae=None, clip_vision=None, filename_prefix=None, output_dir=None, prompt=None, extra_pnginfo=None):\r\n    full_output_folder, filename, counter, subfolder, filename_prefix = folder_paths.get_save_image_path(filename_prefix, output_dir)\r\n    prompt_info = \"\"\r\n    if prompt is not None:\r\n        prompt_info = json.dumps(prompt)\r\n\r\n    metadata = {}\r\n\r\n    enable_modelspec = True\r\n    if isinstance(model.model, comfy.model_base.SDXL):\r\n        if isinstance(model.model, comfy.model_base.SDXL_instructpix2pix):\r\n            metadata[\"modelspec.architecture\"] = \"stable-diffusion-xl-v1-edit\"\r\n        else:\r\n            metadata[\"modelspec.architecture\"] = \"stable-diffusion-xl-v1-base\"\r\n    elif isinstance(model.model, comfy.model_base.SDXLRefiner):\r\n        metadata[\"modelspec.architecture\"] = \"stable-diffusion-xl-v1-refiner\"\r\n    elif isinstance(model.model, comfy.model_base.SVD_img2vid):\r\n        metadata[\"modelspec.architecture\"] = \"stable-video-diffusion-img2vid-v1\"\r\n    elif isinstance(model.model, comfy.model_base.SD3):\r\n        metadata[\"modelspec.architecture\"] = \"stable-diffusion-v3-medium\" #TODO: other SD3 variants\r\n    else:\r\n        enable_modelspec = False\r\n\r\n    if enable_modelspec:\r\n        metadata[\"modelspec.sai_model_spec\"] = \"1.0.0\"\r\n        metadata[\"modelspec.implementation\"] = \"sgm\"\r\n        metadata[\"modelspec.title\"] = \"{} {}\".format(filename, counter)\r\n\r\n    #TODO:\r\n    # \"stable-diffusion-v1\", \"stable-diffusion-v1-inpainting\", \"stable-diffusion-v2-512\",\r\n    # \"stable-diffusion-v2-768-v\", \"stable-diffusion-v2-unclip-l\", \"stable-diffusion-v2-unclip-h\",\r\n    # \"v2-inpainting\"\r\n\r\n    extra_keys = {}\r\n    model_sampling = model.get_model_object(\"model_sampling\")\r\n    if isinstance(model_sampling, comfy.model_sampling.ModelSamplingContinuousEDM):\r\n        if isinstance(model_sampling, comfy.model_sampling.V_PREDICTION):\r\n            extra_keys[\"edm_vpred.sigma_max\"] = torch.tensor(model_sampling.sigma_max).float()\r\n            extra_keys[\"edm_vpred.sigma_min\"] = torch.tensor(model_sampling.sigma_min).float()\r\n\r\n    if model.model.model_type == comfy.model_base.ModelType.EPS:\r\n        metadata[\"modelspec.predict_key\"] = \"epsilon\"\r\n    elif model.model.model_type == comfy.model_base.ModelType.V_PREDICTION:\r\n        metadata[\"modelspec.predict_key\"] = \"v\"\r\n\r\n    if not args.disable_metadata:\r\n        metadata[\"prompt\"] = prompt_info\r\n        if extra_pnginfo is not None:\r\n            for x in extra_pnginfo:\r\n                metadata[x] = json.dumps(extra_pnginfo[x])\r\n\r\n    output_checkpoint = f\"{filename}_{counter:05}_.safetensors\"\r\n    output_checkpoint = os.path.join(full_output_folder, 
output_checkpoint)\r\n\r\n    sd_save_checkpoint(output_checkpoint, model, clip, vae, clip_vision, metadata=metadata, extra_keys=extra_keys)\r\n\r\n\r\ndef sd_save_checkpoint(output_path, model, clip=None, vae=None, clip_vision=None, metadata=None, extra_keys={}):\r\n    clip_sd = None\r\n    load_models = [model]\r\n    if clip is not None:\r\n        load_models.append(clip.load_model())\r\n        clip_sd = clip.get_sd()\r\n\r\n    comfy.model_management.load_models_gpu(load_models, force_patch_weights=True)\r\n    clip_vision_sd = clip_vision.get_sd() if clip_vision is not None else None\r\n    vae_sd = vae.get_sd() if vae is not None else None                             #THIS ALLOWS SAVING UNET ONLY\r\n    sd = model.model.state_dict_for_saving(clip_sd, vae_sd, clip_vision_sd)\r\n    for k in extra_keys:\r\n        sd[k] = extra_keys[k]\r\n\r\n    for k in sd:\r\n        t = sd[k]\r\n        if not t.is_contiguous():\r\n            sd[k] = t.contiguous()\r\n\r\n    comfy.utils.save_torch_file(sd, output_path, metadata=metadata)\r\n\r\n\r\n\r\n\r\n\r\n\r\nclass TorchCompileModelFluxAdvanced: #adapted from https://github.com/kijai/ComfyUI-KJNodes\r\n    def __init__(self):\r\n        self._compiled = False\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": { \r\n                    \"model\": (\"MODEL\",),\r\n                    \"backend\": ([\"inductor\", \"cudagraphs\"],),\r\n                    \"fullgraph\": (\"BOOLEAN\", {\"default\": False, \"tooltip\": \"Enable full graph mode\"}),\r\n                    \"mode\": ([\"default\", \"max-autotune\", \"max-autotune-no-cudagraphs\", \"reduce-overhead\"], {\"default\": \"default\"}),\r\n                    \"double_blocks\": (\"STRING\", {\"default\": \"0-18\", \"multiline\": True}),\r\n                    \"single_blocks\": (\"STRING\", {\"default\": \"0-37\", \"multiline\": True}),\r\n                    \"dynamic\": (\"BOOLEAN\", {\"default\": False, \"tooltip\": \"Enable dynamic mode\"}),\r\n                }}\r\n    RETURN_TYPES = (\"MODEL\",)\r\n    FUNCTION = \"patch\"\r\n\r\n    CATEGORY = \"RES4LYF/model_patches\"\r\n    EXPERIMENTAL = True\r\n\r\n    def parse_blocks(self, blocks_str):\r\n        blocks = []\r\n        for part in blocks_str.split(','):\r\n            part = part.strip()\r\n            if '-' in part:\r\n                start, end = map(int, part.split('-'))\r\n                blocks.extend(range(start, end + 1))\r\n            else:\r\n                blocks.append(int(part))\r\n        return blocks\r\n\r\n    def patch(self, model, backend, mode, fullgraph, single_blocks, double_blocks, dynamic):\r\n        single_block_list = self.parse_blocks(single_blocks)\r\n        double_block_list = self.parse_blocks(double_blocks)\r\n        m = model.clone()\r\n        diffusion_model = m.get_model_object(\"diffusion_model\")\r\n        \r\n        if not self._compiled:\r\n            try:\r\n                for i, block in enumerate(diffusion_model.double_blocks):\r\n                    if i in double_block_list:\r\n                        #print(\"Compiling double_block\", i)\r\n                        m.add_object_patch(f\"diffusion_model.double_blocks.{i}\", torch.compile(block, mode=mode, dynamic=dynamic, fullgraph=fullgraph, backend=backend))\r\n                for i, block in enumerate(diffusion_model.single_blocks):\r\n                    if i in single_block_list:\r\n                        #print(\"Compiling single block\", i)\r\n                        
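# add_object_patch swaps each selected block for a torch.compile-wrapped version on the clone, leaving the base model object untouched\r\n                        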
m.add_object_patch(f\"diffusion_model.single_blocks.{i}\", torch.compile(block, mode=mode, dynamic=dynamic, fullgraph=fullgraph, backend=backend))\r\n                self._compiled = True\r\n                compile_settings = {\r\n                    \"backend\": backend,\r\n                    \"mode\": mode,\r\n                    \"fullgraph\": fullgraph,\r\n                    \"dynamic\": dynamic,\r\n                }\r\n                setattr(m.model, \"compile_settings\", compile_settings)\r\n            except:\r\n                raise RuntimeError(\"Failed to compile model\")\r\n        \r\n        return (m, )\r\n        # rest of the layers that are not patched\r\n        # diffusion_model.final_layer = torch.compile(diffusion_model.final_layer, mode=mode, fullgraph=fullgraph, backend=backend)\r\n        # diffusion_model.guidance_in = torch.compile(diffusion_model.guidance_in, mode=mode, fullgraph=fullgraph, backend=backend)\r\n        # diffusion_model.img_in = torch.compile(diffusion_model.img_in, mode=mode, fullgraph=fullgraph, backend=backend)\r\n        # diffusion_model.time_in = torch.compile(diffusion_model.time_in, mode=mode, fullgraph=fullgraph, backend=backend)\r\n        # diffusion_model.txt_in = torch.compile(diffusion_model.txt_in, mode=mode, fullgraph=fullgraph, backend=backend)\r\n        # diffusion_model.vector_in = torch.compile(diffusion_model.vector_in, mode=mode, fullgraph=fullgraph, backend=backend)"
  },
  {
    "path": "legacy/noise_classes.py",
    "content": "import torch\r\nfrom torch import nn, Tensor, Generator, lerp\r\nfrom torch.nn.functional import unfold\r\nimport torch.nn.functional as F\r\nfrom typing import Callable, Tuple\r\nfrom math import pi\r\nfrom comfy.k_diffusion.sampling import BrownianTreeNoiseSampler\r\nfrom torch.distributions import StudentT, Laplace\r\nimport numpy as np\r\nimport pywt\r\nimport functools\r\nfrom ..res4lyf import RESplain\r\n\r\n# Set this to \"True\" if you have installed OpenSimplex. Recommended to install without dependencies due to conflicting packages: pip3 install opensimplex --no-deps \r\nOPENSIMPLEX_ENABLE = False\r\n\r\nif OPENSIMPLEX_ENABLE:\r\n    from opensimplex import OpenSimplex\r\n\r\nclass PrecisionTool:\r\n    def __init__(self, cast_type='fp64'):\r\n        self.cast_type = cast_type\r\n\r\n    def cast_tensor(self, func):\r\n        @functools.wraps(func)\r\n        def wrapper(*args, **kwargs):\r\n            if self.cast_type not in ['fp64', 'fp32', 'fp16']:\r\n                return func(*args, **kwargs)\r\n\r\n            target_device = None\r\n            for arg in args:\r\n                if torch.is_tensor(arg):\r\n                    target_device = arg.device\r\n                    break\r\n            if target_device is None:\r\n                for v in kwargs.values():\r\n                    if torch.is_tensor(v):\r\n                        target_device = v.device\r\n                        break\r\n            \r\n        # recursively zs_recast tensors in nested dictionaries\r\n            def cast_and_move_to_device(data):\r\n                if torch.is_tensor(data):\r\n                    if self.cast_type == 'fp64':\r\n                        return data.to(torch.float64).to(target_device)\r\n                    elif self.cast_type == 'fp32':\r\n                        return data.to(torch.float32).to(target_device)\r\n                    elif self.cast_type == 'fp16':\r\n                        return data.to(torch.float16).to(target_device)\r\n                elif isinstance(data, dict):\r\n                    return {k: cast_and_move_to_device(v) for k, v in data.items()}\r\n                return data\r\n\r\n            new_args = [cast_and_move_to_device(arg) for arg in args]\r\n            new_kwargs = {k: cast_and_move_to_device(v) for k, v in kwargs.items()}\r\n            \r\n            return func(*new_args, **new_kwargs)\r\n        return wrapper\r\n\r\n    def set_cast_type(self, new_value):\r\n        if new_value in ['fp64', 'fp32', 'fp16']:\r\n            self.cast_type = new_value\r\n        else:\r\n            self.cast_type = 'fp64'\r\n\r\nprecision_tool = PrecisionTool(cast_type='fp64')\r\n\r\n\r\ndef noise_generator_factory(cls, **fixed_params):\r\n    def create_instance(**kwargs):\r\n        params = {**fixed_params, **kwargs}\r\n        return cls(**params)\r\n    return create_instance\r\n\r\ndef like(x):\r\n    return {'size': x.shape, 'dtype': x.dtype, 'layout': x.layout, 'device': x.device}\r\n\r\ndef scale_to_range(x, scaled_min = -1.73, scaled_max = 1.73): #1.73 is roughly the square root of 3\r\n    return scaled_min + (x - x.min()) * (scaled_max - scaled_min) / (x.max() - x.min())\r\n\r\ndef normalize(x):\r\n     return (x - x.mean())/ x.std()\r\n\r\nclass NoiseGenerator:\r\n    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None):\r\n        self.seed = seed\r\n\r\n        if x is not None:\r\n            self.x      = x\r\n            
            self.size   = x.shape\r\n            self.dtype  = x.dtype\r\n            self.layout = x.layout\r\n            self.device = x.device\r\n        else:\r\n            self.x      = torch.zeros(size, dtype=dtype, layout=layout, device=device) # keyword args; torch.zeros does not accept these positionally\r\n            self.size   = self.x.shape\r\n            self.dtype  = self.x.dtype\r\n            self.layout = self.x.layout\r\n            self.device = self.x.device\r\n\r\n        # allow overriding parameters imported from latent 'x' if specified\r\n        if size is not None:\r\n            self.size   = size\r\n        if dtype is not None:\r\n            self.dtype  = dtype\r\n        if layout is not None:\r\n            self.layout = layout\r\n        if device is not None:\r\n            self.device = device\r\n\r\n        self.sigma_max = sigma_max.to(device) if isinstance(sigma_max, torch.Tensor) else sigma_max\r\n        self.sigma_min = sigma_min.to(device) if isinstance(sigma_min, torch.Tensor) else sigma_min\r\n\r\n        self.last_seed = seed\r\n        \r\n        if generator is None:\r\n            self.generator = torch.Generator(device=self.device).manual_seed(seed)\r\n        else:\r\n            self.generator = generator\r\n\r\n    def __call__(self):\r\n        raise NotImplementedError(\"This method got clownsharked!\")\r\n    \r\n    def update(self, **kwargs):\r\n        \r\n        if not isinstance(self, BrownianNoiseGenerator):\r\n            self.last_seed += 1\r\n        \r\n        updated_values = []\r\n        for attribute_name, value in kwargs.items():\r\n            if value is not None:\r\n                setattr(self, attribute_name, value)\r\n            updated_values.append(getattr(self, attribute_name))\r\n        return tuple(updated_values)\r\n\r\n\r\n\r\nclass BrownianNoiseGenerator(NoiseGenerator):\r\n    def __call__(self, *, sigma=None, sigma_next=None, **kwargs):\r\n        return BrownianTreeNoiseSampler(self.x, self.sigma_min, self.sigma_max, seed=self.seed, cpu = self.device.type=='cpu')(sigma, sigma_next)\r\n\r\n\r\n\r\nclass FractalNoiseGenerator(NoiseGenerator):\r\n    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, \r\n                alpha=0.0, k=1.0, scale=0.1): \r\n        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)\r\n        self.update(alpha=alpha, k=k, scale=scale)\r\n\r\n    def __call__(self, *, alpha=None, k=None, scale=None, **kwargs):\r\n        self.update(alpha=alpha, k=k, scale=scale)\r\n        if len(self.size) == 5:\r\n            b, c, t, h, w = self.size\r\n        else:\r\n            b, c, h, w = self.size\r\n        \r\n        noise = torch.normal(mean=0.0, std=1.0, size=self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator)\r\n        \r\n        y_freq = torch.fft.fftfreq(h, 1/h, device=self.device)\r\n        x_freq = torch.fft.fftfreq(w, 1/w, device=self.device)\r\n\r\n        if len(self.size) == 5:\r\n            # frequency grid ordered (t, h, w) so it broadcasts against noise of shape (b, c, t, h, w)\r\n            t_freq = torch.fft.fftfreq(t, 1/t, device=self.device)\r\n            freq = torch.sqrt(t_freq[:, None, None]**2 + y_freq[None, :, None]**2 + x_freq[None, None, :]**2).clamp(min=1e-10)\r\n        else:\r\n            freq = torch.sqrt(y_freq[:, None]**2 + x_freq[None, :]**2).clamp(min=1e-10)\r\n        \r\n        spectral_density = self.k / torch.pow(freq, self.alpha * self.scale)\r\n        spectral_density[(0,) * spectral_density.dim()] = 0 # zero the DC component\r\n\r\n        noise_fft = torch.fft.fftn(noise)\r\n        modified_fft = noise_fft * spectral_density\r\n        noise = torch.fft.ifftn(modified_fft).real\r\n\r\n        return noise / torch.std(noise)\r\n    \r\n    \r\n\r\n
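# a minimal usage sketch (illustrative only, assuming a 4-channel 64x64 latent): the spectral density k / f**(alpha*scale)\r\n# boosts low frequencies for alpha > 0 (pink/red noise) and high frequencies for alpha < 0 (blue noise):\r\n#   gen = FractalNoiseGenerator(x=torch.zeros(1, 4, 64, 64), seed=0, alpha=1.0, k=1.0, scale=0.1)\r\n#   pink = gen()  # unit-variance noise with shape (1, 4, 64, 64)\r\n\r\nclass 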
SimplexNoiseGenerator(NoiseGenerator):\r\n    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, \r\n                 scale=0.01):\r\n        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)\r\n        self.noise = OpenSimplex(seed=seed)\r\n        self.scale = scale\r\n        \r\n    def __call__(self, *, scale=None, **kwargs):\r\n        self.update(scale=scale)\r\n        \r\n        if len(self.size) == 5:\r\n            b, c, t, h, w = self.size\r\n        else:\r\n            b, c, h, w = self.size\r\n\r\n        noise_array = self.noise.noise3array(np.arange(w), np.arange(h), np.arange(c))\r\n        self.noise = OpenSimplex(seed=self.noise.get_seed()+1)\r\n        \r\n        noise_tensor = torch.from_numpy(noise_array).to(self.device)\r\n        noise_tensor = torch.unsqueeze(noise_tensor, dim=0)\r\n        if len(self.size) == 5:\r\n            noise_tensor = torch.unsqueeze(noise_tensor, dim=0)\r\n        \r\n        return noise_tensor / noise_tensor.std()\r\n        #return normalize(scale_to_range(noise_tensor))\r\n\r\n\r\n\r\nclass HiresPyramidNoiseGenerator(NoiseGenerator):\r\n    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, \r\n                 discount=0.7, mode='nearest-exact'):\r\n        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)\r\n        self.update(discount=discount, mode=mode)\r\n\r\n    def __call__(self, *, discount=None, mode=None, **kwargs):\r\n        self.update(discount=discount, mode=mode)\r\n\r\n        if len(self.size) == 5:\r\n            b, c, t, h, w = self.size\r\n            orig_h, orig_w, orig_t = h, w, t\r\n            u = nn.Upsample(size=(orig_t, orig_h, orig_w), mode=self.mode).to(self.device) # (t, h, w) ordering for 5D inputs\r\n        else:\r\n            b, c, h, w = self.size\r\n            orig_h, orig_w = h, w\r\n            orig_t = t = 1\r\n            u = nn.Upsample(size=(orig_h, orig_w), mode=self.mode).to(self.device)\r\n\r\n        noise = ((torch.rand(size=self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator) - 0.5) * 2 * 1.73)\r\n\r\n        for i in range(4):\r\n            r = torch.rand(1, device=self.device, generator=self.generator).item() * 2 + 2\r\n            h, w = min(orig_h * 15, int(h * (r ** i))), min(orig_w * 15, int(w * (r ** i)))\r\n            if len(self.size) == 5:\r\n                t = min(orig_t * 15, int(t * (r ** i)))\r\n                new_noise = torch.randn((b, c, t, h, w), dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator)\r\n            else:\r\n                new_noise = torch.randn((b, c, h, w), dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator)\r\n\r\n            upsampled_noise = u(new_noise)\r\n            noise += upsampled_noise * self.discount ** i\r\n            \r\n            if h >= orig_h * 15 or w >= orig_w * 15 or t >= orig_t * 15:\r\n                break  # if resolution is too high\r\n        \r\n        return noise / noise.std()\r\n\r\n\r\n\r\n
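# pyramid noise: accumulates gaussian noise drawn at progressively larger sizes (std 0.5**i), resampled back down\r\n# to the target size and weighted by discount**i, then renormalized to unit variance\r\nclass PyramidNoiseGenerator(NoiseGenerator):\r\n    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, \r\n                 discount=0.8, mode='nearest-exact'):\r\n        super().__init__(x, size, dtype, layout, device, seed, generator, 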
sigma_min, sigma_max)\r\n        self.update(discount=discount, mode=mode)\r\n\r\n    def __call__(self, *, discount=None, mode=None, **kwargs):\r\n        self.update(discount=discount, mode=mode)\r\n\r\n        x = torch.zeros(self.size, dtype=self.dtype, layout=self.layout, device=self.device)\r\n\r\n        if len(self.size) == 5:\r\n            b, c, t, h, w = self.size\r\n            orig_h, orig_w, orig_t = h, w, t\r\n        else:\r\n            b, c, h, w = self.size\r\n            orig_h, orig_w = h, w\r\n\r\n        r = 1\r\n        for i in range(5):\r\n            r *= 2\r\n\r\n            if len(self.size) == 5:\r\n                scaledSize = (b, c, t * r, h * r, w * r)\r\n                origSize = (orig_t, orig_h, orig_w) # (t, h, w) ordering to match the 5D input layout\r\n            else:\r\n                scaledSize = (b, c, h * r, w * r)\r\n                origSize = (orig_h, orig_w)\r\n\r\n            x += torch.nn.functional.interpolate(\r\n                torch.normal(mean=0, std=0.5 ** i, size=scaledSize, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator),\r\n                size=origSize, mode=self.mode\r\n            ) * self.discount ** i\r\n        return x / x.std()\r\n\r\n\r\n\r\nclass InterpolatedPyramidNoiseGenerator(NoiseGenerator):\r\n    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, \r\n                 discount=0.7, mode='nearest-exact'):\r\n        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)\r\n        self.update(discount=discount, mode=mode)\r\n\r\n\r\n    def __call__(self, *, discount=None, mode=None, **kwargs):\r\n        self.update(discount=discount, mode=mode)\r\n\r\n        if len(self.size) == 5:\r\n            b, c, t, h, w = self.size\r\n            orig_t, orig_h, orig_w = t, h, w\r\n        else:\r\n            b, c, h, w = self.size\r\n            orig_h, orig_w = h, w\r\n            t = orig_t = 1\r\n\r\n        noise = ((torch.rand(size=self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator) - 0.5) * 2 * 1.73)\r\n        multipliers = [1]\r\n\r\n        for i in range(4):\r\n            r = torch.rand(1, device=self.device, generator=self.generator).item() * 2 + 2\r\n            h, w = min(orig_h * 15, int(h * (r ** i))), min(orig_w * 15, int(w * (r ** i)))\r\n            \r\n            if len(self.size) == 5:\r\n                t = min(orig_t * 15, int(t * (r ** i)))\r\n                new_noise = torch.randn((b, c, t, h, w), dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator)\r\n                upsampled_noise = nn.functional.interpolate(new_noise, size=(orig_t, orig_h, orig_w), mode=self.mode)\r\n            else:\r\n                new_noise = torch.randn((b, c, h, w), dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator)\r\n                upsampled_noise = nn.functional.interpolate(new_noise, size=(orig_h, orig_w), mode=self.mode)\r\n\r\n            noise += upsampled_noise * self.discount ** i\r\n            multipliers.append(self.discount ** i)\r\n            \r\n            if h >= orig_h * 15 or w >= orig_w * 15 or (len(self.size) == 5 and t >= orig_t * 15):\r\n                break  # if resolution is too high\r\n        \r\n        noise = noise / sum([m ** 2 for m in multipliers]) ** 0.5\r\n        return noise / noise.std()\r\n\r\n\r\n\r\n
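# Stable Cascade stage-B style pyramid noise: adds progressively coarser gaussian offsets (weight 0.75**i)\r\n# upsampled back to full resolution, then renormalizes by the RMS of the accumulated multipliers\r\nclass CascadeBPyramidNoiseGenerator(NoiseGenerator):\r\n    def 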
__init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, \r\n                 levels=10, mode='nearest', size_range=[1,16]):\r\n        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)\r\n        self.update(epsilon=x, levels=levels, mode=mode, size_range=size_range)\r\n\r\n    def __call__(self, *, levels=10, mode='nearest', size_range=[1,16], **kwargs):\r\n        self.update(levels=levels, mode=mode)\r\n\r\n        b, c, h, w = self.size\r\n\r\n        epsilon = torch.randn(self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator)\r\n        multipliers = [1]\r\n        for i in range(1, levels):\r\n            m = 0.75 ** i\r\n\r\n            h, w = int(epsilon.size(-2) // (2 ** i)), int(epsilon.size(-1) // (2 ** i))\r\n            if size_range is None or (size_range[0] <= h <= size_range[1] or size_range[0] <= w <= size_range[1]):\r\n                offset = torch.randn(epsilon.size(0), epsilon.size(1), h, w, device=self.device, generator=self.generator)\r\n                epsilon = epsilon + torch.nn.functional.interpolate(offset, size=epsilon.shape[-2:], mode=self.mode) * m\r\n                multipliers.append(m)\r\n\r\n            if h <= 1 or w <= 1:\r\n                break\r\n        epsilon = epsilon / sum([m ** 2 for m in multipliers]) ** 0.5 # normalize by the root of the summed squared multipliers so variance stays ~1\r\n\r\n        return epsilon\r\n\r\n\r\nclass UniformNoiseGenerator(NoiseGenerator):\r\n    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, \r\n                 mean=0.0, scale=1.73):\r\n        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)\r\n        self.update(mean=mean, scale=scale)\r\n\r\n    def __call__(self, *, mean=None, scale=None, **kwargs):\r\n        self.update(mean=mean, scale=scale)\r\n\r\n        noise = torch.rand(self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator)\r\n\r\n        return self.scale * 2 * (noise - 0.5) + self.mean\r\n\r\nclass GaussianNoiseGenerator(NoiseGenerator):\r\n    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, \r\n                 mean=0.0, std=1.0):\r\n        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)\r\n        self.update(mean=mean, std=std)\r\n\r\n    def __call__(self, *, mean=None, std=None, **kwargs):\r\n        self.update(mean=mean, std=std)\r\n\r\n        noise = torch.randn(self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator)\r\n\r\n        return (noise - noise.mean()) / noise.std()\r\n\r\nclass GaussianBackwardsNoiseGenerator(NoiseGenerator):\r\n    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, \r\n                 mean=0.0, std=1.0):\r\n        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)\r\n        self.update(mean=mean, std=std)\r\n\r\n    def __call__(self, *, mean=None, std=None, **kwargs):\r\n        self.update(mean=mean, std=std)\r\n        RESplain(\"GaussianBackwards last seed:\", self.generator.initial_seed())\r\n        self.generator.manual_seed(self.generator.initial_seed() 
- 1)\r\n        noise = torch.randn(self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator)\r\n\r\n        return (noise - noise.mean()) / noise.std()\r\n\r\nclass LaplacianNoiseGenerator(NoiseGenerator):\r\n    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, \r\n                 loc=0, scale=1.0):\r\n        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)\r\n        self.update(loc=loc, scale=scale)\r\n\r\n    def __call__(self, *, loc=None, scale=None, **kwargs):\r\n        self.update(loc=loc, scale=scale)\r\n\r\n        noise = torch.randn(self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator) / 4.0\r\n\r\n        rng_state = torch.random.get_rng_state()\r\n        torch.manual_seed(self.generator.initial_seed())\r\n        laplacian_noise = Laplace(loc=self.loc, scale=self.scale).rsample(self.size).to(self.device)\r\n        self.generator.manual_seed(self.generator.initial_seed() + 1)\r\n        torch.random.set_rng_state(rng_state)\r\n\r\n        noise += laplacian_noise\r\n        return noise / noise.std()\r\n\r\nclass StudentTNoiseGenerator(NoiseGenerator):\r\n    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, \r\n                 loc=0, scale=0.2, df=1):\r\n        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)\r\n        self.update(loc=loc, scale=scale, df=df)\r\n\r\n    def __call__(self, *, loc=None, scale=None, df=None, **kwargs):\r\n        self.update(loc=loc, scale=scale, df=df)\r\n\r\n        rng_state = torch.random.get_rng_state()\r\n        torch.manual_seed(self.generator.initial_seed())\r\n\r\n        noise = StudentT(loc=self.loc, scale=self.scale, df=self.df).rsample(self.size)\r\n\r\n        s = torch.quantile(noise.flatten(start_dim=1).abs(), 0.75, dim=-1)\r\n        \r\n        if len(self.size) == 5:\r\n            s = s.reshape(*s.shape, 1, 1, 1, 1)\r\n        else:\r\n            s = s.reshape(*s.shape, 1, 1, 1)\r\n\r\n        noise = noise.clamp(-s, s)\r\n        noise_latent = torch.copysign(torch.pow(torch.abs(noise), 0.5), noise).to(self.device)\r\n\r\n        self.generator.manual_seed(self.generator.initial_seed() + 1)\r\n        torch.random.set_rng_state(rng_state)\r\n        return (noise_latent - noise_latent.mean()) / noise_latent.std()\r\n\r\nclass WaveletNoiseGenerator(NoiseGenerator):\r\n    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, \r\n                 wavelet='haar'):\r\n        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)\r\n        self.update(wavelet=wavelet)\r\n\r\n    def __call__(self, *, wavelet=None, **kwargs):\r\n        self.update(wavelet=wavelet)\r\n\r\n        # noise for spatial dimensions only\r\n        coeffs = pywt.wavedecn(torch.randn(self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator).cpu().numpy(), wavelet=self.wavelet, mode='periodization') # pywt operates on NumPy arrays, so move the tensor off-device first\r\n        noise = pywt.waverecn(coeffs, 
wavelet=self.wavelet, mode='periodization')\r\n        noise_tensor = torch.tensor(noise, dtype=self.dtype, device=self.device)\r\n\r\n        noise_tensor = (noise_tensor - noise_tensor.mean()) / noise_tensor.std()\r\n        return noise_tensor\r\n\r\nclass PerlinNoiseGenerator(NoiseGenerator):\r\n    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, \r\n                 detail=0.0):\r\n        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)\r\n        self.update(detail=detail)\r\n\r\n    @staticmethod\r\n    def get_positions(block_shape: Tuple[int, int]) -> Tensor:\r\n        bh, bw = block_shape\r\n        positions = torch.stack(\r\n            torch.meshgrid(\r\n                [(torch.arange(b) + 0.5) / b for b in (bw, bh)],\r\n                indexing=\"xy\",\r\n            ),\r\n            -1,\r\n        ).view(1, bh, bw, 1, 1, 2)\r\n        return positions\r\n\r\n    @staticmethod\r\n    def unfold_grid(vectors: Tensor) -> Tensor:\r\n        batch_size, _, gpy, gpx = vectors.shape\r\n        return (\r\n            unfold(vectors, (2, 2))\r\n            .view(batch_size, 2, 4, -1)\r\n            .permute(0, 2, 3, 1)\r\n            .view(batch_size, 4, gpy - 1, gpx - 1, 2)\r\n        )\r\n\r\n    @staticmethod\r\n    def smooth_step(t: Tensor) -> Tensor:\r\n        return t * t * (3.0 - 2.0 * t)\r\n\r\n    def perlin_noise_tensor(\r\n        self,\r\n        vectors: Tensor, positions: Tensor, step: Callable = None\r\n    ) -> Tensor:\r\n        if step is None:\r\n            step = self.smooth_step\r\n\r\n        batch_size = vectors.shape[0]\r\n        # grid height, grid width\r\n        gh, gw = vectors.shape[2:4]\r\n        # block height, block width\r\n        bh, bw = positions.shape[1:3]\r\n\r\n        for i in range(2):\r\n            if positions.shape[i + 3] not in (1, vectors.shape[i + 2]):\r\n                raise Exception(\r\n                    f\"Blocks shapes do not match: vectors ({vectors.shape[2]}, {vectors.shape[3]}), positions ({positions.shape[3]}, {positions.shape[4]})\"\r\n                )\r\n\r\n        if positions.shape[0] not in (1, batch_size):\r\n            raise Exception(\r\n                f\"Batch sizes do not match: vectors ({vectors.shape[0]}), positions ({positions.shape[0]})\"\r\n            )\r\n\r\n        vectors = vectors.view(batch_size, 4, 1, gh * gw, 2)\r\n        positions = positions.view(positions.shape[0], bh * bw, -1, 2)\r\n\r\n        step_x = step(positions[..., 0])\r\n        step_y = step(positions[..., 1])\r\n\r\n        row0 = lerp(\r\n            (vectors[:, 0] * positions).sum(dim=-1),\r\n            (vectors[:, 1] * (positions - positions.new_tensor((1, 0)))).sum(dim=-1),\r\n            step_x,\r\n        )\r\n        row1 = lerp(\r\n            (vectors[:, 2] * (positions - positions.new_tensor((0, 1)))).sum(dim=-1),\r\n            (vectors[:, 3] * (positions - positions.new_tensor((1, 1)))).sum(dim=-1),\r\n            step_x,\r\n        )\r\n        noise = lerp(row0, row1, step_y)\r\n        return (\r\n            noise.view(\r\n                batch_size,\r\n                bh,\r\n                bw,\r\n                gh,\r\n                gw,\r\n            )\r\n            .permute(0, 3, 1, 4, 2)\r\n            .reshape(batch_size, gh * bh, gw * bw)\r\n        )\r\n\r\n    def perlin_noise(\r\n        self,\r\n        grid_shape: Tuple[int, int],\r\n        out_shape: Tuple[int, int],\r\n        
batch_size: int = 1,\r\n        generator: Generator = None,\r\n        *args,\r\n        **kwargs,\r\n    ) -> Tensor:\r\n        gh, gw = grid_shape         # grid height and width\r\n        oh, ow = out_shape        # output height and width\r\n        bh, bw = oh // gh, ow // gw        # block height and width\r\n\r\n        if oh != bh * gh:\r\n            raise Exception(f\"Output height {oh} must be divisible by grid height {gh}\")\r\n        if ow != bw * gw:\r\n            raise Exception(f\"Output width {ow} must be divisible by grid width {gw}\")\r\n\r\n        angle = torch.empty(\r\n            [batch_size] + [s + 1 for s in grid_shape], device=self.device, *args, **kwargs\r\n        ).uniform_(to=2.0 * pi, generator=self.generator)\r\n        # random vectors on grid points\r\n        vectors = self.unfold_grid(torch.stack((torch.cos(angle), torch.sin(angle)), dim=1))\r\n        # positions inside grid cells [0, 1)\r\n        positions = self.get_positions((bh, bw)).to(vectors)\r\n        return self.perlin_noise_tensor(vectors, positions).squeeze(0)\r\n\r\n    def __call__(self, *, detail=None, **kwargs):\r\n        self.update(detail=detail) #currently unused\r\n\r\n        if len(self.size) == 5:\r\n            b, c, t, h, w = self.size\r\n            noise = torch.randn(self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator) / 2.0\r\n            \r\n            for tt in range(t):\r\n                for i in range(2):\r\n                    perlin_slice = self.perlin_noise((h, w), (h, w), batch_size=c, generator=self.generator).to(self.device)\r\n                    perlin_expanded = perlin_slice.unsqueeze(0).unsqueeze(2)\r\n                    noise[:, :, tt:tt+1, :, :] += perlin_expanded\r\n        else:\r\n            b, c, h, w = self.size\r\n            #orig_h, orig_w = h, w\r\n\r\n            noise = torch.randn(self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator) / 2.0\r\n            for i in range(2):\r\n                noise += self.perlin_noise((h, w), (h, w), batch_size=c, generator=self.generator).to(self.device)\r\n                \r\n        return noise / noise.std()\r\n    \r\nfrom functools import partial\r\n\r\nNOISE_GENERATOR_CLASSES = {\r\n    \"fractal\": FractalNoiseGenerator,\r\n\r\n    \"gaussian\": GaussianNoiseGenerator,\r\n    \"gaussian_backwards\": GaussianBackwardsNoiseGenerator,\r\n    \"uniform\": UniformNoiseGenerator,\r\n    \"pyramid-cascade_B\": CascadeBPyramidNoiseGenerator,\r\n    \"pyramid-interpolated\": InterpolatedPyramidNoiseGenerator,\r\n    \"pyramid-bilinear\": noise_generator_factory(PyramidNoiseGenerator, mode='bilinear'),\r\n    \"pyramid-bicubic\": noise_generator_factory(PyramidNoiseGenerator, mode='bicubic'),   \r\n    \"pyramid-nearest\": noise_generator_factory(PyramidNoiseGenerator, mode='nearest'),  \r\n    \"hires-pyramid-bilinear\": noise_generator_factory(HiresPyramidNoiseGenerator, mode='bilinear'),\r\n    \"hires-pyramid-bicubic\": noise_generator_factory(HiresPyramidNoiseGenerator, mode='bicubic'),   \r\n    \"hires-pyramid-nearest\": noise_generator_factory(HiresPyramidNoiseGenerator, mode='nearest'),  \r\n    \"brownian\": BrownianNoiseGenerator,\r\n    \"laplacian\": LaplacianNoiseGenerator,\r\n    \"studentt\": StudentTNoiseGenerator,\r\n    \"wavelet\": WaveletNoiseGenerator,\r\n    \"perlin\": 
PerlinNoiseGenerator,\r\n}\r\n\r\n\r\nNOISE_GENERATOR_CLASSES_SIMPLE = {\r\n    \"none\": GaussianNoiseGenerator,\r\n    \"brownian\": BrownianNoiseGenerator,\r\n    \"gaussian\": GaussianNoiseGenerator,\r\n    \"gaussian_backwards\": GaussianBackwardsNoiseGenerator,\r\n    \"laplacian\": LaplacianNoiseGenerator,\r\n    \"perlin\": PerlinNoiseGenerator,\r\n    \"studentt\": StudentTNoiseGenerator,\r\n    \"uniform\": UniformNoiseGenerator,\r\n    \"wavelet\": WaveletNoiseGenerator,\r\n    \"brown\": noise_generator_factory(FractalNoiseGenerator, alpha=2.0),\r\n    \"pink\": noise_generator_factory(FractalNoiseGenerator, alpha=1.0),\r\n    \"white\": noise_generator_factory(FractalNoiseGenerator, alpha=0.0),\r\n    \"blue\": noise_generator_factory(FractalNoiseGenerator, alpha=-1.0),\r\n    \"violet\": noise_generator_factory(FractalNoiseGenerator, alpha=-2.0),\r\n    \"hires-pyramid-bicubic\": noise_generator_factory(HiresPyramidNoiseGenerator, mode='bicubic'),   \r\n    \"hires-pyramid-bilinear\": noise_generator_factory(HiresPyramidNoiseGenerator, mode='bilinear'),\r\n    \"hires-pyramid-nearest\": noise_generator_factory(HiresPyramidNoiseGenerator, mode='nearest'),  \r\n    \"pyramid-bicubic\": noise_generator_factory(PyramidNoiseGenerator, mode='bicubic'),   \r\n    \"pyramid-bilinear\": noise_generator_factory(PyramidNoiseGenerator, mode='bilinear'),\r\n    \"pyramid-nearest\": noise_generator_factory(PyramidNoiseGenerator, mode='nearest'),  \r\n    \"pyramid-interpolated\": InterpolatedPyramidNoiseGenerator,\r\n    \"pyramid-cascade_B\": CascadeBPyramidNoiseGenerator,\r\n}\r\n\r\nif OPENSIMPLEX_ENABLE:\r\n    NOISE_GENERATOR_CLASSES.update({\r\n        \"simplex\": SimplexNoiseGenerator,\r\n    })\r\n\r\nNOISE_GENERATOR_NAMES = tuple(NOISE_GENERATOR_CLASSES.keys())\r\nNOISE_GENERATOR_NAMES_SIMPLE = tuple(NOISE_GENERATOR_CLASSES_SIMPLE.keys())\r\n\r\n\r\n@precision_tool.cast_tensor\r\ndef prepare_noise(latent_image, seed, noise_type, noise_inds=None, alpha=1.0, k=1.0): # adapted from comfy/sample.py: https://github.com/comfyanonymous/ComfyUI\r\n    # optional arg noise_inds can be used to select specific per-batch noise generations (and reuse duplicates) for a given seed\r\n    noise_func = NOISE_GENERATOR_CLASSES.get(noise_type)(x=latent_image, seed=seed, sigma_min=0.0291675, sigma_max=14.614642)\r\n\r\n    if noise_type == \"fractal\":\r\n        noise_func.alpha = alpha\r\n        noise_func.k = k\r\n\r\n    # from here until return is very similar to comfy/sample.py \r\n    if noise_inds is None:\r\n        return noise_func(sigma=14.614642, sigma_next=0.0291675)\r\n\r\n    unique_inds, inverse = np.unique(noise_inds, return_inverse=True)\r\n    noises = []\r\n    for i in range(unique_inds[-1]+1):\r\n        noise = noise_func(size = [1] + list(latent_image.size())[1:], dtype=latent_image.dtype, layout=latent_image.layout, device=latent_image.device)\r\n        if i in unique_inds:\r\n            noises.append(noise)\r\n    noises = [noises[i] for i in inverse]\r\n    noises = torch.cat(noises, axis=0)\r\n    return noises\r\n\r\n"
  },
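  {
    "path": "examples/pyramid_noise_normalization_sketch.py",
    "content": "\"\"\"Illustrative sketch, NOT part of the library: why the pyramid-style generators\r\nin the noise classes above (InterpolatedPyramidNoiseGenerator,\r\nCascadeBPyramidNoiseGenerator) divide by the square root of the sum of squared\r\nmultipliers.\r\n\r\nEach pyramid level contributes an independent ~unit-variance layer scaled by a\r\nmultiplier m_i, so the stacked result has variance sum(m_i**2); dividing by\r\nsqrt(sum(m_i**2)) restores ~unit std. This standalone demo ignores the upsampling\r\nstep (nearest-mode upsampling preserves per-pixel variance). The file name and\r\nthe tensor shape are hypothetical, chosen only for the demo.\"\"\"\r\nimport torch\r\n\r\nmultipliers = [0.75 ** i for i in range(5)]  # same decay as CascadeBPyramidNoiseGenerator\r\n\r\nacc = torch.zeros(1, 4, 64, 64)\r\nfor m in multipliers:\r\n    acc = acc + m * torch.randn_like(acc)  # one independent unit-variance layer per level\r\n\r\nexpected_std = sum(m ** 2 for m in multipliers) ** 0.5\r\nprint(f\"stacked std {acc.std():.4f} vs expected {expected_std:.4f}\")\r\nprint(f\"normalized std {(acc / expected_std).std():.4f}\")  # ~1.0\r\n"
  },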
  {
    "path": "legacy/noise_sigmas_timesteps_scaling.py",
    "content": "import torch\r\n#from..noise_classes import *\r\nimport comfy.model_patcher\r\nfrom .helper import has_nested_attr\r\n\r\ndef get_alpha_ratio_from_sigma_up(sigma_up, sigma_next, eta, sigma_max=1.0):\r\n    if sigma_up >= sigma_next and sigma_next > 0:\r\n        print(\"Maximum VPSDE noise level exceeded: falling back to hard noise mode.\")\r\n        # Values below are the theoretical max, but break with exponential integrator stepsize calcs:\r\n        #sigma_up = sigma_next\r\n        #alpha_ratio = sigma_max - sigma_next\r\n        #sigma_down = 0 * sigma_next\r\n        #return alpha_ratio, sigma_up, sigma_down\r\n        if eta >= 1:\r\n            sigma_up = sigma_next * 0.9999 #avoid sqrt(neg_num) later \r\n        else:\r\n            sigma_up = sigma_next * eta \r\n        \r\n    sigma_signal = sigma_max - sigma_next\r\n    sigma_residual = torch.sqrt(sigma_next**2 - sigma_up**2)\r\n\r\n    alpha_ratio = sigma_signal + sigma_residual\r\n    sigma_down = sigma_residual / alpha_ratio\r\n    return alpha_ratio, sigma_up, sigma_down\r\n\r\ndef get_alpha_ratio_from_sigma_down(sigma_down, sigma_next, eta, sigma_max=1.0):\r\n    alpha_ratio = (1 - sigma_next) / (1 - sigma_down) \r\n    sigma_up = (sigma_next ** 2 - sigma_down ** 2 * alpha_ratio ** 2) ** 0.5 \r\n    \r\n    if sigma_up >= sigma_next: # \"clamp\" noise level to max if max exceeded\r\n        alpha_ratio, sigma_up, sigma_down = get_alpha_ratio_from_sigma_up(sigma_up, sigma_next, eta, sigma_max)\r\n      \r\n    return alpha_ratio, sigma_up, sigma_down\r\n  \r\ndef get_ancestral_step_RF_var(sigma, sigma_next, eta, sigma_max=1.0):\r\n    dtype = sigma.dtype #calculate variance adjusted sigma up... sigma_up = sqrt(dt)\r\n\r\n    sigma, sigma_next = sigma.to(torch.float64), sigma_next.to(torch.float64) # float64 is very important to avoid numerical precision issues\r\n\r\n    sigma_diff = (sigma - sigma_next).abs() + 1e-10 \r\n    sigma_up = torch.sqrt(sigma_diff).to(torch.float64) * eta\r\n\r\n    sigma_down_num = (sigma_next**2 - sigma_up**2).to(torch.float64)\r\n    sigma_down = torch.sqrt(sigma_down_num) / ((1 - sigma_next).to(torch.float64) + torch.sqrt(sigma_down_num).to(torch.float64))\r\n\r\n    alpha_ratio = (1 - sigma_next).to(torch.float64) / (1 - sigma_down).to(torch.float64)\r\n    \r\n    return sigma_up.to(dtype),  sigma_down.to(dtype), alpha_ratio.to(dtype)\r\n  \r\ndef get_ancestral_step_RF_lorentzian(sigma, sigma_next, eta, sigma_max=1.0):\r\n    dtype = sigma.dtype\r\n    alpha = 1 / ((sigma.to(torch.float64))**2 + 1)\r\n    sigma_up = eta * (1 - alpha) ** 0.5\r\n    alpha_ratio, sigma_up, sigma_down = get_alpha_ratio_from_sigma_up(sigma_up, sigma_next, eta, sigma_max)\r\n    return sigma_up.to(dtype),  sigma_down.to(dtype), alpha_ratio.to(dtype)\r\n\r\ndef get_ancestral_step_EPS(sigma, sigma_next, eta=1.):\r\n    # Calculates the noise level (sigma_down) to step down to and the amount of noise to add (sigma_up) when doing an ancestral sampling step.\r\n    alpha_ratio = torch.full_like(sigma, 1.0)\r\n    \r\n    if not eta or not sigma_next:\r\n        return torch.full_like(sigma, 0.0), sigma_next, alpha_ratio\r\n      \r\n    sigma_up = min(sigma_next, eta * (sigma_next ** 2 * (sigma ** 2 - sigma_next ** 2) / sigma ** 2) ** 0.5)\r\n    sigma_down = (sigma_next ** 2 - sigma_up ** 2) ** 0.5\r\n    \r\n    return sigma_up, sigma_down, alpha_ratio\r\n\r\ndef get_ancestral_step_RF_sinusoidal(sigma_next, eta, sigma_max=1.0):\r\n    sigma_up = eta * sigma_next * torch.sin(torch.pi * sigma_next) 
** 2 \r\n    alpha_ratio, sigma_up, sigma_down = get_alpha_ratio_from_sigma_up(sigma_up, sigma_next, eta, sigma_max)\r\n    return sigma_up, sigma_down, alpha_ratio\r\n\r\ndef get_ancestral_step_RF_softer(sigma, sigma_next, eta, sigma_max=1.0):\r\n    # math adapted from get_ancestral_step_EPS to work with RF\r\n    sigma_down = sigma_next * torch.sqrt(1 - (eta**2 * (sigma**2 - sigma_next**2)) / sigma**2)\r\n    alpha_ratio, sigma_up, sigma_down = get_alpha_ratio_from_sigma_down(sigma_down, sigma_next, eta, sigma_max)\r\n    return sigma_up, sigma_down, alpha_ratio\r\n\r\ndef get_ancestral_step_RF_soft(sigma, sigma_next, eta, sigma_max=1.0):\r\n    \"\"\"Calculates the noise level (sigma_down) to step down to and the amount of noise to add (sigma_up) when doing a rectified flow sampling step, \r\n    and a mixing ratio (alpha_ratio) for scaling the latent during noise addition. Scale is to shape the sigma_down curve.\"\"\"\r\n    down_ratio = (1 - eta) + eta * ((sigma_next) / sigma)\r\n    sigma_down = down_ratio * sigma_next\r\n    alpha_ratio, sigma_up, sigma_down = get_alpha_ratio_from_sigma_down(sigma_down, sigma_next, eta, sigma_max)\r\n    return sigma_up, sigma_down, alpha_ratio\r\n\r\ndef get_ancestral_step_RF_soft_linear(sigma, sigma_next, eta, sigma_max=1.0):\r\n    sigma_down = sigma_next + eta * (sigma_next - sigma)\r\n    if sigma_down < 0:\r\n        return torch.full_like(sigma, 0.), sigma_next, torch.full_like(sigma, 1.)\r\n    alpha_ratio, sigma_up, sigma_down = get_alpha_ratio_from_sigma_down(sigma_down, sigma_next, eta, sigma_max)\r\n\r\n    return sigma_up, sigma_down, alpha_ratio\r\n\r\ndef get_ancestral_step_RF_exp(sigma, sigma_next, eta, sigma_max=1.0): # TODO: fix black image issue with linear RK\r\n    h = -torch.log(sigma_next/sigma)\r\n    sigma_up = sigma_next * (1 - (-2*eta*h).exp())**0.5 \r\n    alpha_ratio, sigma_up, sigma_down = get_alpha_ratio_from_sigma_up(sigma_up, sigma_next, eta, sigma_max)\r\n    return sigma_up, sigma_down, alpha_ratio\r\n\r\ndef get_ancestral_step_RF_sqrd(sigma, sigma_next, eta, sigma_max=1.0):\r\n    sigma_hat = sigma * (1 + eta)\r\n    sigma_up = (sigma_hat ** 2 - sigma ** 2) ** .5\r\n    alpha_ratio, sigma_up, sigma_down = get_alpha_ratio_from_sigma_up(sigma_up, sigma_next, eta, sigma_max)\r\n    return sigma_up, sigma_down, alpha_ratio\r\n  \r\ndef get_ancestral_step_RF_hard(sigma_next, eta, sigma_max=1.0):\r\n    sigma_up = sigma_next * eta \r\n    alpha_ratio, sigma_up, sigma_down = get_alpha_ratio_from_sigma_up(sigma_up, sigma_next, eta, sigma_max)\r\n    return sigma_up, sigma_down, alpha_ratio\r\n\r\ndef get_vpsde_step_RF(sigma, sigma_next, eta, sigma_max=1.0):\r\n    dt = sigma - sigma_next\r\n    sigma_up = eta * sigma * dt**0.5\r\n    alpha_ratio = 1 - dt * (eta**2/4) * (1 + sigma)\r\n    sigma_down = sigma_next - (eta/4)*sigma*(1-sigma)*(sigma - sigma_next)\r\n    return sigma_up, sigma_down, alpha_ratio\r\n  \r\ndef get_fuckery_step_RF(sigma, sigma_next, eta, sigma_max=1.0):\r\n    sigma_down = (1-eta) * sigma_next\r\n    sigma_up = torch.sqrt(sigma_next**2 - sigma_down**2)\r\n    alpha_ratio = torch.ones_like(sigma_next)\r\n    return sigma_up, sigma_down, alpha_ratio\r\n\r\n\r\ndef get_res4lyf_step_with_model(model, sigma, sigma_next, eta=0.0, noise_mode=\"hard\"):\r\n  \r\n    su, sd, alpha_ratio = torch.zeros_like(sigma), sigma_next.clone(), torch.ones_like(sigma)\r\n    \r\n    if has_nested_attr(model, \"inner_model.inner_model.model_sampling\"):\r\n        model_sampling = 
model.inner_model.inner_model.model_sampling\r\n    elif has_nested_attr(model, \"model.model_sampling\"):\r\n        model_sampling = model.model.model_sampling\r\n    \r\n    if isinstance(model_sampling, comfy.model_sampling.CONST):\r\n        sigma_var = (-1 + torch.sqrt(1 + 4 * sigma)) / 2                      # sigma_var = (torch.sqrt(1 + 4 * sigma) - 1) / 2       sigma_var = ((4*sigma+1)**0.5 - 1) / 2\r\n        if noise_mode == \"hard_var\" and eta > 0.0 and sigma_next > sigma_var:\r\n            su, sd, alpha_ratio = get_ancestral_step_RF_var(sigma, sigma_next, eta)\r\n        else:\r\n            if   noise_mode == \"soft\":\r\n                su, sd, alpha_ratio = get_ancestral_step_RF_soft(sigma, sigma_next, eta)\r\n            elif noise_mode == \"softer\":\r\n                su, sd, alpha_ratio = get_ancestral_step_RF_softer(sigma, sigma_next, eta)\r\n            elif noise_mode == \"hard_sq\": \r\n                su, sd, alpha_ratio = get_ancestral_step_RF_sqrd(sigma, sigma_next, eta)\r\n            elif noise_mode == \"sinusoidal\":\r\n                su, sd, alpha_ratio = get_ancestral_step_RF_sinusoidal(sigma_next, eta)\r\n            elif noise_mode == \"exp\": \r\n                su, sd, alpha_ratio = get_ancestral_step_RF_exp(sigma, sigma_next, eta)\r\n            elif noise_mode == \"soft-linear\": \r\n                su, sd, alpha_ratio = get_ancestral_step_RF_soft_linear(sigma, sigma_next, eta) \r\n            elif noise_mode == \"lorentzian\":\r\n                su, sd, alpha_ratio = get_ancestral_step_RF_lorentzian(sigma, sigma_next, eta)\r\n            elif noise_mode == \"vpsde\":\r\n                su, sd, alpha_ratio = get_vpsde_step_RF(sigma, sigma_next, eta)\r\n            elif noise_mode == \"fuckery\":\r\n                su, sd, alpha_ratio = get_fuckery_step_RF(sigma, sigma_next, eta)\r\n            else: #elif noise_mode == \"hard\": #fall back to hard noise from hard_var\r\n                su, sd, alpha_ratio = get_ancestral_step_RF_hard(sigma_next, eta)\r\n    else:\r\n        alpha_ratio = torch.full_like(sigma, 1.0)\r\n        if noise_mode == \"hard_sq\":\r\n            sd = sigma_next\r\n            sigma_hat = sigma * (1 + eta)\r\n            su = (sigma_hat ** 2 - sigma ** 2) ** .5\r\n            sigma = sigma_hat\r\n        elif noise_mode == \"hard\":\r\n            su = eta * sigma_next\r\n            sd = (sigma_next ** 2 - su ** 2) ** 0.5\r\n        elif noise_mode == \"exp\":\r\n            h = -torch.log(sigma_next/sigma)\r\n            su = sigma_next * (1 - (-2*eta*h).exp())**0.5 \r\n            sd = (sigma_next ** 2 - su ** 2) ** 0.5\r\n        else: #if noise_mode == \"soft\" or noise_mode == \"softer\": \r\n            su = min(sigma_next, eta * (sigma_next ** 2 * (sigma ** 2 - sigma_next ** 2) / sigma ** 2) ** 0.5)\r\n            #su, sd, alpha_ratio = get_ancestral_step_EPS(sigma, sigma_next, eta)\r\n            \r\n    \r\n    su = torch.nan_to_num(su, 0.0)\r\n    sd = torch.nan_to_num(sd, float(sigma_next))\r\n    alpha_ratio = torch.nan_to_num(alpha_ratio, 1.0)\r\n    \r\n    return su, sigma, sd, alpha_ratio\r\n\r\n\r\n\r\nNOISE_MODE_NAMES = [\"none\",\r\n                    \"hard_sq\",\r\n                    \"hard\",\r\n                    #\"hard_down\",\r\n                    \"lorentzian\", \r\n                    \"soft\", \r\n                    \"soft-linear\",\r\n                    \"softer\",\r\n                    \"eps\",\r\n                    \"sinusoidal\",\r\n                    \"exp\", \r\n                    
\"vpsde\",\r\n                    #\"fuckery\",\r\n                    \"hard_var\", \r\n                    ]\r\n\r\n\r\n\r\ndef get_res4lyf_half_step3(sigma, sigma_next, c2=0.5, c3=1.0, t_fn=None, sigma_fn=None, t_fn_formula=\"\", sigma_fn_formula=\"\", ):\r\n\r\n    t_fn_x     = eval(f\"lambda sigma: {t_fn_formula}\", {\"torch\": torch}) if t_fn_formula else t_fn\r\n    sigma_fn_x = eval(f\"lambda t: {sigma_fn_formula}\", {\"torch\": torch}) if sigma_fn_formula else sigma_fn\r\n        \r\n    t_x, t_next_x = t_fn_x(sigma), t_fn_x(sigma_next)\r\n    h_x = t_next_x - t_x\r\n\r\n    s2 = t_x + h_x * c2\r\n    s3 = t_x + h_x * c3\r\n    sigma_2 = sigma_fn_x(s2)\r\n    sigma_3 = sigma_fn_x(s3)\r\n\r\n    h = t_fn(sigma_next) - t_fn(sigma)\r\n    c2 = (t_fn(sigma_2) - t_fn(sigma)) / h    \r\n    c3 = (t_fn(sigma_3) - t_fn(sigma)) / h    \r\n    \r\n    return c2, c3\r\n\r\n\r\n"
  },
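  {
    "path": "examples/ancestral_step_invariant_sketch.py",
    "content": "\"\"\"Illustrative sketch, NOT part of the library: the variance split behind\r\nget_ancestral_step_EPS in legacy/noise_sigmas_timesteps_scaling.py.\r\n\r\nAn ancestral step denoises to sigma_down and then re-injects fresh noise with\r\nstd sigma_up; because independent noise adds in quadrature, the two satisfy\r\nsigma_down**2 + sigma_up**2 == sigma_next**2 by construction. The formulas are\r\ncopied from the module; the file name and the example sigma values are\r\nhypothetical.\"\"\"\r\nimport torch\r\n\r\nsigma, sigma_next, eta = torch.tensor(1.0), torch.tensor(0.7), 1.0\r\n\r\n# same formulas as get_ancestral_step_EPS\r\nsigma_up = min(sigma_next, eta * (sigma_next ** 2 * (sigma ** 2 - sigma_next ** 2) / sigma ** 2) ** 0.5)\r\nsigma_down = (sigma_next ** 2 - sigma_up ** 2) ** 0.5\r\n\r\n# quadrature invariant: the step ends with exactly sigma_next worth of noise\r\nassert torch.allclose(sigma_down ** 2 + sigma_up ** 2, sigma_next ** 2)\r\nprint(f\"sigma_up={float(sigma_up):.4f}, sigma_down={float(sigma_down):.4f}\")\r\n"
  },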
  {
    "path": "legacy/phi_functions.py",
    "content": "import torch\r\nimport math\r\nfrom typing import Optional\r\n\r\n\r\n# Remainder solution\r\ndef _phi(j, neg_h):\r\n    remainder = torch.zeros_like(neg_h)\r\n\r\n    for k in range(j): \r\n        remainder += (neg_h)**k / math.factorial(k)\r\n    phi_j_h = ((neg_h).exp() - remainder) / (neg_h)**j\r\n\r\n    return phi_j_h\r\n\r\ndef calculate_gamma(c2, c3):\r\n    return (3*(c3**3) - 2*c3) / (c2*(2 - 3*c2))\r\n\r\n# Exact analytic solution originally calculated by Clybius. https://github.com/Clybius/ComfyUI-Extra-Samplers/tree/main\r\ndef _gamma(n: int,) -> int:\r\n    \"\"\"\r\n    https://en.wikipedia.org/wiki/Gamma_function\r\n    for every positive integer n,\r\n    Γ(n) = (n-1)!\r\n    \"\"\"\r\n    return math.factorial(n-1)\r\n\r\ndef _incomplete_gamma(s: int, x: float, gamma_s: Optional[int] = None) -> float:\r\n    \"\"\"\r\n    https://en.wikipedia.org/wiki/Incomplete_gamma_function#Special_values\r\n    if s is a positive integer,\r\n    Γ(s, x) = (s-1)!*∑{k=0..s-1}(x^k/k!)\r\n    \"\"\"\r\n    if gamma_s is None:\r\n        gamma_s = _gamma(s)\r\n\r\n    sum_: float = 0\r\n    # {k=0..s-1} inclusive\r\n    for k in range(s):\r\n        numerator: float = x**k\r\n        denom: int = math.factorial(k)\r\n        quotient: float = numerator/denom\r\n        sum_ += quotient\r\n    incomplete_gamma_: float = sum_ * math.exp(-x) * gamma_s\r\n    return incomplete_gamma_\r\n\r\ndef phi(j: int, neg_h: float, ):\r\n    \"\"\"\r\n    For j={1,2,3}: you could alternatively use Kat's phi_1, phi_2, phi_3 which perform fewer steps\r\n\r\n    Lemma 1\r\n    https://arxiv.org/abs/2308.02157\r\n    ϕj(-h) = 1/h^j*∫{0..h}(e^(τ-h)*(τ^(j-1))/((j-1)!)dτ)\r\n\r\n    https://www.wolframalpha.com/input?i=integrate+e%5E%28%CF%84-h%29*%28%CF%84%5E%28j-1%29%2F%28j-1%29%21%29d%CF%84\r\n    = 1/h^j*[(e^(-h)*(-τ)^(-j)*τ(j))/((j-1)!)]{0..h}\r\n    https://www.wolframalpha.com/input?i=integrate+e%5E%28%CF%84-h%29*%28%CF%84%5E%28j-1%29%2F%28j-1%29%21%29d%CF%84+between+0+and+h\r\n    = 1/h^j*((e^(-h)*(-h)^(-j)*h^j*(Γ(j)-Γ(j,-h)))/(j-1)!)\r\n    = (e^(-h)*(-h)^(-j)*h^j*(Γ(j)-Γ(j,-h))/((j-1)!*h^j)\r\n    = (e^(-h)*(-h)^(-j)*(Γ(j)-Γ(j,-h))/(j-1)!\r\n    = (e^(-h)*(-h)^(-j)*(Γ(j)-Γ(j,-h))/Γ(j)\r\n    = (e^(-h)*(-h)^(-j)*(1-Γ(j,-h)/Γ(j))\r\n\r\n    requires j>0\r\n    \"\"\"\r\n    assert j > 0\r\n    gamma_: float = _gamma(j)\r\n    incomp_gamma_: float = _incomplete_gamma(j, neg_h, gamma_s=gamma_)\r\n    phi_: float = math.exp(neg_h) * neg_h**-j * (1-incomp_gamma_/gamma_)\r\n    return phi_\r\n\r\n\r\n\r\nclass Phi:\r\n    def __init__(self, h, c, analytic_solution=False): \r\n        self.h = h\r\n        self.c = c\r\n        self.cache = {}  \r\n        if analytic_solution:\r\n            self.phi_f = phi\r\n        else:\r\n            self.phi_f = _phi  # remainder method\r\n\r\n    def __call__(self, j, i=-1):\r\n        if (j, i) in self.cache:\r\n            return self.cache[(j, i)]\r\n\r\n        if i < 0:\r\n            c = 1\r\n        else:\r\n            c = self.c[i - 1]\r\n            if c == 0:\r\n                self.cache[(j, i)] = 0\r\n                return 0\r\n\r\n        if j == 0:\r\n            result = torch.exp(-self.h * c)\r\n        else:\r\n            result = self.phi_f(j, -self.h * c)\r\n\r\n        self.cache[(j, i)] = result\r\n\r\n        return result\r\n"
  },
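  {
    "path": "examples/phi_function_checks_sketch.py",
    "content": "\"\"\"Illustrative sketch, NOT part of the library: sanity checks for the phi\r\nfunctions in legacy/phi_functions.py.\r\n\r\nFrom the series form used by _phi, phi_j(z) = (e^z - sum_{k<j} z^k/k!) / z^j, it\r\nfollows that phi_1(z) = (e^z - 1)/z and that successive orders obey the\r\nrecurrence phi_{j+1}(z) = (phi_j(z) - 1/j!) / z. The file name and the test\r\nvalue of z are hypothetical.\"\"\"\r\nimport math\r\n\r\nimport torch\r\n\r\n\r\ndef phi_remainder(j, z):\r\n    # same remainder construction as _phi in phi_functions.py\r\n    remainder = torch.zeros_like(z)\r\n    for k in range(j):\r\n        remainder += z ** k / math.factorial(k)\r\n    return (z.exp() - remainder) / z ** j\r\n\r\n\r\nz = torch.tensor(-0.3, dtype=torch.float64)\r\nassert torch.allclose(phi_remainder(1, z), (z.exp() - 1) / z)\r\nfor j in range(1, 4):\r\n    lhs = phi_remainder(j + 1, z)\r\n    rhs = (phi_remainder(j, z) - 1 / math.factorial(j)) / z\r\n    assert torch.allclose(lhs, rhs)\r\nprint(\"phi identity checks passed\")\r\n"
  },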
  {
    "path": "legacy/rk_coefficients.py",
    "content": "import torch\r\nimport copy\r\nimport math\r\n\r\nfrom .deis_coefficients import get_deis_coeff_list\r\nfrom .phi_functions import *\r\nfrom .helper import get_extra_options_kv\r\n\r\n\r\nfrom itertools import permutations #, combinations\r\nimport random\r\n\r\nRK_SAMPLER_NAMES = [\"none\",\r\n                    \"res_2m\",\r\n                    \"res_3m\",\r\n                    \"res_2s\", \r\n                    \"res_3s\",\r\n                    \"res_3s_alt\",\r\n                    \"res_3s_cox_matthews\",\r\n                    \"res_3s_lie\",\r\n                    \"res_3s_strehmel_weiner\",\r\n                    \"res_4s_krogstad\",\r\n                    \"res_4s_strehmel_weiner\",\r\n                    \"res_4s_cox_matthews\",\r\n                    \"res_4s_munthe-kaas\",\r\n                    \"res_5s\",\r\n                    \"res_6s\",\r\n                    \"res_8s\",\r\n\r\n                    \"res_10s\",\r\n                    \"res_15s\",\r\n                    \"res_16s\",\r\n                    \r\n                    \"etdrk2_2s\",\r\n                    \"etdrk3_a_3s\",\r\n                    \"etdrk3_b_3s\",\r\n\r\n                    #\"etdrk4_4s\"\r\n\r\n                    \"deis_2m\",\r\n                    \"deis_3m\", \r\n                    \"deis_4m\",\r\n                    \r\n                    \"ralston_2s\",\r\n                    \"ralston_3s\",\r\n                    \"ralston_4s\", \r\n                    \r\n                    \"dpmpp_2m\",\r\n                    \"dpmpp_3m\",\r\n                    \"dpmpp_2s\",\r\n                    \"dpmpp_sde_2s\",\r\n                    \"dpmpp_3s\",\r\n                    \r\n                    \"lawson4_4s\",\r\n                    \"genlawson41_4s\",\r\n                    \"modgenlawson41_4s\",\r\n\r\n\r\n\r\n                    \"midpoint_2s\",\r\n                    \"heun_2s\", \r\n                    \"heun_3s\", \r\n                    \r\n                    \"houwen-wray_3s\",\r\n                    \"kutta_3s\", \r\n                    \"ssprk3_3s\",\r\n                    \r\n                    \"rk38_4s\",\r\n                    \"rk4_4s\", \r\n                    \"rk5_7s\",\r\n                    \"rk6_7s\",\r\n\r\n                    \"bogacki-shampine_4s\",\r\n                    \"bogacki-shampine_7s\",\r\n\r\n                    \"dormand-prince_6s\", \r\n                    \"dormand-prince_13s\", \r\n\r\n                    \"tsi_7s\",\r\n                    #\"verner_robust_16s\",\r\n\r\n                    \"ddim\",\r\n                    \"buehler\",\r\n                    ]\r\n\r\n\r\nIRK_SAMPLER_NAMES = [\"none\",\r\n                    \"explicit_diagonal\", \r\n                    \"explicit_full\",\r\n                    \r\n                    \"irk_exp_diag_2s\",\r\n                    \r\n                    \"gauss-legendre_2s\",\r\n                    \"gauss-legendre_3s\", \r\n                    \"gauss-legendre_4s\",\r\n                    \"gauss-legendre_5s\",\r\n                    \r\n                    \"radau_ia_2s\",\r\n                    \"radau_ia_3s\",\r\n                    \"radau_iia_2s\",\r\n                    \"radau_iia_3s\",\r\n                    \r\n                    \"lobatto_iiia_2s\",\r\n                    \"lobatto_iiia_3s\",\r\n                    \"lobatto_iiib_2s\",\r\n                    \"lobatto_iiib_3s\",\r\n                    \"lobatto_iiic_2s\",\r\n                    \"lobatto_iiic_3s\",\r\n              
      \"lobatto_iiic_star_2s\",\r\n                    \"lobatto_iiic_star_3s\",\r\n                    \"lobatto_iiid_2s\",\r\n                    \"lobatto_iiid_3s\",\r\n                    \r\n                    \"kraaijevanger_spijker_2s\",\r\n                    \"qin_zhang_2s\",\r\n                    \r\n                    \"pareschi_russo_2s\",\r\n                    \"pareschi_russo_alt_2s\",\r\n                    \r\n                    \"crouzeix_2s\",\r\n                    \"crouzeix_3s\",\r\n                    ]\r\n\r\nalpha_crouzeix = (2/(3**0.5)) * math.cos(math.pi / 18)\r\n\r\nrk_coeff = {\r\n    \"gauss-legendre_5s\": (\r\n    [\r\n        [4563950663 / 32115191526, \r\n         (310937500000000 / 2597974476091533 + 45156250000 * (739**0.5) / 8747388808389), \r\n         (310937500000000 / 2597974476091533 - 45156250000 * (739**0.5) / 8747388808389),\r\n         (5236016175 / 88357462711 + 709703235 * (739**0.5) / 353429850844),\r\n         (5236016175 / 88357462711 - 709703235 * (739**0.5) / 353429850844)],\r\n         \r\n        [(4563950663 / 32115191526 - 38339103 * (739**0.5) / 6250000000),\r\n         (310937500000000 / 2597974476091533 + 9557056475401 * (739**0.5) / 3498955523355600000),\r\n         (310937500000000 / 2597974476091533 - 14074198220719489 * (739**0.5) / 3498955523355600000),\r\n         (5236016175 / 88357462711 + 5601362553163918341 * (739**0.5) / 2208936567775000000000),\r\n         (5236016175 / 88357462711 - 5040458465159165409 * (739**0.5) / 2208936567775000000000)],\r\n         \r\n        [(4563950663 / 32115191526 + 38339103 * (739**0.5) / 6250000000),\r\n         (310937500000000 / 2597974476091533 + 14074198220719489 * (739**0.5) / 3498955523355600000),\r\n         (310937500000000 / 2597974476091533 - 9557056475401 * (739**0.5) / 3498955523355600000),\r\n         (5236016175 / 88357462711 + 5040458465159165409 * (739**0.5) / 2208936567775000000000),\r\n         (5236016175 / 88357462711 - 5601362553163918341 * (739**0.5) / 2208936567775000000000)],\r\n         \r\n        [(4563950663 / 32115191526 - 38209 * (739**0.5) / 7938810),\r\n         (310937500000000 / 2597974476091533 - 359369071093750 * (739**0.5) / 70145310854471391),\r\n         (310937500000000 / 2597974476091533 - 323282178906250 * (739**0.5) / 70145310854471391),\r\n         (5236016175 / 88357462711 - 470139 * (739**0.5) / 1413719403376),\r\n         (5236016175 / 88357462711 - 44986764863 * (739**0.5) / 21205791050640)],\r\n         \r\n        [(4563950663 / 32115191526 + 38209 * (739**0.5) / 7938810),\r\n         (310937500000000 / 2597974476091533 + 359369071093750 * (739**0.5) / 70145310854471391),\r\n         (310937500000000 / 2597974476091533 + 323282178906250 * (739**0.5) / 70145310854471391),\r\n         (5236016175 / 88357462711 + 44986764863 * (739**0.5) / 21205791050640),\r\n         (5236016175 / 88357462711 + 470139 * (739**0.5) / 1413719403376)],\r\n    ],\r\n    [\r\n        \r\n        [4563950663 / 16057595763,\r\n         621875000000000 / 2597974476091533,\r\n         621875000000000 / 2597974476091533,\r\n         10472032350 / 88357462711,\r\n         10472032350 / 88357462711]\r\n    ],\r\n    [\r\n        1 / 2,\r\n        1 / 2 - 99 * (739**0.5) / 10000,\r\n        1 / 2 + 99 * (739**0.5) / 10000,\r\n        1 / 2 - (739**0.5) / 60,\r\n        1 / 2 + (739**0.5) / 60\r\n    ]\r\n    ),\r\n    \"gauss-legendre_4s\": (\r\n        [\r\n            [1/4, 1/4 - 15**0.5 / 6, 1/4 + 15**0.5 / 6, 1/4],            \r\n            [1/4 + 15**0.5 / 6, 
1/4, 1/4 - 15**0.5 / 6, 1/4],          \r\n            [1/4, 1/4 + 15**0.5 / 6, 1/4, 1/4 - 15**0.5 / 6],            \r\n            [1/4 - 15**0.5 / 6, 1/4, 1/4 + 15**0.5 / 6, 1/4],       \r\n        ],\r\n        [    \r\n            [1/8, 3/8, 3/8, 1/8]                                        \r\n        ],\r\n        [\r\n            1/2 - 15**0.5 / 10,                                     \r\n            1/2 + 15**0.5 / 10,                                         \r\n            1/2 + 15**0.5 / 10,                                        \r\n            1/2 - 15**0.5 / 10                                         \r\n        ]\r\n    ),\r\n    \"gauss-legendre_3s\": (\r\n        [\r\n            [5/36, 2/9 - 15**0.5 / 15, 5/36 - 15**0.5 / 30],\r\n            [5/36 + 15**0.5 / 24, 2/9, 5/36 - 15**0.5 / 24],\r\n            [5/36 + 15**0.5 / 30, 2/9 + 15**0.5 / 15, 5/36],\r\n        ],\r\n        [\r\n            [5/18, 4/9, 5/18]\r\n        ],\r\n        [1/2 - 15**0.5 / 10, 1/2, 1/2 + 15**0.5 / 10]\r\n    ),\r\n    \"gauss-legendre_2s\": (\r\n        [\r\n            [1/4, 1/4 - 3**0.5 / 6],\r\n            [1/4 + 3**0.5 / 6, 1/4],\r\n        ],\r\n        [\r\n            [1/2, 1/2],\r\n        ],\r\n        [1/2 - 3**0.5 / 6, 1/2 + 3**0.5 / 6]\r\n    ),\r\n    \r\n    \r\n    \"radau_iia_4s\": (\r\n        [    \r\n            [],\r\n            [],\r\n            [],\r\n            [],\r\n        ],\r\n        [\r\n            [1/4, 1/4, 1/4, 1/4],\r\n        ],\r\n        [(1/11)*(4-6**0.5), (1/11)*(4+6**0.5), 1/2, 1]\r\n    ),\r\n    \r\n    \r\n    \"radau_iia_3s\": (\r\n        [    \r\n            [11/45 - 7*6**0.5 / 360, 37/225 - 169*6**0.5 / 1800, -2/225 + 6**0.5 / 75],\r\n            [37/225 + 169*6**0.5 / 1800, 11/45 + 7*6**0.5 / 360, -2/225 - 6**0.5 / 75],\r\n            [4/9 - 6**0.5 / 36, 4/9 + 6**0.5 / 36, 1/9],\r\n        ],\r\n        [\r\n            [4/9 - 6**0.5 / 36, 4/9 + 6**0.5 / 36, 1/9],\r\n        ],\r\n        [2/5 - 6**0.5 / 10, 2/5 + 6**0.5 / 10, 1.]\r\n    ),\r\n    \"radau_iia_2s\": (\r\n        [    \r\n            [5/12, -1/12],\r\n            [3/4, 1/4],\r\n        ],\r\n        [\r\n            [3/4, 1/4],\r\n        ],\r\n        [1/3, 1]\r\n    ),\r\n    \"radau_ia_3s\": (\r\n        [    \r\n            [1/9, (-1-6**0.5)/18, (-1+6**0.5)/18],\r\n            [1/9, 11/45 + 7*6**0.5/360, 11/45-43*6**0.5/360],\r\n            [1/9, 11/45-43*6**0.5/360, 11/45 + 7*6**0.5/360],\r\n        ],\r\n        [\r\n            [1/9, 4/9 + 6**0.5/36, 4/9 - 6**0.5/36],\r\n        ],\r\n        [0, 3/5-6**0.5/10, 3/5+6**0.5/10]\r\n    ),\r\n    \"radau_ia_2s\": (\r\n        [    \r\n            [1/4, -1/4],\r\n            [1/4, 5/12],\r\n        ],\r\n        [\r\n            [1/4, 3/4],\r\n        ],\r\n        [0, 2/3]\r\n    ),\r\n    \r\n    \"lobatto_iiia_3s\": (\r\n        [    \r\n            [0, 0, 0],\r\n            [5/24, 1/3, -1/24],\r\n            [1/6, 2/3, 1/6],\r\n        ],\r\n        [\r\n            [1/6, 2/3, 1/6],\r\n        ],\r\n        [0, 1/2, 1]\r\n    ),\r\n    \"lobatto_iiia_2s\": (\r\n        [    \r\n            [0, 0],\r\n            [1/2, 1/2],\r\n        ],\r\n        [\r\n            [1/2, 1/2],\r\n        ],\r\n        [0, 1]\r\n    ),\r\n    \r\n    \r\n    \r\n    \"lobatto_iiib_3s\": (\r\n        [    \r\n            [1/6, -1/6, 0],\r\n            [1/6, 1/3, 0],\r\n            [1/6, 5/6, 0],\r\n        ],\r\n        [\r\n            [1/6, 2/3, 1/6],\r\n        ],\r\n        [0, 1/2, 1]\r\n    ),\r\n    \"lobatto_iiib_2s\": (\r\n        [ 
   \r\n            [1/2, 0],\r\n            [1/2, 0],\r\n        ],\r\n        [\r\n            [1/2, 1/2],\r\n        ],\r\n        [0, 1]\r\n    ),\r\n\r\n    \"lobatto_iiic_3s\": (\r\n        [    \r\n            [1/6, -1/3, 1/6],\r\n            [1/6, 5/12, -1/12],\r\n            [1/6, 2/3, 1/6],\r\n        ],\r\n        [\r\n            [1/6, 2/3, 1/6],\r\n        ],\r\n        [0, 1/2, 1]\r\n    ),\r\n    \"lobatto_iiic_2s\": (\r\n        [    \r\n            [1/2, -1/2],\r\n            [1/2, 1/2],\r\n        ],\r\n        [\r\n            [1/2, 1/2],\r\n        ],\r\n        [0, 1]\r\n    ),\r\n    \r\n\r\n    \"lobatto_iiic_star_3s\": (\r\n        [    \r\n            [0, 0, 0],\r\n            [1/4, 1/4, 0],\r\n            [0, 1, 0],\r\n        ],\r\n        [\r\n            [1/6, 2/3, 1/6],\r\n        ],\r\n        [0, 1/2, 1]\r\n    ),\r\n    \"lobatto_iiic_star_2s\": (\r\n        [    \r\n            [0, 0],\r\n            [1, 0],\r\n        ],\r\n        [\r\n            [1/2, 1/2],\r\n        ],\r\n        [0, 1]\r\n    ),\r\n    \r\n    \"lobatto_iiid_3s\": (\r\n        [    \r\n            [1/6, 0, -1/6],\r\n            [1/12, 5/12, 0],\r\n            [1/2, 1/3, 1/6],\r\n        ],\r\n        [\r\n            [1/6, 2/3, 1/6],\r\n        ],\r\n        [0, 1/2, 1]\r\n    ),\r\n    \"lobatto_iiid_2s\": (\r\n        [    \r\n            [1/2, 1/2],\r\n            [-1/2, 1/2],\r\n        ],\r\n        [\r\n            [1/2, 1/2],\r\n        ],\r\n        [0, 1]\r\n    ),\r\n    \r\n\r\n    \r\n    \"kraaijevanger_spijker_2s\": (\r\n        [    \r\n            [1/2, 0],\r\n            [-1/2, 2],\r\n        ],\r\n        [\r\n            [-1/2, 3/2],\r\n        ],\r\n        [1/2, 3/2]\r\n    ),\r\n    \r\n    \"qin_zhang_2s\": (\r\n        [    \r\n            [1/4, 0],\r\n            [1/2, 1/4],\r\n        ],\r\n        [\r\n            [1/2, 1/2],\r\n        ],\r\n        [1/4, 3/4]\r\n    ),\r\n\r\n    \"pareschi_russo_2s\": (\r\n        [    \r\n            [(1-2**0.5/2), 0],\r\n            [1-2*(1-2**0.5/2), (1-2**0.5/2)],\r\n        ],\r\n        [\r\n            [1/2, 1/2],\r\n        ],\r\n        [(1-2**0.5/2), 1-(1-2**0.5/2)]\r\n    ),\r\n\r\n\r\n    \"pareschi_russo_alt_2s\": (\r\n        [    \r\n            [(1-2**0.5/2), 0],\r\n            [1-(1-2**0.5/2), (1-2**0.5/2)],\r\n        ],\r\n        [\r\n            [1-(1-2**0.5/2), (1-2**0.5/2)],\r\n        ],\r\n        [(1-2**0.5/2), 1]\r\n    ),\r\n\r\n    \"crouzeix_3s\": (\r\n        [\r\n            [(1+alpha_crouzeix)/2, 0, 0],\r\n            [-alpha_crouzeix/2, (1+alpha_crouzeix)/2, 0],\r\n            [1+alpha_crouzeix, -(1+2*alpha_crouzeix), (1+alpha_crouzeix)/2],\r\n        ],\r\n        [\r\n            [1/(6*alpha_crouzeix**2), 1-(1/(3*alpha_crouzeix**2)), 1/(6*alpha_crouzeix**2)],\r\n        ],\r\n        [(1+alpha_crouzeix)/2,   1/2,   (1-alpha_crouzeix)/2],\r\n    ),\r\n    \r\n    \"crouzeix_2s\": (\r\n        [\r\n            [1/2 + 3**0.5 / 6, 0],\r\n            [-(3**0.5 / 3), 1/2 + 3**0.5 / 6]\r\n        ],\r\n        [\r\n            [1/2, 1/2],\r\n        ],\r\n        [1/2 + 3**0.5 / 6, 1/2 - 3**0.5 / 6],\r\n    ),\r\n    \"verner_13s\": ( #verner9. 
some values are missing, need to revise\r\n        [\r\n            [],\r\n        ],\r\n        [\r\n            [],\r\n        ],\r\n        [\r\n            0.03462,\r\n            0.09702435063878045,\r\n            0.14553652595817068,\r\n            0.561,\r\n            0.22900791159048503,\r\n            0.544992088409515,\r\n            0.645,\r\n            0.48375,\r\n            0.06757,\r\n            0.25,\r\n            0.6590650618730999,\r\n            0.8206,\r\n            0.9012,\r\n        ]\r\n    ),\r\n    \"verner_robust_16s\": (\r\n        [\r\n            [],\r\n            [0.04],\r\n            [-0.01988527319182291, 0.11637263332969652],\r\n            [0.0361827600517026, 0, 0.10854828015510781],\r\n            [2.272114264290177, 0, -8.526886447976398, 6.830772183686221],\r\n            [0.050943855353893744, 0, 0, 0.1755865049809071, 0.007022961270757467],\r\n            [0.1424783668683285, 0, 0, -0.3541799434668684, 0.07595315450295101, 0.6765157656337123],\r\n            [0.07111111111111111, 0, 0, 0, 0, 0.3279909287605898, 0.24089796012829906],\r\n            [0.07125, 0, 0, 0, 0, 0.32688424515752457, 0.11561575484247544, -0.03375],\r\n            [0.0482267732246581, 0, 0, 0, 0, 0.039485599804954, 0.10588511619346581, -0.021520063204743093, -0.10453742601833482],\r\n            [-0.026091134357549235, 0, 0, 0, 0, 0.03333333333333333, -0.1652504006638105, 0.03434664118368617, 0.1595758283215209, 0.21408573218281934],\r\n            [-0.03628423396255658, 0, 0, 0, 0, -1.0961675974272087, 0.1826035504321331, 0.07082254444170683, -0.02313647018482431, 0.2711204726320933, 1.3081337494229808],\r\n            [-0.5074635056416975, 0, 0, 0, 0, -6.631342198657237, -0.2527480100908801, -0.49526123800360955, 0.2932525545253887, 1.440108693768281, 6.237934498647056, 0.7270192054526988],\r\n            [0.6130118256955932, 0, 0, 0, 0, 9.088803891640463, -0.40737881562934486, 1.7907333894903747, 0.714927166761755, -1.4385808578417227, -8.26332931206474, -1.537570570808865, 0.34538328275648716],\r\n            [-1.2116979103438739, 0, 0, 0, 0, -19.055818715595954, 1.263060675389875, -6.913916969178458, -0.6764622665094981, 3.367860445026608, 18.00675164312591, 6.83882892679428, -1.0315164519219504, 0.4129106232130623],\r\n            [2.1573890074940536, 0, 0, 0, 0, 23.807122198095804, 0.8862779249216555, 13.139130397598764, -2.604415709287715, -5.193859949783872, -20.412340711541507, -12.300856252505723, 1.5215530950085394],\r\n        ],\r\n        [\r\n            0.014588852784055396, 0, 0, 0, 0, 0, 0, 0.0020241978878893325, 0.21780470845697167,\r\n            0.12748953408543898, 0.2244617745463132, 0.1787254491259903, 0.07594344758096556,\r\n            0.12948458791975614, 0.029477447612619417, 0\r\n        ],\r\n        [\r\n            0, 0.04, 0.09648736013787361, 0.1447310402068104, 0.576, 0.2272326564618766,\r\n            0.5407673435381234, 0.64, 0.48, 0.06754, 0.25, 0.6770920153543243, 0.8115,\r\n            0.906, 1, 1\r\n        ],\r\n    ),\r\n\r\n    \"dormand-prince_13s\": (\r\n        [\r\n            [],\r\n            [1/18],\r\n            [1/48, 1/16],\r\n            [1/32, 0, 3/32],\r\n            [5/16, 0, -75/64, 75/64],\r\n            [3/80, 0, 0, 3/16, 3/20],\r\n            [29443841/614563906, 0, 0, 77736538/692538347, -28693883/1125000000, 23124283/1800000000],\r\n            [16016141/946692911, 0, 0, 61564180/158732637, 22789713/633445777, 545815736/2771057229, -180193667/1043307555],\r\n            [39632708/573591083, 0, 0, 
-433636366/683701615, -421739975/2616292301, 100302831/723423059, 790204164/839813087, 800635310/3783071287],\r\n            [246121993/1340847787, 0, 0, -37695042795/15268766246, -309121744/1061227803, -12992083/490766935, 6005943493/2108947869, 393006217/1396673457, 123872331/1001029789],\r\n            [-1028468189/846180014, 0, 0, 8478235783/508512852, 1311729495/1432422823, -10304129995/1701304382, -48777925059/3047939560, 15336726248/1032824649, -45442868181/3398467696, 3065993473/597172653],\r\n            [185892177/718116043, 0, 0, -3185094517/667107341, -477755414/1098053517, -703635378/230739211, 5731566787/1027545527, 5232866602/850066563, -4093664535/808688257, 3962137247/1805957418, 65686358/487910083],\r\n            [403863854/491063109, 0, 0, -5068492393/434740067, -411421997/543043805, 652783627/914296604, 11173962825/925320556, -13158990841/6184727034, 3936647629/1978049680, -160528059/685178525, 248638103/1413531060],\r\n        ],\r\n        [\r\n            [14005451/335480064, 0, 0, 0, 0, -59238493/1068277825, 181606767/758867731, 561292985/797845732, -1041891430/1371343529, 760417239/1151165299, 118820643/751138087, -528747749/2220607170, 1/4],\r\n        ],\r\n        [0, 1/18, 1/12, 1/8, 5/16, 3/8, 59/400, 93/200, 5490023248 / 9719169821, 13/20, 1201146811 / 1299019798, 1, 1],\r\n    ),\r\n    \"dormand-prince_6s\": (\r\n        [\r\n            [],\r\n            [1/5],\r\n            [3/40, 9/40],\r\n            [44/45, -56/15, 32/9],\r\n            [19372/6561, -25360/2187, 64448/6561, -212/729],\r\n            [9017/3168, -355/33, 46732/5247, 49/176, -5103/18656],\r\n        ],\r\n        [\r\n            [35/384, 0, 500/1113, 125/192, -2187/6784, 11/84],\r\n        ],\r\n        [0, 1/5, 3/10, 4/5, 8/9, 1],\r\n    ),\r\n    \"bogacki-shampine_7s\": ( #5th order\r\n        [\r\n            [],\r\n            [1/6],\r\n            [2/27, 4/27],\r\n            [183/1372, -162/343, 1053/1372],\r\n            [68/297, -4/11, 42/143, 1960/3861],\r\n            [597/22528, 81/352, 63099/585728, 58653/366080, 4617/20480],\r\n            [174197/959244, -30942/79937, 8152137/19744439, 666106/1039181, -29421/29068, 482048/414219],\r\n        ],\r\n        [\r\n            [587/8064, 0, 4440339/15491840, 24353/124800, 387/44800, 2152/5985, 7267/94080],\r\n        ],\r\n        [0, 1/6, 2/9, 3/7, 2/3, 3/4, 1] \r\n    ),\r\n    \"bogacki-shampine_4s\": ( #5th order\r\n        [\r\n            [],\r\n            [1/2],\r\n            [0, 3/4],\r\n            [2/9, 1/3, 4/9],\r\n        ],\r\n        [\r\n            [2/9, 1/3, 4/9, 0],\r\n        ],\r\n        [0, 1/2, 3/4, 1] \r\n    ),\r\n    \"tsi_7s\": ( #5th order \r\n        [\r\n            [],\r\n            [0.161],\r\n            [-0.008480655492356989, 0.335480655492357],\r\n            [2.8971530571054935, -6.359448489975075, 4.3622954328695815],\r\n            [5.325864828439257, -11.748883564062828, 7.4955393428898365, -0.09249506636175525],\r\n            [5.86145544294642, -12.92096931784711, 8.159367898576159, -0.071584973281401, -0.02826905039406838],\r\n            [0.09646076681806523, 0.01, 0.4798896504144996, 1.379008574103742, -3.290069515436081, 2.324710524099774],\r\n        ],\r\n        [\r\n            [0.09646076681806523, 0.01, 0.4798896504144996, 1.379008574103742, -3.290069515436081, 2.324710524099774, 0.0],\r\n        ],\r\n        [0.0, 0.161, 0.327, 0.9, 0.9800255409045097, 1.0, 1.0],\r\n    ),\r\n    \"rk6_7s\": ( #5th order\r\n        [\r\n            [],\r\n            [1/3],\r\n       
     [0, 2/3],\r\n            [1/12, 1/3, -1/12],\r\n            [-1/16, 9/8, -3/16, -3/8],\r\n            [0, 9/8, -3/8, -3/4, 1/2],\r\n            [9/44, -9/11, 63/44, 18/11, 0, -16/11],\r\n        ],\r\n        [\r\n            [11/120, 0, 27/40, 27/40, -4/15, -4/15, 11/120],\r\n        ],\r\n        [0, 1/3, 2/3, 1/3, 1/2, 1/2, 1],\r\n    ),\r\n    \"rk5_7s\": ( #5th order\r\n        [\r\n            [],\r\n            [1/5],\r\n            [3/40, 9/40],\r\n            [44/45, -56/15, 32/9],\r\n            [19372/6561, -25360/2187, 64448/6561, 212/729], #flipped 212 sign\r\n            [-9017/3168, -355/33, 46732/5247, 49/176, -5103/18656],\r\n            [35/384, 0, 500/1113, 125/192, -2187/6784, 11/84],\r\n        ],\r\n        [\r\n            [5179/57600, 0, 7571/16695, 393/640, -92097/339200, 187/2100, 1/40],\r\n        ],\r\n        [0, 1/5, 3/10, 4/5, 8/9, 1, 1],\r\n    ),\r\n    \"ssprk_4s\": ( #https://link.springer.com/article/10.1007/s41980-022-00731-x\r\n        [\r\n            [],\r\n            [1/2],\r\n            [1/2, 1/2],\r\n            [1/6, 1/6, 1/6],\r\n        ],\r\n        [\r\n            [1/6, 1/6, 1/6, 1/2],\r\n        ],\r\n        [0, 1/2, 1, 1/2],\r\n    ),\r\n\r\n    \"rk4_4s\": (\r\n        [\r\n            [],\r\n            [1/2],\r\n            [0, 1/2],\r\n            [0, 0, 1],\r\n        ],\r\n        [\r\n            [1/6, 1/3, 1/3, 1/6],\r\n        ],\r\n        [0, 1/2, 1/2, 1],\r\n    ),\r\n    \"rk38_4s\": (\r\n        [\r\n            [],\r\n            [1/3],\r\n            [-1/3, 1],\r\n            [1, -1, 1],\r\n        ],\r\n        [\r\n            [1/8, 3/8, 3/8, 1/8],\r\n        ],\r\n        [0, 1/3, 2/3, 1],\r\n    ),\r\n    \"ralston_4s\": (\r\n        [\r\n            [],\r\n            [2/5],\r\n            [(-2889+1428 * 5**0.5)/1024,   (3785-1620 * 5**0.5)/1024],\r\n            [(-3365+2094 * 5**0.5)/6040,   (-975-3046 * 5**0.5)/2552,  (467040+203968*5**0.5)/240845],\r\n        ],\r\n        [\r\n            [(263+24*5**0.5)/1812, (125-1000*5**0.5)/3828, (3426304+1661952*5**0.5)/5924787, (30-4*5**0.5)/123],\r\n        ],\r\n        [0, 2/5, (14-3 * 5**0.5)/16, 1],\r\n    ),\r\n    \"heun_3s\": (\r\n        [\r\n            [],\r\n            [1/3],\r\n            [0, 2/3],\r\n        ],\r\n        [\r\n            [1/4, 0, 3/4],\r\n        ],\r\n        [0, 1/3, 2/3],\r\n    ),\r\n    \"kutta_3s\": (\r\n        [\r\n            [],\r\n            [1/2],\r\n            [-1, 2],\r\n        ],\r\n        [\r\n            [1/6, 2/3, 1/6],\r\n        ],\r\n        [0, 1/2, 1],\r\n    ),\r\n    \"ralston_3s\": (\r\n        [\r\n            [],\r\n            [1/2],\r\n            [0, 3/4],\r\n        ],\r\n        [\r\n            [2/9, 1/3, 4/9],\r\n        ],\r\n        [0, 1/2, 3/4],\r\n    ),\r\n    \"houwen-wray_3s\": (\r\n        [\r\n            [],\r\n            [8/15],\r\n            [1/4, 5/12],\r\n        ],\r\n        [\r\n            [1/4, 0, 3/4],\r\n        ],\r\n        [0, 8/15, 2/3],\r\n    ),\r\n    \"ssprk3_3s\": (\r\n        [\r\n            [],\r\n            [1],\r\n            [1/4, 1/4],\r\n        ],\r\n        [\r\n            [1/6, 1/6, 2/3],\r\n        ],\r\n        [0, 1, 1/2],\r\n    ),\r\n    \"midpoint_2s\": (\r\n        [\r\n            [],\r\n            [1/2],\r\n        ],\r\n        [\r\n            [0, 1],\r\n        ],\r\n        [0, 1/2],\r\n    ),\r\n    \"heun_2s\": (\r\n        [\r\n            [],\r\n            [1],\r\n        ],\r\n        [\r\n            [1/2, 1/2],\r\n        ],\r\n 
       [0, 1],\r\n    ),\r\n    \"ralston_2s\": (\r\n        [\r\n            [],\r\n            [2/3],\r\n        ],\r\n        [\r\n            [1/4, 3/4],\r\n        ],\r\n        [0, 2/3],\r\n    ),\r\n    \"buehler\": (\r\n        [\r\n            [],\r\n        ],\r\n        [\r\n            [1],\r\n        ],\r\n        [0],\r\n    ),\r\n}\r\n\r\n\r\n\r\n\r\ndef get_rk_methods(rk_type, h, c1=0.0, c2=0.5, c3=1.0, h_prev=None, h_prev2=None, step=0, sigmas=None, sigma=None, sigma_next=None, sigma_down=None, extra_options=None):\r\n    FSAL = False\r\n    multistep_stages = 0\r\n    \r\n    if rk_type.startswith((\"res\", \"dpmpp\", \"ddim\"   )): \r\n        h_no_eta = -torch.log(sigma_next/sigma)\r\n        h_prev_no_eta  = -torch.log(sigmas[step]  /sigmas[step-1]) if step >= 1 else None\r\n        h_prev2_no_eta = -torch.log(sigmas[step-1]/sigmas[step-2]) if step >= 2 else None\r\n    else:\r\n        h_no_eta = sigma_next - sigma\r\n        h_prev_no_eta  = sigmas[step]   - sigmas[step-1] if step >= 1 else None\r\n        h_prev2_no_eta = sigmas[step-1] - sigmas[step-2] if step >= 2 else None\r\n    \r\n    if type(c1) == torch.Tensor:\r\n        c1 = c1.item()\r\n    if type(c2) == torch.Tensor:\r\n        c2 = c2.item()\r\n    if type(c3) == torch.Tensor:\r\n        c3 = c3.item()\r\n    \r\n    if c1 == -1:\r\n        c1 = random.uniform(0, 1)\r\n    if c2 == -1:\r\n        c2 = random.uniform(0, 1)\r\n    if c3 == -1:\r\n        c3 = random.uniform(0, 1)\r\n        \r\n    if rk_type[:4] == \"deis\": \r\n        order = int(rk_type[-2])\r\n        if step < order:\r\n            if order == 4:\r\n                rk_type = \"res_3s\"\r\n                order = 3\r\n            elif order == 3:\r\n                rk_type = \"res_3s\"\r\n            elif order == 2:\r\n                rk_type = \"res_2s\"\r\n        else:\r\n            rk_type = \"deis\"\r\n            multistep_stages = order-1\r\n    \r\n    if rk_type[-2:] == \"2m\": #multistep method\r\n        rk_type = rk_type[:-2] + \"2s\"\r\n        if h_prev is not None: \r\n            multistep_stages = 1\r\n            c2 = (-h_prev / h).item()\r\n            #print(\"c2: \", c2, h_prev, h)\r\n            \r\n    if rk_type[-2:] == \"3m\": #multistep method\r\n        rk_type = rk_type[:-2] + \"3s\"\r\n        if h_prev2 is not None: \r\n            multistep_stages = 2\r\n            #print(\"3m\")\r\n            #c2 = (-h_prev2 / (h_prev + h)).item()\r\n            c2 = (-h_prev2 / h).item()\r\n            #c3 = (-h_prev / h).item()\r\n            c3 = (-(h_prev2 + h_prev) / h).item()\r\n            #print(c2, h_prev2, h_prev)\r\n            #print(c3, h_prev, h)\r\n    \r\n    if rk_type in rk_coeff:\r\n        a, b, ci = copy.deepcopy(rk_coeff[rk_type])\r\n        a, b, ci = rk_coeff[rk_type]\r\n        a = [row + [0] * (len(ci) - len(row)) for row in a]\r\n\r\n    match rk_type:\r\n        case \"deis\": \r\n            coeff_list = get_deis_coeff_list(sigmas, multistep_stages+1, deis_mode=\"rhoab\")\r\n            coeff_list = [[elem / h for elem in inner_list] for inner_list in coeff_list]\r\n            if multistep_stages == 1:\r\n                b1, b2 = coeff_list[step]\r\n                a = [\r\n                        [0, 0],\r\n                        [0, 0],\r\n                ]\r\n                b = [\r\n                        [b1, b2],\r\n                ]\r\n                ci = [0, 0]\r\n            if multistep_stages == 2:\r\n                b1, b2, b3 = coeff_list[step]\r\n                a = 
[\r\n                        [0, 0, 0],\r\n                        [0, 0, 0],\r\n                        [0, 0, 0],\r\n                ]\r\n                b = [\r\n                        [b1, b2, b3],\r\n                ]\r\n                ci = [0, 0, 0]\r\n            if multistep_stages == 3:\r\n                b1, b2, b3, b4 = coeff_list[step]\r\n                a = [\r\n                        [0, 0, 0, 0],\r\n                        [0, 0, 0, 0],\r\n                        [0, 0, 0, 0],\r\n                        [0, 0, 0, 0],\r\n                ]\r\n                b = [\r\n                    [b1, b2, b3, b4],\r\n                ]\r\n                ci = [0, 0, 0, 0]\r\n            if multistep_stages > 0:\r\n                for i in range(len(b[0])): \r\n                    b[0][i] *= ((sigma_down - sigma) / (sigma_next - sigma))\r\n\r\n        case \"dormand-prince_6s\":\r\n            FSAL = True\r\n\r\n        case \"ddim\":\r\n            b1 = phi(1, -h)\r\n            a = [\r\n                    [0],\r\n            ]\r\n            b = [\r\n                    [b1],\r\n            ]\r\n            ci = [0]\r\n\r\n        case \"res_2s\":\r\n            c2 = float(get_extra_options_kv(\"c2\", str(c2), extra_options))\r\n\r\n            ci = [0, c2]\r\n            φ = Phi(h, ci)\r\n            \r\n            a2_1 = c2 * φ(1,2)\r\n            b2 = φ(2)/c2\r\n            b1 = φ(1) - b2\r\n\r\n            a = [\r\n                    [0,0],\r\n                    [a2_1, 0],\r\n            ]\r\n            b = [\r\n                    [b1, b2],\r\n            ]\r\n\r\n            \r\n        case \"res_3s\":\r\n            c2 = float(get_extra_options_kv(\"c2\", str(c2), extra_options))\r\n            c3 = float(get_extra_options_kv(\"c3\", str(c3), extra_options))\r\n            \r\n            gamma = calculate_gamma(c2, c3)\r\n            a2_1 = c2 * phi(1, -h*c2)\r\n            a3_2 = gamma * c2 * phi(2, -h*c2) + (c3 ** 2 / c2) * phi(2, -h*c3) #phi_2_c3_h  # a32 from k2 to k3\r\n            a3_1 = c3 * phi(1, -h*c3) - a3_2 # a31 from k1 to k3\r\n            b3 = (1 / (gamma * c2 + c3)) * phi(2, -h)      \r\n            b2 = gamma * b3  #simplified version of: b2 = (gamma / (gamma * c2 + c3)) * phi_2_h  \r\n            b1 = phi(1, -h) - b2 - b3     \r\n            \r\n            a = [\r\n                    [0, 0, 0],\r\n                    [a2_1, 0, 0],\r\n                    [a3_1, a3_2, 0],\r\n            ]\r\n            b = [\r\n                    [b1, b2, b3],\r\n            ]\r\n            ci = [c1, c2, c3]\r\n\r\n        case \"res_3s_alt\":\r\n            c2 = 1/3\r\n            c2 = float(get_extra_options_kv(\"c2\", str(c2), extra_options))\r\n            \r\n            c1,c2,c3 = 0, c2, 2/3\r\n            ci = [c1,c2,c3]\r\n            φ = Phi(h, ci)\r\n            \r\n            a = [\r\n                    [0, 0,                   0],\r\n                    [0, 0,                   0],\r\n                    [0, (4/(9*c2)) * φ(2,3), 0],\r\n            ]\r\n            b = [\r\n                    [0, 0, (1/c3)*φ(2)],\r\n            ]\r\n            \r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n            \r\n        case \"res_3s_strehmel_weiner\": # weak 4th order, Krogstad\r\n            c2 = 1/2\r\n            c2 = float(get_extra_options_kv(\"c2\", str(c2), extra_options))\r\n\r\n            ci = [0,c2,1]\r\n            φ = Phi(h, ci)\r\n            \r\n            a = [\r\n                    [0, 0, 0],\r\n                    [0, 0, 0],\r\n   
                 [0, (1/c2) * φ(2,3), 0],\r\n            ]\r\n            b = [\r\n                    [0, \r\n                     0,\r\n                     φ(2)],\r\n            ]\r\n            \r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n            \r\n            \r\n        case \"res_3s_cox_matthews\": # Cox & Matthews; known as ETD3RK\r\n            c1,c2,c3 = 0,1/2,1\r\n            ci = [0,c2,1]\r\n            φ = Phi(h, ci)\r\n            \r\n            a = [\r\n                    [0, 0, 0],\r\n                    [0, 0, 0],\r\n                    [0, (1/c2) * φ(1,3), 0],  # paper said 2 * φ(1,3), but this is the same and more consistent with res_3s_strehmel_weiner\r\n            ]\r\n            b = [\r\n                    [0, \r\n                     -8*φ(3) + 4*φ(2),\r\n                     4*φ(3) - φ(2)],\r\n            ]\r\n            \r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n            \r\n        case \"res_3s_lie\": # Lie; known as ETD2CF3\r\n            c1,c2,c3 = 0, 1/3, 2/3\r\n            ci = [c1,c2,c3]\r\n            φ = Phi(h, ci)\r\n            \r\n            a = [\r\n                    [0, 0, 0],\r\n                    [0, 0, 0],\r\n                    [0, (4/3)*φ(2,3), 0],  # paper said 2 * φ(1,3), but this is the same and more consistent with res_3s_strehmel_weiner\r\n            ]\r\n            b = [\r\n                    [0, \r\n                     6*φ(2) - 18*φ(3),\r\n                     (-3/2)*φ(2) + 9*φ(3)],\r\n            ]\r\n            \r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n            \r\n        case \"res_4s_cox_matthews\": # weak 4th order, Cox & Matthews; unresolved issue, see below\r\n            c1,c2,c3,c4 = 0, 1/2, 1/2, 1\r\n            ci = [c1,c2,c3,c4]\r\n            φ = Phi(h, ci)\r\n            \r\n            a2_1 = c2 * φ(1,2)\r\n            a3_2 = c3 * φ(1,3)\r\n            a4_1 = (1/2) * φ(1,3) * (φ(0,3) - 1) # φ(0,3) == torch.exp(-h*c3)\r\n            a4_3 = φ(1,3)\r\n            b1 = φ(1) - 3*φ(2) + 4*φ(3)\r\n            b2 = 2*φ(2) - 4*φ(3)\r\n            b3 = 2*φ(2) - 4*φ(3)\r\n            b4 = 4*φ(3) - φ(2)\r\n\r\n            a = [\r\n                    [0,    0,0,0],\r\n                    [a2_1, 0,0,0],\r\n                    [0, a3_2,0,0],\r\n                    [a4_1, 0,a4_3,0],\r\n            ]\r\n            b = [\r\n                    [b1, b2, b3, b4],\r\n            ]\r\n\r\n        case \"res_4s_munthe-kaas\": # unstable RKMK4t\r\n            c1,c2,c3,c4 = 0, 1/2, 1/2, 1\r\n            ci = [c1,c2,c3,c4]\r\n            φ = Phi(h, ci)\r\n\r\n            a = [\r\n                    [0, 0,      0,        0],\r\n                    [c2*φ(1,2), 0,      0,        0],\r\n                    [(h/8)*φ(1,2), (1/2)*(1-h/4)*φ(1,2), 0,        0],\r\n                    [0, 0,      φ(1), 0],\r\n            ]\r\n            b = [\r\n                    [(1/6)*φ(1)*(1+h/2),\r\n                     (1/3)*φ(1),\r\n                     (1/3)*φ(1),\r\n                     (1/6)*φ(1)*(1-h/2)],\r\n            ]\r\n\r\n        case \"res_4s_krogstad\": # weak 4th order, Krogstad\r\n            c1,c2,c3,c4 = 0, 1/2, 1/2, 1\r\n            ci = [c1,c2,c3,c4]\r\n            φ = Phi(h, ci)\r\n            \r\n            a = [\r\n                    [0, 0,      0,        0],\r\n                    [0, 0,      0,        0],\r\n                    [0, φ(2,3), 0,        0],\r\n                    [0, 0,      2*φ(2,4), 0],\r\n            ]\r\n            b = [\r\n                    [0, \r\n             
        2*φ(2) - 4*φ(3),\r\n                     2*φ(2) - 4*φ(3),\r\n                     -φ(2)  + 4*φ(3)],\r\n            ]\r\n            \r\n            #a = [row + [0] * (len(ci) - len(row)) for row in a]\r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n            \r\n        case \"res_4s_strehmel_weiner\": # weak 4th order, Strehmel & Weiner\r\n            c1,c2,c3,c4 = 0, 1/2, 1/2, 1\r\n            ci = [c1,c2,c3,c4]\r\n            φ = Phi(h, ci)\r\n            \r\n            a = [\r\n                    [0, 0,         0,        0],\r\n                    [0, 0,         0,        0],\r\n                    [0, c3*φ(2,3), 0,        0],\r\n                    [0, -2*φ(2,4), 4*φ(2,4), 0],\r\n            ]\r\n            b = [\r\n                    [0, \r\n                     0,\r\n                     4*φ(2) - 8*φ(3), \r\n                     -φ(2) +  4*φ(3)],\r\n            ]\r\n            \r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n           \r\n        case \"lawson4_4s\": \r\n            c1,c2,c3,c4 = 0, 1/2, 1/2, 1\r\n            ci = [c1,c2,c3,c4]\r\n            φ = Phi(h, ci)\r\n            \r\n            a2_1 = c2 * φ(0,2)\r\n            a3_2 = 1/2\r\n            a4_3 = φ(0,2)\r\n            \r\n            b1 = (1/6) * φ(0)\r\n            b2 = (1/3) * φ(0,2)\r\n            b3 = (1/3) * φ(0,2)\r\n            b4 = 1/6\r\n\r\n            a = [\r\n                    [0,    0,    0,    0],\r\n                    [a2_1, 0,    0,    0],\r\n                    [0,    a3_2, 0,    0],\r\n                    [0,    0,    a4_3, 0],\r\n            ]\r\n            b = [\r\n                    [b1,b2,b3,b4],\r\n            ]\r\n\r\n        case \"genlawson41_4s\": # GenLawson4 https://ora.ox.ac.uk/objects/uuid:cc001282-4285-4ca2-ad06-31787b540c61/files/m611df1a355ca243beb09824b70e5e774\r\n            c1,c2,c3,c4 = 0, 1/2, 1/2, 1\r\n            ci = [c1,c2,c3,c4]\r\n            φ = Phi(h, ci)\r\n\r\n\r\n            a3_2 = 1/2\r\n            a4_3 = φ(0,2)\r\n            \r\n            b2 = (1/3) * φ(0,2)\r\n            b3 = (1/3) * φ(0,2)\r\n            b4 = 1/6\r\n\r\n            a = [\r\n                    [0, 0,        0, 0],\r\n                    [0, 0,          0,        0],\r\n                    [0, a3_2, 0,        0],\r\n                    [0, 0, a4_3, 0],\r\n            ]\r\n            b = [\r\n                    [0,\r\n                     b2,\r\n                     b3,\r\n                     b4,],\r\n            ]\r\n\r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n\r\n        case \"modgenlawson41_4s\": # GenLawson4 https://ora.ox.ac.uk/objects/uuid:cc001282-4285-4ca2-ad06-31787b540c61/files/m611df1a355ca243beb09824b70e5e774\r\n            c1,c2,c3,c4 = 0, 1/2, 1/2, 1\r\n            ci = [c1,c2,c3,c4]\r\n            φ = Phi(h, ci)\r\n\r\n\r\n            a3_2 = 1/2\r\n            a4_3 = φ(0,2)\r\n            \r\n            b2 = (1/3) * φ(0,2)\r\n            b3 = (1/3) * φ(0,2)\r\n            b4 = φ(2) - (1/3)*φ(0,2)\r\n\r\n            a = [\r\n                    [0, 0,        0, 0],\r\n                    [0, 0,          0,        0],\r\n                    [0, a3_2, 0,        0],\r\n                    [0, 0, a4_3, 0],\r\n            ]\r\n            b = [\r\n                    [0,\r\n                     b2,\r\n                     b3,\r\n                     b4,],\r\n            ]\r\n\r\n            a, b = gen_first_col_exp(a,b,ci,φ)\r\n\r\n\r\n        case \"etdrk2_2s\": # https://arxiv.org/pdf/2402.15142v1\r\n            c1,c2 = 0, 1\r\n   
         ci = [c1,c2]\r\n            φ = Phi(h, ci)   \r\n                     \r\n            a = [\r\n                    [0, 0],\r\n                    [φ(1), 0],\r\n            ]\r\n            b = [\r\n                    [φ(1)-φ(2), φ(2)],\r\n            ]\r\n\r\n        case \"etdrk3_a_3s\": # https://arxiv.org/pdf/2402.15142v1\r\n            c1,c2,c3 = 0, 1, 2/3\r\n            ci = [c1,c2,c3]\r\n            φ = Phi(h, ci)   \r\n            \r\n            a2_1 = c2*φ(1)\r\n            a3_2 = (4/9)*φ(2,3)\r\n            a3_1 = c3*φ(1,3) - a3_2\r\n            \r\n            b2 = φ(2) - (1/2)*φ(1)\r\n            b3 = (3/4) * φ(1)\r\n            b1 = φ(1) - b2 - b3 \r\n                     \r\n            a = [\r\n                    [0, 0, 0],\r\n                    [a2_1, 0, 0],\r\n                    [a3_1, a3_2, 0 ]\r\n            ]\r\n            b = [\r\n                    [b1, b2, b3],\r\n            ]\r\n\r\n        case \"etdrk3_b_3s\": # https://arxiv.org/pdf/2402.15142v1\r\n            c1,c2,c3 = 0, 4/9, 2/3\r\n            ci = [c1,c2,c3]\r\n            φ = Phi(h, ci)   \r\n            \r\n            a2_1 = c2*φ(1,2)\r\n            a3_2 = φ(2,3)\r\n            a3_1 = c3*φ(1,3) - a3_2\r\n            \r\n            b2 = 0\r\n            b3 = (3/2) * φ(2)\r\n            b1 = φ(1) - b2 - b3 \r\n                     \r\n            a = [\r\n                    [0, 0, 0],\r\n                    [a2_1, 0, 0],\r\n                    [a3_1, a3_2, 0 ]\r\n            ]\r\n            b = [\r\n                    [b1, b2, b3],\r\n            ]\r\n\r\n\r\n        case \"dpmpp_2s\":\r\n            c2 = float(get_extra_options_kv(\"c2\", str(c2), extra_options))\r\n            \r\n            a2_1 =         c2   * phi(1, -h*c2)\r\n            b1 = (1 - 1/(2*c2)) * phi(1, -h)\r\n            b2 =     (1/(2*c2)) * phi(1, -h)\r\n\r\n            a = [\r\n                    [0, 0],\r\n                    [a2_1, 0],\r\n            ]\r\n            b = [\r\n                    [b1, b2],\r\n            ]\r\n            ci = [0, c2]\r\n            \r\n        case \"dpmpp_sde_2s\":\r\n            c2 = 1.0 #hardcoded to 1.0 to more closely emulate the configuration for k-diffusion's implementation\r\n            a2_1 =         c2   * phi(1, -h*c2)\r\n            b1 = (1 - 1/(2*c2)) * phi(1, -h)\r\n            b2 =     (1/(2*c2)) * phi(1, -h)\r\n\r\n            a = [\r\n                    [0, 0],\r\n                    [a2_1, 0],\r\n            ]\r\n            b = [\r\n                    [b1, b2],\r\n            ]\r\n            ci = [0, c2]\r\n\r\n        case \"dpmpp_3s\":\r\n            c2 = float(get_extra_options_kv(\"c2\", str(c2), extra_options))\r\n            c3 = float(get_extra_options_kv(\"c3\", str(c3), extra_options))\r\n            \r\n            a2_1 = c2 * phi(1, -h*c2)\r\n            a3_2 = (c3**2 / c2) * phi(2, -h*c3)\r\n            a3_1 = c3 * phi(1, -h*c3) - a3_2\r\n            b2 = 0\r\n            b3 = (1/c3) * phi(2, -h)\r\n            b1 = phi(1, -h) - b2 - b3\r\n\r\n            a = [\r\n                    [0, 0, 0],\r\n                    [a2_1, 0, 0],\r\n                    [a3_1, a3_2, 0],  \r\n            ]\r\n            b = [\r\n                    [b1, b2, b3],\r\n            ]\r\n            ci = [0, c2, c3]\r\n            \r\n        case \"res_5s\": #4th order\r\n                \r\n            c1, c2, c3, c4, c5 = 0, 1/2, 1/2, 1, 1/2\r\n            \r\n            a2_1 = c2 * phi(1, -h * c2)\r\n            \r\n            a3_2 = phi(2, -h * c3)\r\n         
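   # added reference note (not in the original): phi(k, z) are the standard\r\n            # exponential-integrator functions, phi_1(z) = (e^z - 1)/z and\r\n            # phi_{k+1}(z) = (phi_k(z) - 1/k!)/z, so phi(2, -h*c3) is phi_2 at z = -c3*h.\r\n            # The Phi(h, ci) helper used by neighboring cases caches the same values,\r\n            # with φ(k, i) = phi_k(-ci[i-1]*h) and φ(k) = phi_k(-h) (cf. the\r\n            # \"φ(0,3) == torch.exp(-h*c3)\" comment in res_4s_cox_matthews).\r\n         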
   a3_1 = c3 * phi(1, -h * c3) - a3_2\r\n            #a3_1 = c3 * phi(1, -h * c3) - phi(2, -h * c3)\r\n\r\n            a4_2 = a4_3 = phi(2, -h * c4)\r\n            a4_1 = c4 * phi(1, -h * c4) - a4_2 - a4_3\r\n            #a4_1 = phi(1, -h * c4) - 2 * phi(2, -h * c4)\r\n            \r\n            a5_2 = a5_3 = 0.5 * phi(2, -h * c5) - phi(3, -h * c4) + 0.25 * phi(2, -h * c4) - 0.5 * phi(3, -h * c5)\r\n            a5_4 = 0.25 * phi(2, -h * c5) - a5_2\r\n            a5_1 = c5 * phi(1, -h * c5) - a5_2 - a5_3 - a5_4\r\n                    \r\n            b2 = b3 = 0\r\n            b4 = -phi(2, -h) + 4*phi(3, -h)\r\n            b5 = 4 * phi(2, -h) - 8 * phi(3, -h)\r\n            #b1 = phi(1, -h) - 3 * phi(2, -h) + 4 * phi(3, -h)\r\n            b1 = phi(1,-h) - b2 - b3 - b4 - b5\r\n\r\n            a = [\r\n                    [0, 0, 0, 0, 0],\r\n                    [a2_1, 0, 0, 0, 0],\r\n                    [a3_1, a3_2, 0, 0, 0],\r\n                    [a4_1, a4_2, a4_3, 0, 0],\r\n                    [a5_1, a5_2, a5_3, a5_4, 0],\r\n            ]\r\n            b = [\r\n                    [b1, b2, b3, b4, b5],\r\n            ]\r\n            ci = [0., 0.5, 0.5, 1., 0.5]\r\n            \r\n        case \"res_6s\": #4th order\r\n                \r\n            c1, c2, c3, c4, c5, c6 = 0, 1/2, 1/2, 1/3, 1/3, 5/6\r\n            ci = [c1, c2, c3, c4, c5, c6]\r\n            φ = Phi(h, ci)\r\n            \r\n            a2_1 = c2 * φ(1,2)\r\n            \r\n            a3_1 = 0\r\n            a3_2 = (c3**2 / c2) * φ(2,3)\r\n            \r\n            a4_1 = 0\r\n            a4_2 = (c4**2 / c2) * φ(2,4)\r\n            a4_3 = (c4**2 * φ(2,4) - a4_2 * c2) / c3\r\n            \r\n            a5_1 = 0\r\n            a5_2 = 0 #zero\r\n            a5_3 = (-c4 * c5**2 * φ(2,5) + 2*c5**3 * φ(3,5))   /   (c3 * (c3 - c4))\r\n            a5_4 = (-c3 * c5**2 * φ(2,5) + 2*c5**3 * φ(3,5))   /   (c4 * (c4 - c3))\r\n            \r\n            a6_1 = 0\r\n            a6_2 = 0 #zero\r\n            a6_3 = (-c4 * c6**2 * φ(2,6) + 2*c6**3 * φ(3,6))   /   (c3 * (c3 - c4))\r\n            a6_4 = (-c3 * c6**2 * φ(2,6) + 2*c6**3 * φ(3,6))   /   (c4 * (c4 - c3))\r\n            a6_5 = (c6**2 * φ(2,6) - a6_3*c3 - a6_4*c4)   /   c5\r\n            #a6_5_alt = (2*c6**3 * φ(3,6) - a6_3*c3**2 - a6_4*c4**2)   /   c5**2\r\n                    \r\n            b1 = 0\r\n            b2 = 0\r\n            b3 = 0\r\n            b4 = 0\r\n            b5 = (-c6*φ(2) + 2*φ(3)) / (c5 * (c5 - c6))\r\n            b6 = (-c5*φ(2) + 2*φ(3)) / (c6 * (c6 - c5))\r\n\r\n            a = [\r\n                    [0, 0, 0, 0, 0, 0],\r\n                    [0, 0, 0, 0, 0, 0],\r\n                    [0, a3_2, 0, 0, 0, 0],\r\n                    [0, a4_2, a4_3, 0, 0, 0],\r\n                    [0, a5_2, a5_3, a5_4, 0, 0],\r\n                    [0, a6_2, a6_3, a6_4, a6_5, 0],\r\n            ]\r\n            b = [\r\n                    [0, b2, b3, b4, b5, b6],\r\n            ]\r\n             \r\n            for i in range(len(ci)): \r\n                a[i][0] = ci[i] * φ(1,i+1) - sum(a[i])\r\n            for i in range(len(b)): \r\n                b[i][0] =         φ(1)     - sum(b[i])\r\n\r\n        case \"res_8s\": #todo: add EKPRK5S8\r\n                \r\n            c1, c2, c3, c4, c5, c6, c7, c8 = 0, 1/2, 1/2, 1/4,    1/2, 1/5, 2/3, 1\r\n            ci = [c1, c2, c3, c4, c5, c6, c7, c8]\r\n            φ = Phi(h, ci, analytic_solution=True)\r\n            \r\n            a3_2 = (1/2) * φ(2,3)\r\n            \r\n            a4_3 = (1/8) * φ(2,4)\r\n\r\n    
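        # added sanity-check sketch (commented out, not part of the original flow):\r\n            # each stage row built in this case should satisfy the stiff consistency\r\n            # condition sum(a[i]) == ci[i] * φ(1,i+1), which is exactly what the\r\n            # first-column fill loop at the end of this case enforces; e.g., after\r\n            # that loop:\r\n            #for i in range(len(a)):\r\n            #    assert torch.allclose(sum(a[i]), ci[i] * φ(1,i+1))\r\n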
            a5_3 = (-1/2) * φ(2,5) + 2 * φ(3,5)\r\n            a5_4 =      2 * φ(2,5) - 4 * φ(3,5)\r\n            \r\n            a6_4 = (8/25) * φ(2,6) - (32/125) * φ(3,6)\r\n            a6_5 = (2/25) * φ(2,6) -  (1/2)   * a6_4\r\n            \r\n            a7_4 = (-125/162)  * a6_4\r\n            a7_5 =  (125/1944) * a6_4 -  (16/27) * φ(2,7) + (320/81) * φ(3,7)\r\n            a7_6 = (3125/3888) * a6_4 + (100/27) * φ(2,7) - (800/81) * φ(3,7)\r\n            \r\n            Φ = (5/32)*a6_4 - (1/28)*φ(2,6) + (36/175)*φ(2,7) - (48/25)*φ(3,7) + (6/175)*φ(4,6) + (192/35)*φ(4,7) + 6*φ(4,8)\r\n            \r\n            a8_5 =  (208/3)*φ(3,8) -  (16/3) *φ(2,8) -      40*Φ\r\n            a8_6 = (-250/3)*φ(3,8) + (250/21)*φ(2,8) + (250/7)*Φ\r\n            a8_7 =      -27*φ(3,8) +  (27/14)*φ(2,8) + (135/7)*Φ\r\n            \r\n            b6 = (125/14)*φ(2) - (625/14)*φ(3) + (1125/14)*φ(4)\r\n            b7 = (-27/14)*φ(2) + (162/7) *φ(3) -  (405/7) *φ(4)\r\n            b8 =   (1/2) *φ(2) -  (13/2) *φ(3) +   (45/2) *φ(4)\r\n            \r\n            #the first-column entries a[i][0] and the b1 weight are generated from the\r\n            #consistency conditions by the fill loops below, so only the interior\r\n            #coefficients are written out here\r\n            a = [\r\n                    [0, 0, 0, 0, 0, 0, 0, 0],\r\n                    [0, 0, 0, 0, 0, 0, 0, 0],\r\n                    \r\n                    [0, a3_2, 0, 0, 0, 0, 0, 0],\r\n                    [0, 0, a4_3, 0, 0, 0, 0, 0],\r\n                    \r\n                    [0, 0, a5_3, a5_4, 0, 0, 0, 0],\r\n                    [0, 0, 0, a6_4, a6_5, 0, 0, 0],\r\n                    \r\n                    [0, 0, 0, a7_4, a7_5, a7_6, 0,    0],\r\n                    [0, 0, 0, 0,    a8_5, a8_6, a8_7, 0],\r\n            ]\r\n            b = [\r\n                    [0,   0, 0, 0, 0, b6, b7, b8],\r\n            ]\r\n             \r\n            for i in range(len(a)): \r\n                a[i][0] = ci[i] * φ(1,i+1) - sum(a[i])\r\n            for i in range(len(b)): \r\n                b[i][0] =         φ(1)     - sum(b[i])\r\n            \r\n            \r\n\r\n        case \"res_10s\":\r\n                \r\n            c1, c2, c3, c4, c5, c6, c7, c8, c9, c10 = 0, 1/2, 1/2, 1/3, 1/2,     1/3, 1/4, 3/10, 3/4, 1\r\n            ci = [c1, c2, c3, c4, c5, c6, c7, c8, c9, c10]\r\n            φ = Phi(h, ci, analytic_solution=True)\r\n            \r\n            a3_2 = (c3**2 / c2) * φ(2,3)\r\n            \r\n            a4_2 = (c4**2 / c2) * φ(2,4)\r\n                        \r\n            b8 =  (c9*c10*φ(2) - 2*(c9+c10)*φ(3) + 6*φ(4))   /   (c8 * (c8-c9) * (c8-c10))\r\n            b9 =  (c8*c10*φ(2) - 2*(c8+c10)*φ(3) + 
6*φ(4))   /   (c9 * (c9-c8) * (c9-c10))\r\n            \r\n            b10 = (c8*c9*φ(2)  - 2*(c8+c9) *φ(3) + 6*φ(4))   /   (c10 * (c10-c8) * (c10-c9))\r\n            \r\n            a = [\r\n                    [0, 0, 0, 0, 0,      0, 0, 0, 0, 0],\r\n                    [0, 0, 0, 0, 0,      0, 0, 0, 0, 0],\r\n                    [0, a3_2, 0, 0, 0,      0, 0, 0, 0, 0],\r\n                    [0, a4_2, 0, 0, 0,      0, 0, 0, 0, 0],\r\n                    [0, 0, 0, 0, 0,      0, 0, 0, 0, 0],\r\n                    \r\n                    [0, 0, 0, 0, 0,      0, 0, 0, 0, 0],\r\n                    [0, 0, 0, 0, 0,      0, 0, 0, 0, 0],\r\n                    [0, 0, 0, 0, 0,      0, 0, 0, 0, 0],\r\n                    [0, 0, 0, 0, 0,      0, 0, 0, 0, 0],\r\n                    [0, 0, 0, 0, 0,      0, 0, 0, 0, 0],\r\n            ]\r\n            b = [\r\n                    [0, 0, 0, 0, 0,      0, 0, b8, b9, b10],\r\n            ]\r\n            \r\n            # a5_3, a5_4\r\n            # a6_3, a6_4\r\n            # a7_3, a7_4\r\n            for i in range(5, 8): # i=5,6,7   j,k ∈ {3, 4}, j != k\r\n                jk = list(permutations([3, 4], 2)) \r\n                for j,k in jk:\r\n                    a[i-1][j-1] = (-ci[i-1]**2 * ci[k-1] * φ(2,i)    +   2*ci[i-1]**3 * φ(3,i))   /   (ci[j-1] * (ci[j-1] - ci[k-1]))\r\n                \r\n            for i in range(8, 11): # i=8,9,10   j,k,l ∈ {5, 6, 7}, j != k != l      [    (5, 6, 7), (5, 7, 6),    (6, 5, 7), (6, 7, 5),    (7, 5, 6), (7, 6, 5)]    6 total coeff\r\n                jkl = list(permutations([5, 6, 7], 3)) \r\n                for j,k,l in jkl:\r\n                    a[i-1][j-1] = (ci[i-1]**2 * ci[k-1] * ci[l-1] * φ(2,i)   -   2*ci[i-1]**3 * (ci[k-1] + ci[l-1]) * φ(3,i)   +   6*ci[i-1]**4 * φ(4,i))    /    (ci[j-1] * (ci[j-1] - ci[k-1]) * (ci[j-1] - ci[l-1]))\r\n\r\n            for i in range(len(a)): \r\n                a[i][0] = ci[i] * φ(1,i+1) - sum(a[i])\r\n            for i in range(len(b)): \r\n                b[i][0] =         φ(1)     - sum(b[i])\r\n            \r\n            \r\n\r\n\r\n        case \"res_15s\":\r\n                \r\n            c1 = 0\r\n            c2 = c3 = c5 = 1/2\r\n            c4 = c9 = c13 = 1/3\r\n            c6 = c15 = 1/5\r\n            c7 = 1/4\r\n            c8 = 18/25\r\n            c10 = c14 = 3/10\r\n            c11 = 1/6\r\n            c12 = 90/103\r\n            ci = [c1, c2, c3, c4, c5, c6, c7, c8, c9, c10, c11, c12, c13, c14, c15]\r\n            φ = Phi(h, ci, analytic_solution=True)\r\n            \r\n            a = [[0 for _ in range(15)] for _ in range(15)]\r\n            b = [[0 for _ in range(15)]]\r\n\r\n            for i in range(3, 5): # i=3,4     j=2\r\n                j=2\r\n                a[i-1][j-1] = (ci[i-1]**2 / ci[j-1]) * φ(j,i)\r\n            \r\n            \r\n            for i in range(5, 8): # i=5,6,7   j,k ∈ {3, 4}, j != k\r\n                jk = list(permutations([3, 4], 2)) \r\n                for j,k in jk:\r\n                    a[i-1][j-1] = (-ci[i-1]**2 * ci[k-1] * φ(2,i)    +   2*ci[i-1]**3 * φ(3,i))   /   prod_diff(ci[j-1], ci[k-1])\r\n\r\n            for i in range(8, 12): # i=8,9,10,11  j,k,l ∈ {5, 6, 7}, j != k != l      [    (5, 6, 7), (5, 7, 6),    (6, 5, 7), (6, 7, 5),    (7, 5, 6), (7, 6, 5)]    6 total coeff\r\n
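                # added note (not in the original): each assignment below is a\r\n                # divided-difference interpolation weight; prod_diff(cj, ck, ...) expands\r\n                # to cj*(cj-ck)[*(cj-cl)[*(cj-cd)]], and iterating over all permutations\r\n                # of the node-index set fills every column j of row i with the ordering\r\n                # (j, k[, l, d]) that belongs to that column.\r\n                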
jkl = list(permutations([5, 6, 7], 3)) \r\n                for j,k,l in jkl:\r\n                    a[i-1][j-1] = (ci[i-1]**2 * ci[k-1] * ci[l-1] * φ(2,i)   -   2*ci[i-1]**3 * (ci[k-1] + ci[l-1]) * φ(3,i)   +   6*ci[i-1]**4 * φ(4,i))    /    (ci[j-1] * (ci[j-1] - ci[k-1]) * (ci[j-1] - ci[l-1]))\r\n\r\n            for i in range(12,16): # i=12,13,14,15\r\n                jkld = list(permutations([8,9,10,11], 4)) \r\n                for j,k,l,d in jkld:\r\n                    numerator = -ci[i-1]**2  *  ci[d-1]*ci[k-1]*ci[l-1]  *  φ(2,i)     +     2*ci[i-1]**3  *  (ci[d-1]*ci[k-1] + ci[d-1]*ci[l-1] + ci[k-1]*ci[l-1])  *  φ(3,i)     -     6*ci[i-1]**4  *  (ci[d-1] + ci[k-1] + ci[l-1])  *  φ(4,i)     +     24*ci[i-1]**5  *  φ(5,i)\r\n                    a[i-1][j-1] = numerator / prod_diff(ci[j-1], ci[k-1], ci[l-1], ci[d-1])\r\n\r\n            \"\"\"ijkl = list(permutations([12,13,14,15], 4)) \r\n            for i,j,k,l in ijkl:\r\n                #numerator = -ci[j-1]*ci[k-1]*ci[l-1]*φ(2)   +   2*(ci[j-1]*ci[k-1]   +   ci[j-1]*ci[l-1]   +   ci[k-1]*ci[l-1])*φ(3)   -   6*(ci[j-1] + ci[k-1]   +   ci[l-1])*φ(4)   +   24*φ(5)\r\n                #b[0][i-1] = numerator / prod_diff(ci[i-1], ci[j-1], ci[k-1], ci[l-1])\r\n                for jjj in range (2, 6): # 2,3,4,5\r\n                    b[0][i-1] += mu_numerator(jjj, ci[j-1], ci[i-1], ci[k-1], ci[l-1]) * φ(jjj) \r\n                b[0][i-1] /= prod_diff(ci[i-1], ci[j-1], ci[k-1], ci[l-1])\"\"\"\r\n                    \r\n            ijkl = list(permutations([12,13,14,15], 4)) \r\n            for i,j,k,l in ijkl:\r\n                numerator = 0\r\n                for jjj in range(2, 6):  # 2, 3, 4, 5\r\n                    numerator += mu_numerator(jjj, ci[j-1], ci[i-1], ci[k-1], ci[l-1]) * φ(jjj)\r\n                #print(i,j,k,l)\r\n\r\n                b[0][i-1] = numerator / prod_diff(ci[i-1], ci[j-1], ci[k-1], ci[l-1])\r\n             \r\n             \r\n            ijkl = list(permutations([12, 13, 14, 15], 4))\r\n            selected_permutations = {} \r\n            sign = 1  \r\n\r\n            for i in range(12, 16):\r\n                results = []\r\n                for j, k, l, d in ijkl:\r\n                    if i != j and i != k and i != l and i != d:\r\n                        numerator = 0\r\n                        for jjj in range(2, 6):  # 2, 3, 4, 5\r\n                            numerator += mu_numerator(jjj, ci[j-1], ci[i-1], ci[k-1], ci[l-1]) * φ(jjj)\r\n                        theta_value = numerator / prod_diff(ci[i-1], ci[j-1], ci[k-1], ci[l-1])\r\n                        results.append((theta_value, (i, j, k, l, d)))\r\n\r\n                results.sort(key=lambda x: abs(x[0]))\r\n\r\n                for theta_value, permutation in results:\r\n                    if sign == 1 and theta_value > 0:\r\n                        selected_permutations[i] = (theta_value, permutation)\r\n                        sign *= -1  \r\n                        break\r\n                    elif sign == -1 and theta_value < 0:  \r\n                        selected_permutations[i] = (theta_value, permutation)\r\n                        sign *= -1 \r\n                        break\r\n\r\n            for i in range(12, 16):\r\n                if i in selected_permutations:\r\n                    theta_value, (i, j, k, l, d) = selected_permutations[i]\r\n                    b[0][i-1] = theta_value  \r\n                    \r\n            for i in selected_permutations:\r\n                theta_value, permutation = selected_permutations[i]\r\n 
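               # added note (not in the original): the selection pass above re-derives\r\n                # each b-weight from every admissible node permutation, sorts the\r\n                # candidates by magnitude, and alternates the sign of successive picks;\r\n                # this reads as an experimental heuristic for keeping the quadrature\r\n                # weights balanced rather than a published order condition.\r\n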
               #print(f\"i={i}\")\r\n                #print(f\"  Selected Theta: {theta_value:.6f}, Permutation: {permutation}\")\r\n             \r\n             \r\n             \r\n            for i in range(len(a)): \r\n                a[i][0] = ci[i] * φ(1,i+1) - sum(a[i])\r\n            for i in range(len(b)): \r\n                b[i][0] =         φ(1)     - sum(b[i])\r\n            \r\n            \r\n\r\n        case \"res_16s\": # 6th order without weakened order conditions\r\n                \r\n            c1 = 0\r\n            c2 = c3 = c5 = c8 = c12 = 1/2\r\n            c4 = c11 = c15 = 1/3\r\n            c6 = c9 = c13 = 1/5\r\n            c7 = c10 = c14 = 1/4\r\n            c16 = 1\r\n            ci = [c1, c2, c3, c4, c5, c6, c7, c8, c9, c10, c11, c12, c13, c14, c15, c16]\r\n            φ = Phi(h, ci, analytic_solution=True)\r\n            \r\n            a3_2 = (1/2) * φ(2,3)\r\n\r\n            a = [[0 for _ in range(16)] for _ in range(16)]\r\n            b = [[0 for _ in range(16)]]\r\n\r\n            for i in range(3, 5): # i=3,4     j=2\r\n                j=2\r\n                a[i-1][j-1] = (ci[i-1]**2 / ci[j-1]) * φ(j,i)\r\n            \r\n            for i in range(5, 8): # i=5,6,7   j,k ∈ {3, 4}, j != k\r\n                jk = list(permutations([3, 4], 2)) \r\n                for j,k in jk:\r\n                    a[i-1][j-1] = (-ci[i-1]**2 * ci[k-1] * φ(2,i)    +   2*ci[i-1]**3 * φ(3,i))   /   prod_diff(ci[j-1], ci[k-1])\r\n                    \r\n            for i in range(8, 12): # i=8,9,10,11  j,k,l ∈ {5, 6, 7}, j != k != l      [    (5, 6, 7), (5, 7, 6),    (6, 5, 7), (6, 7, 5),    (7, 5, 6), (7, 6, 5)]    6 total coeff\r\n                jkl = list(permutations([5, 6, 7], 3)) \r\n                for j,k,l in jkl:\r\n                    a[i-1][j-1] = (ci[i-1]**2 * ci[k-1] * ci[l-1] * φ(2,i)   -   2*ci[i-1]**3 * (ci[k-1] + ci[l-1]) * φ(3,i)   +   6*ci[i-1]**4 * φ(4,i))    /    (ci[j-1] * (ci[j-1] - ci[k-1]) * (ci[j-1] - ci[l-1]))\r\n\r\n            for i in range(12,17): # i=12,13,14,15,16\r\n                jkld = list(permutations([8,9,10,11], 4)) \r\n                for j,k,l,d in jkld:\r\n                    numerator = -ci[i-1]**2  *  ci[d-1]*ci[k-1]*ci[l-1]  *  φ(2,i)     +     2*ci[i-1]**3  *  (ci[d-1]*ci[k-1] + ci[d-1]*ci[l-1] + ci[k-1]*ci[l-1])  *  φ(3,i)     -     6*ci[i-1]**4  *  (ci[d-1] + ci[k-1] + ci[l-1])  *  φ(4,i)     +     24*ci[i-1]**5  *  φ(5,i)\r\n                    a[i-1][j-1] = numerator / prod_diff(ci[j-1], ci[k-1], ci[l-1], ci[d-1])\r\n                     \r\n            \"\"\"ijdkl = list(permutations([12,13,14,15,16], 5)) \r\n            for i,j,d,k,l in ijdkl:\r\n                #numerator = -ci[j-1]*ci[k-1]*ci[l-1]*φ(2)   +   2*(ci[j-1]*ci[k-1]   +   ci[j-1]*ci[l-1]   +   ci[k-1]*ci[l-1])*φ(3)   -   6*(ci[j-1] + ci[k-1]   +   ci[l-1])*φ(4)   +   24*φ(5)\r\n                b[0][i-1] = theta(2, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1]) * φ(2)   +  theta(3, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1])*φ(3)   +   theta(4, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1])*φ(4)   +   theta(5, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1])*φ(5)    +    theta(6, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1]) * φ(6)\r\n                #b[0][i-1] = numerator / prod_diff(ci[i-1], ci[j-1], ci[k-1], ci[l-1])\"\"\"\r\n                    \r\n                \r\n            ijdkl = list(permutations([12,13,14,15,16], 5)) \r\n            for i,j,d,k,l in ijdkl:\r\n                #numerator = -ci[j-1]*ci[k-1]*ci[l-1]*φ(2)   +   2*(ci[j-1]*ci[k-1]   +   
ci[j-1]*ci[l-1]   +   ci[k-1]*ci[l-1])*φ(3)   -   6*(ci[j-1] + ci[k-1]   +   ci[l-1])*φ(4)   +   24*φ(5)\r\n                #numerator = theta_numerator(2, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1]) * φ(2)   +  theta_numerator(3, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1])*φ(3)   +   theta_numerator(4, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1])*φ(4)   +   theta_numerator(5, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1])*φ(5)    +    theta_numerator(6, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1]) * φ(6)\r\n                #b[0][i-1] = numerator / (ci[i-1] *, ci[d-1], ci[j-1], ci[k-1], ci[l-1])\r\n                #b[0][i-1] = numerator / denominator(ci[i-1], ci[d-1], ci[j-1], ci[k-1], ci[l-1])\r\n                b[0][i-1] = theta(2, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1]) * φ(2)   +  theta(3, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1])*φ(3)   +   theta(4, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1])*φ(4)   +   theta(5, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1])*φ(5)    +    theta(6, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1]) * φ(6)\r\n\r\n            \r\n            ijdkl = list(permutations([12,13,14,15,16], 5)) \r\n            for i,j,d,k,l in ijdkl:\r\n                numerator = 0\r\n                for jjj in range(2, 7):  # 2, 3, 4, 5, 6\r\n                    numerator += theta_numerator(jjj, ci[d-1], ci[i-1], ci[k-1], ci[j-1], ci[l-1]) * φ(jjj)\r\n                #print(i,j,d,k,l)\r\n                b[0][i-1] = numerator / (ci[i-1] *   (ci[i-1] - ci[k-1])   *   (ci[i-1] - ci[j-1]   *   (ci[i-1] - ci[d-1])   *   (ci[i-1] - ci[l-1])))\r\n\r\n                \r\n            for i in range(len(a)): \r\n                a[i][0] = ci[i] * φ(1,i+1) - sum(a[i])\r\n            for i in range(len(b)): \r\n                b[i][0] =         φ(1)     - sum(b[i])\r\n            \r\n            \r\n            \r\n\r\n            \r\n        case \"irk_exp_diag_2s\":\r\n            c1 = 1/3\r\n            c2 = 2/3\r\n            c1 = float(get_extra_options_kv(\"c1\", str(c1), extra_options))\r\n            c2 = float(get_extra_options_kv(\"c2\", str(c2), extra_options))\r\n            \r\n            lam = (1 - torch.exp(-c1 * h)) / h\r\n            a2_1 = ( torch.exp(c2*h) - torch.exp(c1*h))    /    (h * torch.exp(2*c1*h))\r\n            b1 =  (1 + c2*h + torch.exp(h) * (-1 + h - c2*h)) / ((c1-c2) * h**2 * torch.exp(c1*h))\r\n            b2 = -(1 + c1*h - torch.exp(h) * ( 1 - h + c1*h)) / ((c1-c2) * h**2 * torch.exp(c2*h))\r\n\r\n            a = [\r\n                    [lam, 0],\r\n                    [a2_1, lam],\r\n            ]\r\n            b = [\r\n                    [b1, b2],\r\n            ]\r\n            ci = [c1, c2]\r\n\r\n    ci = ci[:]\r\n    if rk_type.startswith(\"lob\") == False:\r\n        ci.append(1)\r\n        \r\n    return a, b, ci, multistep_stages, FSAL\r\n\r\n\r\n\r\ndef gen_first_col_exp(a, b, c, φ):\r\n    for i in range(len(c)): \r\n        a[i][0] = c[i] * φ(1,i+1) - sum(a[i])\r\n    for i in range(len(b)): \r\n        b[i][0] =         φ(1)     - sum(b[i])\r\n    return a, b\r\n\r\ndef rho(j, ci, ck, cl):\r\n    if j == 2:\r\n        numerator = ck*cl\r\n    if j == 3:\r\n        numerator = (-2 * (ck + cl))\r\n    if j == 4:\r\n        numerator = 6\r\n    return numerator / denominator(ci, ck, cl)\r\n    \r\n    \r\ndef mu(j, cd, ci, ck, cl):\r\n    if j == 2:\r\n        numerator = -cd * ck * cl\r\n    if j == 3:\r\n        numerator = 2 * (cd * ck + cd * cl + ck * cl)\r\n    if j == 4:\r\n        numerator = -6 * (cd + ck + cl)\r\n    if j == 
5:\r\n        numerator = 24\r\n    return numerator / denominator(ci, cd, ck, cl)\r\n\r\ndef mu_numerator(j, cd, ci, ck, cl):\r\n    if j == 2:\r\n        numerator = -cd * ck * cl\r\n    if j == 3:\r\n        numerator = 2 * (cd * ck + cd * cl + ck * cl)\r\n    if j == 4:\r\n        numerator = -6 * (cd + ck + cl)\r\n    if j == 5:\r\n        numerator = 24\r\n    return numerator #/ denominator(ci, cd, ck, cl)\r\n\r\n\r\n\r\ndef theta_numerator(j, cd, ci, ck, cj, cl):\r\n    if j == 2:\r\n        numerator = -cj * cd * ck * cl\r\n    if j == 3:\r\n        numerator = 2 * (cj * ck * cd + cj*ck*cl + ck*cd*cl + cd*cl*cj)\r\n    if j == 4:\r\n        numerator = -6*(cj*ck + cj*cd + cj*cl + ck*cd + ck*cl + cd*cl)\r\n    if j == 5:\r\n        numerator = 24 * (cj + ck + cl + cd)\r\n    if j == 6:\r\n        numerator = -120\r\n    return numerator # / denominator(ci, cj, ck, cl, cd)\r\n\r\n\r\ndef theta(j, cd, ci, ck, cj, cl):\r\n    if j == 2:\r\n        numerator = -cj * cd * ck * cl\r\n    if j == 3:\r\n        numerator = 2 * (cj * ck * cd + cj*ck*cl + ck*cd*cl + cd*cl*cj)\r\n    if j == 4:\r\n        numerator = -6*(cj*ck + cj*cd + cj*cl + ck*cd + ck*cl + cd*cl)\r\n    if j == 5:\r\n        numerator = 24 * (cj + ck + cl + cd)\r\n    if j == 6:\r\n        numerator = -120\r\n    return numerator / denominator(ci, cj, ck, cl, cd)\r\n\r\n\r\ndef prod_diff(cj, ck, cl=None, cd=None):\r\n    #divided-difference style denominator: cj * (cj-ck) [* (cj-cl) [* (cj-cd)]]\r\n    if cl is None and cd is None:\r\n        return cj * (cj - ck)\r\n    if cd is None:\r\n        return cj * (cj - ck) * (cj - cl)\r\n    else:\r\n        return cj * (cj - ck) * (cj - cl) * (cj - cd)\r\n\r\ndef denominator(ci, *args):\r\n    result = ci \r\n    for arg in args:\r\n        result *= (ci - arg)\r\n    return result\r\n\r\n\r\n\r\ndef check_condition_4_2(nodes):\r\n    #true when the nodes satisfy the quadrature condition\r\n    #integral of t*(t - c12)*(t - c13)*(t - c14)*(t - c15) over [0, 1] == 0,\r\n    #i.e. e1/5 - e2/4 + e3/3 - e4/2 == 1/6 in the elementary symmetric\r\n    #polynomials e1..e4 of the nodes\r\n    c12, c13, c14, c15 = nodes\r\n\r\n    term_1 = (1 / 5) * (c12 + c13 + c14 + c15)\r\n    term_2 = (1 / 4) * (c12 * c13 + c12 * c14 + c12 * c15 + c13 * c14 + c13 * c15 + c14 * c15)\r\n    term_3 = (1 / 3) * (c12 * c13 * c14 + c12 * c13 * c15 + c12 * c14 * c15 + c13 * c14 * c15)\r\n    term_4 = (1 / 2) * (c12 * c13 * c14 * c15)\r\n\r\n    result = term_1 - term_2 + term_3 - term_4\r\n\r\n    return abs(result - (1 / 6)) < 1e-6  \r\n\r\n
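def _check_fixed_tableau_consistency(rk_type=\"rk4_4s\", tol=1e-9):\r\n    #added sketch, not called anywhere in the original module (the function name is\r\n    #hypothetical): verifies the classical consistency conditions sum(b) == 1 and\r\n    #sum(a[i]) == ci[i] for the fixed tableaus in rk_coeff, the kind of check that\r\n    #catches sign typos in hand-entered coefficients; the runtime-built exponential\r\n    #tableaus from get_rk_methods are not covered here\r\n    a, b, ci = copy.deepcopy(rk_coeff[rk_type])\r\n    a = [row + [0] * (len(ci) - len(row)) for row in a]\r\n    assert abs(sum(b[0]) - 1) < tol, f\"{rk_type}: weights sum to {sum(b[0])}\"\r\n    for i, row in enumerate(a):\r\n        assert abs(sum(row) - ci[i]) < tol, f\"{rk_type}: row {i} sums to {sum(row)}\"\r\n"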
  },
  {
    "path": "legacy/rk_guide_func.py",
    "content": "import torch\r\nimport torch.nn.functional as F\r\nfrom typing import Tuple\r\n\r\nfrom einops import rearrange\r\n\r\nfrom .sigmas import get_sigmas\r\nfrom .latents import hard_light_blend, normalize_latent, initialize_or_scale\r\nfrom .rk_method import RK_Method\r\nfrom .helper import get_extra_options_kv, extra_options_flag, get_cosine_similarity, get_extra_options_list\r\n\r\n\r\nimport itertools\r\n\r\n\r\ndef normalize_inputs(x, y0, y0_inv, guide_mode,  extra_options):\r\n    \r\n    if guide_mode == \"epsilon_guide_mean_std_from_bkg\":\r\n        y0 = normalize_latent(y0, y0_inv)\r\n        \r\n    input_norm = get_extra_options_kv(\"input_norm\", \"\", extra_options)\r\n    input_std = float(get_extra_options_kv(\"input_std\", \"1.0\", extra_options))\r\n    \r\n    if input_norm == \"input_ch_mean_set_std_to\":\r\n        x = normalize_latent(x, set_std=input_std)\r\n\r\n    if input_norm == \"input_ch_set_std_to\":\r\n        x = normalize_latent(x, set_std=input_std, mean=False)\r\n            \r\n    if input_norm == \"input_mean_set_std_to\":\r\n        x = normalize_latent(x, set_std=input_std, channelwise=False)\r\n        \r\n    if input_norm == \"input_std_set_std_to\":\r\n        x = normalize_latent(x, set_std=input_std, mean=False, channelwise=False)\r\n    \r\n    return x, y0, y0_inv\r\n\r\n\r\nclass LatentGuide:\r\n    def __init__(self, guides, x, model, sigmas, UNSAMPLE, LGW_MASK_RESCALE_MIN, extra_options, device='cuda', dtype=torch.float64, max_steps=10000):\r\n        self.model    = model\r\n        self.sigma_min = model.inner_model.inner_model.model_sampling.sigma_min.to(dtype)\r\n        self.sigma_max = model.inner_model.inner_model.model_sampling.sigma_max.to(dtype)\r\n        self.sigmas   = sigmas\r\n        self.UNSAMPLE = UNSAMPLE\r\n        self.SAMPLE = (sigmas[0] > sigmas[1])\r\n        self.extra_options = extra_options\r\n        self.y0     = torch.zeros_like(x)\r\n        self.y0_inv = torch.zeros_like(x)\r\n        self.guide_mode = \"\"\r\n        self.mask = None\r\n        self.mask_inv = None\r\n        \r\n        self.latent_guide = None\r\n        self.latent_guide_inv = None\r\n\r\n        self.lgw_masks = []\r\n        self.lgw_masks_inv = []\r\n        self.lgw, self.lgw_inv = [torch.full_like(sigmas, 0.) 
for _ in range(2)]\r\n        \r\n        self.guide_cossim_cutoff_, self.guide_bkg_cossim_cutoff_ = 1.0, 1.0\r\n                \r\n        latent_guide_weight, latent_guide_weight_inv = 0.,0.\r\n        latent_guide_weights = torch.zeros_like(sigmas)\r\n        latent_guide_weights_inv = torch.zeros_like(sigmas)\r\n        if guides is not None:\r\n            self.guide_mode, latent_guide_weight, latent_guide_weight_inv, latent_guide_weights, latent_guide_weights_inv, self.latent_guide, self.latent_guide_inv, latent_guide_mask, latent_guide_mask_inv, scheduler_, scheduler_inv_, steps_, steps_inv_, denoise_, denoise_inv_ = guides\r\n            \r\n            self.mask, self.mask_inv                                 = latent_guide_mask, latent_guide_mask_inv\r\n            self.guide_cossim_cutoff_, self.guide_bkg_cossim_cutoff_ = denoise_, denoise_inv_\r\n            \r\n            if latent_guide_weights is None:\r\n                latent_guide_weights = get_sigmas(model, scheduler_, steps_, 1.0).to(x.dtype)\r\n            \r\n            if latent_guide_weights_inv is None:\r\n                latent_guide_weights_inv = get_sigmas(model, scheduler_inv_, steps_inv_, 1.0).to(x.dtype)\r\n                \r\n            latent_guide_weights     = initialize_or_scale(latent_guide_weights,     latent_guide_weight,     max_steps).to(dtype)\r\n            latent_guide_weights_inv = initialize_or_scale(latent_guide_weights_inv, latent_guide_weight_inv, max_steps).to(dtype)\r\n                \r\n        latent_guide_weights     = F.pad(latent_guide_weights,     (0, max_steps), value=0.0)\r\n        latent_guide_weights_inv = F.pad(latent_guide_weights_inv, (0, max_steps), value=0.0)\r\n        \r\n        \r\n        if latent_guide_weights is not None:\r\n            self.lgw = latent_guide_weights.to(x.device)\r\n        if latent_guide_weights_inv is not None:\r\n            self.lgw_inv = latent_guide_weights_inv.to(x.device)\r\n            \r\n        self.mask, LGW_MASK_RESCALE_MIN = prepare_mask(x, self.mask, LGW_MASK_RESCALE_MIN)\r\n        if self.mask_inv is not None:\r\n            self.mask_inv, LGW_MASK_RESCALE_MIN = prepare_mask(x, self.mask_inv, LGW_MASK_RESCALE_MIN)\r\n        elif not self.SAMPLE:\r\n            self.mask_inv = (1-self.mask)\r\n            \r\n        for step in range(len(self.sigmas)-1):\r\n            lgw_mask, lgw_mask_inv = prepare_weighted_masks(self.mask, self.mask_inv, self.lgw[step], self.lgw_inv[step], self.latent_guide, self.latent_guide_inv, LGW_MASK_RESCALE_MIN)\r\n            self.lgw_masks.append(lgw_mask)\r\n            self.lgw_masks_inv.append(lgw_mask_inv)\r\n\r\n\r\n    def init_guides(self, x, noise_sampler, latent_guide=None, latent_guide_inv=None):\r\n        self.y0, self.y0_inv = torch.zeros_like(x), torch.zeros_like(x)\r\n        latent_guide = self.latent_guide if latent_guide is None else latent_guide\r\n        latent_guide_inv = self.latent_guide_inv if latent_guide_inv is None else latent_guide_inv\r\n\r\n\r\n        if latent_guide is not None:\r\n            if isinstance(latent_guide, dict):\r\n                latent_guide_samples = self.model.inner_model.inner_model.process_latent_in(latent_guide['samples']).clone().to(x.device)\r\n            else:\r\n                latent_guide_samples = latent_guide\r\n            if self.SAMPLE:\r\n                self.y0 = latent_guide_samples\r\n            elif self.UNSAMPLE: # and self.mask is not None:\r\n            
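    # added note (not in the original): in unsample mode the guide is blended in\r\n                # only where the mask is active; unmasked regions of x pass through unchanged\r\n           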
    x = (1-self.mask) * x + self.mask * latent_guide_samples\r\n            else:\r\n                x = latent_guide_samples\r\n\r\n        if latent_guide_inv is not None:\r\n            if type(latent_guide_inv) == dict:\r\n                latent_guide_inv_samples = self.model.inner_model.inner_model.process_latent_in(latent_guide_inv['samples']).clone().to(x.device)\r\n            else:\r\n                latent_guide_inv_samples = latent_guide_inv\r\n            if self.SAMPLE:\r\n                self.y0_inv = latent_guide_inv_samples\r\n            elif self.UNSAMPLE: # and self.mask is not None:\r\n                x = (1-self.mask_inv) * x + self.mask_inv * latent_guide_inv_samples #fixed old approach, which was mask, (1-mask)\r\n            else:\r\n                x = latent_guide_inv_samples   #THIS COULD LEAD TO WEIRD BEHAVIOR! OVERWRITING X WITH LG_INV AFTER SETTING TO LG above!\r\n                \r\n        if self.UNSAMPLE and not self.SAMPLE: #sigma_next > sigma:\r\n            self.y0 = noise_sampler(sigma=self.sigma_max, sigma_next=self.sigma_min)\r\n            self.y0 = (self.y0 - self.y0.mean()) / self.y0.std()\r\n            self.y0_inv = noise_sampler(sigma=self.sigma_max, sigma_next=self.sigma_min)\r\n            self.y0_inv = (self.y0_inv - self.y0_inv.mean()) / self.y0_inv.std()\r\n            \r\n        x, self.y0, self.y0_inv = normalize_inputs(x, self.y0, self.y0_inv, self.guide_mode, self.extra_options)\r\n\r\n        return x\r\n    \r\n\r\n\r\n    def process_guides_substep(self, x_0, x_, eps_, data_, row, step, sigma, sigma_next, sigma_down, s_, unsample_resample_scale, rk, rk_type, extra_options, frame_weights_grp=None):\r\n\r\n        y0 = self.y0\r\n        if self.y0.shape[0] > 1:\r\n            y0 = self.y0[min(step, self.y0.shape[0]-1)].unsqueeze(0)  \r\n        y0_inv = self.y0_inv\r\n        \r\n        lgw_mask = self.lgw_masks[step].clone()\r\n        lgw_mask_inv = self.lgw_masks_inv[step].clone() if self.lgw_masks_inv is not None else None\r\n        \r\n        lgw = self.lgw[step]\r\n        lgw_inv = self.lgw_inv[step]\r\n        \r\n        latent_guide = self.latent_guide\r\n        latent_guide_inv = self.latent_guide_inv\r\n        guide_mode = self.guide_mode\r\n        UNSAMPLE = self.UNSAMPLE\r\n\r\n        if x_0.dim() == 5 and frame_weights_grp is not None:\r\n            apply_frame_weights(lgw_mask, frame_weights_grp[0])\r\n            apply_frame_weights(lgw_mask_inv, frame_weights_grp[1])\r\n\r\n        if self.guide_mode: \r\n            data_norm   = data_[row] - data_[row].mean(dim=(-2,-1), keepdim=True)\r\n            y0_norm     = y0         -         y0.mean(dim=(-2,-1), keepdim=True)\r\n            y0_inv_norm = y0_inv     -     y0_inv.mean(dim=(-2,-1), keepdim=True)\r\n\r\n            y0_cossim     = get_cosine_similarity(data_norm*lgw_mask,     y0_norm    *lgw_mask)\r\n            y0_cossim_inv = get_cosine_similarity(data_norm*lgw_mask_inv, y0_inv_norm*lgw_mask_inv)\r\n            \r\n            if y0_cossim < self.guide_cossim_cutoff_ or y0_cossim_inv < self.guide_bkg_cossim_cutoff_:\r\n                lgw_mask_cossim, lgw_mask_cossim_inv = lgw_mask, lgw_mask_inv\r\n                if y0_cossim     >= self.guide_cossim_cutoff_:\r\n                    lgw_mask_cossim     = torch.zeros_like(lgw_mask)\r\n                if y0_cossim_inv >= self.guide_bkg_cossim_cutoff_:\r\n                    lgw_mask_cossim_inv = torch.zeros_like(lgw_mask_inv)\r\n                lgw_mask = lgw_mask_cossim\r\n                
lgw_mask_inv = lgw_mask_cossim_inv\r\n            else:\r\n                return eps_, x_ \r\n        else:\r\n            return eps_, x_ \r\n        \r\n        if self.UNSAMPLE and RK_Method.is_exponential(rk_type):\r\n            if not (extra_options_flag(\"disable_power_unsample\", extra_options) or extra_options_flag(\"disable_power_resample\", extra_options)):\r\n                extra_options += \"\\npower_unsample\\npower_resample\\n\"\r\n            if not extra_options_flag(\"disable_lgw_scaling_substep_ch_mean_std\", extra_options):\r\n                extra_options += \"\\nsubstep_eps_ch_mean_std\\n\"\r\n                \r\n\r\n\r\n        s_in = x_0.new_ones([x_0.shape[0]])\r\n        eps_orig = eps_.clone()\r\n        \r\n        if extra_options_flag(\"dynamic_guides_mean_std\", extra_options):\r\n            y_shift, y_inv_shift = normalize_latent([y0, y0_inv], [data_, data_])\r\n            y0 = y_shift\r\n            if extra_options_flag(\"dynamic_guides_inv\", extra_options):\r\n                y0_inv = y_inv_shift\r\n\r\n        if extra_options_flag(\"dynamic_guides_mean\", extra_options):\r\n            y_shift, y_inv_shift = normalize_latent([y0, y0_inv], [data_, data_], std=False)\r\n            y0 = y_shift\r\n            if extra_options_flag(\"dynamic_guides_inv\", extra_options):\r\n                y0_inv = y_inv_shift\r\n\r\n        if \"data\" == guide_mode:\r\n            y0_tmp = y0.clone()\r\n            if latent_guide_inv is not None:\r\n                y0_tmp = (1-lgw_mask) * data_[row] + lgw_mask * y0\r\n                y0_tmp = (1-lgw_mask_inv) * y0_tmp + lgw_mask_inv * y0_inv\r\n            x_[row+1] = y0_tmp + eps_[row]\r\n            \r\n        if guide_mode == \"data_projection\":\r\n\r\n            d_lerp = data_[row]   +   lgw_mask * (y0-data_[row])   +   lgw_mask_inv * (y0_inv-data_[row])\r\n            \r\n            d_collinear_d_lerp = get_collinear(data_[row], d_lerp)  \r\n            d_lerp_ortho_d     = get_orthogonal(d_lerp, data_[row])  \r\n            \r\n            data_[row] = d_collinear_d_lerp + d_lerp_ortho_d\r\n            \r\n            x_[row+1] = data_[row] + eps_[row] * sigma\r\n            \r\n\r\n        elif \"epsilon\" in guide_mode:\r\n            if sigma > sigma_next:\r\n                    \r\n                tol_value = float(get_extra_options_kv(\"tol\", \"-1.0\", extra_options))\r\n                if tol_value >= 0 and (lgw > 0 or lgw_inv > 0):           \r\n                    for b, c in itertools.product(range(x_0.shape[0]), range(x_0.shape[1])):\r\n                        current_diff     = torch.norm(data_[row][b][c] - y0    [b][c])\r\n                        current_diff_inv = torch.norm(data_[row][b][c] - y0_inv[b][c])\r\n                        \r\n                        lgw_scaled     = torch.nan_to_num(1-(tol_value/current_diff),     0)\r\n                        lgw_scaled_inv = torch.nan_to_num(1-(tol_value/current_diff_inv), 0)\r\n                        \r\n                        lgw_tmp     = min(lgw    , lgw_scaled)\r\n                        lgw_tmp_inv = min(lgw_inv, lgw_scaled_inv)\r\n\r\n                        lgw_mask_clamp     = torch.clamp(lgw_mask,     max=lgw_tmp)\r\n                        lgw_mask_clamp_inv = torch.clamp(lgw_mask_inv, max=lgw_tmp_inv)\r\n                        \r\n                        eps_row, eps_row_inv = get_guide_epsilon_substep(x_0, x_, y0, y0_inv, s_, row, rk_type, b, c)\r\n                        eps_[row][b][c] = eps_[row][b][c] + lgw_mask_clamp[b][c] 
* (eps_row - eps_[row][b][c]) + lgw_mask_clamp_inv[b][c] * (eps_row_inv - eps_[row][b][c])\r\n\r\n\r\n                elif guide_mode == \"epsilon_projection\":\r\n                    eps_row, eps_row_inv = get_guide_epsilon_substep(x_0, x_, y0, y0_inv, s_, row, rk_type)\r\n                    \r\n                    if extra_options_flag(\"eps_proj_v2\", extra_options):\r\n                        \r\n                        eps_row_lerp_fg = eps_[row]   +   lgw_mask * (eps_row-eps_[row])\r\n                        eps_row_lerp_bg = eps_[row]   +   lgw_mask_inv * (eps_row_inv-eps_[row])\r\n                        \r\n                        eps_collinear_eps_lerp_fg = get_collinear(eps_[row], eps_row_lerp_fg)  \r\n                        eps_lerp_ortho_eps_fg     = get_orthogonal(eps_row_lerp_fg, eps_[row])  \r\n                        \r\n                        eps_collinear_eps_lerp_bg = get_collinear(eps_[row], eps_row_lerp_bg)  \r\n                        eps_lerp_ortho_eps_bg     = get_orthogonal(eps_row_lerp_bg, eps_[row])  \r\n                        \r\n                        eps_[row] = eps_[row] + lgw_mask * (eps_collinear_eps_lerp_fg + eps_lerp_ortho_eps_fg - eps_[row]) + lgw_mask_inv * (eps_collinear_eps_lerp_bg + eps_lerp_ortho_eps_bg - eps_[row]) \r\n                        \r\n                    elif extra_options_flag(\"eps_proj_v3\", extra_options):\r\n\r\n                        eps_collinear_eps_lerp_fg = get_collinear(eps_[row], eps_row)  \r\n                        eps_lerp_ortho_eps_fg     = get_orthogonal(eps_row, eps_[row])  \r\n                        \r\n                        eps_collinear_eps_lerp_bg = get_collinear(eps_[row], eps_row_inv)  \r\n                        eps_lerp_ortho_eps_bg     = get_orthogonal(eps_row_inv, eps_[row])  \r\n                        \r\n                        eps_[row] = eps_[row] + lgw_mask * (eps_collinear_eps_lerp_fg + eps_lerp_ortho_eps_fg - eps_[row]) + lgw_mask_inv * (eps_collinear_eps_lerp_bg + eps_lerp_ortho_eps_bg - eps_[row]) \r\n                       \r\n                    elif extra_options_flag(\"eps_proj_v5\", extra_options):\r\n\r\n                        eps2g_collin = get_collinear(eps_[row], eps_row)  \r\n                        g2eps_ortho  = get_orthogonal(eps_row, eps_[row])  \r\n                        \r\n                        g2eps_collin = get_collinear(eps_row, eps_[row])  \r\n                        eps2g_ortho  = get_orthogonal(eps_[row], eps_row)  \r\n                        \r\n                        eps2i_collin = get_collinear(eps_[row], eps_row_inv)  \r\n                        i2eps_ortho  = get_orthogonal(eps_row_inv, eps_[row])  \r\n                        \r\n                        i2eps_collin = get_collinear(eps_row_inv, eps_[row])  \r\n                        eps2i_ortho  = get_orthogonal(eps_[row], eps_row_inv)  \r\n                        \r\n                        #eps_[row] = (eps2g_collin+g2eps_ortho)   +   (g2eps_collin+eps2g_ortho)       +       (eps2i_collin+i2eps_ortho)   +   (i2eps_collin+eps2i_ortho)\r\n                        #eps_[row] = eps_[row] + lgw_mask * (eps2g_collin+g2eps_ortho)   +   (1-lgw_mask) * (g2eps_collin+eps2g_ortho)       +      lgw_mask_inv * (eps2i_collin+i2eps_ortho)   +   (1-lgw_mask_inv) * (i2eps_collin+eps2i_ortho)\r\n\r\n                        eps_[row] = lgw_mask * (eps2g_collin+g2eps_ortho)   -   lgw_mask * (g2eps_collin+eps2g_ortho)       +      lgw_mask_inv * (eps2i_collin+i2eps_ortho)   -   lgw_mask_inv * (i2eps_collin+eps2i_ortho)\r\n    
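\r\n                        # added note (not in the original): the eps_proj_* variants all\r\n                        # decompose a guide-shifted epsilon into components collinear with\r\n                        # and orthogonal to the current eps_[row] (via get_collinear /\r\n                        # get_orthogonal) and then recombine them; the variants differ only\r\n                        # in how the fg/bg masks weight that recombination.\r\n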
                    \r\n                        #eps_[row] = eps_[row] + lgw_mask * (eps_collinear_eps_lerp_fg + eps_lerp_ortho_eps_fg - eps_[row]) + lgw_mask_inv * (eps_collinear_eps_lerp_bg + eps_lerp_ortho_eps_bg - eps_[row]) \r\n                       \r\n                       \r\n                    elif extra_options_flag(\"eps_proj_v4a\", extra_options):\r\n                        eps_row_lerp = eps_[row]   +   lgw_mask * (eps_row-eps_[row])   +   lgw_mask_inv * (eps_row_inv-eps_[row])\r\n\r\n                        eps_collinear_eps_lerp = get_collinear(eps_[row], eps_row_lerp)\r\n                        eps_lerp_ortho_eps     = get_orthogonal(eps_row_lerp, eps_[row])\r\n\r\n                        eps_[row] = (1 - torch.clamp(lgw_mask + lgw_mask_inv, max=1.0)) * eps_[row]   +   torch.clamp((lgw_mask + lgw_mask_inv), max=1.0) * (eps_collinear_eps_lerp + eps_lerp_ortho_eps)\r\n\r\n\r\n                    elif extra_options_flag(\"eps_proj_v4b\", extra_options):\r\n                        eps_row_lerp = eps_[row]   +   lgw_mask * (eps_row-eps_[row])   +   lgw_mask_inv * (eps_row_inv-eps_[row])\r\n\r\n                        eps_collinear_eps_lerp = get_collinear(eps_[row], eps_row_lerp)\r\n                        eps_lerp_ortho_eps     = get_orthogonal(eps_row_lerp, eps_[row])\r\n\r\n                        eps_[row] = (1 - (lgw_mask + lgw_mask_inv)/2) * eps_[row]   +   ((lgw_mask + lgw_mask_inv)/2) * (eps_collinear_eps_lerp + eps_lerp_ortho_eps)\r\n\r\n                    elif extra_options_flag(\"eps_proj_v4c\", extra_options):\r\n                        eps_row_lerp = eps_[row]   +   lgw_mask * (eps_row-eps_[row])   +   lgw_mask_inv * (eps_row_inv-eps_[row])\r\n\r\n                        eps_collinear_eps_lerp = get_collinear(eps_[row], eps_row_lerp)\r\n                        eps_lerp_ortho_eps     = get_orthogonal(eps_row_lerp, eps_[row])\r\n\r\n                        lgw_mask_sum = (lgw_mask + lgw_mask_inv)\r\n\r\n\r\n                        eps_[row] = (1 - (lgw_mask + lgw_mask_inv)/2) * eps_[row]   +   ((lgw_mask + lgw_mask_inv)/2) * (eps_collinear_eps_lerp + eps_lerp_ortho_eps)\r\n\r\n                    elif extra_options_flag(\"eps_proj_v4e\", extra_options):\r\n                        eps_row_lerp = eps_[row]   +   lgw_mask * (eps_row-eps_[row])   +   lgw_mask_inv * (eps_row_inv-eps_[row])\r\n\r\n                        eps_collinear_eps_lerp = get_collinear(eps_[row], eps_row_lerp)\r\n                        eps_lerp_ortho_eps     = get_orthogonal(eps_row_lerp, eps_[row])\r\n\r\n                        eps_sum = eps_collinear_eps_lerp + eps_lerp_ortho_eps\r\n\r\n                        eps_[row] = eps_[row] + self.mask * (eps_sum - eps_[row]) + self.mask_inv * (eps_sum - eps_[row])\r\n\r\n                    elif extra_options_flag(\"eps_proj_self1\", extra_options):\r\n                        eps_row_lerp = eps_[row]   +   self.mask * (eps_row-eps_[row])   +   self.mask_inv * (eps_row_inv-eps_[row])\r\n\r\n                        eps_collinear_eps_lerp = get_collinear(eps_[row], eps_[row])\r\n                        eps_lerp_ortho_eps     = get_orthogonal(eps_[row], eps_[row])\r\n\r\n                        eps_[row] = eps_collinear_eps_lerp + eps_lerp_ortho_eps\r\n\r\n                    elif extra_options_flag(\"eps_proj_v4z\", extra_options):\r\n                        eps_row_lerp = eps_[row]   +   self.mask * (eps_row-eps_[row])   +   self.mask_inv * (eps_row_inv-eps_[row])\r\n\r\n                        eps_collinear_eps_lerp = get_collinear(eps_[row], 
eps_row_lerp)\r\n                        eps_lerp_ortho_eps     = get_orthogonal(eps_row_lerp, eps_[row])\r\n\r\n                        peak = max(lgw, lgw_inv)\r\n                        lgw_mask_sum = (lgw_mask + lgw_mask_inv)\r\n\r\n                        eps_sum = eps_collinear_eps_lerp + eps_lerp_ortho_eps\r\n                        #NOT FINISHED!!!\r\n                        #eps_[row] = eps_[row] + lgw_mask * (eps_sum - eps_[row]) + lgw_mask_inv * (eps_sum - eps_[row])\r\n\r\n                    elif extra_options_flag(\"eps_proj_v5\", extra_options):\r\n                        eps_row_lerp = eps_[row]   +   lgw_mask * (eps_row-eps_[row])   +   lgw_mask_inv * (eps_row_inv-eps_[row])\r\n\r\n                        eps_collinear_eps_lerp = get_collinear(eps_[row], eps_row_lerp)\r\n                        eps_lerp_ortho_eps     = get_orthogonal(eps_row_lerp, eps_[row])\r\n\r\n                        eps_[row] = ((lgw_mask + lgw_mask_inv)==0) * eps_[row]   +   ((lgw_mask + lgw_mask_inv)>0) * (eps_collinear_eps_lerp + eps_lerp_ortho_eps)\r\n\r\n                    elif extra_options_flag(\"eps_proj_v6\", extra_options):\r\n                        eps_row_lerp = eps_[row]   +   lgw_mask * (eps_row-eps_[row])   +   lgw_mask_inv * (eps_row_inv-eps_[row])\r\n\r\n                        eps_collinear_eps_lerp = get_collinear(eps_[row], eps_row_lerp)\r\n                        eps_lerp_ortho_eps     = get_orthogonal(eps_row_lerp, eps_[row])\r\n\r\n                        eps_[row] = ((lgw_mask * lgw_mask_inv)==0) * eps_[row]   +   ((lgw_mask * lgw_mask_inv)>0) * (eps_collinear_eps_lerp + eps_lerp_ortho_eps)\r\n\r\n\r\n                    elif extra_options_flag(\"eps_proj_old_default\", extra_options):\r\n                        eps_row_lerp = eps_[row]   +   lgw_mask * (eps_row-eps_[row])   +   lgw_mask_inv * (eps_row_inv-eps_[row])\r\n                        #eps_row_lerp = eps_[row]   +   lgw_mask * (eps_row-eps_[row])   +   (1-lgw_mask) * (eps_row_inv-eps_[row])\r\n                        \r\n                        eps_collinear_eps_lerp = get_collinear(eps_[row], eps_row_lerp)  \r\n                        eps_lerp_ortho_eps     = get_orthogonal(eps_row_lerp, eps_[row])  \r\n                        \r\n                        eps_[row] = eps_collinear_eps_lerp + eps_lerp_ortho_eps\r\n                        \r\n                    else: #elif extra_options_flag(\"eps_proj_v4d\", extra_options):\r\n                        #if row > 0:\r\n                            #lgw_mask_factor = float(get_extra_options_kv(\"substep_lgw_mask_factor\", \"1.0\", extra_options))\r\n                            #lgw_mask_inv_factor = float(get_extra_options_kv(\"substep_lgw_mask_inv_factor\", \"1.0\", extra_options))\r\n                        lgw_mask_factor = 1\r\n                        if extra_options_flag(\"substep_eps_proj_scaling\", extra_options):\r\n                            lgw_mask_factor = 1/(row+1)\r\n                            \r\n                        if extra_options_flag(\"substep_eps_proj_factors\", extra_options):\r\n                            #value_str = get_extra_options_list(\"substep_eps_proj_factors\", \"\", extra_options)\r\n                            #float_list = [float(item.strip()) for item in value_str.split(',') if item.strip()]\r\n                            float_list = get_extra_options_list(\"substep_eps_proj_factors\", \"\", extra_options, ret_type=float)\r\n                            lgw_mask_factor = float_list[row]\r\n                        \r\n            
            eps_row_lerp = eps_[row]   +   self.mask * (eps_row-eps_[row])   +   (1-self.mask) * (eps_row_inv-eps_[row])\r\n\r\n                        eps_collinear_eps_lerp = get_collinear(eps_[row], eps_row_lerp)\r\n                        eps_lerp_ortho_eps     = get_orthogonal(eps_row_lerp, eps_[row])\r\n\r\n                        eps_sum = eps_collinear_eps_lerp + eps_lerp_ortho_eps\r\n\r\n                        eps_[row] = eps_[row] + lgw_mask_factor*lgw_mask * (eps_sum - eps_[row]) + lgw_mask_factor*lgw_mask_inv * (eps_sum - eps_[row])\r\n\r\n\r\n\r\n                elif extra_options_flag(\"disable_lgw_scaling\", extra_options):\r\n                    eps_row, eps_row_inv = get_guide_epsilon_substep(x_0, x_, y0, y0_inv, s_, row, rk_type)\r\n                    eps_[row] = eps_[row]      +     lgw_mask * (eps_row - eps_[row])    +    lgw_mask_inv * (eps_row_inv - eps_[row])\r\n                    \r\n\r\n                elif (lgw > 0 or lgw_inv > 0): # default old channelwise epsilon\r\n                    avg, avg_inv = 0, 0\r\n                    for b, c in itertools.product(range(x_0.shape[0]), range(x_0.shape[1])):\r\n                        avg     += torch.norm(data_[row][b][c] - y0    [b][c])\r\n                        avg_inv += torch.norm(data_[row][b][c] - y0_inv[b][c])\r\n                    avg     /= x_0.shape[1]\r\n                    avg_inv /= x_0.shape[1]\r\n                    \r\n                    for b, c in itertools.product(range(x_0.shape[0]), range(x_0.shape[1])):\r\n                        ratio     = torch.nan_to_num(torch.norm(data_[row][b][c] - y0    [b][c])   /   avg,     0)\r\n                        ratio_inv = torch.nan_to_num(torch.norm(data_[row][b][c] - y0_inv[b][c])   /   avg_inv, 0)\r\n                        \r\n                        eps_row, eps_row_inv = get_guide_epsilon_substep(x_0, x_, y0, y0_inv, s_, row, rk_type, b, c)\r\n                        eps_[row][b][c] = eps_[row][b][c]      +     ratio * lgw_mask[b][c] * (eps_row - eps_[row][b][c])    +    ratio_inv * lgw_mask_inv[b][c] * (eps_row_inv - eps_[row][b][c])\r\n                        \r\n                temporal_smoothing = float(get_extra_options_kv(\"temporal_smoothing\", \"0.0\", extra_options))\r\n                if temporal_smoothing > 0:\r\n                    eps_[row] = apply_temporal_smoothing(eps_[row], temporal_smoothing)\r\n                \r\n\r\n\r\n\r\n        elif (UNSAMPLE or guide_mode in {\"resample\", \"unsample\"}) and (lgw > 0 or lgw_inv > 0):\r\n                \r\n            cvf = rk.get_epsilon(x_0, x_[row+1], y0, sigma, s_[row], sigma_down, unsample_resample_scale, extra_options)\r\n            if UNSAMPLE and sigma > sigma_next and latent_guide_inv is not None:\r\n                cvf_inv = rk.get_epsilon(x_0, x_[row+1], y0_inv, sigma, s_[row], sigma_down, unsample_resample_scale, extra_options)      \r\n            else:\r\n                cvf_inv = torch.zeros_like(cvf)\r\n\r\n            tol_value = float(get_extra_options_kv(\"tol\", \"-1.0\", extra_options))\r\n            if tol_value >= 0:\r\n                for b, c in itertools.product(range(x_0.shape[0]), range(x_0.shape[1])):\r\n                    current_diff     = torch.norm(data_[row][b][c] - y0    [b][c]) \r\n                    current_diff_inv = torch.norm(data_[row][b][c] - y0_inv[b][c]) \r\n                    \r\n                    lgw_scaled     = torch.nan_to_num(1-(tol_value/current_diff),     0)\r\n                    lgw_scaled_inv = 
torch.nan_to_num(1-(tol_value/current_diff_inv), 0)\r\n                    \r\n                    lgw_tmp     = min(lgw    , lgw_scaled)\r\n                    lgw_tmp_inv = min(lgw_inv, lgw_scaled_inv)\r\n\r\n                    lgw_mask_clamp     = torch.clamp(lgw_mask,     max=lgw_tmp)\r\n                    lgw_mask_clamp_inv = torch.clamp(lgw_mask_inv, max=lgw_tmp_inv)\r\n\r\n                    eps_[row][b][c] = eps_[row][b][c] + lgw_mask_clamp[b][c] * (cvf[b][c] - eps_[row][b][c]) + lgw_mask_clamp_inv[b][c] * (cvf_inv[b][c] - eps_[row][b][c])\r\n                    \r\n            elif extra_options_flag(\"disable_lgw_scaling\", extra_options):\r\n                eps_[row] = eps_[row] + lgw_mask * (cvf - eps_[row]) + lgw_mask_inv * (cvf_inv - eps_[row])\r\n                \r\n            else:\r\n                avg, avg_inv = 0, 0\r\n                for b, c in itertools.product(range(x_0.shape[0]), range(x_0.shape[1])):\r\n                    avg     += torch.norm(lgw_mask    [b][c] * data_[row][b][c]   -   lgw_mask    [b][c] * y0    [b][c])\r\n                    avg_inv += torch.norm(lgw_mask_inv[b][c] * data_[row][b][c]   -   lgw_mask_inv[b][c] * y0_inv[b][c])\r\n                avg     /= x_0.shape[1]\r\n                avg_inv /= x_0.shape[1]\r\n                \r\n                for b, c in itertools.product(range(x_0.shape[0]), range(x_0.shape[1])):\r\n                    ratio     = torch.nan_to_num(torch.norm(lgw_mask    [b][c] * data_[row][b][c] - lgw_mask    [b][c] * y0    [b][c])   /   avg,     0)\r\n                    ratio_inv = torch.nan_to_num(torch.norm(lgw_mask_inv[b][c] * data_[row][b][c] - lgw_mask_inv[b][c] * y0_inv[b][c])   /   avg_inv, 0)\r\n                            \r\n                    eps_[row][b][c] = eps_[row][b][c]      +     ratio * lgw_mask[b][c] * (cvf[b][c] - eps_[row][b][c])    +    ratio_inv * lgw_mask_inv[b][c] * (cvf_inv[b][c] - eps_[row][b][c])\r\n                    \r\n        if extra_options_flag(\"substep_eps_ch_mean_std\", extra_options):\r\n            eps_[row] = normalize_latent(eps_[row], eps_orig[row])\r\n        if extra_options_flag(\"substep_eps_ch_mean\", extra_options):\r\n            eps_[row] = normalize_latent(eps_[row], eps_orig[row], std=False)\r\n        if extra_options_flag(\"substep_eps_ch_std\", extra_options):\r\n            eps_[row] = normalize_latent(eps_[row], eps_orig[row], mean=False)\r\n        if extra_options_flag(\"substep_eps_mean_std\", extra_options):\r\n            eps_[row] = normalize_latent(eps_[row], eps_orig[row], channelwise=False)\r\n        if extra_options_flag(\"substep_eps_mean\", extra_options):\r\n            eps_[row] = normalize_latent(eps_[row], eps_orig[row], std=False, channelwise=False)\r\n        if extra_options_flag(\"substep_eps_std\", extra_options):\r\n            eps_[row] = normalize_latent(eps_[row], eps_orig[row], mean=False, channelwise=False)\r\n        return eps_, x_\r\n\r\n\r\n\r\n    @torch.no_grad\r\n    def process_guides_poststep(self, x, denoised, eps, step, extra_options):\r\n        x_orig = x.clone()\r\n        mean_weight = float(get_extra_options_kv(\"mean_weight\", \"0.01\", extra_options))\r\n        \r\n        y0 = self.y0\r\n        if self.y0.shape[0] > 1:\r\n            y0 = self.y0[min(step, self.y0.shape[0]-1)].unsqueeze(0)  \r\n        y0_inv = self.y0_inv\r\n        \r\n        lgw_mask = self.lgw_masks[step].clone()\r\n        lgw_mask_inv = self.lgw_masks_inv[step].clone() if self.lgw_masks_inv is not None else None\r\n        mask = 
self.mask #needed for bitwise mask below\r\n        \r\n        lgw = self.lgw[step]\r\n        lgw_inv = self.lgw_inv[step]\r\n        \r\n        latent_guide = self.latent_guide\r\n        latent_guide_inv = self.latent_guide_inv\r\n        guide_mode = self.guide_mode\r\n        UNSAMPLE = self.UNSAMPLE\r\n        \r\n        if self.guide_mode: \r\n            data_norm   = denoised - denoised.mean(dim=(-2,-1), keepdim=True)\r\n            y0_norm     = y0         -         y0.mean(dim=(-2,-1), keepdim=True)\r\n            y0_inv_norm = y0_inv     -     y0_inv.mean(dim=(-2,-1), keepdim=True)\r\n\r\n            y0_cossim     = get_cosine_similarity(data_norm*lgw_mask,     y0_norm    *lgw_mask)\r\n            y0_cossim_inv = get_cosine_similarity(data_norm*lgw_mask_inv, y0_inv_norm*lgw_mask_inv)\r\n            \r\n            if y0_cossim < self.guide_cossim_cutoff_ or y0_cossim_inv < self.guide_bkg_cossim_cutoff_:\r\n                lgw_mask_cossim, lgw_mask_cossim_inv = lgw_mask, lgw_mask_inv\r\n                if y0_cossim     >= self.guide_cossim_cutoff_:\r\n                    lgw_mask_cossim     = torch.zeros_like(lgw_mask)\r\n                if y0_cossim_inv >= self.guide_bkg_cossim_cutoff_:\r\n                    lgw_mask_cossim_inv = torch.zeros_like(lgw_mask_inv)\r\n                lgw_mask = lgw_mask_cossim\r\n                lgw_mask_inv = lgw_mask_cossim_inv\r\n            else:\r\n                return x\r\n        \r\n        if guide_mode in {\"epsilon_dynamic_mean_std\", \"epsilon_dynamic_mean\", \"epsilon_dynamic_std\", \"epsilon_dynamic_mean_from_bkg\"}:\r\n        \r\n            denoised_masked     = denoised * ((mask==1)*mask)\r\n            denoised_masked_inv = denoised * ((mask==0)*(1-mask))\r\n            \r\n            \r\n            d_shift, d_shift_inv = torch.zeros_like(x), torch.zeros_like(x)\r\n            \r\n            for b, c in itertools.product(range(x.shape[0]), range(x.shape[1])):\r\n                denoised_mask     = denoised[b][c][mask[b][c] == 1]\r\n                denoised_mask_inv = denoised[b][c][mask[b][c] == 0]\r\n                \r\n                if guide_mode == \"epsilon_dynamic_mean_std\":\r\n                    d_shift[b][c] = (denoised_masked[b][c] - denoised_mask.mean()) / denoised_mask.std()\r\n                    d_shift[b][c] = (d_shift[b][c] * denoised_mask_inv.std()) + denoised_mask_inv.mean()\r\n                    \r\n                elif guide_mode == \"epsilon_dynamic_mean\":\r\n                    d_shift[b][c]     = denoised_masked[b][c]     - denoised_mask.mean()     + denoised_mask_inv.mean()\r\n                    d_shift_inv[b][c] = denoised_masked_inv[b][c] - denoised_mask_inv.mean() + denoised_mask.mean()\r\n\r\n                elif guide_mode == \"epsilon_dynamic_mean_from_bkg\":\r\n                    d_shift[b][c] = denoised_masked[b][c] - denoised_mask.mean() + denoised_mask_inv.mean()\r\n\r\n            if guide_mode in {\"epsilon_dynamic_mean_std\", \"epsilon_dynamic_mean_from_bkg\"}:\r\n                denoised_shifted = denoised   +   mean_weight * lgw_mask * (d_shift - denoised_masked) \r\n            elif guide_mode == \"epsilon_dynamic_mean\":\r\n                denoised_shifted = denoised   +   mean_weight * lgw_mask * (d_shift - denoised_masked)   +   mean_weight * lgw_mask_inv * (d_shift_inv - denoised_masked_inv)\r\n                \r\n            x = denoised_shifted + eps\r\n        \r\n        \r\n        if UNSAMPLE == False and (latent_guide is not None or latent_guide_inv is not None) and 
guide_mode in (\"hard_light\", \"blend\", \"blend_projection\", \"mean_std\", \"mean\", \"mean_tiled\", \"std\"):\r\n            if guide_mode == \"hard_light\":\r\n                d_shift, d_shift_inv = hard_light_blend(y0, denoised), hard_light_blend(y0_inv, denoised)\r\n            elif guide_mode == \"blend\":\r\n                d_shift, d_shift_inv = y0, y0_inv\r\n                \r\n            elif guide_mode == \"blend_projection\":\r\n                #d_shift     = get_collinear(denoised, y0) \r\n                #d_shift_inv = get_collinear(denoised, y0_inv) \r\n                \r\n                d_lerp = denoised   +   lgw_mask * (y0-denoised)   +   lgw_mask_inv * (y0_inv-denoised)\r\n                \r\n                d_collinear_d_lerp = get_collinear(denoised, d_lerp)  \r\n                d_lerp_ortho_d     = get_orthogonal(d_lerp, denoised)  \r\n                \r\n                denoised_shifted = d_collinear_d_lerp + d_lerp_ortho_d\r\n                x = denoised_shifted + eps\r\n                return x\r\n\r\n\r\n            elif guide_mode == \"mean_std\":\r\n                d_shift, d_shift_inv = normalize_latent([denoised, denoised], [y0, y0_inv])\r\n            elif guide_mode == \"mean\":\r\n                d_shift, d_shift_inv = normalize_latent([denoised, denoised], [y0, y0_inv], std=False)\r\n            elif guide_mode == \"std\":\r\n                d_shift, d_shift_inv = normalize_latent([denoised, denoised], [y0, y0_inv], mean=False)\r\n            elif guide_mode == \"mean_tiled\":\r\n                mean_tile_size = int(get_extra_options_kv(\"mean_tile\", \"8\", extra_options))\r\n                y0_tiled       = rearrange(y0,       \"b c (h t1) (w t2) -> (t1 t2) b c h w\", t1=mean_tile_size, t2=mean_tile_size)\r\n                y0_inv_tiled   = rearrange(y0_inv,   \"b c (h t1) (w t2) -> (t1 t2) b c h w\", t1=mean_tile_size, t2=mean_tile_size)\r\n                denoised_tiled = rearrange(denoised, \"b c (h t1) (w t2) -> (t1 t2) b c h w\", t1=mean_tile_size, t2=mean_tile_size)\r\n                d_shift_tiled, d_shift_inv_tiled = torch.zeros_like(y0_tiled), torch.zeros_like(y0_tiled)\r\n                for i in range(y0_tiled.shape[0]):\r\n                    d_shift_tiled[i], d_shift_inv_tiled[i] = normalize_latent([denoised_tiled[i], denoised_tiled[i]], [y0_tiled[i], y0_inv_tiled[i]], std=False)\r\n                d_shift     = rearrange(d_shift_tiled,     \"(t1 t2) b c h w -> b c (h t1) (w t2)\", t1=mean_tile_size, t2=mean_tile_size)\r\n                d_shift_inv = rearrange(d_shift_inv_tiled, \"(t1 t2) b c h w -> b c (h t1) (w t2)\", t1=mean_tile_size, t2=mean_tile_size)\r\n\r\n\r\n            if guide_mode in (\"hard_light\", \"blend\", \"mean_std\", \"mean\", \"mean_tiled\", \"std\"):\r\n                if latent_guide_inv is None:\r\n                    denoised_shifted = denoised   +   lgw_mask * (d_shift - denoised)\r\n                else:\r\n                    denoised_shifted = denoised   +   lgw_mask * (d_shift - denoised)   +   lgw_mask_inv * (d_shift_inv - denoised)\r\n            \r\n                if extra_options_flag(\"poststep_denoised_ch_mean_std\", extra_options):\r\n                    denoised_shifted = normalize_latent(denoised_shifted, denoised)\r\n                if extra_options_flag(\"poststep_denoised_ch_mean\", extra_options):\r\n                    denoised_shifted = normalize_latent(denoised_shifted, denoised, std=False)\r\n                if extra_options_flag(\"poststep_denoised_ch_std\", extra_options):\r\n           
         denoised_shifted = normalize_latent(denoised_shifted, denoised, mean=False)\r\n                if extra_options_flag(\"poststep_denoised_mean_std\", extra_options):\r\n                    denoised_shifted = normalize_latent(denoised_shifted, denoised, channelwise=False)\r\n                if extra_options_flag(\"poststep_denoised_mean\", extra_options):\r\n                    denoised_shifted = normalize_latent(denoised_shifted, denoised, std=False, channelwise=False)\r\n                if extra_options_flag(\"poststep_denoised_std\", extra_options):\r\n                    denoised_shifted = normalize_latent(denoised_shifted, denoised, mean=False, channelwise=False)\r\n\r\n                x = denoised_shifted + eps\r\n\r\n        if extra_options_flag(\"poststep_x_ch_mean_std\", extra_options):\r\n            x = normalize_latent(x, x_orig)\r\n        if extra_options_flag(\"poststep_x_ch_mean\", extra_options):\r\n            x = normalize_latent(x, x_orig, std=False)\r\n        if extra_options_flag(\"poststep_x_ch_std\", extra_options):\r\n            x = normalize_latent(x, x_orig, mean=False)\r\n        if extra_options_flag(\"poststep_x_mean_std\", extra_options):\r\n            x = normalize_latent(x, x_orig, channelwise=False)\r\n        if extra_options_flag(\"poststep_x_mean\", extra_options):\r\n            x = normalize_latent(x, x_orig, std=False, channelwise=False)\r\n        if extra_options_flag(\"poststep_x_std\", extra_options):\r\n            x = normalize_latent(x, x_orig, mean=False, channelwise=False)\r\n        return x\r\n\r\n\r\n\r\ndef apply_frame_weights(mask, frame_weights):\r\n    if frame_weights is not None:\r\n        for f in range(mask.shape[2]):\r\n            frame_weight = frame_weights[f]\r\n            mask[..., f:f+1, :, :] *= frame_weight\r\n\r\n\r\n\r\ndef prepare_mask(x, mask, LGW_MASK_RESCALE_MIN) -> Tuple[torch.Tensor, bool]:\r\n    if mask is None:\r\n        mask = torch.ones_like(x)\r\n        LGW_MASK_RESCALE_MIN = False\r\n        return mask, LGW_MASK_RESCALE_MIN\r\n    \r\n    spatial_mask = mask.unsqueeze(1)\r\n    target_height = x.shape[-2]\r\n    target_width = x.shape[-1]\r\n    spatial_mask = F.interpolate(spatial_mask, size=(target_height, target_width), mode='bilinear', align_corners=False)\r\n\r\n    while spatial_mask.dim() < x.dim():\r\n        spatial_mask = spatial_mask.unsqueeze(2)\r\n    \r\n    repeat_shape = [1] #batch\r\n    for i in range(1, x.dim() - 2):\r\n        repeat_shape.append(x.shape[i])\r\n    repeat_shape.extend([1, 1]) #height and width\r\n\r\n    mask = spatial_mask.repeat(*repeat_shape).to(x.dtype).to(x.device)\r\n    \r\n    del spatial_mask\r\n    return mask, LGW_MASK_RESCALE_MIN\r\n    \r\ndef prepare_weighted_masks(mask, mask_inv, lgw_, lgw_inv_, latent_guide, latent_guide_inv, LGW_MASK_RESCALE_MIN):\r\n    if LGW_MASK_RESCALE_MIN: \r\n        lgw_mask     =    mask  * (1-lgw_) + lgw_\r\n        lgw_mask_inv = (1-mask) * (1-lgw_inv_) + lgw_inv_\r\n    else:\r\n        if latent_guide is not None:\r\n            lgw_mask = mask * lgw_\r\n        else:\r\n            lgw_mask = torch.zeros_like(mask)\r\n        if latent_guide_inv is not None:\r\n            if mask_inv is not None:\r\n                lgw_mask_inv = torch.minimum(1-mask_inv, (1-mask) * lgw_inv_)\r\n            else:\r\n                lgw_mask_inv = (1-mask) * lgw_inv_\r\n        else:\r\n            lgw_mask_inv = torch.zeros_like(mask)\r\n    return lgw_mask, lgw_mask_inv\r\n\r\ndef apply_temporal_smoothing(tensor, 
temporal_smoothing):\r\n    if temporal_smoothing <= 0 or tensor.dim() != 5:\r\n        return tensor\r\n\r\n    kernel_size = 5\r\n    padding = kernel_size // 2\r\n    temporal_kernel = torch.tensor(\r\n        [0.1, 0.2, 0.4, 0.2, 0.1],\r\n        device=tensor.device, dtype=tensor.dtype\r\n    ) * temporal_smoothing\r\n    temporal_kernel[kernel_size//2] += (1 - temporal_smoothing)\r\n    temporal_kernel = temporal_kernel / temporal_kernel.sum()\r\n\r\n    # reshape to (b*c*h*w, f) for conv1d over the frame axis\r\n    b, c, f, h, w = tensor.shape\r\n    data_flat = tensor.permute(0, 1, 3, 4, 2).reshape(-1, f)\r\n\r\n    # apply temporal smoothing\r\n    data_smooth = F.conv1d(\r\n        data_flat.unsqueeze(1),\r\n        temporal_kernel.view(1, 1, -1),\r\n        padding=padding\r\n    ).squeeze(1)\r\n\r\n    return data_smooth.view(b, c, h, w, f).permute(0, 1, 4, 2, 3)\r\n\r\ndef get_guide_epsilon_substep(x_0, x_, y0, y0_inv, s_, row, rk_type, b=None, c=None):\r\n    s_in = x_0.new_ones([x_0.shape[0]])\r\n    \r\n    if b is not None and c is not None:  \r\n        index = (b, c)\r\n    elif b is not None: \r\n        index = (b,)\r\n    else: \r\n        index = ()\r\n\r\n    if RK_Method.is_exponential(rk_type):\r\n        eps_row     = y0    [index] - x_0[index]\r\n        eps_row_inv = y0_inv[index] - x_0[index]\r\n    else:\r\n        eps_row     = (x_[row+1][index] - y0    [index]) / (s_[row] * s_in)\r\n        eps_row_inv = (x_[row+1][index] - y0_inv[index]) / (s_[row] * s_in)\r\n    \r\n    return eps_row, eps_row_inv\r\n\r\ndef get_guide_epsilon(x_0, x_, y0, sigma, rk_type, b=None, c=None):\r\n    s_in = x_0.new_ones([x_0.shape[0]])\r\n    \r\n    if b is not None and c is not None:  \r\n        index = (b, c)\r\n    elif b is not None: \r\n        index = (b,)\r\n    else: \r\n        index = ()\r\n\r\n    if RK_Method.is_exponential(rk_type):\r\n        eps     = y0    [index] - x_0[index]\r\n    else:\r\n        eps     = (x_[index] - y0    [index]) / (sigma * s_in)\r\n    \r\n    return eps\r\n\r\n\r\n\r\n\r\n@torch.no_grad\r\ndef noise_cossim_guide_tiled(x_list, guide, cossim_mode=\"forward\", tile_size=2, step=0):\r\n\r\n    guide_tiled = rearrange(guide, \"b c (h t1) (w t2) -> b (t1 t2) c h w\", t1=tile_size, t2=tile_size)\r\n\r\n    x_tiled_list = [\r\n        rearrange(x, \"b c (h t1) (w t2) -> b (t1 t2) c h w\", t1=tile_size, t2=tile_size)\r\n        for x in x_list\r\n    ]\r\n    x_tiled_stack = torch.stack([x_tiled[0] for x_tiled in x_tiled_list])  # [n_x, n_tiles, c, h, w]\r\n\r\n    guide_flat = guide_tiled[0].view(guide_tiled.shape[1], -1).unsqueeze(0)  # [1, n_tiles, c*h*w]\r\n    x_flat = x_tiled_stack.view(x_tiled_stack.size(0), x_tiled_stack.size(1), -1)  # [n_x, n_tiles, c*h*w]\r\n\r\n    cossim_tmp_all = F.cosine_similarity(x_flat, guide_flat, dim=-1)  # [n_x, n_tiles]\r\n\r\n    if cossim_mode == \"forward\":\r\n        indices = cossim_tmp_all.argmax(dim=0) \r\n    elif cossim_mode == \"reverse\":\r\n        indices = cossim_tmp_all.argmin(dim=0) \r\n    elif cossim_mode == \"orthogonal\":\r\n        indices = torch.abs(cossim_tmp_all).argmin(dim=0) \r\n    elif cossim_mode == \"forward_reverse\":\r\n        if step % 2 == 0:\r\n            indices = cossim_tmp_all.argmax(dim=0)\r\n        else:\r\n            indices = cossim_tmp_all.argmin(dim=0)\r\n    elif cossim_mode == \"reverse_forward\":\r\n        if step % 2 == 1:\r\n            indices = cossim_tmp_all.argmax(dim=0)\r\n        else:\r\n            indices = cossim_tmp_all.argmin(dim=0)\r\n    elif cossim_mode == 
\"orthogonal_reverse\":\r\n        if step % 2 == 0:\r\n            indices = torch.abs(cossim_tmp_all).argmin(dim=0)\r\n        else:\r\n            indices = cossim_tmp_all.argmin(dim=0)\r\n    elif cossim_mode == \"reverse_orthogonal\":\r\n        if step % 2 == 1:\r\n            indices = torch.abs(cossim_tmp_all).argmin(dim=0)\r\n        else:\r\n            indices = cossim_tmp_all.argmin(dim=0)\r\n    else:\r\n        target_value = float(cossim_mode)\r\n        indices = torch.abs(cossim_tmp_all - target_value).argmin(dim=0)  \r\n\r\n    x_tiled_out = x_tiled_stack[indices, torch.arange(indices.size(0))]  # [n_tiles, c, h, w]\r\n\r\n    x_tiled_out = x_tiled_out.unsqueeze(0) \r\n    x_detiled = rearrange(x_tiled_out, \"b (t1 t2) c h w -> b c (h t1) (w t2)\", t1=tile_size, t2=tile_size)\r\n\r\n    return x_detiled\r\n\r\n\r\n@torch.no_grad\r\ndef noise_cossim_eps_tiled(x_list, eps, noise_list, cossim_mode=\"forward\", tile_size=2, step=0):\r\n\r\n    eps_tiled = rearrange(eps, \"b c (h t1) (w t2) -> b (t1 t2) c h w\", t1=tile_size, t2=tile_size)\r\n    x_tiled_list = [\r\n        rearrange(x, \"b c (h t1) (w t2) -> b (t1 t2) c h w\", t1=tile_size, t2=tile_size)\r\n        for x in x_list\r\n    ]\r\n    noise_tiled_list = [\r\n        rearrange(noise, \"b c (h t1) (w t2) -> b (t1 t2) c h w\", t1=tile_size, t2=tile_size)\r\n        for noise in noise_list\r\n    ]\r\n\r\n    noise_tiled_stack = torch.stack([noise_tiled[0] for noise_tiled in noise_tiled_list])  # [n_x, n_tiles, c, h, w]\r\n    eps_expanded = eps_tiled[0].view(eps_tiled.shape[1], -1).unsqueeze(0)  # [1, n_tiles, c*h*w]\r\n    noise_flat = noise_tiled_stack.view(noise_tiled_stack.size(0), noise_tiled_stack.size(1), -1)  # [n_x, n_tiles, c*h*w]\r\n    cossim_tmp_all = F.cosine_similarity(noise_flat, eps_expanded, dim=-1)  # [n_x, n_tiles]\r\n\r\n    if cossim_mode == \"forward\":\r\n        indices = cossim_tmp_all.argmax(dim=0)  \r\n    elif cossim_mode == \"reverse\":\r\n        indices = cossim_tmp_all.argmin(dim=0) \r\n    elif cossim_mode == \"orthogonal\":\r\n        indices = torch.abs(cossim_tmp_all).argmin(dim=0) \r\n    elif cossim_mode == \"orthogonal_pos\":\r\n        positive_mask = cossim_tmp_all > 0\r\n        positive_tmp = torch.where(positive_mask, cossim_tmp_all, torch.full_like(cossim_tmp_all, float('inf')))\r\n        indices = positive_tmp.argmin(dim=0)\r\n    elif cossim_mode == \"orthogonal_neg\":\r\n        negative_mask = cossim_tmp_all < 0\r\n        negative_tmp = torch.where(negative_mask, cossim_tmp_all, torch.full_like(cossim_tmp_all, float('-inf')))\r\n        indices = negative_tmp.argmax(dim=0)\r\n    elif cossim_mode == \"orthogonal_posneg\":\r\n        if step % 2 == 0:\r\n            positive_mask = cossim_tmp_all > 0\r\n            positive_tmp = torch.where(positive_mask, cossim_tmp_all, torch.full_like(cossim_tmp_all, float('inf')))\r\n            indices = positive_tmp.argmin(dim=0)\r\n        else:\r\n            negative_mask = cossim_tmp_all < 0\r\n            negative_tmp = torch.where(negative_mask, cossim_tmp_all, torch.full_like(cossim_tmp_all, float('-inf')))\r\n            indices = negative_tmp.argmax(dim=0)\r\n    elif cossim_mode == \"orthogonal_negpos\":\r\n        if step % 2 == 1:\r\n            positive_mask = cossim_tmp_all > 0\r\n            positive_tmp = torch.where(positive_mask, cossim_tmp_all, torch.full_like(cossim_tmp_all, float('inf')))\r\n            indices = positive_tmp.argmin(dim=0)\r\n        else:\r\n            negative_mask = cossim_tmp_all < 
0\r\n            negative_tmp = torch.where(negative_mask, cossim_tmp_all, torch.full_like(cossim_tmp_all, float('-inf')))\r\n            indices = negative_tmp.argmax(dim=0)\r\n    elif cossim_mode == \"forward_reverse\":\r\n        if step % 2 == 0:\r\n            indices = cossim_tmp_all.argmax(dim=0)\r\n        else:\r\n            indices = cossim_tmp_all.argmin(dim=0)\r\n    elif cossim_mode == \"reverse_forward\":\r\n        if step % 2 == 1:\r\n            indices = cossim_tmp_all.argmax(dim=0)\r\n        else:\r\n            indices = cossim_tmp_all.argmin(dim=0)\r\n    elif cossim_mode == \"orthogonal_reverse\":\r\n        if step % 2 == 0:\r\n            indices = torch.abs(cossim_tmp_all).argmin(dim=0)\r\n        else:\r\n            indices = cossim_tmp_all.argmin(dim=0)\r\n    elif cossim_mode == \"reverse_orthogonal\":\r\n        if step % 2 == 1:\r\n            indices = torch.abs(cossim_tmp_all).argmin(dim=0)\r\n        else:\r\n            indices = cossim_tmp_all.argmin(dim=0)\r\n    else:\r\n        target_value = float(cossim_mode)\r\n        indices = torch.abs(cossim_tmp_all - target_value).argmin(dim=0)\r\n    #else:\r\n    #    raise ValueError(f\"Unknown cossim_mode: {cossim_mode}\")\r\n\r\n    x_tiled_stack = torch.stack([x_tiled[0] for x_tiled in x_tiled_list])  # [n_x, n_tiles, c, h, w]\r\n    x_tiled_out = x_tiled_stack[indices, torch.arange(indices.size(0))]  # [n_tiles, c, h, w]\r\n\r\n    x_tiled_out = x_tiled_out.unsqueeze(0)  # restore batch dim\r\n    x_detiled = rearrange(x_tiled_out, \"b (t1 t2) c h w -> b c (h t1) (w t2)\", t1=tile_size, t2=tile_size)\r\n    return x_detiled\r\n\r\n\r\n\r\n@torch.no_grad\r\ndef noise_cossim_guide_eps_tiled(x_0, x_list, y0, noise_list, cossim_mode=\"forward\", tile_size=2, step=0, sigma=None, rk_type=None):\r\n\r\n    x_tiled_stack = torch.stack([\r\n        rearrange(x, \"b c (h t1) (w t2) -> b (t1 t2) c h w\", t1=tile_size, t2=tile_size)[0]\r\n        for x in x_list\r\n    ])  # [n_x, n_tiles, c, h, w]\r\n    eps_guide_stack = torch.stack([\r\n        rearrange(x - y0, \"b c (h t1) (w t2) -> b (t1 t2) c h w\", t1=tile_size, t2=tile_size)[0]\r\n        for x in x_list\r\n    ])  # [n_x, n_tiles, c, h, w]\r\n    del x_list\r\n\r\n    noise_tiled_stack = torch.stack([\r\n        rearrange(noise, \"b c (h t1) (w t2) -> b (t1 t2) c h w\", t1=tile_size, t2=tile_size)[0]\r\n        for noise in noise_list\r\n    ])  # [n_x, n_tiles, c, h, w]\r\n    del noise_list\r\n\r\n    noise_flat = noise_tiled_stack.view(noise_tiled_stack.size(0), noise_tiled_stack.size(1), -1)  # [n_x, n_tiles, c*h*w]\r\n    eps_guide_flat = eps_guide_stack.view(eps_guide_stack.size(0), eps_guide_stack.size(1), -1)  # [n_x, n_tiles, c*h*w]\r\n\r\n    cossim_tmp_all = F.cosine_similarity(noise_flat, eps_guide_flat, dim=-1)  # [n_x, n_tiles]\r\n    del noise_tiled_stack, noise_flat, eps_guide_stack, eps_guide_flat\r\n\r\n    if cossim_mode == \"forward\":\r\n        indices = cossim_tmp_all.argmax(dim=0) \r\n    elif cossim_mode == \"reverse\":\r\n        indices = cossim_tmp_all.argmin(dim=0) \r\n    elif cossim_mode == \"orthogonal\":\r\n        indices = torch.abs(cossim_tmp_all).argmin(dim=0) \r\n    elif cossim_mode == \"orthogonal_pos\":\r\n        positive_mask = cossim_tmp_all > 0\r\n        positive_tmp = torch.where(positive_mask, cossim_tmp_all, torch.full_like(cossim_tmp_all, float('inf')))\r\n        indices = positive_tmp.argmin(dim=0)\r\n    elif cossim_mode == \"orthogonal_neg\":\r\n        negative_mask = cossim_tmp_all < 0\r\n        
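# -inf out the non-negative scores so argmax returns the negative score closest to zero\r\n        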
negative_tmp = torch.where(negative_mask, cossim_tmp_all, torch.full_like(cossim_tmp_all, float('-inf')))\r\n        indices = negative_tmp.argmax(dim=0)\r\n    elif cossim_mode == \"orthogonal_posneg\":\r\n        if step % 2 == 0:\r\n            positive_mask = cossim_tmp_all > 0\r\n            positive_tmp = torch.where(positive_mask, cossim_tmp_all, torch.full_like(cossim_tmp_all, float('inf')))\r\n            indices = positive_tmp.argmin(dim=0)\r\n        else:\r\n            negative_mask = cossim_tmp_all < 0\r\n            negative_tmp = torch.where(negative_mask, cossim_tmp_all, torch.full_like(cossim_tmp_all, float('-inf')))\r\n            indices = negative_tmp.argmax(dim=0)\r\n    elif cossim_mode == \"orthogonal_negpos\":\r\n        if step % 2 == 1:\r\n            positive_mask = cossim_tmp_all > 0\r\n            positive_tmp = torch.where(positive_mask, cossim_tmp_all, torch.full_like(cossim_tmp_all, float('inf')))\r\n            indices = positive_tmp.argmin(dim=0)\r\n        else:\r\n            negative_mask = cossim_tmp_all < 0\r\n            negative_tmp = torch.where(negative_mask, cossim_tmp_all, torch.full_like(cossim_tmp_all, float('-inf')))\r\n            indices = negative_tmp.argmax(dim=0)\r\n    elif cossim_mode == \"forward_reverse\":\r\n        if step % 2 == 0:\r\n            indices = cossim_tmp_all.argmax(dim=0)\r\n        else:\r\n            indices = cossim_tmp_all.argmin(dim=0)\r\n    elif cossim_mode == \"reverse_forward\":\r\n        if step % 2 == 1:\r\n            indices = cossim_tmp_all.argmax(dim=0)\r\n        else:\r\n            indices = cossim_tmp_all.argmin(dim=0)\r\n    elif cossim_mode == \"orthogonal_reverse\":\r\n        if step % 2 == 0:\r\n            indices = torch.abs(cossim_tmp_all).argmin(dim=0)\r\n        else:\r\n            indices = cossim_tmp_all.argmin(dim=0)\r\n    elif cossim_mode == \"reverse_orthogonal\":\r\n        if step % 2 == 1:\r\n            indices = torch.abs(cossim_tmp_all).argmin(dim=0)\r\n        else:\r\n            indices = cossim_tmp_all.argmin(dim=0)\r\n    else:\r\n        target_value = float(cossim_mode)\r\n        indices = torch.abs(cossim_tmp_all - target_value).argmin(dim=0)  \r\n\r\n    x_tiled_out = x_tiled_stack[indices, torch.arange(indices.size(0))]  # [n_tiles, c, h, w]\r\n    del x_tiled_stack\r\n\r\n    x_tiled_out = x_tiled_out.unsqueeze(0)  \r\n    x_detiled = rearrange(x_tiled_out, \"b (t1 t2) c h w -> b c (h t1) (w t2)\", t1=tile_size, t2=tile_size)\r\n\r\n    return x_detiled\r\n\r\n\r\n\r\n\r\n\r\n\r\ndef get_collinear(x, y):\r\n\r\n    y_flat = y.view(y.size(0), -1).clone()\r\n    x_flat = x.view(x.size(0), -1).clone()\r\n\r\n    y_flat /= y_flat.norm(dim=-1, keepdim=True)\r\n    x_proj_y = torch.sum(x_flat * y_flat, dim=-1, keepdim=True) * y_flat\r\n\r\n    return x_proj_y.view_as(x)\r\n\r\n\r\ndef get_orthogonal(x, y):\r\n\r\n    y_flat = y.view(y.size(0), -1).clone()\r\n    x_flat = x.view(x.size(0), -1).clone()\r\n\r\n    y_flat /= y_flat.norm(dim=-1, keepdim=True)\r\n    x_proj_y = torch.sum(x_flat * y_flat, dim=-1, keepdim=True) * y_flat\r\n    \r\n    x_ortho_y = x_flat - x_proj_y \r\n\r\n    return x_ortho_y.view_as(x)\r\n\r\n\r\n\r\ndef get_orthogonal_noise_from_channelwise(*refs, max_iter=500, max_score=1e-15):\r\n    noise, *refs = refs\r\n    noise_tmp = noise.clone()\r\n    #b,c,h,w = noise.shape\r\n    if (noise.dim() == 4):\r\n        b,ch,h,w = noise.shape\r\n    elif (noise.dim() == 5):\r\n        b,ch,t,h,w = noise.shape\r\n    \r\n    for i in range(max_iter):\r\n 
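       # repeat as needed: channelwise Gram-Schmidt against several refs can reintroduce overlap\r\n 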
       noise_tmp = gram_schmidt_channels_optimized(noise_tmp, *refs)\r\n        \r\n        cossim_scores = []\r\n        for ref in refs:\r\n            #for c in range(noise.shape[-3]):\r\n            for c in range(ch):\r\n                cossim_scores.append(get_cosine_similarity(noise_tmp[0][c], ref[0][c]).abs())\r\n            cossim_scores.append(get_cosine_similarity(noise_tmp[0], ref[0]).abs())\r\n            \r\n        if max(cossim_scores) < max_score:\r\n            break\r\n    \r\n    return noise_tmp\r\n\r\n\r\n\r\ndef gram_schmidt_channels_optimized(A, *refs):\r\n    if (A.dim() == 4):\r\n        b,c,h,w = A.shape\r\n    elif (A.dim() == 5):\r\n        b,c,t,h,w = A.shape\r\n\r\n    A_flat = A.view(b, c, -1)  \r\n    \r\n    for ref in refs:\r\n        ref_flat = ref.view(b, c, -1).clone()  \r\n\r\n        ref_flat /= ref_flat.norm(dim=-1, keepdim=True) \r\n\r\n        proj_coeff = torch.sum(A_flat * ref_flat, dim=-1, keepdim=True)  \r\n        projection = proj_coeff * ref_flat \r\n\r\n        A_flat -= projection\r\n\r\n    return A_flat.view_as(A)\r\n\r\n\r\n\r\nclass NoiseStepHandlerOSDE:\r\n    def __init__(self, x, eps=None, data=None, x_init=None, guide=None, guide_bkg=None):\r\n        self.noise = None\r\n        self.x = x\r\n        self.eps = eps\r\n        self.data = data\r\n        self.x_init = x_init\r\n        self.guide = guide\r\n        self.guide_bkg = guide_bkg\r\n        \r\n        self.eps_list = None\r\n\r\n        self.noise_cossim_map = {\r\n            \"eps_orthogonal\":              [self.noise, self.eps],\r\n            \"eps_data_orthogonal\":         [self.noise, self.eps, self.data],\r\n\r\n            \"data_orthogonal\":             [self.noise, self.data],\r\n            \"xinit_orthogonal\":            [self.noise, self.x_init],\r\n            \r\n            \"x_orthogonal\":                [self.noise, self.x],\r\n            \"x_data_orthogonal\":           [self.noise, self.x, self.data],\r\n            \"x_eps_orthogonal\":            [self.noise, self.x, self.eps],\r\n\r\n            \"x_eps_data_orthogonal\":       [self.noise, self.x, self.eps, self.data],\r\n            \"x_eps_data_xinit_orthogonal\": [self.noise, self.x, self.eps, self.data, self.x_init],\r\n            \r\n            \"x_eps_guide_orthogonal\":      [self.noise, self.x, self.eps, self.guide],\r\n            \"x_eps_guide_bkg_orthogonal\":  [self.noise, self.x, self.eps, self.guide_bkg],\r\n            \r\n            \"noise_orthogonal\":            [self.noise, self.x_init],\r\n            \r\n            \"guide_orthogonal\":            [self.noise, self.guide],\r\n            \"guide_bkg_orthogonal\":        [self.noise, self.guide_bkg],\r\n        }\r\n\r\n    def check_cossim_source(self, source):\r\n        return source in self.noise_cossim_map\r\n\r\n    def get_ortho_noise(self, noise, prev_noises=None, max_iter=100, max_score=1e-7, NOISE_COSSIM_SOURCE=\"eps_orthogonal\"):\r\n        \r\n        if NOISE_COSSIM_SOURCE not in self.noise_cossim_map:\r\n            raise ValueError(f\"Invalid NOISE_COSSIM_SOURCE: {NOISE_COSSIM_SOURCE}\")\r\n        \r\n        self.noise_cossim_map[NOISE_COSSIM_SOURCE][0] = noise\r\n\r\n        params = self.noise_cossim_map[NOISE_COSSIM_SOURCE]\r\n        \r\n        noise = get_orthogonal_noise_from_channelwise(*params, max_iter=max_iter, max_score=max_score)\r\n        \r\n        return noise\r\n\r\n\r\n\r\ndef handle_tiled_etc_noise_steps(x_0, x, x_prenoise, x_init, eps, denoised, y0, y0_inv, step, \r\n        
                         rk_type, rk, sigma_up, sigma, sigma_next, alpha_ratio, s_noise, noise_mode, SDE_NOISE_EXTERNAL, sde_noise_t,\r\n                                 NOISE_COSSIM_SOURCE, NOISE_COSSIM_MODE, noise_cossim_tile_size, noise_cossim_iterations,\r\n                                 extra_options):\r\n    \r\n    x_tmp, cossim_tmp, noise_tmp_list = [], [], []\r\n    if step > int(get_extra_options_kv(\"noise_cossim_end_step\", \"10000\", extra_options)):\r\n        NOISE_COSSIM_SOURCE = get_extra_options_kv(\"noise_cossim_takeover_source\", \"eps\", extra_options)\r\n        NOISE_COSSIM_MODE   = get_extra_options_kv(\"noise_cossim_takeover_mode\", \"forward\", extra_options)\r\n        noise_cossim_tile_size   = int(get_extra_options_kv(\"noise_cossim_takeover_tile\", str(noise_cossim_tile_size), extra_options))\r\n        noise_cossim_iterations   = int(get_extra_options_kv(\"noise_cossim_takeover_iterations\", str(noise_cossim_iterations), extra_options))\r\n        \r\n    for i in range(noise_cossim_iterations):\r\n        x_tmp.append(rk.add_noise_post(x, sigma_up, sigma, sigma_next, alpha_ratio, s_noise, noise_mode, SDE_NOISE_EXTERNAL, sde_noise_t)    )#y0, lgw, sigma_down are currently unused\r\n        noise_tmp = x_tmp[i] - x\r\n        if extra_options_flag(\"noise_noise_zscore_norm\", extra_options):\r\n            noise_tmp = (noise_tmp - noise_tmp.mean()) / noise_tmp.std()\r\n        if extra_options_flag(\"noise_eps_zscore_norm\", extra_options):\r\n            eps = (eps - eps.mean()) / eps.std()\r\n        if   NOISE_COSSIM_SOURCE in (\"eps_tiled\", \"guide_epsilon_tiled\", \"guide_bkg_epsilon_tiled\", \"iig_tiled\"):\r\n            noise_tmp_list.append(noise_tmp)\r\n        if   NOISE_COSSIM_SOURCE == \"eps\":\r\n            cossim_tmp.append(get_cosine_similarity(eps, noise_tmp))\r\n        if   NOISE_COSSIM_SOURCE == \"eps_ch\":\r\n            cossim_total = torch.zeros_like(eps[0][0][0][0])\r\n            for ch in range(eps.shape[1]):\r\n                cossim_total += get_cosine_similarity(eps[0][ch], noise_tmp[0][ch])\r\n            cossim_tmp.append(cossim_total)\r\n        elif NOISE_COSSIM_SOURCE == \"data\":\r\n            cossim_tmp.append(get_cosine_similarity(denoised, noise_tmp))\r\n        elif NOISE_COSSIM_SOURCE == \"latent\":\r\n            cossim_tmp.append(get_cosine_similarity(x_prenoise, noise_tmp))\r\n        elif NOISE_COSSIM_SOURCE == \"x_prenoise\":\r\n            cossim_tmp.append(get_cosine_similarity(x_prenoise, x_tmp[i]))\r\n        elif NOISE_COSSIM_SOURCE == \"x\":\r\n            cossim_tmp.append(get_cosine_similarity(x, x_tmp[i]))\r\n        elif NOISE_COSSIM_SOURCE == \"x_data\":\r\n            cossim_tmp.append(get_cosine_similarity(denoised, x_tmp[i]))\r\n        elif NOISE_COSSIM_SOURCE == \"x_init_vs_noise\":\r\n            cossim_tmp.append(get_cosine_similarity(x_init, noise_tmp))\r\n        elif NOISE_COSSIM_SOURCE == \"mom\":\r\n            cossim_tmp.append(get_cosine_similarity(denoised, x + sigma_next*noise_tmp))\r\n        elif NOISE_COSSIM_SOURCE == \"guide\":\r\n            cossim_tmp.append(get_cosine_similarity(y0, x_tmp[i]))\r\n        elif NOISE_COSSIM_SOURCE == \"guide_bkg\":\r\n            cossim_tmp.append(get_cosine_similarity(y0_inv, x_tmp[i]))\r\n            \r\n    if step < int(get_extra_options_kv(\"noise_cossim_start_step\", \"0\", extra_options)):\r\n        x = x_tmp[0]\r\n\r\n    elif (NOISE_COSSIM_SOURCE == \"eps_tiled\"):\r\n        x = noise_cossim_eps_tiled(x_tmp, eps, noise_tmp_list, 
cossim_mode=NOISE_COSSIM_MODE, tile_size=noise_cossim_tile_size, step=step)\r\n    elif (NOISE_COSSIM_SOURCE == \"guide_epsilon_tiled\"):\r\n        x = noise_cossim_guide_eps_tiled(x_0, x_tmp, y0, noise_tmp_list, cossim_mode=NOISE_COSSIM_MODE, tile_size=noise_cossim_tile_size, step=step, sigma=sigma, rk_type=rk_type)\r\n    elif (NOISE_COSSIM_SOURCE == \"guide_bkg_epsilon_tiled\"):\r\n        x = noise_cossim_guide_eps_tiled(x_0, x_tmp, y0_inv, noise_tmp_list, cossim_mode=NOISE_COSSIM_MODE, tile_size=noise_cossim_tile_size, step=step, sigma=sigma, rk_type=rk_type)\r\n    elif (NOISE_COSSIM_SOURCE == \"guide_tiled\"):\r\n        x = noise_cossim_guide_tiled(x_tmp, y0, cossim_mode=NOISE_COSSIM_MODE, tile_size=noise_cossim_tile_size, step=step)\r\n    elif (NOISE_COSSIM_SOURCE == \"guide_bkg_tiled\"):\r\n        x = noise_cossim_guide_tiled(x_tmp, y0_inv, cossim_mode=NOISE_COSSIM_MODE, tile_size=noise_cossim_tile_size, step=step)\r\n    else:\r\n        for i in range(len(x_tmp)):\r\n            if   (NOISE_COSSIM_MODE == \"forward\") and (cossim_tmp[i] == max(cossim_tmp)):\r\n                x = x_tmp[i]\r\n                break\r\n            elif (NOISE_COSSIM_MODE == \"reverse\") and (cossim_tmp[i] == min(cossim_tmp)):\r\n                x = x_tmp[i]\r\n                break\r\n            elif (NOISE_COSSIM_MODE == \"orthogonal\") and (abs(cossim_tmp[i]) == min(abs(val) for val in cossim_tmp)):\r\n                x = x_tmp[i]\r\n                break\r\n            elif (NOISE_COSSIM_MODE != \"forward\") and (NOISE_COSSIM_MODE != \"reverse\") and (NOISE_COSSIM_MODE != \"orthogonal\"):\r\n                x = x_tmp[0]\r\n                break\r\n    return x\r\n\r\n\r\n\r\n"
  },
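  {
    "path": "examples/projection_decomposition_demo.py",
    "content": "# HYPOTHETICAL EXAMPLE -- not part of the original codebase.\r\n# Minimal self-contained sketch of the projection decomposition used by the\r\n# eps_proj_* and blend_projection guide modes: a tensor x splits into a part\r\n# collinear with y plus a part orthogonal to y. The two helpers below mirror\r\n# get_collinear()/get_orthogonal() from the guide function module so the demo\r\n# runs standalone.\r\nimport torch\r\n\r\n\r\ndef get_collinear(x, y):\r\n    y_flat = y.view(y.size(0), -1).clone()\r\n    x_flat = x.view(x.size(0), -1)\r\n    y_flat /= y_flat.norm(dim=-1, keepdim=True)\r\n    return (torch.sum(x_flat * y_flat, dim=-1, keepdim=True) * y_flat).view_as(x)\r\n\r\n\r\ndef get_orthogonal(x, y):\r\n    return x - get_collinear(x, y)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n    torch.manual_seed(0)\r\n    x = torch.randn(1, 4, 8, 8)\r\n    y = torch.randn(1, 4, 8, 8)\r\n    x_para = get_collinear(x, y)\r\n    x_perp = get_orthogonal(x, y)\r\n    # the two components reconstruct x, and x_perp carries no component along y\r\n    print(torch.allclose(x_para + x_perp, x, atol=1e-6))            # True\r\n    print(round(float((x_perp.flatten() * y.flatten()).sum()), 4))  # ~0.0\r\n"
  },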
  {
    "path": "legacy/rk_method.py",
    "content": "import torch\r\nimport re\r\n\r\nimport torch.nn.functional as F\r\nimport torchvision.transforms as T\r\n\r\nfrom .noise_classes import *\r\n\r\nimport comfy.model_patcher\r\nimport comfy.supported_models\r\n\r\nimport itertools \r\n\r\nfrom .rk_coefficients import *\r\nfrom .phi_functions import *\r\n\r\n\r\n\r\nclass RK_Method:\r\n    def __init__(self, model, name=\"\", method=\"explicit\", dynamic_method=False, device='cuda', dtype=torch.float64):\r\n        self.model = model\r\n        self.model_sampling = model.inner_model.inner_model.model_sampling\r\n        self.device = device\r\n        self.dtype = dtype\r\n        \r\n        self.method = method\r\n        self.dynamic_method = dynamic_method\r\n        \r\n        self.stages = 0\r\n        self.name = name\r\n        self.ab = None\r\n        self.a = None\r\n        self.b = None\r\n        self.c = None\r\n        self.denoised = None\r\n        self.uncond = None\r\n        \r\n        self.rows = 0\r\n        self.cols = 0\r\n        \r\n        self.y0 = None\r\n        self.y0_inv = None\r\n        \r\n        self.sigma_min = model.inner_model.inner_model.model_sampling.sigma_min.to(dtype)\r\n        self.sigma_max = model.inner_model.inner_model.model_sampling.sigma_max.to(dtype)\r\n        \r\n        self.noise_sampler = None\r\n        \r\n        self.h_prev = None\r\n        self.h_prev2 = None\r\n        self.multistep_stages = 0\r\n        \r\n        self.cfg_cw = 1.0\r\n\r\n        \r\n    @staticmethod\r\n    def is_exponential(rk_type):\r\n        #if rk_type.startswith((\"res\", \"dpmpp\", \"ddim\", \"irk_exp_diag_2s\"   )): \r\n        if rk_type.startswith((\"res\", \"dpmpp\", \"ddim\", \"lawson\", \"genlawson\")): \r\n            return True\r\n        else:\r\n            return False\r\n\r\n    @staticmethod\r\n    def create(model, rk_type, device='cuda', dtype=torch.float64, name=\"\", method=\"explicit\"):\r\n        if RK_Method.is_exponential(rk_type):\r\n            return RK_Method_Exponential(model, name, method, device, dtype)\r\n        else:\r\n            return RK_Method_Linear(model, name, method, device, dtype)\r\n                \r\n    def __call__(self):\r\n        raise NotImplementedError(\"This method got clownsharked!\")\r\n    \r\n    def model_epsilon(self, x, sigma, **extra_args):\r\n        s_in = x.new_ones([x.shape[0]])\r\n        denoised = self.model(x, sigma * s_in, **extra_args)\r\n        denoised = self.calc_cfg_channelwise(denoised)\r\n\r\n        #return x0 ###################################THIS WORKS ONLY WITH THE MODEL SAMPLING PATCH\r\n        eps = (x - denoised) / (sigma * s_in).view(x.shape[0], 1, 1, 1)\r\n        return eps, denoised\r\n    \r\n    def model_denoised(self, x, sigma, **extra_args):\r\n        s_in = x.new_ones([x.shape[0]])\r\n        denoised = self.model(x, sigma * s_in, **extra_args)\r\n        denoised = self.calc_cfg_channelwise(denoised)\r\n        return denoised\r\n    \r\n\r\n\r\n    def init_noise_sampler(self, x, noise_seed, noise_sampler_type, alpha, k=1., scale=0.1):\r\n        seed = torch.initial_seed()+1 if noise_seed == -1 else noise_seed\r\n        if noise_sampler_type == \"fractal\":\r\n            self.noise_sampler = NOISE_GENERATOR_CLASSES.get(noise_sampler_type)(x=x, seed=seed, sigma_min=self.sigma_min, sigma_max=self.sigma_max)\r\n            self.noise_sampler.alpha = alpha\r\n            self.noise_sampler.k = k\r\n            self.noise_sampler.scale = scale\r\n        else:\r\n            
self.noise_sampler = NOISE_GENERATOR_CLASSES_SIMPLE.get(noise_sampler_type)(x=x, seed=seed, sigma_min=self.sigma_min, sigma_max=self.sigma_max)\r\n            \r\n    def add_noise_pre(self, x, sigma_up, sigma, sigma_next, alpha_ratio, s_noise, noise_mode, SDE_NOISE_EXTERNAL=False, sde_noise_t=None):\r\n        if isinstance(self.model_sampling, comfy.model_sampling.CONST) == False and noise_mode == \"hard\": \r\n            return self.add_noise(x, sigma_up, sigma, sigma_next, alpha_ratio, s_noise, SDE_NOISE_EXTERNAL, sde_noise_t)\r\n        else:\r\n            return x\r\n        \r\n    def add_noise_post(self, x, sigma_up, sigma, sigma_next, alpha_ratio, s_noise, noise_mode, SDE_NOISE_EXTERNAL=False, sde_noise_t=None):\r\n        if isinstance(self.model_sampling, comfy.model_sampling.CONST) == True   or   (isinstance(self.model_sampling, comfy.model_sampling.CONST) == False and noise_mode != \"hard\"):\r\n            return self.add_noise(x, sigma_up, sigma, sigma_next, alpha_ratio, s_noise, SDE_NOISE_EXTERNAL, sde_noise_t)\r\n        else:\r\n            return x\r\n    \r\n    def add_noise(self, x, sigma_up, sigma, sigma_next, alpha_ratio, s_noise, SDE_NOISE_EXTERNAL, sde_noise_t):\r\n\r\n        if sigma_next > 0.0:\r\n            noise = self.noise_sampler(sigma=sigma, sigma_next=sigma_next)\r\n            noise = torch.nan_to_num((noise - noise.mean()) / noise.std(), 0.0)\r\n\r\n            if SDE_NOISE_EXTERNAL:\r\n                noise = (1-s_noise) * noise + s_noise * sde_noise_t\r\n            \r\n            return alpha_ratio * x + noise * sigma_up * s_noise\r\n        \r\n        else:\r\n            return x\r\n\r\n\r\n    def set_coeff(self, rk_type, h, c1=0.0, c2=0.5, c3=1.0, stepcount=0, sigmas=None, sigma=None, sigma_down=None, extra_options=None):\r\n        if rk_type == \"default\": \r\n            return\r\n\r\n        sigma = sigmas[stepcount]\r\n        sigma_next = sigmas[stepcount+1]\r\n        \r\n        a, b, ci, multistep_stages, FSAL = get_rk_methods(rk_type, h, c1, c2, c3, self.h_prev, self.h_prev2, stepcount, sigmas, sigma, sigma_next, sigma_down, extra_options)\r\n        \r\n        self.multistep_stages = multistep_stages\r\n        \r\n        self.a = torch.tensor(a, dtype=h.dtype, device=h.device)\r\n        self.a = self.a.view(*self.a.shape, 1, 1, 1, 1, 1)\r\n        \r\n        \r\n        self.b = torch.tensor(b, dtype=h.dtype, device=h.device)\r\n        self.b = self.b.view(*self.b.shape, 1, 1, 1, 1, 1)\r\n        \r\n        self.c = torch.tensor(ci, dtype=h.dtype, device=h.device)\r\n        self.rows = self.a.shape[0]\r\n        self.cols = self.a.shape[1]\r\n\r\n\r\n    def a_k_sum(self, k, row):\r\n        if len(k.shape) == 4:\r\n            a_coeff = self.a[row].squeeze(-1)\r\n            ks = k * a_coeff.sum(dim=0)\r\n        elif len(k.shape) == 5:\r\n            a_coeff = self.a[row].squeeze(-1)\r\n            ks = (k[0:self.cols] * a_coeff).sum(dim=0)\r\n        elif len(k.shape) == 6:\r\n            a_coeff = self.a[row]\r\n            ks = (k[0:self.cols] * a_coeff).sum(dim=0)\r\n        else:\r\n            raise ValueError(f\"Unexpected k shape: {k.shape}\")\r\n        return ks\r\n\r\n    def b_k_sum(self, k, row):\r\n        if len(k.shape) == 4:\r\n            b_coeff = self.b[row].squeeze(-1)\r\n            ks = k * b_coeff.sum(dim=0)\r\n        elif len(k.shape) == 5:\r\n            b_coeff = self.b[row].squeeze(-1)\r\n            ks = (k[0:self.cols] * b_coeff).sum(dim=0)\r\n        elif len(k.shape) == 6:\r\n          
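  # 6-dim k (e.g. video latents): self.b's trailing singleton dims broadcast directly\r\n          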
  b_coeff = self.b[row]\r\n            ks = (k[0:self.cols] * b_coeff).sum(dim=0)\r\n        else:\r\n            raise ValueError(f\"Unexpected k shape: {k.shape}\")\r\n        return ks\r\n\r\n\r\n    def init_cfg_channelwise(self, x, cfg_cw=1.0, **extra_args):\r\n        self.uncond = [torch.full_like(x, 0.0)]\r\n        self.cfg_cw = cfg_cw\r\n        if cfg_cw != 1.0:\r\n            def post_cfg_function(args):\r\n                self.uncond[0] = args[\"uncond_denoised\"]\r\n                return args[\"denoised\"]\r\n            model_options = extra_args.get(\"model_options\", {}).copy()\r\n            extra_args[\"model_options\"] = comfy.model_patcher.set_model_options_post_cfg_function(model_options, post_cfg_function, disable_cfg1_optimization=True)\r\n        return extra_args\r\n            \r\n            \r\n    def calc_cfg_channelwise(self, denoised):\r\n        if self.cfg_cw != 1.0:\r\n            avg = 0\r\n            for b, c in itertools.product(range(denoised.shape[0]), range(denoised.shape[1])):\r\n                avg     += torch.norm(denoised[b][c] - self.uncond[0][b][c])\r\n            avg  /= denoised.shape[1]\r\n            \r\n            # blend channelwise: each (b, c) keeps its own ratio\r\n            denoised_new = torch.zeros_like(denoised)\r\n            for b, c in itertools.product(range(denoised.shape[0]), range(denoised.shape[1])):\r\n                ratio     = torch.nan_to_num(torch.norm(denoised[b][c] - self.uncond[0][b][c])   /   avg,     0)\r\n                denoised_new[b][c] = self.uncond[0][b][c] + ratio * self.cfg_cw * (denoised[b][c] - self.uncond[0][b][c])\r\n            return denoised_new\r\n        else:\r\n            return denoised\r\n        \r\n        \r\n\r\nclass RK_Method_Exponential(RK_Method):\r\n    def __init__(self, model, name=\"\", method=\"explicit\", device='cuda', dtype=torch.float64):\r\n        super().__init__(model, name, method, device, dtype) \r\n        self.exponential = True\r\n        self.eps_pred = True\r\n        \r\n    @staticmethod\r\n    def alpha_fn(neg_h):\r\n        return torch.exp(neg_h)\r\n\r\n    @staticmethod\r\n    def sigma_fn(t):\r\n        return t.neg().exp()\r\n\r\n    @staticmethod\r\n    def t_fn(sigma):\r\n        return sigma.log().neg()\r\n    \r\n    @staticmethod\r\n    def h_fn(sigma_down, sigma):\r\n        return -torch.log(sigma_down/sigma)\r\n\r\n    def __call__(self, x_0, x, sigma, h, **extra_args):\r\n\r\n        denoised = self.model_denoised(x, sigma, **extra_args)\r\n        epsilon = denoised - x_0\r\n        \r\n        \"\"\"if self.uncond == None:\r\n            self.uncond = [torch.zeros_like(x)]\r\n        denoised_u = self.uncond[0].clone()\r\n        if torch.all(denoised_u == 0):\r\n            epsilon_u = [torch.zeros_like(x_0)]\r\n        else:\r\n            epsilon_u = denoised_u[0] - x_0\"\"\"\r\n        if h is not None:\r\n            self.h_prev2 = self.h_prev\r\n            self.h_prev = h\r\n        #print(\"MODEL SIGMA: \", round(float(sigma),3))\r\n        return epsilon, denoised\r\n    \r\n    def data_to_vel(self, x, data, sigma):\r\n        return data - x\r\n    \r\n    def get_epsilon(self, x_0, x, y, sigma, sigma_cur, sigma_down=None, unsample_resample_scale=None, extra_options=None):\r\n        if sigma_down is not None and sigma_down > sigma:\r\n            sigma_cur = self.sigma_max - sigma_cur.clone()\r\n        sigma_cur = unsample_resample_scale if unsample_resample_scale is not None else sigma_cur\r\n\r\n        # power_unsample/power_resample multiply by sigma instead of dividing\r\n        if extra_options is not None and (re.search(r\"\\bpower_unsample\\b\", extra_options) or re.search(r\"\\bpower_resample\\b\", extra_options)):\r\n            if sigma_down is None:\r\n                return y - x_0\r\n            elif sigma_down > sigma:\r\n                return (x_0 - y) * sigma_cur\r\n            else:\r\n                return (y - x_0) * sigma_cur\r\n        else:\r\n            if sigma_down is None:\r\n                return (y - x_0) / sigma_cur\r\n            elif sigma_down > sigma:\r\n                return (x_0 - y) / sigma_cur\r\n            else:\r\n                return (y - x_0) / sigma_cur\r\n\r\n\r\n\r\nclass RK_Method_Linear(RK_Method):\r\n    def __init__(self, model, name=\"\", method=\"explicit\", device='cuda', dtype=torch.float64):\r\n        super().__init__(model, name, method, device, dtype) \r\n        self.exponential = False\r\n        self.eps_pred = True\r\n        \r\n    @staticmethod\r\n    def alpha_fn(neg_h):\r\n        return torch.ones_like(neg_h)\r\n\r\n    @staticmethod\r\n    def sigma_fn(t):\r\n        return t\r\n\r\n    @staticmethod\r\n    def t_fn(sigma):\r\n        return sigma\r\n    \r\n    @staticmethod\r\n    def h_fn(sigma_down, sigma):\r\n        return sigma_down - sigma\r\n    \r\n    def __call__(self, x_0, x, sigma, h, **extra_args):\r\n        #s_in = x.new_ones([x.shape[0]])\r\n        \r\n        epsilon, denoised = self.model_epsilon(x, sigma, **extra_args)\r\n        \r\n        \"\"\"if self.uncond == None:\r\n            self.uncond = [torch.zeros_like(x)]\r\n        denoised_u = self.uncond[0].clone()\r\n        if torch.all(denoised_u[0] == 0):\r\n            epsilon_u = [torch.zeros_like(x_0)]\r\n        else:\r\n            epsilon_u  = (x_0 - denoised_u[0]) / (sigma * s_in).view(x.shape[0], 1, 1, 1)\"\"\"\r\n        if h is not None:\r\n            self.h_prev2 = self.h_prev\r\n            self.h_prev = h\r\n        #print(\"MODEL SIGMA: \", round(float(sigma),3))\r\n\r\n        return epsilon, denoised\r\n\r\n    def data_to_vel(self, x, data, sigma):\r\n        return (data - x) / sigma\r\n    \r\n    def get_epsilon(self, x_0, x, y, sigma, sigma_cur, sigma_down=None, unsample_resample_scale=None, extra_options=None):\r\n        if sigma_down is not None and sigma_down > sigma:\r\n            sigma_cur = self.sigma_max - sigma_cur.clone()\r\n        sigma_cur = unsample_resample_scale if unsample_resample_scale is not None else sigma_cur\r\n\r\n        if sigma_down is None:\r\n            return (x - y) / sigma_cur\r\n        else:\r\n            if sigma_down > sigma:\r\n                return (y - x) / sigma_cur\r\n            else:\r\n                return (x - y) / sigma_cur\r\n\r\n\r\n\r\n"
  },
  {
    "path": "legacy/rk_sampler.py",
    "content": "import torch\r\nimport torch.nn.functional as F\r\n\r\nfrom tqdm.auto import trange\r\n\r\n\r\nfrom .noise_classes import *\r\nfrom .noise_sigmas_timesteps_scaling import get_res4lyf_step_with_model, get_res4lyf_half_step3\r\n\r\nfrom .rk_method import RK_Method\r\nfrom .rk_guide_func import *\r\n\r\nfrom .latents import normalize_latent, initialize_or_scale, latent_normalize_channels\r\nfrom .helper import get_extra_options_kv, extra_options_flag, get_cosine_similarity, is_RF_model\r\nfrom .sigmas import get_sigmas\r\n\r\nPRINT_DEBUG=False\r\n\r\n\r\ndef prepare_sigmas(model, sigmas):\r\n    if sigmas[0] == 0.0:      #remove padding used to prevent comfy from adding noise to the latent (for unsampling, etc.)\r\n        UNSAMPLE = True\r\n        sigmas = sigmas[1:-1]\r\n    else: \r\n        UNSAMPLE = False\r\n        \r\n    if hasattr(model, \"sigmas\"):\r\n        model.sigmas = sigmas\r\n        \r\n    return sigmas, UNSAMPLE\r\n\r\n\r\ndef prepare_step_to_sigma_zero(rk, irk, rk_type, irk_type, model, x, extra_options, alpha, k, noise_sampler_type, cfg_cw=1.0, **extra_args):\r\n    rk_type_final_step = f\"ralston_{rk_type[-2:]}\" if rk_type[-2:] in {\"2s\", \"3s\"} else \"ralston_3s\"\r\n    rk_type_final_step = f\"deis_2m\" if rk_type[-2:] in {\"2m\", \"3m\", \"4m\"} else rk_type_final_step\r\n    rk_type_final_step = f\"buehler\" if rk_type in {\"ddim\"} else rk_type_final_step\r\n    rk_type_final_step = get_extra_options_kv(\"rk_type_final_step\", rk_type_final_step, extra_options)\r\n    rk = RK_Method.create(model, rk_type_final_step, x.device)\r\n    rk.init_noise_sampler(x, torch.initial_seed() + 1, noise_sampler_type, alpha=alpha, k=k)\r\n    extra_args =  rk.init_cfg_channelwise(x, cfg_cw, **extra_args)\r\n\r\n    if any(element >= 1 for element in irk.c):\r\n        irk_type_final_step = f\"gauss-legendre_{rk_type[-2:]}\" if rk_type[-2:] in {\"2s\", \"3s\", \"4s\", \"5s\"} else \"gauss-legendre_2s\"\r\n        irk_type_final_step = f\"deis_2m\" if rk_type[-2:] in {\"2m\", \"3m\", \"4m\"} else irk_type_final_step\r\n        irk_type_final_step = get_extra_options_kv(\"irk_type_final_step\", irk_type_final_step, extra_options)\r\n        irk = RK_Method.create(model, irk_type_final_step, x.device)\r\n        irk.init_noise_sampler(x, torch.initial_seed() + 100, noise_sampler_type, alpha=alpha, k=k)\r\n        extra_args =  irk.init_cfg_channelwise(x, cfg_cw, **extra_args)\r\n    else:\r\n        irk_type_final_step = irk_type\r\n\r\n    eta, eta_var = 0, 0\r\n    return rk, irk, rk_type_final_step, irk_type_final_step, eta, eta_var, extra_args\r\n\r\n\r\n\r\n@torch.no_grad()\r\ndef sample_rk(model, x, sigmas, extra_args=None, callback=None, disable=None, noise_sampler_type=\"gaussian\", noise_mode=\"hard\", noise_seed=-1, rk_type=\"res_2m\", implicit_sampler_name=\"explicit_full\",\r\n              sigma_fn_formula=\"\", t_fn_formula=\"\",\r\n                  eta=0.0, eta_var=0.0, s_noise=1., d_noise=1., alpha=-1.0, k=1.0, scale=0.1, c1=0.0, c2=0.5, c3=1.0, implicit_steps=0, reverse_weight=0.0,\r\n                  latent_guide=None, latent_guide_inv=None, latent_guide_weight=0.0, latent_guide_weight_inv=0.0, latent_guide_weights=None, latent_guide_weights_inv=None, guide_mode=\"\", \r\n                  GARBAGE_COLLECT=False, mask=None, mask_inv=None, LGW_MASK_RESCALE_MIN=True, sigmas_override=None, unsample_resample_scales=None,regional_conditioning_weights=None, sde_noise=[],\r\n                  extra_options=\"\",\r\n                  etas=None, 
s_noises=None, momentums=None, guides=None, cfgpp=0.0, cfg_cw = 1.0,regional_conditioning_floors=None, frame_weights_grp=None, eta_substep=0.0, noise_mode_sde_substep=\"hard\", guide_cossim_cutoff_=1.0, guide_bkg_cossim_cutoff_=1.0,\r\n                  ):\r\n    extra_args = {} if extra_args is None else extra_args\r\n\r\n    noise_cossim_iterations         = int(get_extra_options_kv(\"noise_cossim_iterations\",         \"1\",          extra_options))\r\n    noise_substep_cossim_iterations = int(get_extra_options_kv(\"noise_substep_cossim_iterations\", \"1\",          extra_options))\r\n    NOISE_COSSIM_MODE               =     get_extra_options_kv(\"noise_cossim_mode\",               \"orthogonal\", extra_options)\r\n    NOISE_COSSIM_SOURCE             =     get_extra_options_kv(\"noise_cossim_source\",             \"x_eps_data_xinit_orthogonal\",       extra_options)\r\n    NOISE_SUBSTEP_COSSIM_MODE       =     get_extra_options_kv(\"noise_substep_cossim_mode\",       \"orthogonal\", extra_options)\r\n    NOISE_SUBSTEP_COSSIM_SOURCE     =     get_extra_options_kv(\"noise_substep_cossim_source\",     \"x_eps_data_xinit_orthogonal\",       extra_options)\r\n    SUBSTEP_SKIP_LAST               =     get_extra_options_kv(\"substep_skip_last\",               \"false\",      extra_options) == \"true\" \r\n    noise_cossim_tile_size          = int(get_extra_options_kv(\"noise_cossim_tile\",               \"2\",          extra_options))\r\n    noise_substep_cossim_tile_size  = int(get_extra_options_kv(\"noise_substep_cossim_tile\",       \"2\",          extra_options))\r\n    \r\n    substep_eta           = float(get_extra_options_kv(\"substep_eta\",           str(eta_substep),  extra_options))\r\n    substep_noise_scaling = float(get_extra_options_kv(\"substep_noise_scaling\", \"0.0\",  extra_options))\r\n    substep_noise_mode    =       get_extra_options_kv(\"substep_noise_mode\",    noise_mode_sde_substep, extra_options)\r\n    \r\n    substep_eta_start_step = int(get_extra_options_kv(\"substep_noise_start_step\",  \"-1\", extra_options))\r\n    substep_eta_final_step = int(get_extra_options_kv(\"substep_noise_final_step\", \"-1\", extra_options))\r\n    \r\n    noise_substep_cossim_max_iter  =   int(get_extra_options_kv(\"noise_substep_cossim_max_iter\",  \"5\",   extra_options))\r\n    noise_cossim_max_iter          =   int(get_extra_options_kv(\"noise_cossim_max_iter\",          \"5\",   extra_options))\r\n    noise_substep_cossim_max_score = float(get_extra_options_kv(\"noise_substep_cossim_max_score\", \"1e-7\", extra_options))\r\n    noise_cossim_max_score         = float(get_extra_options_kv(\"noise_cossim_max_score\",         \"1e-7\", extra_options))\r\n    \r\n    c1 = c1_ = float(get_extra_options_kv(\"c1\", str(c1), extra_options))\r\n    c2 = c2_ = float(get_extra_options_kv(\"c2\", str(c2), extra_options))\r\n    c3 = c3_ = float(get_extra_options_kv(\"c3\", str(c3), extra_options))\r\n    \r\n    guide_skip_steps = int(get_extra_options_kv(\"guide_skip_steps\", 0, extra_options))        \r\n\r\n    cfg_cw = float(get_extra_options_kv(\"cfg_cw\", str(cfg_cw), extra_options))\r\n    \r\n    MODEL_SAMPLING = model.inner_model.inner_model.model_sampling\r\n    \r\n    s_in, s_one = x.new_ones([x.shape[0]]), x.new_ones([1])\r\n    default_dtype = getattr(torch, get_extra_options_kv(\"default_dtype\", \"float64\", extra_options), torch.float64)   \r\n    max_steps=10000\r\n    \r\n    \r\n    \r\n    if sigmas_override is not None:\r\n        sigmas = sigmas_override.clone()\r\n    
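# NOTE: d_noise uniformly rescales the whole sigma schedule below, e.g.\r\n    # torch.tensor([14.61, 7.0, 1.0, 0.03]) * 0.95 -> [13.88, 6.65, 0.95, 0.0285],\r\n    # so every step targets a slightly lower noise level than scheduled.\r\n    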
sigmas = sigmas.clone() * d_noise\r\n    sigmas, UNSAMPLE = prepare_sigmas(model, sigmas)\r\n    \r\n    SDE_NOISE_EXTERNAL = False\r\n    if sde_noise is not None:\r\n        if len(sde_noise) > 0 and sigmas[1] > sigmas[2]:\r\n            SDE_NOISE_EXTERNAL = True\r\n            sigma_up_total = torch.zeros_like(sigmas[0])\r\n            for i in range(len(sde_noise)-1):\r\n                sigma_up_total += sigmas[i+1]\r\n            eta = eta / sigma_up_total\r\n\r\n    irk_type = implicit_sampler_name\r\n    if implicit_sampler_name in (\"explicit_full\", \"explicit_diagonal\", \"none\"):\r\n        irk_type = rk_type\r\n    \r\n    rk_type = \"buehler\" if implicit_steps > 0 and implicit_sampler_name == \"explicit_full\" else rk_type\r\n    rk_type = get_extra_options_kv(\"rk_type\", rk_type, extra_options)\r\n    print(\"rk_type: \", rk_type)\r\n\r\n    rk       = RK_Method.create(model,  rk_type, x.device)\r\n    irk      = RK_Method.create(model, irk_type, x.device)\r\n\r\n    extra_args = irk.init_cfg_channelwise(x, cfg_cw, **extra_args)\r\n    extra_args =  rk.init_cfg_channelwise(x, cfg_cw, **extra_args)\r\n\r\n    rk. init_noise_sampler(x, noise_seed,     noise_sampler_type, alpha=alpha, k=k)\r\n    irk.init_noise_sampler(x, noise_seed+100, noise_sampler_type, alpha=alpha, k=k)\r\n\r\n\r\n\r\n    frame_weights, frame_weights_inv = None, None\r\n    if frame_weights_grp is not None and frame_weights_grp[0] is not None:\r\n        frame_weights = initialize_or_scale(frame_weights_grp[0], 1.0, max_steps).to(default_dtype)\r\n        frame_weights = F.pad(frame_weights, (0, max_steps), value=0.0)\r\n    if frame_weights_grp is not None and frame_weights_grp[1] is not None:\r\n        frame_weights_inv = initialize_or_scale(frame_weights_grp[1], 1.0, max_steps).to(default_dtype)\r\n        frame_weights_inv = F.pad(frame_weights_inv, (0, max_steps), value=0.0)\r\n    frame_weights_grp = (frame_weights, frame_weights_inv)\r\n\r\n    LG = LatentGuide(guides, x, model, sigmas, UNSAMPLE, LGW_MASK_RESCALE_MIN, extra_options)\r\n    x = LG.init_guides(x, rk.noise_sampler)\r\n    \r\n    y0, y0_inv = LG.y0, LG.y0_inv\r\n    lgw, lgw_inv = LG.lgw, LG.lgw_inv\r\n    guide_mode = LG.guide_mode\r\n\r\n\r\n\r\n    denoised, denoised_prev, eps, eps_prev = [torch.zeros_like(x) for _ in range(4)]\r\n    prev_noises = []\r\n    x_init = x.clone()\r\n    \r\n    \r\n    \r\n    for step in trange(len(sigmas)-1, disable=disable):\r\n\r\n        sigma, sigma_next = sigmas[step], sigmas[step+1]\r\n        unsample_resample_scale = float(unsample_resample_scales[step]) if unsample_resample_scales is not None else None\r\n        if regional_conditioning_weights is not None:\r\n            extra_args['model_options']['transformer_options']['regional_conditioning_weight'] = regional_conditioning_weights[step]\r\n            extra_args['model_options']['transformer_options']['regional_conditioning_floor']  = regional_conditioning_floors [step]\r\n        else:\r\n            extra_args['model_options']['transformer_options']['regional_conditioning_weight'] = 0.0\r\n            extra_args['model_options']['transformer_options']['regional_conditioning_floor']  = 0.0\r\n        \r\n        eta = eta_var = etas[step] if etas is not None else eta\r\n        s_noise = s_noises[step] if s_noises is not None else s_noise\r\n        \r\n  \r\n        if sigma_next == 0:\r\n            rk, irk, rk_type, irk_type, eta, eta_var, extra_args = prepare_step_to_sigma_zero(rk, irk, rk_type, irk_type, model, x, 
extra_options, alpha, k, noise_sampler_type, cfg_cw=cfg_cw, **extra_args)\r\n\r\n        sigma_up, sigma, sigma_down, alpha_ratio = get_res4lyf_step_with_model(model, sigma, sigma_next, eta, noise_mode)\r\n        h     =  rk.h_fn(sigma_down, sigma)\r\n        h_irk = irk.h_fn(sigma_down, sigma)\r\n        \r\n        c2, c3 = get_res4lyf_half_step3(sigma, sigma_down, c2_, c3_, t_fn=rk.t_fn, sigma_fn=rk.sigma_fn, t_fn_formula=t_fn_formula, sigma_fn_formula=sigma_fn_formula)\r\n        \r\n        rk. set_coeff(rk_type,  h,     c1, c2, c3, step, sigmas, sigma, sigma_down, extra_options)\r\n        irk.set_coeff(irk_type, h_irk, c1, c2, c3, step, sigmas, sigma, sigma_down, extra_options)\r\n        \r\n        s_       = [(  rk.sigma_fn( rk.t_fn(sigma) +     h*c_)) * s_one for c_ in   rk.c]\r\n        s_irk_rk = [(  rk.sigma_fn( rk.t_fn(sigma) +     h*c_)) * s_one for c_ in  irk.c]\r\n        s_irk    = [( irk.sigma_fn(irk.t_fn(sigma) + h_irk*c_)) * s_one for c_ in  irk.c]\r\n        \r\n        if step == 0 or step == guide_skip_steps:\r\n            x_, data_, data_u, eps_ = (torch.zeros(max(rk.rows, irk.rows) + 2, *x.shape, dtype=x.dtype, device=x.device) for step in range(4))\r\n\r\n        \r\n        sde_noise_t = None\r\n        if SDE_NOISE_EXTERNAL:\r\n            if step >= len(sde_noise):\r\n                SDE_NOISE_EXTERNAL=False\r\n            else:\r\n                sde_noise_t = sde_noise[step]\r\n                \r\n\r\n        x_prenoise = x.clone()\r\n        x_[0] = x\r\n        if sigma_up > 0:\r\n            x_[0] = rk.add_noise_pre(x, sigma_up, sigma, sigma_next, alpha_ratio, s_noise, noise_mode, SDE_NOISE_EXTERNAL, sde_noise_t) #y0, lgw, sigma_down are currently unused\r\n        \r\n        x_0 = x_[0].clone()\r\n        \r\n        for ms in range(rk.multistep_stages):\r\n            if RK_Method.is_exponential(rk_type):\r\n                eps_ [rk.multistep_stages - ms] = -(x_0 - data_ [rk.multistep_stages - ms])\r\n            else:\r\n                eps_ [rk.multistep_stages - ms] =  (x_0 - data_ [rk.multistep_stages - ms]) / sigma\r\n\r\n        if implicit_steps == 0 or implicit_sampler_name == \"explicit_diagonal\": \r\n            for row in range(rk.rows - rk.multistep_stages):\r\n                for exim_iter in range(implicit_steps+1):\r\n                    sub_sigma_up, sub_sigma, sub_sigma_next, sub_sigma_down, sub_alpha_ratio = 0, s_[row], s_[row+1], s_[row+1], 1\r\n                        \r\n                    if (substep_eta_final_step < 0 and step == len(sigmas)-1+substep_eta_final_step)   or   (substep_eta_final_step > 0 and step > substep_eta_final_step):\r\n                        sub_sigma_up, sub_sigma, sub_sigma_down, sub_alpha_ratio = 0, s_[row], s_[row+1], 1\r\n                        \r\n                    edsef=1\r\n                    if extra_options_flag(\"explicit_diagonal_eta_substep_factors\", extra_options):\r\n                        #value_str = get_extra_options_list(\"explicit_diagonal_eta_substep_factors\", \"\", extra_options)\r\n                        #float_list = [float(item.strip()) for item in value_str.split(',') if item.strip()]\r\n                        float_list = get_extra_options_list(\"explicit_diagonal_eta_substep_factors\", \"\", extra_options, ret_type=float)\r\n                        edsef = float_list[exim_iter]\r\n                    nsef = 1\r\n                    if extra_options_flag(\"noise_eta_substep_factors\", extra_options):\r\n                        #value_str = 
get_extra_options_list(\"noise_eta_substep_factors\", \"\", extra_options)\r\n                        #nsef_list = [float(item.strip()) for item in value_str.split(',') if item.strip()]\r\n                        nsef_list = get_extra_options_list(\"noise_eta_substep_factors\", \"\", extra_options, ret_type=float)\r\n                        nsef = nsef_list[row]\r\n                    if exim_iter > 0 and rk_type.endswith(\"m\") and step >= int(rk_type[-2]): \r\n                        sub_sigma_up, sub_sigma, sub_sigma_down, sub_alpha_ratio = get_res4lyf_step_with_model(model, sigma, sigma_next, substep_eta*edsef*nsef, substep_noise_mode)\r\n                        sub_sigma_next = sigma_next\r\n                    if (row > 0 and not extra_options_flag(\"disable_rough_noise\", extra_options)): # and s_[row-1] >= s_[row]:\r\n                        sub_sigma_up, sub_sigma, sub_sigma_down, sub_alpha_ratio = get_res4lyf_step_with_model(model, s_[row-1], s_[row], substep_eta*edsef*nsef, substep_noise_mode)\r\n                        sub_sigma_next = s_[row]\r\n                        \r\n                    if row > 0 and substep_eta*edsef*nsef > 0 and row < rk.rows and ((SUBSTEP_SKIP_LAST == False) or (row < rk.rows - rk.multistep_stages - 1))   and   (sub_sigma_down > 0) and sigma_next > 0:\r\n                        substep_noise_scaling_ratio = s_[row+1]/sub_sigma_down\r\n                        eps_[row-1] *= 1 + substep_noise_scaling*(substep_noise_scaling_ratio-1)\r\n\r\n                    h_new = h.clone()\r\n                    if (rk_type.endswith(\"m\") and step >= int(rk_type[-2]) and sub_sigma_up > 0) or (row > 0 and sub_sigma_up > 0):\r\n                        if extra_options_flag(\"substep_eta_c_row_plus_one\", extra_options):\r\n                            h_new = (rk.h_fn(sub_sigma_down, sigma) / rk.c[row+1])[0]  \r\n                        else:\r\n                            if exim_iter > 0 and rk_type.endswith(\"m\") and step >= int(rk_type[-2]): \r\n                                c_val = -rk.h_prev/h\r\n                                h_new = (rk.h_fn(sub_sigma_down, sigma)) / c_val\r\n                            else:   \r\n                                h_new = (rk.h_fn(sub_sigma_down, sigma) / rk.c[row])[0]   #used to be rk.c[row+1]\r\n\r\n                        s_new_       = [(  rk.sigma_fn( rk.t_fn(sigma) +     h_new*c_)) * s_one for c_ in   rk.c]\r\n                        \"\"\"print(\"step, row: \", step, row)\r\n                        print(\"h, h_new: \", h.item(), h_new.item())\r\n                        print(\"s_: \", s_)\r\n                        print(\"s_new_: \", s_new_)\r\n                        print(\"sub_sigma_up, sub_sigma, sub_sigma_next, sub_sigma_down, sub_alpha_ratio: \", sub_sigma_up.item(), sub_sigma.item(), sub_sigma_next.item(), sub_sigma_down.item(), sub_alpha_ratio.item())\"\"\"\r\n                    # UPDATE\r\n                    #print(\"UPDATE: step,row,h_new: \", step, row, h_new.item())\r\n                    x_[row+1] = x_0 + h_new * rk.a_k_sum(eps_, row)\r\n                    if row > 0:\r\n                        if PRINT_DEBUG:\r\n                            print(\"A: step,row,h,h_new: \\n\", step, row, round(float(h.item()),3), round(float(h_new.item()),3))\r\n\r\n                    #print(\"step, row, exim_iter: \", step, row, exim_iter)\r\n\r\n\r\n                    # NOISE ADD\r\n                    if is_RF_model(model) == True   or   (is_RF_model(model) == False and noise_mode != \"hard\"):\r\n               
         if (exim_iter < implicit_steps and sub_sigma_up > 0) or ((row > 0) and (sub_sigma_up > 0) and ((SUBSTEP_SKIP_LAST == False) or (row < rk.rows - rk.multistep_stages - 1))):\r\n                            if PRINT_DEBUG:\r\n                                print(\"A: sub_sigma_up, sub_sigma, sub_sigma_next, sub_sigma_down, sub_alpha_ratio: \\n\", round(float(sub_sigma_up),3), round(float(sub_sigma),3), round(float(sub_sigma_next),3), round(float(sub_sigma_down),3), round(float(sub_alpha_ratio),3))\r\n\r\n                            data_tmp = denoised_prev if data_[row-1].sum() == 0 else data_[row-1]\r\n                            eps_tmp  = eps_prev      if  eps_[row-1].sum() == 0 else eps_ [row-1]\r\n                            Osde = NoiseStepHandlerOSDE(x_[row+1], eps_tmp, data_tmp, x_init, y0, y0_inv)\r\n                            if Osde.check_cossim_source(NOISE_SUBSTEP_COSSIM_SOURCE):\r\n                                noise = rk.noise_sampler(sigma=sub_sigma, sigma_next=sub_sigma_next) \r\n                                noise_osde = Osde.get_ortho_noise(noise, prev_noises, max_iter=noise_substep_cossim_max_iter, max_score=noise_substep_cossim_max_score, NOISE_COSSIM_SOURCE=NOISE_SUBSTEP_COSSIM_SOURCE)\r\n                                x_[row+1] = sub_alpha_ratio * x_[row+1] + sub_sigma_up * noise_osde * s_noise\r\n                            elif extra_options_flag(\"noise_substep_cossim\", extra_options):\r\n                                x_[row+1] = handle_tiled_etc_noise_steps(x_0, x_[row+1], x_prenoise, x_init, eps_tmp, data_tmp, y0, y0_inv, row, rk_type, rk, sub_sigma_up, s_[row-1], s_[row], sub_alpha_ratio, s_noise, substep_noise_mode, SDE_NOISE_EXTERNAL, sde_noise_t,\r\n                                    NOISE_SUBSTEP_COSSIM_SOURCE, NOISE_SUBSTEP_COSSIM_MODE, noise_substep_cossim_tile_size, noise_substep_cossim_iterations, extra_options)\r\n                            else:\r\n                                x_[row+1] = rk.add_noise_post(x_[row+1], sub_sigma_up, sub_sigma, sub_sigma_next, sub_alpha_ratio, s_noise, substep_noise_mode, SDE_NOISE_EXTERNAL, sde_noise_t)\r\n\r\n\r\n                    # MODEL CALL\r\n                    if step < guide_skip_steps:\r\n                        eps_row, eps_row_inv = get_guide_epsilon_substep(x_0, x_, y0, y0_inv, s_, row, rk_type)\r\n                        eps_[row] = LG.mask * eps_row   +   (1-LG.mask) * eps_row_inv\r\n                    else:\r\n                        if implicit_steps == 0 or row > 0 or (row == 0 and not extra_options_flag(\"explicit_diagonal_implicit_predictor\", extra_options)):\r\n                            eps_[row], data_[row] = rk(x_0, x_[row+1], s_[row], h, **extra_args)   \r\n                            #print(\"exim: \", step, row, exim_iter)\r\n                        else:\r\n                            if extra_options_flag(\"explicit_diagonal_implicit_predictor_disable_noise\", extra_options):\r\n                                sub_sigma_up, sub_sigma_down, sub_alpha_ratio = sub_sigma_up*0, sub_sigma_next, sub_alpha_ratio/sub_alpha_ratio\r\n                            eps_[row], data_[row] = rk(x_0, x_[row+1], s_[row], h, **extra_args)\r\n                            eps_, x_ = LG.process_guides_substep(x_0, x_, eps_, data_, row, step, sigma, sigma_next, sigma_down, s_, unsample_resample_scale, rk, rk_type, extra_options, frame_weights_grp)\r\n                            h_mini = rk.h_fn(sub_sigma_down, sub_sigma)\r\n                            x_[row+1] = x_0 + h_mini * eps_[row]\r\n   
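                         # Predictor step: the x_[row+1] update above extrapolates with the current\r\n                            # eps estimate; the noise handling and inner loop below act as the\r\n                            # corrector (the implicit buehler update noted on the loop below).\r\n   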
                          \r\n                            Osde = NoiseStepHandlerOSDE(x_[row+1], eps_[row], data_[row], x_init, y0, y0_inv)\r\n                            if Osde.check_cossim_source(NOISE_SUBSTEP_COSSIM_SOURCE):\r\n                                noise = rk.noise_sampler(sigma=sub_sigma, sigma_next=sub_sigma_next)\r\n                                noise_osde = Osde.get_ortho_noise(noise, prev_noises, max_iter=noise_substep_cossim_max_iter, max_score=noise_substep_cossim_max_score, NOISE_COSSIM_SOURCE=NOISE_SUBSTEP_COSSIM_SOURCE)\r\n                                x_[row+1] = sub_alpha_ratio * x_[row+1] + sub_sigma_up * noise_osde * s_noise\r\n                            else:\r\n                                x_[row+1] = rk.add_noise_post(x_[row+1], sub_sigma_up, sub_sigma, sub_sigma_next, sub_alpha_ratio, s_noise, substep_noise_mode, SDE_NOISE_EXTERNAL, sde_noise_t)\r\n                                \r\n                            for inner_exim_iter in range(implicit_steps): # implicit buehler update to find Yn+1\r\n                                #print(\"inner_exim: \", step, row, inner_exim_iter)\r\n                                eps_[row], data_[row] = rk(x_0, x_[row+1], s_[row+1], h, **extra_args)\r\n                                eps_, x_ = LG.process_guides_substep(x_0, x_, eps_, data_, row, step, sigma, sigma_next, sigma_down, s_, unsample_resample_scale, rk, rk_type, extra_options, frame_weights_grp)\r\n                                x_[row+1] = x_0 + h_mini * eps_[row]\r\n                                \r\n                                Osde = NoiseStepHandlerOSDE(x_[row+1], eps_[row], data_[row], x_init, y0, y0_inv)\r\n                                if Osde.check_cossim_source(NOISE_SUBSTEP_COSSIM_SOURCE):\r\n                                    noise = rk.noise_sampler(sigma=sub_sigma, sigma_next=sub_sigma_next)\r\n                                    noise_osde = Osde.get_ortho_noise(noise, prev_noises, max_iter=noise_substep_cossim_max_iter, max_score=noise_substep_cossim_max_score, NOISE_COSSIM_SOURCE=NOISE_SUBSTEP_COSSIM_SOURCE)\r\n                                    x_[row+1] = sub_alpha_ratio * x_[row+1] + sub_sigma_up * noise_osde * s_noise\r\n                                else:\r\n                                    x_[row+1] = rk.add_noise_post(x_[row+1], sub_sigma_up, sub_sigma, sub_sigma_next, sub_alpha_ratio, s_noise, substep_noise_mode, SDE_NOISE_EXTERNAL, sde_noise_t)\r\n\r\n\r\n\r\n                        if extra_options_flag(\"rk_linear_straight\", extra_options):\r\n                            eps_[row] = (x_0 - data_[row]) / sigma\r\n                        if sub_sigma_up > 0 and not RK_Method.is_exponential(rk_type):\r\n                            eps_[row] = (x_0 - data_[row]) / sigma\r\n\r\n\r\n                    # GUIDES \r\n                    eps_row_tmp, x_row_tmp = eps_[row].clone(), x_[row+1].clone()\r\n                    eps_, x_ = LG.process_guides_substep(x_0, x_, eps_, data_, row, step, sigma, sigma_next, sigma_down, s_, unsample_resample_scale, rk, rk_type, extra_options, frame_weights_grp)\r\n\r\n                    if extra_options_flag(\"explicit_diagonal_eps_proj_factors\", extra_options):\r\n                        #value_str = get_extra_options_list(\"explicit_diagonal_eps_proj_factors\", \"\", extra_options)\r\n                        #float_list = [float(item.strip()) for item in value_str.split(',') if item.strip()]\r\n                        float_list = 
get_extra_options_list(\"explicit_diagonal_eps_proj_factors\", \"\", extra_options, ret_type=float)\r\n                        eps_[row] = (float_list[exim_iter]) * eps_[row]   +   (1-float_list[exim_iter]) * eps_row_tmp\r\n                        x_[row+1] = (float_list[exim_iter]) * x_[row+1]   +   (1-float_list[exim_iter]) * x_row_tmp\r\n\r\n                    if row > 0 and exim_iter <= implicit_steps and implicit_steps > 0:\r\n                        eps_[row-1] = eps_[row]\r\n                    \r\n                    if implicit_steps > 0 and row == 0:\r\n                        break\r\n\r\n            if PRINT_DEBUG:\r\n                print(\"B: step,h,h_new: \\n\", step, round(float(h.item()),3), round(float(h_new.item()),3))\r\n                print(\"B: sub_sigma_up, sub_sigma, sub_sigma_next, sub_sigma_down, sub_alpha_ratio: \\n\", round(float(sub_sigma_up),3), round(float(sub_sigma),3), round(float(sub_sigma_next),3), round(float(sub_sigma_down),3), round(float(sub_alpha_ratio),3))\r\n\r\n            x = x_0 + h * rk.b_k_sum(eps_, 0)\r\n                    \r\n            denoised = x_0 + ((sigma / (sigma - sigma_down)) *  h) * rk.b_k_sum(eps_, 0) \r\n            eps = x - denoised\r\n            x = LG.process_guides_poststep(x, denoised, eps, step, extra_options)\r\n            \r\n\r\n\r\n\r\n        # DIAGONALLY IMPLICIT\r\n        elif implicit_sampler_name==\"explicit_diagonal_alt\" or any(irk_type.startswith(prefix) for prefix in {\"crouzeix\", \"irk_exp_diag\", \"pareschi_russo\", \"kraaijevanger_spijker\", \"qin_zhang\",}):\r\n            s_irk = [torch.full_like(s_irk[0], sigma.item())] + s_irk\r\n\r\n            for row in range(irk.rows - irk.multistep_stages):\r\n                \r\n                sub_sigma_up, sub_sigma, sub_sigma_next, sub_sigma_down, sub_alpha_ratio = 0.0, s_irk[row], s_irk[row+1], s_irk[row+1], 1.0\r\n                if irk.c[row] > 0:\r\n                    sub_sigma_up, sub_sigma, sub_sigma_down, sub_alpha_ratio = get_res4lyf_step_with_model(model, s_irk[row], s_irk[row+1], substep_eta, substep_noise_mode)\r\n                \r\n                if not extra_options_flag(\"diagonal_implicit_skip_initial\", extra_options):\r\n                    # MODEL CALL\r\n                    eps_[row], data_[row] = irk(x_0, x_[row], s_irk[row], h_irk, **extra_args) \r\n                    \r\n                    # GUIDES \r\n                    eps_, x_ = LG.process_guides_substep(x_0, x_, eps_, data_, row, step, sigma, sigma_next, sigma_down, s_irk, unsample_resample_scale, irk, irk_type, extra_options, frame_weights_grp)\r\n                \r\n                for diag_iter in range(implicit_steps):\r\n                    h_new_irk = h.clone()\r\n                    if irk.c[row] > 0:\r\n                        h_new_irk = (irk.h_fn(sub_sigma_down, sigma) / irk.c[row])[0]\r\n                    \r\n                    # UPDATE\r\n                    x_[row+1] = x_0 + h_new_irk * irk.a_k_sum(eps_, row)\r\n                    \r\n                    # NOISE ADD              \r\n                    if is_RF_model(model) == True   or   (is_RF_model(model) == False and noise_mode != \"hard\"):      \r\n                        if (row > 0) and (sub_sigma_up > 0) and ((SUBSTEP_SKIP_LAST == False) or (row < irk.rows - irk.multistep_stages - 1)):\r\n                            data_tmp = denoised_prev if data_[row-1].sum() == 0 else data_[row-1]\r\n                            eps_tmp  = eps_prev      if  eps_[row-1].sum() == 0 else eps_ [row-1]\r\n          
                   Osde = NoiseStepHandlerOSDE(x_[row+1], eps_tmp, data_tmp, x_init, y0, y0_inv)\r\n                            if Osde.check_cossim_source(NOISE_SUBSTEP_COSSIM_SOURCE):\r\n                                noise = irk.noise_sampler(sigma=sub_sigma, sigma_next=sub_sigma_next) \r\n                                noise_osde = Osde.get_ortho_noise(noise, prev_noises, max_iter=noise_substep_cossim_max_iter, max_score=noise_substep_cossim_max_score, NOISE_COSSIM_SOURCE=NOISE_SUBSTEP_COSSIM_SOURCE)\r\n                                x_[row+1] = sub_alpha_ratio * x_[row+1] + sub_sigma_up * noise_osde * s_noise\r\n                            elif extra_options_flag(\"noise_substep_cossim\", extra_options):\r\n                                x_[row+1] = handle_tiled_etc_noise_steps(x_0, x_[row+1], x_prenoise, x_init, eps_tmp, data_tmp, y0, y0_inv, row, \r\n                                    irk_type, irk, sub_sigma_up, s_irk[row-1], s_irk[row], sub_alpha_ratio, s_noise, substep_noise_mode, SDE_NOISE_EXTERNAL, sde_noise_t,\r\n                                    NOISE_SUBSTEP_COSSIM_SOURCE, NOISE_SUBSTEP_COSSIM_MODE, noise_substep_cossim_tile_size, noise_substep_cossim_iterations,\r\n                                    extra_options)\r\n                            else:\r\n                                x_[row+1] = irk.add_noise_post(x_[row+1], sub_sigma_up, sub_sigma, sub_sigma_next, sub_alpha_ratio, s_noise, substep_noise_mode, SDE_NOISE_EXTERNAL, sde_noise_t)\r\n                    \r\n                    # MODEL CALL\r\n                    eps_[row], data_[row] = irk(x_0, x_[row+1], s_irk[row+1], h_irk, **extra_args)    \r\n                    if sub_sigma_up > 0 and not RK_Method.is_exponential(irk_type):\r\n                        eps_[row] = (x_0 - data_[row]) / sigma\r\n                    \r\n                    # GUIDES \r\n                    eps_, x_ = LG.process_guides_substep(x_0, x_, eps_, data_, row, step, sigma, sigma_next, sigma_down, s_irk, unsample_resample_scale, irk, irk_type, extra_options, frame_weights_grp)\r\n                    \r\n                    \r\n            x = x_0 + h_irk * irk.b_k_sum(eps_, 0) \r\n            \r\n            denoised = x_0 + (sigma / (sigma - sigma_down)) *  h_irk * irk.b_k_sum(eps_, 0) \r\n            eps = x - denoised\r\n            x = LG.process_guides_poststep(x, denoised, eps, step, extra_options)\r\n\r\n\r\n        # FULLY IMPLICIT\r\n        else:\r\n            s2 = s_irk_rk[:]\r\n            s2.append(sigma.unsqueeze(dim=0))\r\n            s_all = torch.sort(torch.stack(s2, dim=0).squeeze(dim=1).unique(), descending=True)[0]\r\n            sigmas_and = torch.cat( (sigmas[0:step], s_all), dim=0)\r\n            \r\n            data_[0].zero_()\r\n            eps_ [0].zero_()\r\n            eps_list = []\r\n            \r\n            if extra_options_flag(\"fast_implicit_guess\",  extra_options):\r\n                if denoised.sum() == 0:\r\n                    if extra_options_flag(\"fast_implicit_guess_use_guide\",  extra_options):\r\n                        data_s = y0\r\n                        eps_s = x_0 - data_s\r\n                    else:\r\n                        eps_s, data_s = rk(x_0, x_0, sigma, h, **extra_args)\r\n                else:\r\n                    eps_s, data_s = eps, denoised\r\n                for i in range(len(s_all)-1):\r\n                    eps_list.append(eps_s * s_all[i]/sigma)\r\n                if torch.allclose(s_all[-1], sigma_down, atol=1e-8):\r\n                    eps_list.append(eps_s * 
sigma_down/sigma)\r\n            else:\r\n                # EXPLICIT GUESS\r\n                x_mid = x\r\n                for i in range(len(s_all)-1):\r\n                    x_mid, eps_, data_ = get_explicit_rk_step(rk, rk_type, x_mid, LG, step, s_all[i], s_all[i+1], eta, eta_var, s_noise, noise_mode, c2, c3, step+i, sigmas_and, x_, eps_, data_, unsample_resample_scale, extra_options, frame_weights_grp, \r\n                                                              x_init, x_prenoise, NOISE_COSSIM_SOURCE, NOISE_COSSIM_MODE, noise_cossim_max_iter, noise_cossim_max_score, noise_cossim_tile_size, noise_cossim_iterations,SDE_NOISE_EXTERNAL,sde_noise_t,MODEL_SAMPLING,\r\n                                                              **extra_args)\r\n\r\n                    eps_list.append(eps_[0])\r\n                    data_[0].zero_()\r\n                    eps_ [0].zero_()\r\n                    \r\n                if torch.allclose(s_all[-1], sigma_down, atol=1e-8):\r\n                    eps_down, data_down = rk(x_0, x_mid, sigma_down, h, **extra_args) #should h_irk = h? going to change it for now.\r\n                    eps_list.append(eps_down)\r\n\r\n            s_all = [s for s in s_all if s in s_irk_rk]\r\n\r\n            eps_list = [eps_list[s_all.index(s)].clone() for s in s_irk_rk]\r\n            eps2_ = torch.stack(eps_list, dim=0)\r\n\r\n            # FULLY IMPLICIT LOOP\r\n            for implicit_iter in range(implicit_steps):\r\n                for row in range(irk.rows):\r\n                    x_[row+1] = x_0 + h_irk * irk.a_k_sum(eps2_, row)\r\n                    eps2_[row], data_[row] = irk(x_0, x_[row+1], s_irk[row], h_irk, **extra_args)\r\n                    if not extra_options_flag(\"implicit_loop_skip_guide\", extra_options):\r\n                        eps2_, x_ = LG.process_guides_substep(x_0, x_, eps2_, data_, row, step, sigma, sigma_next, sigma_down, s_irk, unsample_resample_scale, irk, irk_type, extra_options, frame_weights_grp)\r\n                x = x_0 + h_irk * irk.b_k_sum(eps2_, 0)\r\n                denoised = x_0 + (sigma / (sigma - sigma_down)) *  h_irk * irk.b_k_sum(eps2_, 0) \r\n                eps = x - denoised\r\n                x = LG.process_guides_poststep(x, denoised, eps, step, extra_options)\r\n\r\n\r\n        preview_callback(x, eps, denoised, x_, eps_, data_, step, sigma, sigma_next, callback, extra_options)\r\n\r\n        sde_noise_t = None\r\n        if SDE_NOISE_EXTERNAL:\r\n            if step >= len(sde_noise):\r\n                SDE_NOISE_EXTERNAL=False\r\n            else:\r\n                sde_noise_t = sde_noise[step]\r\n                \r\n        if is_RF_model(model) == True   or   (is_RF_model(model) == False and noise_mode != \"hard\"):\r\n            if sigma_up > 0:\r\n                #print(\"NOISE_FULL: sigma_up, sigma, sigma_next, sigma_down, alpha_ratio: \", sigma_up.item(), sigma.item(), sigma_next.item(), sigma_down.item(), alpha_ratio.item())\r\n                if implicit_steps==0:\r\n                    rk_or_irk = rk\r\n                    rk_or_irk_type = rk_type\r\n                else:\r\n                    rk_or_irk = irk\r\n                    rk_or_irk_type = irk_type\r\n                Osde = NoiseStepHandlerOSDE(x, eps, denoised, x_init, y0, y0_inv)\r\n                if Osde.check_cossim_source(NOISE_COSSIM_SOURCE):\r\n                    noise = rk_or_irk.noise_sampler(sigma=sigma, sigma_next=sigma_next)\r\n                    noise_osde = Osde.get_ortho_noise(noise, prev_noises, 
max_iter=noise_cossim_max_iter, max_score=noise_cossim_max_score, NOISE_COSSIM_SOURCE=NOISE_COSSIM_SOURCE)\r\n                    x = alpha_ratio * x + sigma_up * noise_osde * s_noise\r\n                elif extra_options_flag(\"noise_cossim\", extra_options):\r\n                    x = handle_tiled_etc_noise_steps(x_0, x, x_prenoise, x_init, eps, denoised, y0, y0_inv, step, \r\n                                    rk_or_irk_type, rk_or_irk, sigma_up, sigma, sigma_next, alpha_ratio, s_noise, noise_mode, SDE_NOISE_EXTERNAL, sde_noise_t,\r\n                                    NOISE_COSSIM_SOURCE, NOISE_COSSIM_MODE, noise_cossim_tile_size, noise_cossim_iterations,\r\n                                    extra_options)\r\n                else:\r\n                    x = rk_or_irk.add_noise_post(x, sigma_up, sigma, sigma_next, alpha_ratio, s_noise, noise_mode, SDE_NOISE_EXTERNAL, sde_noise_t)\r\n\r\n        if PRINT_DEBUG:\r\n            print(\"Data vs. y0 cossim score: \", get_cosine_similarity(data_[0], y0).item())\r\n\r\n        for ms in range(rk.multistep_stages):\r\n            if RK_Method.is_exponential(rk_type):\r\n                eps_[rk.multistep_stages - ms] = data_[rk.multistep_stages - ms - 1] - x\r\n            else:\r\n                eps_[rk.multistep_stages - ms] = (x - data_[rk.multistep_stages - ms - 1]) / sigma\r\n                \r\n            #eps_ [rk.multistep_stages - ms] = eps_ [rk.multistep_stages - ms - 1]\r\n            data_[rk.multistep_stages - ms] = data_[rk.multistep_stages - ms - 1]\r\n                \r\n        eps_ [0] = torch.zeros_like(eps_ [0])\r\n        data_[0] = torch.zeros_like(data_[0])\r\n        \r\n        denoised_prev = denoised\r\n        eps_prev = eps\r\n        \r\n    preview_callback(x, eps, denoised, x_, eps_, data_, step, sigma, sigma_next, callback, extra_options, FINAL_STEP=True)\r\n    return x\r\n\r\n\r\n\r\ndef get_explicit_rk_step(rk, rk_type, x, LG, step, sigma, sigma_next, eta, eta_var, s_noise, noise_mode, c2, c3, stepcount, sigmas, x_, eps_, data_, unsample_resample_scale, extra_options, frame_weights_grp, \r\n                         x_init, x_prenoise, NOISE_COSSIM_SOURCE, NOISE_COSSIM_MODE, noise_cossim_max_iter, noise_cossim_max_score, noise_cossim_tile_size, noise_cossim_iterations,SDE_NOISE_EXTERNAL,sde_noise_t,MODEL_SAMPLING,\r\n                         **extra_args):\r\n\r\n    extra_args = {} if extra_args is None else extra_args\r\n    s_in = x.new_ones([x.shape[0]])\r\n    \r\n    eta = float(get_extra_options_kv(\"implicit_substep_eta\", eta, extra_options))\r\n\r\n    sigma_up, sigma, sigma_down, alpha_ratio = get_res4lyf_step_with_model(rk.model, sigma, sigma_next, eta, noise_mode)\r\n    h = rk.h_fn(sigma_down, sigma)\r\n    c2, c3 = get_res4lyf_half_step3(sigma, sigma_down, c2, c3, t_fn=rk.t_fn, sigma_fn=rk.sigma_fn)\r\n    \r\n    rk.set_coeff(rk_type, h, c2=c2, c3=c3, stepcount=stepcount, sigmas=sigmas, sigma_down=sigma_down, extra_options=extra_options)\r\n\r\n    s_ = [(sigma + h * c_) * s_in for c_ in rk.c]\r\n    x_[0] = rk.add_noise_pre(x, sigma_up, sigma, sigma_next, alpha_ratio, s_noise, noise_mode)\r\n    \r\n    x_0 = x_[0].clone()\r\n    \r\n    for ms in range(rk.multistep_stages):\r\n        if RK_Method.is_exponential(rk_type):\r\n            eps_ [rk.multistep_stages - ms] = data_ [rk.multistep_stages - ms] - x_0\r\n        else:\r\n            eps_ [rk.multistep_stages - ms] = (x_0 - data_ [rk.multistep_stages - ms]) / sigma\r\n        \r\n    for row in range(rk.rows - 
rk.multistep_stages):\r\n        x_[row+1] = x_0 + h * rk.a_k_sum(eps_, row)\r\n        eps_[row], data_[row] = rk(x_0, x_[row+1], s_[row], h, **extra_args)\r\n        \r\n        eps_, x_ = LG.process_guides_substep(x_0, x_, eps_, data_, row, step, sigma, sigma_next, sigma_down, s_, unsample_resample_scale, rk, rk_type, extra_options, frame_weights_grp)        \r\n        \r\n    x = x_0 + h * rk.b_k_sum(eps_, 0)\r\n    \r\n    denoised = x_0 + (sigma / (sigma - sigma_down)) *  h * rk.b_k_sum(eps_, 0) \r\n    eps = x - denoised\r\n    \r\n    y0 = LG.y0\r\n    if LG.y0.shape[0] > 1:\r\n        y0 = LG.y0[min(step, LG.y0.shape[0]-1)].unsqueeze(0)  \r\n        \r\n    x = LG.process_guides_poststep(x, denoised, eps, step, extra_options)\r\n\r\n    #x = rk.add_noise_post(x, sigma_up, sigma, sigma_next, alpha_ratio, s_noise, noise_mode)\r\n    \r\n    if is_RF_model(rk.model) == True   or   (is_RF_model(rk.model) == False and noise_mode != \"hard\"):\r\n        if sigma_up > 0:\r\n            Osde = NoiseStepHandlerOSDE(x, eps, denoised, x_init, y0, LG.y0_inv)\r\n            if Osde.check_cossim_source(NOISE_COSSIM_SOURCE):\r\n                noise = rk.noise_sampler(sigma=sigma, sigma_next=sigma_next)\r\n                noise_osde = Osde.get_ortho_noise(noise, [], max_iter=noise_cossim_max_iter, max_score=noise_cossim_max_score, NOISE_COSSIM_SOURCE=NOISE_COSSIM_SOURCE)\r\n                x = alpha_ratio * x + sigma_up * noise_osde * s_noise\r\n            elif extra_options_flag(\"noise_cossim\", extra_options):\r\n                x = handle_tiled_etc_noise_steps(x_0, x, x_prenoise, x_init, eps, denoised, y0, LG.y0_inv, step, \r\n                                    rk_type, rk, sigma_up, sigma, sigma_next, alpha_ratio, s_noise, noise_mode, SDE_NOISE_EXTERNAL, sde_noise_t,\r\n                                    NOISE_COSSIM_SOURCE, NOISE_COSSIM_MODE, noise_cossim_tile_size, noise_cossim_iterations,\r\n                                    extra_options)\r\n            else:\r\n                x = rk.add_noise_post(x, sigma_up, sigma, sigma_next, alpha_ratio, s_noise, noise_mode, SDE_NOISE_EXTERNAL, sde_noise_t)\r\n\r\n    for ms in range(rk.multistep_stages): # NEEDS ADJUSTING?\r\n        eps_ [rk.multistep_stages - ms] = eps_ [rk.multistep_stages - ms - 1]\r\n        data_[rk.multistep_stages - ms] = data_[rk.multistep_stages - ms - 1]\r\n\r\n    return x, eps_, data_\r\n\r\n\r\n\r\ndef preview_callback(x, eps, denoised, x_, eps_, data_, step, sigma, sigma_next, callback, extra_options, FINAL_STEP=False):\r\n    \r\n    if FINAL_STEP:\r\n        denoised_callback = denoised\r\n        \r\n    elif extra_options_flag(\"eps_substep_preview\", extra_options):\r\n        row_callback = int(get_extra_options_kv(\"eps_substep_preview\", \"0\", extra_options))\r\n        denoised_callback = eps_[row_callback]\r\n        \r\n    elif extra_options_flag(\"denoised_substep_preview\", extra_options):\r\n        row_callback = int(get_extra_options_kv(\"denoised_substep_preview\", \"0\", extra_options))\r\n        denoised_callback = data_[row_callback]\r\n        \r\n    elif extra_options_flag(\"x_substep_preview\", extra_options):\r\n        row_callback = int(get_extra_options_kv(\"x_substep_preview\", \"0\", extra_options))\r\n        denoised_callback = x_[row_callback]\r\n        \r\n    elif extra_options_flag(\"eps_preview\", extra_options):\r\n        denoised_callback = eps\r\n        \r\n    elif extra_options_flag(\"denoised_preview\", extra_options):\r\n        denoised_callback = 
denoised\r\n        \r\n    elif extra_options_flag(\"x_preview\", extra_options):\r\n        denoised_callback = x\r\n        \r\n    else:\r\n        denoised_callback = data_[0]\r\n        \r\n    callback({'x': x, 'i': step, 'sigma': sigma, 'sigma_next': sigma_next, 'denoised': denoised_callback.to(torch.float32)}) if callback is not None else None\r\n    \r\n    return\r\n\r\n\r\n\r\n\r\ndef sample_res_2m(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return sample_rk(model, x, sigmas, extra_args, callback, disable, noise_sampler_type=\"gaussian\", noise_mode=\"hard\", noise_seed=-1, rk_type=\"res_2m\", eta=0.0, )\r\ndef sample_res_2s(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return sample_rk(model, x, sigmas, extra_args, callback, disable, noise_sampler_type=\"gaussian\", noise_mode=\"hard\", noise_seed=-1, rk_type=\"res_2s\", eta=0.0, )\r\ndef sample_res_3s(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return sample_rk(model, x, sigmas, extra_args, callback, disable, noise_sampler_type=\"gaussian\", noise_mode=\"hard\", noise_seed=-1, rk_type=\"res_3s\", eta=0.0, )\r\ndef sample_res_5s(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return sample_rk(model, x, sigmas, extra_args, callback, disable, noise_sampler_type=\"gaussian\", noise_mode=\"hard\", noise_seed=-1, rk_type=\"res_5s\", eta=0.0, )\r\ndef sample_res_6s(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return sample_rk(model, x, sigmas, extra_args, callback, disable, noise_sampler_type=\"gaussian\", noise_mode=\"hard\", noise_seed=-1, rk_type=\"res_6s\", eta=0.0, )\r\n\r\ndef sample_res_2m_sde(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return sample_rk(model, x, sigmas, extra_args, callback, disable, noise_sampler_type=\"gaussian\", noise_mode=\"hard\", noise_seed=-1, rk_type=\"res_2m\", eta=0.5, eta_substep=0.5, )\r\ndef sample_res_2s_sde(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return sample_rk(model, x, sigmas, extra_args, callback, disable, noise_sampler_type=\"gaussian\", noise_mode=\"hard\", noise_seed=-1, rk_type=\"res_2s\", eta=0.5, eta_substep=0.5, )\r\ndef sample_res_3s_sde(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return sample_rk(model, x, sigmas, extra_args, callback, disable, noise_sampler_type=\"gaussian\", noise_mode=\"hard\", noise_seed=-1, rk_type=\"res_3s\", eta=0.5, eta_substep=0.5, )\r\ndef sample_res_5s_sde(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return sample_rk(model, x, sigmas, extra_args, callback, disable, noise_sampler_type=\"gaussian\", noise_mode=\"hard\", noise_seed=-1, rk_type=\"res_5s\", eta=0.5, eta_substep=0.5, )\r\ndef sample_res_6s_sde(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return sample_rk(model, x, sigmas, extra_args, callback, disable, noise_sampler_type=\"gaussian\", noise_mode=\"hard\", noise_seed=-1, rk_type=\"res_6s\", eta=0.5, eta_substep=0.5, )\r\n\r\ndef sample_deis_2m(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return sample_rk(model, x, sigmas, extra_args, callback, disable, noise_sampler_type=\"gaussian\", noise_mode=\"hard\", noise_seed=-1, rk_type=\"deis_2m\", eta=0.0, )\r\ndef sample_deis_3m(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return sample_rk(model, x, sigmas, extra_args, callback, disable, noise_sampler_type=\"gaussian\", 
noise_mode=\"hard\", noise_seed=-1, rk_type=\"deis_3m\", eta=0.0, )\r\ndef sample_deis_4m(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return sample_rk(model, x, sigmas, extra_args, callback, disable, noise_sampler_type=\"gaussian\", noise_mode=\"hard\", noise_seed=-1, rk_type=\"deis_4m\", eta=0.0, )\r\n\r\ndef sample_deis_2m_sde(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return sample_rk(model, x, sigmas, extra_args, callback, disable, noise_sampler_type=\"gaussian\", noise_mode=\"hard\", noise_seed=-1, rk_type=\"deis_2m\", eta=0.5, eta_substep=0.5, )\r\ndef sample_deis_3m_sde(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return sample_rk(model, x, sigmas, extra_args, callback, disable, noise_sampler_type=\"gaussian\", noise_mode=\"hard\", noise_seed=-1, rk_type=\"deis_3m\", eta=0.5, eta_substep=0.5, )\r\ndef sample_deis_4m_sde(model, x, sigmas, extra_args=None, callback=None, disable=None):\r\n    return sample_rk(model, x, sigmas, extra_args, callback, disable, noise_sampler_type=\"gaussian\", noise_mode=\"hard\", noise_seed=-1, rk_type=\"deis_4m\", eta=0.5, eta_substep=0.5, )\r\n\r\n"
  },
  {
    "path": "legacy/samplers.py",
    "content": "from .noise_classes import prepare_noise, NOISE_GENERATOR_CLASSES_SIMPLE, NOISE_GENERATOR_NAMES_SIMPLE, NOISE_GENERATOR_NAMES\nfrom .sigmas import get_sigmas\n\n\nfrom .constants import MAX_STEPS\n\nimport comfy.samplers\nimport comfy.sample\nimport comfy.sampler_helpers\nimport comfy.model_sampling\nimport comfy.latent_formats\nimport comfy.sd\nimport comfy.supported_models\n\nimport latent_preview\nimport torch\nimport torch.nn.functional as F\n\nimport math\nimport copy\n\nfrom .helper import get_extra_options_kv, extra_options_flag, get_res4lyf_scheduler_list\nfrom .latents import initialize_or_scale\n\nfrom .noise_classes import prepare_noise, NOISE_GENERATOR_CLASSES_SIMPLE, NOISE_GENERATOR_NAMES_SIMPLE, NOISE_GENERATOR_NAMES\nfrom .sigmas import get_sigmas\n\n\nfrom .rk_sampler import sample_rk\nfrom .rk_coefficients import RK_SAMPLER_NAMES, IRK_SAMPLER_NAMES\nfrom .rk_guide_func import get_orthogonal\nfrom .noise_sigmas_timesteps_scaling import NOISE_MODE_NAMES\n\n\ndef move_to_same_device(*tensors):\n    if not tensors:\n        return tensors\n\n    device = tensors[0].device\n    return tuple(tensor.to(device) for tensor in tensors)\n\n\n#SCHEDULER_NAMES = comfy.samplers.SCHEDULER_NAMES + [\"beta57\"]\n\n\n\nclass ClownSamplerAdvanced:\n    @classmethod\n    def INPUT_TYPES(s):\n        return {\"required\":\n                    {\n                    \"noise_type_sde\": (NOISE_GENERATOR_NAMES_SIMPLE, {\"default\": \"gaussian\"}),\n                    \"noise_type_sde_substep\": (NOISE_GENERATOR_NAMES_SIMPLE, {\"default\": \"gaussian\"}),\n                    \"noise_mode_sde\": (NOISE_MODE_NAMES, {\"default\": 'hard', \"tooltip\": \"How noise scales with the sigma schedule. Hard is the most aggressive, the others start strong and drop rapidly.\"}),\n                    \"noise_mode_sde_substep\": (NOISE_MODE_NAMES, {\"default\": 'hard', \"tooltip\": \"How noise scales with the sigma schedule. Hard is the most aggressive, the others start strong and drop rapidly.\"}),\n                    \"eta\": (\"FLOAT\", {\"default\": 0.5, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Calculated noise amount to be added, then removed, after each step.\"}),\n                    \"eta_substep\": (\"FLOAT\", {\"default\": 0.5, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Calculated noise amount to be added, then removed, after each step.\"}),\n                    \"s_noise\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01, \"tooltip\": \"Adds extra SDE noise. Values around 1.03-1.07 can lead to a moderate boost in detail and paint textures.\"}),\n                    \"d_noise\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01, \"tooltip\": \"Downscales the sigma schedule. 
Values around 0.98-0.95 can lead to a large boost in detail and paint textures.\"}),\n                    \"noise_seed_sde\": (\"INT\", {\"default\": -1, \"min\": -1, \"max\": 0xffffffffffffffff}),\n                    \"sampler_name\": (RK_SAMPLER_NAMES, {\"default\": \"res_2m\"}), \n                    \"implicit_sampler_name\": (IRK_SAMPLER_NAMES, {\"default\": \"explicit_diagonal\"}), \n                    \"implicit_steps\": (\"INT\", {\"default\": 0, \"min\": 0, \"max\": 10000}),\n                     },\n                \"optional\": \n                    {\n                    \"guides\": (\"GUIDES\", ),     \n                    \"options\": (\"OPTIONS\", ),   \n                    \"automation\": (\"AUTOMATION\", ),\n                    \"extra_options\": (\"STRING\", {\"default\": \"\", \"multiline\": True}),   \n                    }\n                }\n\n    RETURN_TYPES = (\"SAMPLER\",)\n    RETURN_NAMES = (\"sampler\", ) \n\n    FUNCTION = \"main\"\n\n    CATEGORY = \"RES4LYF/legacy/samplers\"\n    DEPRECATED = True\n    \n    def main(self, \n             noise_type_sde=\"gaussian\", noise_type_sde_substep=\"gaussian\", noise_mode_sde=\"hard\",\n             eta=0.25, eta_var=0.0, d_noise=1.0, s_noise=1.0, alpha_sde=-1.0, k_sde=1.0, cfgpp=0.0, c1=0.0, c2=0.5, c3=1.0, noise_seed_sde=-1, sampler_name=\"res_2m\", implicit_sampler_name=\"gauss-legendre_2s\",\n                    t_fn_formula=None, sigma_fn_formula=None, implicit_steps=0,\n                    latent_guide=None, latent_guide_inv=None, guide_mode=\"\", latent_guide_weights=None, latent_guide_weights_inv=None, latent_guide_mask=None, latent_guide_mask_inv=None, rescale_floor=True, sigmas_override=None, \n                    guides=None, options=None, sde_noise=None,sde_noise_steps=1, \n                    extra_options=\"\", automation=None, etas=None, s_noises=None,unsample_resample_scales=None, regional_conditioning_weights=None,frame_weights_grp=None, eta_substep=0.5, noise_mode_sde_substep=\"hard\",\n                    ): \n            if implicit_sampler_name == \"none\":\n                implicit_steps = 0 \n                implicit_sampler_name = \"gauss-legendre_2s\"\n\n            if noise_mode_sde == \"none\":\n                eta, eta_var = 0.0, 0.0\n                noise_mode_sde = \"hard\"\n        \n            default_dtype = getattr(torch, get_extra_options_kv(\"default_dtype\", \"float64\", extra_options), torch.float64)\n\n            unsample_resample_scales_override = unsample_resample_scales\n\n            if options is not None:\n                noise_type_sde = options.get('noise_type_sde', noise_type_sde)\n                noise_mode_sde = options.get('noise_mode_sde', noise_mode_sde)\n                eta = options.get('eta', eta)\n                s_noise = options.get('s_noise', s_noise)\n                d_noise = options.get('d_noise', d_noise)\n                alpha_sde = options.get('alpha_sde', alpha_sde)\n                k_sde = options.get('k_sde', k_sde)\n                c1 = options.get('c1', c1)\n                c2 = options.get('c2', c2)\n                c3 = options.get('c3', c3)\n                t_fn_formula = options.get('t_fn_formula', t_fn_formula)\n                sigma_fn_formula = options.get('sigma_fn_formula', sigma_fn_formula)\n                frame_weights_grp = options.get('frame_weights_grp', frame_weights_grp)\n                sde_noise = options.get('sde_noise', sde_noise)\n                sde_noise_steps = options.get('sde_noise_steps', sde_noise_steps)\n\n        
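    # A connected OPTIONS dict overrides the widget values gathered above; each\n            # options.get(key, current) call falls back to the node input when that\n            # key is absent.\n        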
    #noise_seed_sde = torch.initial_seed()+1 if noise_seed_sde < 0 else noise_seed_sde \n\n            rescale_floor = extra_options_flag(\"rescale_floor\", extra_options)\n\n            if automation is not None:\n                etas = automation['etas'] if 'etas' in automation else None\n                s_noises = automation['s_noises'] if 's_noises' in automation else None\n                unsample_resample_scales = automation['unsample_resample_scales'] if 'unsample_resample_scales' in automation else None\n                frame_weights_grp = automation['frame_weights_grp'] if 'frame_weights_grp' in automation else None\n\n            etas = initialize_or_scale(etas, eta, MAX_STEPS).to(default_dtype)\n            etas = F.pad(etas, (0, MAX_STEPS), value=0.0)\n            s_noises = initialize_or_scale(s_noises, s_noise, MAX_STEPS).to(default_dtype)\n            s_noises = F.pad(s_noises, (0, MAX_STEPS), value=0.0)\n        \n            if sde_noise is None:\n                sde_noise = []\n            else:\n                sde_noise = copy.deepcopy(sde_noise)\n                for i in range(len(sde_noise)):\n                    sde_noise[i] = sde_noise[i]\n                    for j in range(sde_noise[i].shape[1]):\n                        sde_noise[i][0][j] = ((sde_noise[i][0][j] - sde_noise[i][0][j].mean()) / sde_noise[i][0][j].std())\n                        \n            if unsample_resample_scales_override is not None:\n                unsample_resample_scales = unsample_resample_scales_override\n\n            sampler = comfy.samplers.ksampler(\"rk\", {\"eta\": eta, \"eta_var\": eta_var, \"s_noise\": s_noise, \"d_noise\": d_noise, \"alpha\": alpha_sde, \"k\": k_sde, \"c1\": c1, \"c2\": c2, \"c3\": c3, \"cfgpp\": cfgpp, \n                                                    \"noise_sampler_type\": noise_type_sde, \"noise_mode\": noise_mode_sde, \"noise_seed\": noise_seed_sde, \"rk_type\": sampler_name, \"implicit_sampler_name\": implicit_sampler_name,\n                                                            \"t_fn_formula\": t_fn_formula, \"sigma_fn_formula\": sigma_fn_formula, \"implicit_steps\": implicit_steps,\n                                                            \"latent_guide\": latent_guide, \"latent_guide_inv\": latent_guide_inv, \"mask\": latent_guide_mask, \"mask_inv\": latent_guide_mask_inv,\n                                                            \"latent_guide_weights\": latent_guide_weights, \"latent_guide_weights_inv\": latent_guide_weights_inv, \"guide_mode\": guide_mode,\n                                                            \"LGW_MASK_RESCALE_MIN\": rescale_floor, \"sigmas_override\": sigmas_override, \"sde_noise\": sde_noise,\n                                                            \"extra_options\": extra_options,\n                                                            \"etas\": etas, \"s_noises\": s_noises, \"unsample_resample_scales\": unsample_resample_scales, \"regional_conditioning_weights\": regional_conditioning_weights,\n                                                            \"guides\": guides, \"frame_weights_grp\": frame_weights_grp, \"eta_substep\": eta_substep, \"noise_mode_sde_substep\": noise_mode_sde_substep,\n                                                            })\n\n            return (sampler, )\n\n\n\n\nclass ClownSampler:\n    @classmethod\n    def INPUT_TYPES(s):\n        return {\"required\":\n                    {\n                    \"noise_type_sde\": (NOISE_GENERATOR_NAMES_SIMPLE, {\"default\": 
\"gaussian\"}),\n                    \"noise_mode_sde\": (NOISE_MODE_NAMES, {\"default\": 'hard', \"tooltip\": \"How noise scales with the sigma schedule. Hard is the most aggressive, the others start strong and drop rapidly.\"}),\n                    \"eta\": (\"FLOAT\", {\"default\": 0.5, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Calculated noise amount to be added, then removed, after each step.\"}),\n                    \"s_noise\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01}),\n                    \"d_noise\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01}),\n                    \"noise_seed_sde\": (\"INT\", {\"default\": -1, \"min\": -1, \"max\": 0xffffffffffffffff}),\n                    \"sampler_name\": (RK_SAMPLER_NAMES, {\"default\": \"res_2m\"}), \n                    \"implicit_sampler_name\": (IRK_SAMPLER_NAMES, {\"default\": \"explicit_diagonal\"}), \n                    \"implicit_steps\": (\"INT\", {\"default\": 0, \"min\": 0, \"max\": 10000}),\n                     },\n                \"optional\": \n                    {\n                    \"guides\": (\"GUIDES\", ),     \n                    \"options\": (\"OPTIONS\", ),   \n                    \"automation\": (\"AUTOMATION\", ),\n                    \"extra_options\": (\"STRING\", {\"default\": \"\", \"multiline\": True}),   \n                    }\n                }\n\n    RETURN_TYPES = (\"SAMPLER\",)\n    RETURN_NAMES = (\"sampler\", ) \n\n    FUNCTION = \"main\"\n\n    CATEGORY = \"RES4LYF/legacy/samplers\"\n    DEPRECATED = True\n    \n    def main(self, \n             noise_type_sde=\"gaussian\", noise_type_sde_substep=\"gaussian\", noise_mode_sde=\"hard\",\n             eta=0.25, eta_var=0.0, d_noise=1.0, s_noise=1.0, alpha_sde=-1.0, k_sde=1.0, cfgpp=0.0, c1=0.0, c2=0.5, c3=1.0, noise_seed_sde=-1, sampler_name=\"res_2m\", implicit_sampler_name=\"gauss-legendre_2s\",\n                    t_fn_formula=None, sigma_fn_formula=None, implicit_steps=0,\n                    latent_guide=None, latent_guide_inv=None, guide_mode=\"\", latent_guide_weights=None, latent_guide_weights_inv=None, latent_guide_mask=None, latent_guide_mask_inv=None, rescale_floor=True, sigmas_override=None,\n                    guides=None, options=None, sde_noise=None,sde_noise_steps=1, \n                    extra_options=\"\", automation=None, etas=None, s_noises=None,unsample_resample_scales=None, regional_conditioning_weights=None,frame_weights_grp=None,eta_substep=0.0, noise_mode_sde_substep=\"hard\",\n                    ): \n\n        eta_substep = eta\n        noise_mode_sde_substep = noise_mode_sde\n        noise_type_sde_substep = noise_type_sde\n\n        sampler = ClownSamplerAdvanced().main(\n                noise_type_sde=noise_type_sde, noise_type_sde_substep=noise_type_sde_substep, noise_mode_sde=noise_mode_sde,\n             eta=eta, eta_var=eta_var, d_noise=d_noise, s_noise=s_noise, alpha_sde=alpha_sde, k_sde=k_sde, cfgpp=cfgpp, c1=c1, c2=c2, c3=c3, noise_seed_sde=noise_seed_sde, sampler_name=sampler_name, implicit_sampler_name=implicit_sampler_name,\n                    t_fn_formula=t_fn_formula, sigma_fn_formula=sigma_fn_formula, implicit_steps=implicit_steps,\n                    latent_guide=latent_guide, latent_guide_inv=latent_guide_inv, guide_mode=guide_mode, latent_guide_weights=latent_guide_weights, latent_guide_weights_inv=latent_guide_weights_inv, latent_guide_mask=latent_guide_mask, 
latent_guide_mask_inv=latent_guide_mask_inv, rescale_floor=rescale_floor, sigmas_override=sigmas_override,\n                    guides=guides, options=options, sde_noise=sde_noise,sde_noise_steps=sde_noise_steps, \n                    extra_options=extra_options, automation=automation, etas=etas, s_noises=s_noises,unsample_resample_scales=unsample_resample_scales, regional_conditioning_weights=regional_conditioning_weights,frame_weights_grp=frame_weights_grp, eta_substep=eta_substep, noise_mode_sde_substep=noise_mode_sde_substep,\n                    )\n        \n        return sampler\n\n\n\ndef process_sampler_name(selected_value):\n    processed_name = selected_value.split(\"/\")[-1]\n    \n    if selected_value.startswith(\"fully_implicit\") or selected_value.startswith(\"diag_implicit\"):\n        implicit_sampler_name = processed_name\n        sampler_name = \"buehler\"\n    else:\n        sampler_name = processed_name\n        implicit_sampler_name = \"use_explicit\"\n    \n    return sampler_name, implicit_sampler_name\n\n\ndef copy_cond(positive):\n    if positive is None:\n        return None\n    new_positive = []\n    for embedding, cond in positive:\n        cond_copy = {}\n        for k, v in cond.items():\n            if isinstance(v, torch.Tensor):\n                cond_copy[k] = v.clone()\n            else:\n                cond_copy[k] = v  # keep a reference only, to avoid copying large objects like controlnets\n        new_positive.append([embedding.clone(), cond_copy])\n    return new_positive\n\n\n\nclass SharkSamplerAlpha:\n    @classmethod\n    def INPUT_TYPES(s):\n        return {\"required\":\n                    {\"model\": (\"MODEL\",),\n                    \"noise_type_init\": (NOISE_GENERATOR_NAMES_SIMPLE, {\"default\": \"gaussian\"}),\n                    \"noise_stdev\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":0.01, \"round\": False, }),\n                    \"noise_seed\": (\"INT\", {\"default\": 0, \"min\": -1, \"max\": 0xffffffffffffffff}),\n                    \"sampler_mode\": (['standard', 'unsample', 'resample'],),\n                    \"scheduler\": (get_res4lyf_scheduler_list(), {\"default\": \"beta57\"},),\n                    \"steps\": (\"INT\", {\"default\": 30, \"min\": 1, \"max\": 10000}),\n                    \"denoise\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01}),\n                    \"denoise_alt\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01}),\n                    \"cfg\": (\"FLOAT\", {\"default\": 3.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Negative values use channelwise CFG.\" }),\n                     },\n                \"optional\": \n                    {\n                    \"positive\": (\"CONDITIONING\", ),\n                    \"negative\": (\"CONDITIONING\", ),\n                    \"sampler\": (\"SAMPLER\", ),\n                    \"sigmas\": (\"SIGMAS\", ),\n                    \"latent_image\": (\"LATENT\", ),     \n                    \"options\": (\"OPTIONS\", ),   \n                    \"extra_options\": (\"STRING\", {\"default\": \"\", \"multiline\": True}),   \n                    }\n                }\n\n    RETURN_TYPES = (\"LATENT\",\"LATENT\", \"LATENT\",)\n    RETURN_NAMES = (\"output\", \"denoised\",\"sde_noise\",) \n\n    FUNCTION = \"main\"\n\n    CATEGORY = \"RES4LYF/legacy/samplers\"\n    DEPRECATED = True\n    \n    def main(self, model, cfg, scheduler, steps, sampler_mode=\"standard\",denoise=1.0, denoise_alt=1.0,\n         
    noise_type_init=\"gaussian\", latent_image=None, \n             positive=None, negative=None, sampler=None, sigmas=None, latent_noise=None, latent_noise_match=None,\n             noise_stdev=1.0, noise_mean=0.0, noise_normalize=True, \n             d_noise=1.0, alpha_init=-1.0, k_init=1.0, cfgpp=0.0, noise_seed=-1,\n                    options=None, sde_noise=None,sde_noise_steps=1, \n                    extra_options=\"\", \n                    ): \n            # blame comfy here\n            raw_x = latent_image['raw_x'] if 'raw_x' in latent_image else None\n            last_seed = latent_image['last_seed'] if 'last_seed' in latent_image else None\n            \n            pos_cond = copy_cond(positive)\n            neg_cond = copy_cond(negative)\n\n            if sampler is None:\n                raise ValueError(\"sampler is required\")\n            else:\n                sampler = copy.deepcopy(sampler)\n\n            default_dtype = getattr(torch, get_extra_options_kv(\"default_dtype\", \"float64\", extra_options), torch.float64)\n                     \n            model = model.clone()\n            if pos_cond[0][1] is not None: \n                if \"regional_conditioning_weights\" in pos_cond[0][1]:\n                    sampler.extra_options['regional_conditioning_weights'] = pos_cond[0][1]['regional_conditioning_weights']\n                    sampler.extra_options['regional_conditioning_floors']  = pos_cond[0][1]['regional_conditioning_floors']\n                    regional_generate_conditionings_and_masks_fn = pos_cond[0][1]['regional_generate_conditionings_and_masks_fn']\n                    regional_conditioning, regional_mask = regional_generate_conditionings_and_masks_fn(latent_image['samples'])\n                    regional_conditioning = copy.deepcopy(regional_conditioning)\n                    regional_mask = copy.deepcopy(regional_mask)\n                    model.set_model_patch(regional_conditioning, 'regional_conditioning_positive')\n                    model.set_model_patch(regional_mask,         'regional_conditioning_mask')\n                    \n            if \"noise_seed\" in sampler.extra_options:\n                if sampler.extra_options['noise_seed'] == -1 and noise_seed != -1:\n                    sampler.extra_options['noise_seed'] = noise_seed + 1\n                    #print(\"Shark: setting clown noise seed to: \", sampler.extra_options['noise_seed'])\n\n            if \"sampler_mode\" in sampler.extra_options:\n                sampler.extra_options['sampler_mode'] = sampler_mode\n\n            if \"extra_options\" in sampler.extra_options:\n                extra_options += \" \"\n                extra_options += sampler.extra_options['extra_options']\n                sampler.extra_options['extra_options'] = extra_options\n\n            batch_size = int(get_extra_options_kv(\"batch_size\", \"1\", extra_options))\n            if batch_size > 1:\n                latent_image['samples'] = latent_image['samples'].repeat(batch_size, 1, 1, 1) \n            \n            latent_image_batch = {\"samples\": latent_image['samples']}\n            out_samples, out_samples_fp64, out_denoised_samples, out_denoised_samples_fp64 = [], [], [], []\n            for batch_num in range(latent_image_batch['samples'].shape[0]):\n                latent_unbatch = copy.deepcopy(latent_image)\n                latent_unbatch['samples'] = latent_image_batch['samples'][batch_num].clone().unsqueeze(0)\n                \n\n\n                if noise_seed == -1:\n                    
seed = torch.initial_seed() + 1 + batch_num\n                else:\n                    seed = noise_seed + batch_num\n                    torch.manual_seed(seed)\n                    torch.cuda.manual_seed(seed)\n                    #torch.cuda.manual_seed_all(seed)\n\n\n                if options is not None:\n                    noise_stdev     = options.get('noise_init_stdev', noise_stdev)\n                    noise_mean      = options.get('noise_init_mean',  noise_mean)\n                    noise_type_init = options.get('noise_type_init',  noise_type_init)\n                    d_noise         = options.get('d_noise',          d_noise)\n                    alpha_init      = options.get('alpha_init',       alpha_init)\n                    k_init          = options.get('k_init',           k_init)\n                    sde_noise       = options.get('sde_noise',        sde_noise)\n                    sde_noise_steps = options.get('sde_noise_steps',  sde_noise_steps)\n\n                latent_image_dtype = latent_unbatch['samples'].dtype\n\n                if isinstance(model.model.model_config, comfy.supported_models.Flux) or isinstance(model.model.model_config, comfy.supported_models.FluxSchnell):\n                    if pos_cond is None:\n                        pos_cond = [[\n                            torch.zeros((1, 256, 4096)),\n                            {'pooled_output': torch.zeros((1, 768))}\n                            ]]\n\n                    if extra_options_flag(\"uncond_ortho_flux\", extra_options):\n                        if neg_cond is None:\n                            print(\"uncond_ortho_flux: using random negative conditioning...\")\n                            neg_cond = [[\n                                torch.randn((1, 256, 4096)),\n                                {'pooled_output': torch.randn((1, 768))}\n                                ]]\n                        #neg_cond[0][0] = get_orthogonal(neg_cond[0][0].to(torch.bfloat16), pos_cond[0][0].to(torch.bfloat16))\n                        #neg_cond[0][1]['pooled_output'] = get_orthogonal(neg_cond[0][1]['pooled_output'].to(torch.bfloat16), pos_cond[0][1]['pooled_output'].to(torch.bfloat16))\n                        neg_cond[0][0] = get_orthogonal(neg_cond[0][0], pos_cond[0][0])\n                        neg_cond[0][1]['pooled_output'] = get_orthogonal(neg_cond[0][1]['pooled_output'], pos_cond[0][1]['pooled_output'])\n                        \n                    if neg_cond is None:\n                        neg_cond = [[\n                            torch.zeros((1, 256, 4096)),\n                            {'pooled_output': torch.zeros((1, 768))}\n                            ]]\n                else:\n                    if pos_cond is None:\n                        pos_cond = [[\n                            torch.zeros((1, 154, 4096)),\n                            {'pooled_output': torch.zeros((1, 2048))}\n                            ]]\n\n                    if extra_options_flag(\"uncond_ortho_sd35\", extra_options):\n                        if neg_cond is None:\n                            neg_cond = [[\n                                torch.randn((1, 154, 4096)),\n                                {'pooled_output': torch.randn((1, 2048))}\n                                ]]\n                        \n                        neg_cond[0][0] = get_orthogonal(neg_cond[0][0], pos_cond[0][0])\n                        neg_cond[0][1]['pooled_output'] = get_orthogonal(neg_cond[0][1]['pooled_output'], 
pos_cond[0][1]['pooled_output'])\n                        \n\n                    if neg_cond is None:\n                        neg_cond = [[\n                            torch.zeros((1, 154, 4096)),\n                            {'pooled_output': torch.zeros((1, 2048))}\n                            ]]\n                        \n                        \n                if extra_options_flag(\"zero_uncond_t5\", extra_options):\n                    neg_cond[0][0] = torch.zeros_like(neg_cond[0][0])\n                    \n                if extra_options_flag(\"zero_uncond_pooled_output\", extra_options):\n                    neg_cond[0][1]['pooled_output'] = torch.zeros_like(neg_cond[0][1]['pooled_output'])\n                        \n                if extra_options_flag(\"zero_pooled_output\", extra_options):\n                    pos_cond[0][1]['pooled_output'] = torch.zeros_like(pos_cond[0][1]['pooled_output'])\n                    neg_cond[0][1]['pooled_output'] = torch.zeros_like(neg_cond[0][1]['pooled_output'])\n\n                if denoise_alt < 0:\n                    d_noise = denoise_alt = -denoise_alt\n                if options is not None:\n                    d_noise = options.get('d_noise', d_noise)\n\n                if sigmas is not None:\n                    sigmas = sigmas.clone().to(default_dtype)\n                else: \n                    sigmas = get_sigmas(model, scheduler, steps, denoise).to(default_dtype)\n                sigmas *= denoise_alt\n\n                if sampler_mode.startswith(\"unsample\"): \n                    null = torch.tensor([0.0], device=sigmas.device, dtype=sigmas.dtype)\n                    sigmas = torch.flip(sigmas, dims=[0])\n                    sigmas = torch.cat([sigmas, null])\n                elif sampler_mode.startswith(\"resample\"):\n                    null = torch.tensor([0.0], device=sigmas.device, dtype=sigmas.dtype)\n                    sigmas = torch.cat([null, sigmas])\n                    sigmas = torch.cat([sigmas, null])\n\n                x = latent_unbatch[\"samples\"].clone().to(default_dtype) \n                if latent_unbatch is not None:\n                    if \"samples_fp64\" in latent_unbatch:\n                        if latent_unbatch['samples'].shape == latent_unbatch['samples_fp64'].shape:\n                            if torch.norm(latent_unbatch['samples'] - latent_unbatch['samples_fp64']) < 0.01:\n                                x = latent_unbatch[\"samples_fp64\"].clone()\n\n                if latent_noise is not None:\n                    latent_noise_samples = latent_noise[\"samples\"].clone().to(default_dtype)  \n                if latent_noise_match is not None:\n                    latent_noise_match_samples = latent_noise_match[\"samples\"].clone().to(default_dtype)\n\n                truncate_conditioning = get_extra_options_kv(\"truncate_conditioning\", \"false\", extra_options)  # string-valued option: \"false\", \"true\", or \"true_and_zero_neg\"\n                if truncate_conditioning == \"true\" or truncate_conditioning == \"true_and_zero_neg\":\n                    if pos_cond is not None:\n                        pos_cond[0][0] = pos_cond[0][0].clone().to(default_dtype)\n                        pos_cond[0][1][\"pooled_output\"] = pos_cond[0][1][\"pooled_output\"].clone().to(default_dtype)\n                    if neg_cond is not None:\n                        neg_cond[0][0] = neg_cond[0][0].clone().to(default_dtype)\n                        neg_cond[0][1][\"pooled_output\"] = neg_cond[0][1][\"pooled_output\"].clone().to(default_dtype)\n                    # truncate conditioning to the (154, 4096) T5 / (2048) pooled shapes used for SD3.5 above\n                    c = []\n                    for t in pos_cond:\n                        d = t[1].copy()\n                        pooled_output = d.get(\"pooled_output\", None)\n                        if pooled_output is not None:\n                            d[\"pooled_output\"] = pooled_output[:, :2048]\n                        c.append([t[0][:, :154, :4096], d])\n                    pos_cond = c\n\n                    c = []\n                    for t in neg_cond:\n                        d = t[1].copy()\n                        pooled_output = d.get(\"pooled_output\", None)\n                        if pooled_output is not None:\n                            if truncate_conditioning == \"true_and_zero_neg\":\n                                d[\"pooled_output\"] = torch.zeros((1,2048), dtype=t[0].dtype, device=t[0].device)\n                                n = [torch.zeros((1,154,4096), dtype=t[0].dtype, device=t[0].device), d]\n                            else:\n                                d[\"pooled_output\"] = d[\"pooled_output\"][:, :2048]\n                                n = [t[0][:, :154, :4096], d]\n                            c.append(n)\n                        else:\n                            c.append([t[0][:, :154, :4096], d])\n                    neg_cond = c\n                \n                sigmin = model.model.model_sampling.sigma_min\n                sigmax = model.model.model_sampling.sigma_max\n\n                if sde_noise is None and sampler_mode.startswith(\"unsample\"):\n                    total_steps = len(sigmas)+1\n                    sde_noise = []\n                else:\n                    total_steps = 1\n\n                for total_steps_iter in range(sde_noise_steps):\n                        \n                    if noise_type_init == \"none\":\n                        noise = torch.zeros_like(x)\n                    elif latent_noise is None:\n                        print(\"Initial latent noise seed: \", seed)\n                        noise_sampler_init = NOISE_GENERATOR_CLASSES_SIMPLE.get(noise_type_init)(x=x, seed=seed, sigma_min=sigmin, sigma_max=sigmax)\n                    \n                        if noise_type_init == \"fractal\":\n                            noise_sampler_init.alpha = alpha_init\n                            noise_sampler_init.k = k_init\n                            noise_sampler_init.scale = 0.1\n                        noise = noise_sampler_init(sigma=sigmax, sigma_next=sigmin)\n                    else:\n                        noise = latent_noise_samples\n\n                    if noise_normalize and noise.std() > 0:\n                        noise = (noise - noise.mean(dim=(-2, -1), keepdim=True)) / noise.std(dim=(-2, -1), keepdim=True)\n                        #noise.sub_(noise.mean()).div_(noise.std())\n                    noise *= noise_stdev\n                    noise = (noise - noise.mean()) + noise_mean\n                    \n                    if latent_noise_match is not None:\n                        for i in range(latent_noise_match_samples.shape[1]):\n                            noise[0][i] = (noise[0][i] - noise[0][i].mean())\n                            noise[0][i] = (noise[0][i]) + latent_noise_match_samples[0][i].mean()\n\n                    noise_mask = latent_unbatch[\"noise_mask\"] if \"noise_mask\" in latent_unbatch else None\n\n                    x0_output = {}\n\n\n                    if cfg < 0:\n                        sampler.extra_options['cfg_cw'] = -cfg\n                        cfg = 1.0\n                    else:\n                        sampler.extra_options.pop(\"cfg_cw\", None) \n                        \n                    \n                    if sde_noise is None:\n                        sde_noise = []\n                    else:\n                        sde_noise = copy.deepcopy(sde_noise)\n                        for i in range(len(sde_noise)):\n                            for j in range(sde_noise[i].shape[1]):\n                                sde_noise[i][0][j] = ((sde_noise[i][0][j] - sde_noise[i][0][j].mean()) / sde_noise[i][0][j].std())\n                                \n                    callback = latent_preview.prepare_callback(model, sigmas.shape[-1] - 1, x0_output)\n\n                    disable_pbar = not comfy.utils.PROGRESS_BAR_ENABLED\n                    \n                    model.model.diffusion_model.raw_x = raw_x\n                    model.model.diffusion_model.last_seed = last_seed\n                    samples = comfy.sample.sample_custom(model, noise, cfg, sampler, sigmas, pos_cond, neg_cond, x.clone(), noise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise_seed)\n\n                    out = latent_unbatch.copy()\n                    out[\"samples\"] = samples\n                    if \"x0\" in x0_output:\n                        out_denoised = latent_unbatch.copy()\n                        out_denoised[\"samples\"] = model.model.process_latent_out(x0_output[\"x0\"].cpu())\n                    else:\n                        out_denoised = out\n                    \n                    out[\"samples_fp64\"] = out[\"samples\"].clone()\n                    out[\"samples\"]      = out[\"samples\"].to(latent_image_dtype)\n                    \n                    out_denoised[\"samples_fp64\"] = out_denoised[\"samples\"].clone()\n                    out_denoised[\"samples\"]      = out_denoised[\"samples\"].to(latent_image_dtype)\n                    \n                    out_samples.     append(out[\"samples\"])\n                    out_samples_fp64.append(out[\"samples_fp64\"])\n                    \n                    out_denoised_samples.     
append(out_denoised[\"samples\"])\n                    out_denoised_samples_fp64.append(out_denoised[\"samples_fp64\"])\n                    \n                    seed += 1\n                    torch.manual_seed(seed)\n                    if total_steps_iter > 1: \n                        sde_noise.append(out[\"samples_fp64\"])\n                        \n            out_samples               = [tensor.squeeze(0) for tensor in out_samples]\n            out_samples_fp64          = [tensor.squeeze(0) for tensor in out_samples_fp64]\n            out_denoised_samples      = [tensor.squeeze(0) for tensor in out_denoised_samples]\n            out_denoised_samples_fp64 = [tensor.squeeze(0) for tensor in out_denoised_samples_fp64]\n\n            out['samples']      = torch.stack(out_samples,     dim=0)\n            out['samples_fp64'] = torch.stack(out_samples_fp64, dim=0)\n            \n            out_denoised['samples']      = torch.stack(out_denoised_samples,     dim=0)\n            out_denoised['samples_fp64'] = torch.stack(out_denoised_samples_fp64, dim=0)\n\n            out['raw_x'] = None\n            if hasattr(model.model.diffusion_model, \"raw_x\"):\n                if model.model.diffusion_model.raw_x is not None:\n                    out['raw_x'] = model.model.diffusion_model.raw_x.clone()\n                    del model.model.diffusion_model.raw_x\n\n            out['last_seed'] = None\n            if hasattr(model.model.diffusion_model, \"last_seed\"):\n                if model.model.diffusion_model.last_seed is not None:\n                    out['last_seed'] = model.model.diffusion_model.last_seed\n                    del model.model.diffusion_model.last_seed\n\n            return ( out, out_denoised, sde_noise,)\n\n\n\nclass ClownsharKSampler:\n    @classmethod\n    def INPUT_TYPES(s):\n        return {\"required\":\n                    {\"model\": (\"MODEL\",),\n                    \"noise_type_init\": (NOISE_GENERATOR_NAMES_SIMPLE, {\"default\": \"gaussian\"}),\n                    \"noise_type_sde\": (NOISE_GENERATOR_NAMES_SIMPLE, {\"default\": \"gaussian\"}),\n                    \"noise_mode_sde\": (NOISE_MODE_NAMES, {\"default\": 'hard', \"tooltip\": \"How noise scales with the sigma schedule. 
Hard is the most aggressive, the others start strong and drop rapidly.\"}),\n                    \"eta\": (\"FLOAT\", {\"default\": 0.5, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Calculated noise amount to be added, then removed, after each step.\"}),\n                    \"noise_seed\": (\"INT\", {\"default\": 0, \"min\": -1, \"max\": 0xffffffffffffffff}),\n                    \"sampler_mode\": (['standard', 'unsample', 'resample'],),\n                    \"sampler_name\": (RK_SAMPLER_NAMES, {\"default\": \"res_2m\"}), \n                    \"implicit_sampler_name\": (IRK_SAMPLER_NAMES, {\"default\": \"explicit_diagonal\"}), \n                    \"scheduler\": (get_res4lyf_scheduler_list(), {\"default\": \"beta57\"},),\n                    \"steps\": (\"INT\", {\"default\": 30, \"min\": 1, \"max\": 10000}),\n                    \"implicit_steps\": (\"INT\", {\"default\": 0, \"min\": 0, \"max\": 10000}),\n                    \"denoise\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01}),\n                    \"denoise_alt\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01}),\n                    \"cfg\": (\"FLOAT\", {\"default\": 3.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, }),\n                    \"extra_options\": (\"STRING\", {\"default\": \"\", \"multiline\": True}),   \n                     },\n                \"optional\": \n                    {\n                    \"positive\": (\"CONDITIONING\", ),\n                    \"negative\": (\"CONDITIONING\", ),\n                    \"sigmas\": (\"SIGMAS\", ),\n                    \"latent_image\": (\"LATENT\", ),     \n                    \"guides\": (\"GUIDES\", ),     \n                    \"options\": (\"OPTIONS\", ),   \n                    \"automation\": (\"AUTOMATION\", ),\n                    }\n                }\n\n    RETURN_TYPES = (\"LATENT\",\"LATENT\", \"LATENT\",)\n    RETURN_NAMES = (\"output\", \"denoised\",\"sde_noise\",) \n\n    FUNCTION = \"main\"\n\n    CATEGORY = \"RES4LYF/legacy/samplers\"\n    DEPRECATED = True\n    \n    def main(self, model, cfg, sampler_mode, scheduler, steps, denoise=1.0, denoise_alt=1.0,\n             noise_type_init=\"gaussian\", noise_type_sde=\"brownian\", noise_mode_sde=\"hard\", latent_image=None, \n             positive=None, negative=None, sigmas=None, latent_noise=None, latent_noise_match=None,\n             noise_stdev=1.0, noise_mean=0.0, noise_normalize=True, noise_is_latent=False, \n             eta=0.25, eta_var=0.0, d_noise=1.0, s_noise=1.0, alpha_init=-1.0, k_init=1.0, alpha_sde=-1.0, k_sde=1.0, cfgpp=0.0, c1=0.0, c2=0.5, c3=1.0, noise_seed=-1, sampler_name=\"res_2m\", implicit_sampler_name=\"default\",\n                    t_fn_formula=None, sigma_fn_formula=None, implicit_steps=0,\n                    latent_guide=None, latent_guide_inv=None, guide_mode=\"blend\", latent_guide_weights=None, latent_guide_weights_inv=None, latent_guide_mask=None, latent_guide_mask_inv=None, rescale_floor=True, sigmas_override=None, \n                    shift=3.0, base_shift=0.85, guides=None, options=None, sde_noise=None,sde_noise_steps=1, shift_scaling=\"exponential\",\n                    extra_options=\"\", automation=None, etas=None, s_noises=None,unsample_resample_scales=None, regional_conditioning_weights=None,frame_weights_grp=None,\n                    ): \n\n        if noise_seed >= 0:\n            noise_seed_sde = noise_seed + 1\n        else:\n            
noise_seed_sde = -1\n        \n        eta_substep = eta\n        noise_mode_sde_substep = noise_mode_sde\n        noise_type_sde_substep = noise_type_sde \n\n        sampler = ClownSamplerAdvanced().main(\n                noise_type_sde=noise_type_sde, noise_type_sde_substep=noise_type_sde_substep, noise_mode_sde=noise_mode_sde,\n             eta=eta, eta_var=eta_var, d_noise=d_noise, s_noise=s_noise, alpha_sde=alpha_sde, k_sde=k_sde, cfgpp=cfgpp, c1=c1, c2=c2, c3=c3, noise_seed_sde=noise_seed_sde, sampler_name=sampler_name, implicit_sampler_name=implicit_sampler_name,\n                    t_fn_formula=t_fn_formula, sigma_fn_formula=sigma_fn_formula, implicit_steps=implicit_steps,\n                    latent_guide=latent_guide, latent_guide_inv=latent_guide_inv, guide_mode=guide_mode, latent_guide_weights=latent_guide_weights, latent_guide_weights_inv=latent_guide_weights_inv, latent_guide_mask=latent_guide_mask, latent_guide_mask_inv=latent_guide_mask_inv, rescale_floor=rescale_floor, sigmas_override=sigmas_override, \n                    guides=guides, options=options, sde_noise=sde_noise,sde_noise_steps=sde_noise_steps, \n                    extra_options=extra_options, automation=automation, etas=etas, s_noises=s_noises,unsample_resample_scales=unsample_resample_scales, regional_conditioning_weights=regional_conditioning_weights,frame_weights_grp=frame_weights_grp, eta_substep=eta_substep, noise_mode_sde_substep=noise_mode_sde_substep,\n                    )\n\n        return SharkSamplerAlpha().main(\n            model=model, cfg=cfg, sampler_mode=sampler_mode, scheduler=scheduler, steps=steps, \n            denoise=denoise, denoise_alt=denoise_alt, noise_type_init=noise_type_init, \n            latent_image=latent_image, positive=positive, negative=negative, sampler=sampler[0], \n            sigmas=sigmas, latent_noise=latent_noise, latent_noise_match=latent_noise_match, \n            noise_stdev=noise_stdev, noise_mean=noise_mean, noise_normalize=noise_normalize, \n            d_noise=d_noise, alpha_init=alpha_init, k_init=k_init, cfgpp=cfgpp, noise_seed=noise_seed, \n            options=options, sde_noise=sde_noise, sde_noise_steps=sde_noise_steps, \n            extra_options=extra_options\n        )\n\n\n\n\n\nclass UltraSharkSampler:  \n    # for use with https://github.com/ClownsharkBatwing/UltraCascade\n    @classmethod\n    def INPUT_TYPES(s):\n        return {\n            \"required\": {\n                \"model\": (\"MODEL\",),\n                \"add_noise\": (\"BOOLEAN\", {\"default\": True}),\n                \"normalize_noise\": (\"BOOLEAN\", {\"default\": False}),\n                \"noise_type\": (NOISE_GENERATOR_NAMES, ),\n                \"alpha\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":0.1, \"round\": 0.01}),\n                \"k\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":2.0, \"round\": 0.01}),\n                \"noise_seed\": (\"INT\", {\"default\": 0, \"min\": 0, \"max\": 0xffffffffffffffff}),\n                \"cfg\": (\"FLOAT\", {\"default\": 6.0, \"min\": 0.0, \"max\": 100.0, \"step\":0.5, \"round\": 0.01}),\n                \"positive\": (\"CONDITIONING\", ),\n                \"negative\": (\"CONDITIONING\", ),\n                \"sampler\": (\"SAMPLER\", ),\n                \"sigmas\": (\"SIGMAS\", ),\n                \"latent_image\": (\"LATENT\", ),               \n                \"guide_type\": (['residual', 'weighted'], ),\n                \"guide_weight\": (\"FLOAT\", 
{\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": 0.01}),\n            },\n            \"optional\": {\n                #\"latent_noise\": (\"LATENT\", ),\n                \"guide\": (\"LATENT\",),\n                \"guide_weights\": (\"SIGMAS\",),\n                #\"style\": (\"CONDITIONING\", ),\n                #\"img_style\": (\"CONDITIONING\", ),\n            }\n        }\n\n    RETURN_TYPES = (\"LATENT\",\"LATENT\",\"LATENT\")\n    RETURN_NAMES = (\"output\", \"denoised_output\", \"latent_batch\")\n\n    FUNCTION = \"main\"\n\n    CATEGORY = \"RES4LYF/legacy/samplers/UltraCascade\"\n    DESCRIPTION = \"For use with Stable Cascade and UltraCascade.\"\n    DEPRECATED = True\n    \n    def main(self, model, add_noise, normalize_noise, noise_type, noise_seed, cfg, alpha, k, positive, negative, sampler, \n               sigmas, guide_type, guide_weight, latent_image, latent_noise=None, guide=None, guide_weights=None, style=None, img_style=None): \n\n            if model.model.model_config.unet_config.get('stable_cascade_stage') == 'up':\n                model = model.clone()\n                x_lr = guide['samples'] if guide is not None else None\n                guide_weights = initialize_or_scale(guide_weights, guide_weight, 10000)#(\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01}),\n                #model.model.diffusion_model.set_guide_weights(guide_weights=guide_weights)\n                #model.model.diffusion_model.set_guide_type(guide_type=guide_type)\n                #model.model.diffusion_model.set_x_lr(x_lr=x_lr)\n                patch = model.model_options.get(\"transformer_options\", {}).get(\"patches_replace\", {}).get(\"ultracascade\", {}).get(\"main\")\n                if patch is not None:\n                    patch.update(x_lr=x_lr, guide_weights=guide_weights, guide_type=guide_type)\n                else:\n                    model.model.diffusion_model.set_sigmas_schedule(sigmas_schedule=sigmas)\n                    model.model.diffusion_model.set_sigmas_prev(sigmas_prev=sigmas[:1])\n                    model.model.diffusion_model.set_guide_weights(guide_weights=guide_weights)\n                    model.model.diffusion_model.set_guide_type(guide_type=guide_type)\n                    model.model.diffusion_model.set_x_lr(x_lr=x_lr)\n                \n            elif model.model.model_config.unet_config['stable_cascade_stage'] == 'b':\n                c_pos, c_neg = [], []\n                for t in positive:\n                    d_pos = t[1].copy()\n                    d_neg = t[1].copy()\n                    \n                    d_pos['stable_cascade_prior'] = guide['samples']\n\n                    pooled_output = d_neg.get(\"pooled_output\", None)\n                    if pooled_output is not None:\n                        d_neg[\"pooled_output\"] = torch.zeros_like(pooled_output)\n                    \n                    c_pos.append([t[0], d_pos])            \n                    c_neg.append([torch.zeros_like(t[0]), d_neg])\n                positive = c_pos\n                negative = c_neg\n                \n            if style is not None:\n                model.set_model_patch(style, 'style_cond')\n            if img_style is not None:\n                model.set_model_patch(img_style,'img_style_cond')\n        \n            # 1, 768      clip_style[0][0][1]['unclip_conditioning'][0]['clip_vision_output'].image_embeds.shape\n            # 1, 1280     clip_style[0][0][1]['pooled_output'].shape \n            
# 1, 77, 1280 clip_style[0][0][0].shape\n        \n            latent = latent_image\n            latent_image = latent[\"samples\"]\n            torch.manual_seed(noise_seed)\n\n            if not add_noise:\n                noise = torch.zeros(latent_image.size(), dtype=latent_image.dtype, layout=latent_image.layout, device=\"cpu\")\n            elif latent_noise is None:\n                batch_inds = latent[\"batch_index\"] if \"batch_index\" in latent else None\n                noise = prepare_noise(latent_image, noise_seed, noise_type, batch_inds, alpha, k)\n            else:\n                noise = latent_noise[\"samples\"]#.to(torch.float64)\n\n            if normalize_noise and noise.std() > 0:\n                noise = (noise - noise.mean(dim=(-2, -1), keepdim=True)) / noise.std(dim=(-2, -1), keepdim=True)\n\n            noise_mask = None\n            if \"noise_mask\" in latent:\n                noise_mask = latent[\"noise_mask\"]\n\n            x0_output = {}\n            callback = latent_preview.prepare_callback(model, sigmas.shape[-1] - 1, x0_output)\n            disable_pbar = False\n\n            samples = comfy.sample.sample_custom(model, noise, cfg, sampler, sigmas, positive, negative, latent_image, \n                                                 noise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, \n                                                 seed=noise_seed)\n\n            out = latent.copy()\n            out[\"samples\"] = samples\n            if \"x0\" in x0_output:\n                out_denoised = latent.copy()\n                out_denoised[\"samples\"] = model.model.process_latent_out(x0_output[\"x0\"].cpu())\n            else:\n                out_denoised = out\n                \n            # pass the input latent batch through as the third output, so the return matches RETURN_TYPES/RETURN_NAMES\n            return (out, out_denoised, latent)\n\n\n"
  },
  {
    "path": "legacy/samplers_extensions.py",
    "content": "from .noise_classes import NOISE_GENERATOR_CLASSES, NOISE_GENERATOR_CLASSES_SIMPLE, NOISE_GENERATOR_NAMES, NOISE_GENERATOR_NAMES_SIMPLE\r\n\r\nimport comfy.sample\r\nimport comfy.sampler_helpers\r\nimport comfy.model_sampling\r\nimport comfy.latent_formats\r\nimport comfy.sd\r\nimport comfy.supported_models\r\nfrom .conditioning import FluxRegionalPrompt, FluxRegionalConditioning\r\nfrom .models import ReFluxPatcher\r\n\r\n\r\nimport torch\r\nimport torch.nn.functional as F\r\n\r\nimport copy\r\n\r\nfrom .helper import initialize_or_scale, get_res4lyf_scheduler_list\r\n\r\n\r\ndef move_to_same_device(*tensors):\r\n    if not tensors:\r\n        return tensors\r\n\r\n    device = tensors[0].device\r\n    return tuple(tensor.to(device) for tensor in tensors)\r\n\r\n\r\n\r\n\r\n\r\n    \r\nclass SamplerOptions_TimestepScaling:\r\n    # for patching the t_fn and sigma_fn (sigma <-> timestep) formulas to allow picking Runge-Kutta Ci values (\"midpoints\") with different scaling.\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\":\r\n                    {\r\n                     \"sampler\": (\"SAMPLER\", ),\r\n                     \"t_fn_formula\": (\"STRING\", {\"default\": \"1/((sigma).exp()+1)\", \"multiline\": True}),\r\n                     \"sigma_fn_formula\": (\"STRING\", {\"default\": \"((1-t)/t).log()\", \"multiline\": True}),\r\n                    },\r\n                     \"optional\": \r\n                    {\r\n                    }  \r\n               }\r\n    RETURN_TYPES = (\"SAMPLER\",)\r\n    RETURN_NAMES = (\"sampler\",)\r\n    FUNCTION = \"set_sampler_extra_options\"\r\n    \r\n    CATEGORY = \"RES4LYF/legacy/sampler_extensions\"\r\n    DESCRIPTION = \"Patches ClownSampler's t_fn and sigma_fn (sigma <-> timestep) formulas to allow picking Runge-Kutta Ci values (midpoints) with different scaling.\"\r\n    DEPRECATED = True\r\n\r\n    def set_sampler_extra_options(self, sampler, t_fn_formula=None, sigma_fn_formula=None, ):\r\n\r\n        sampler = copy.deepcopy(sampler)\r\n\r\n        sampler.extra_options['t_fn_formula']     = t_fn_formula\r\n        sampler.extra_options['sigma_fn_formula'] = sigma_fn_formula\r\n\r\n        return (sampler, )\r\n\r\n\r\n\r\nclass SamplerOptions_GarbageCollection:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\":\r\n                    {\r\n                     \"sampler\": (\"SAMPLER\", ),\r\n                     \"garbage_collection\": (\"BOOLEAN\", {\"default\": True}),\r\n                    },\r\n                     \"optional\": \r\n                    {\r\n                    }  \r\n               }\r\n    RETURN_TYPES = (\"SAMPLER\",)\r\n    RETURN_NAMES = (\"sampler\",)\r\n    FUNCTION = \"set_sampler_extra_options\"\r\n    CATEGORY = \"RES4LYF/legacy/sampler_extensions\"\r\n    DESCRIPTION = \"Patches ClownSampler to use garbage collection after every step. This can help with OOM issues during inference for large models like Flux. 
The tradeoff is slower sampling.\"\r\n    DEPRECATED = True\r\n\r\n    def set_sampler_extra_options(self, sampler, garbage_collection):\r\n        sampler = copy.deepcopy(sampler)\r\n        sampler.extra_options['GARBAGE_COLLECT'] = garbage_collection\r\n        return (sampler, )\r\n\r\n\r\n\r\nGUIDE_MODE_NAMES = [\"unsample\", \r\n                    \"resample\", \r\n                    \"epsilon\",\r\n                    \"epsilon_projection\",\r\n                    \"epsilon_dynamic_mean\",\r\n                    \"epsilon_dynamic_mean_std\", \r\n                    \"epsilon_dynamic_mean_from_bkg\", \r\n                    \"epsilon_guide_mean_std_from_bkg\",\r\n                    \"hard_light\", \r\n                    \"blend\", \r\n                    \"blend_projection\",\r\n                    \"mean_std\", \r\n                    \"mean\", \r\n                    \"mean_tiled\",\r\n                    \"std\", \r\n                    \"data\",\r\n                    #\"data_projection\",\r\n                    \"none\",\r\n]\r\n\r\n\r\n\r\n\r\nclass ClownInpaint: ##################################################################################################################################\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\":\r\n                    {#\"guide_mode\": (GUIDE_MODE_NAMES, {\"default\": 'epsilon', \"tooltip\": \"Recommended: epsilon or mean/mean_std with sampler_mode = standard, and unsample/resample with sampler_mode = unsample/resample. Epsilon_dynamic_mean, etc. are only used with two latent inputs and a mask. Blend/hard_light/mean/mean_std etc. require low strengths, start with 0.01-0.02.\"}),\r\n                     \"guide_weight\":               (\"FLOAT\", {\"default\": 0.10, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide.\"}),\r\n                     \"guide_weight_bkg\":           (\"FLOAT\", {\"default\": 1.00, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide_bkg.\"}),\r\n                    \"guide_weight_scheduler\":     ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"beta57\"},),\r\n                    \"guide_weight_scheduler_bkg\": ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"constant\"},),\r\n                    \"guide_end_step\":              (\"INT\", {\"default\": 15, \"min\": 1, \"max\": 10000}),\r\n                    \"guide_bkg_end_step\":          (\"INT\", {\"default\": 10000, \"min\": 1, \"max\": 10000}),\r\n                    },\r\n                    \"optional\": \r\n                    {\r\n                        \"model\":             (\"MODEL\", ),\r\n                        \"positive_inpaint\":  (\"CONDITIONING\", ),\r\n                        \"positive_bkg\":      (\"CONDITIONING\", ),\r\n                        \"negative\":          (\"CONDITIONING\", ),\r\n                        \"latent_image\":      (\"LATENT\", ),\r\n                        \"mask\":              (\"MASK\", ),\r\n                        \"guide_weights\":     (\"SIGMAS\", ),\r\n                        \"guide_weights_bkg\": (\"SIGMAS\", ),\r\n                    }  \r\n               }\r\n    RETURN_TYPES = (\"MODEL\",\"CONDITIONING\",\"CONDITIONING\",\"LATENT\",\"GUIDES\",)\r\n    RETURN_NAMES = (\"model\",\"positive\"    ,\"negative\"    ,\"latent\",\"guides\",)\r\n    CATEGORY = \"RES4LYF/legacy/sampler_extensions\"\r\n\r\n    FUNCTION = 
\"main\"\r\n    DEPRECATED = True\r\n\r\n    def main(self, guide_weight_scheduler=\"constant\", guide_weight_scheduler_bkg=\"constant\", guide_end_step=10000, guide_bkg_end_step=30, guide_weight_scale=1.0, guide_weight_bkg_scale=1.0, guide=None, guide_bkg=None, guide_weight=1.0, guide_weight_bkg=1.0, \r\n                    guide_mode=\"epsilon\", guide_weights=None, guide_weights_bkg=None, guide_mask_bkg=None,\r\n                    model=None, positive_inpaint=None, positive_bkg=None, negative=None, latent_image=None, mask=None, \r\n                    ):\r\n        default_dtype = torch.float64\r\n        guide = latent_image\r\n        guide_bkg = {'samples': latent_image['samples'].clone()}\r\n        \r\n        max_steps = 10000\r\n        \r\n        denoise, denoise_bkg = guide_weight_scale, guide_weight_bkg_scale\r\n        \r\n        if guide_mode.startswith(\"epsilon_\") and not guide_mode.startswith(\"epsilon_projection\") and guide_bkg == None:\r\n            print(\"Warning: need two latent inputs for guide_mode=\",guide_mode,\" to work. Falling back to epsilon.\")\r\n            guide_mode = \"epsilon\"\r\n        \r\n        if guide_weight_scheduler == \"constant\": \r\n            guide_weights = initialize_or_scale(None, guide_weight, guide_end_step).to(default_dtype)\r\n            guide_weights = F.pad(guide_weights, (0, max_steps), value=0.0)\r\n        \r\n        if guide_weight_scheduler_bkg == \"constant\": \r\n            guide_weights_bkg = initialize_or_scale(None, guide_weight_bkg, guide_bkg_end_step).to(default_dtype)\r\n            guide_weights_bkg = F.pad(guide_weights_bkg, (0, max_steps), value=0.0)\r\n            \r\n        guides = (guide_mode, guide_weight, guide_weight_bkg, guide_weights, guide_weights_bkg, guide, guide_bkg, mask, guide_mask_bkg,\r\n                  guide_weight_scheduler, guide_weight_scheduler_bkg, guide_end_step, guide_bkg_end_step, denoise, denoise_bkg)\r\n        \r\n        latent = {'samples': torch.zeros_like(latent_image['samples'])}\r\n        if (positive_inpaint is None) and (positive_bkg is None):\r\n            positive = None\r\n        else:\r\n            if positive_bkg is None:\r\n                if positive_bkg is None:\r\n                    positive_bkg = [[\r\n                        torch.zeros((1, 256, 4096)),\r\n                        {'pooled_output': torch.zeros((1, 768))}\r\n                        ]]\r\n            cond_regional, mask_inv     = FluxRegionalPrompt().main(cond=positive_inpaint,                              mask=mask)\r\n            cond_regional, mask_inv_inv = FluxRegionalPrompt().main(cond=positive_bkg    , cond_regional=cond_regional, mask=mask_inv)\r\n            \r\n            positive, = FluxRegionalConditioning().main(conditioning_regional=cond_regional, self_attn_floor=0.0)\r\n            \r\n        model, = ReFluxPatcher().main(model, enable=True)\r\n        \r\n        return (model, positive, negative, latent, guides, )\r\n    \r\n    \r\nclass ClownInpaintSimple: ##################################################################################################################################\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\":\r\n                    {#\"guide_mode\": (GUIDE_MODE_NAMES, {\"default\": 'epsilon', \"tooltip\": \"Recommended: epsilon or mean/mean_std with sampler_mode = standard, and unsample/resample with sampler_mode = unsample/resample. Epsilon_dynamic_mean, etc. are only used with two latent inputs and a mask. 
Blend/hard_light/mean/mean_std etc. require low strengths, start with 0.01-0.02.\"}),\r\n                     \"guide_weight\":               (\"FLOAT\", {\"default\": 0.10, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide.\"}),\r\n                    \"guide_weight_scheduler\":     ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"beta57\"},),\r\n                    \"guide_end_step\":              (\"INT\", {\"default\": 15, \"min\": 1, \"max\": 10000}),\r\n                    },\r\n                    \"optional\": \r\n                    {\r\n                        \"model\":             (\"MODEL\", ),\r\n                        \"positive_inpaint\":  (\"CONDITIONING\", ),\r\n                        \"negative\":          (\"CONDITIONING\", ),\r\n                        \"latent_image\":      (\"LATENT\", ),\r\n                        \"mask\":              (\"MASK\", ),\r\n                    }\r\n               }\r\n    RETURN_TYPES = (\"MODEL\",\"CONDITIONING\",\"CONDITIONING\",\"LATENT\",\"GUIDES\",)\r\n    RETURN_NAMES = (\"model\",\"positive\"    ,\"negative\"    ,\"latent\",\"guides\",)\r\n    CATEGORY = \"RES4LYF/legacy/sampler_extensions\"\r\n\r\n    FUNCTION = \"main\"\r\n    DEPRECATED = True\r\n\r\n    def main(self, guide_weight_scheduler=\"constant\", guide_weight_scheduler_bkg=\"constant\", guide_end_step=10000, guide_bkg_end_step=30, guide_weight_scale=1.0, guide_weight_bkg_scale=1.0, guide=None, guide_bkg=None, guide_weight=1.0, guide_weight_bkg=1.0, \r\n                    guide_mode=\"epsilon\", guide_weights=None, guide_weights_bkg=None, guide_mask_bkg=None,\r\n                    model=None, positive_inpaint=None, positive_bkg=None, negative=None, latent_image=None, mask=None, \r\n                    ):\r\n        default_dtype = torch.float64\r\n        guide = latent_image\r\n        guide_bkg = {'samples': latent_image['samples'].clone()}\r\n        \r\n        max_steps = 10000\r\n        \r\n        denoise, denoise_bkg = guide_weight_scale, guide_weight_bkg_scale\r\n        \r\n        if guide_mode.startswith(\"epsilon_\") and not guide_mode.startswith(\"epsilon_projection\") and guide_bkg == None:\r\n            print(\"Warning: need two latent inputs for guide_mode=\",guide_mode,\" to work. 
Falling back to epsilon.\")\r\n            guide_mode = \"epsilon\"\r\n        \r\n        if guide_weight_scheduler == \"constant\": \r\n            guide_weights = initialize_or_scale(None, guide_weight, guide_end_step).to(default_dtype)\r\n            guide_weights = F.pad(guide_weights, (0, max_steps), value=0.0)\r\n        \r\n        if guide_weight_scheduler_bkg == \"constant\": \r\n            guide_weights_bkg = initialize_or_scale(None, guide_weight_bkg, guide_bkg_end_step).to(default_dtype)\r\n            guide_weights_bkg = F.pad(guide_weights_bkg, (0, max_steps), value=0.0)\r\n            \r\n        guides = (guide_mode, guide_weight, guide_weight_bkg, guide_weights, guide_weights_bkg, guide, guide_bkg, mask, guide_mask_bkg,\r\n                  guide_weight_scheduler, guide_weight_scheduler_bkg, guide_end_step, guide_bkg_end_step, denoise, denoise_bkg)\r\n        \r\n        latent = {'samples': torch.zeros_like(latent_image['samples'])}\r\n        if (positive_inpaint is None) and (positive_bkg is None):\r\n            positive = None\r\n        else:\r\n            if positive_bkg is None:\r\n                if positive_bkg is None:\r\n                    positive_bkg = [[\r\n                        torch.zeros((1, 256, 4096)),\r\n                        {'pooled_output': torch.zeros((1, 768))}\r\n                        ]]\r\n            cond_regional, mask_inv     = FluxRegionalPrompt().main(cond=positive_inpaint,                              mask=mask)\r\n            cond_regional, mask_inv_inv = FluxRegionalPrompt().main(cond=positive_bkg    , cond_regional=cond_regional, mask=mask_inv)\r\n            \r\n            positive, = FluxRegionalConditioning().main(conditioning_regional=cond_regional, self_attn_floor=1.0)\r\n            \r\n        model, = ReFluxPatcher().main(model, enable=True)\r\n        \r\n        return (model, positive, negative, latent, guides, )\r\n    \r\n\r\n##################################################################################################################################\r\n\r\n\r\nclass ClownsharKSamplerGuide:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\":\r\n                    {\"guide_mode\": (GUIDE_MODE_NAMES, {\"default\": 'epsilon_projection', \"tooltip\": \"Recommended: epsilon or mean/mean_std with sampler_mode = standard, and unsample/resample with sampler_mode = unsample/resample. Epsilon_dynamic_mean, etc. are only used with two latent inputs and a mask. Blend/hard_light/mean/mean_std etc. require low strengths, start with 0.01-0.02.\"}),\r\n                     \"guide_weight\": (\"FLOAT\", {\"default\": 0.75, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide.\"}),\r\n                     #\"guide_weight_bkg\": (\"FLOAT\", {\"default\": 0.75, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide_bkg.\"}),\r\n                     \"guide_weight_scale\": (\"FLOAT\", {\"default\": 1.0, \"min\": 0.0, \"max\": 1.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Disables the guide for the next step when the denoised image is similar to the guide. Higher values will strengthen the effect.\"}),\r\n                     #\"guide_weight_bkg_scale\": (\"FLOAT\", {\"default\": 1.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Disables the guide for the next step when the denoised image is similar to the guide. 
Higher values will strengthen the effect.\"}),\r\n                    \"guide_weight_scheduler\":     ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"beta57\"},),\r\n                    #\"guide_weight_scheduler_bkg\": ([\"constant\"] + comfy.samplers.SCHEDULER_NAMES + [\"beta57\"], {\"default\": \"beta57\"},),\r\n                    \"guide_end_step\": (\"INT\", {\"default\": 15, \"min\": 1, \"max\": 10000}),\r\n                    #\"guide_bkg_end_step\": (\"INT\", {\"default\": 15, \"min\": 1, \"max\": 10000}),\r\n                    },\r\n                    \"optional\": \r\n                    {\r\n                        \"guide\": (\"LATENT\", ),\r\n                        #\"guide_bkg\": (\"LATENT\", ),\r\n                        \"guide_mask\": (\"MASK\", ),\r\n                        #\"guide_mask_bkg\": (\"MASK\", ),\r\n                        \"guide_weights\": (\"SIGMAS\", ),\r\n                        #\"guide_weights_bkg\": (\"SIGMAS\", ),\r\n                    }  \r\n               }\r\n    RETURN_TYPES = (\"GUIDES\",)\r\n    RETURN_NAMES = (\"guides\",)\r\n    CATEGORY     = \"RES4LYF/legacy/sampler_extensions\"\r\n\r\n    FUNCTION     = \"main\"\r\n    DEPRECATED = True\r\n\r\n    def main(self, guide_weight_scheduler=\"constant\", guide_weight_scheduler_bkg=\"constant\", guide_end_step=30, guide_bkg_end_step=30, guide_weight_scale=1.0, guide_weight_bkg_scale=1.0, guide=None, guide_bkg=None, guide_weight=0.0, guide_weight_bkg=0.0, \r\n                    guide_mode=\"blend\", guide_weights=None, guide_weights_bkg=None, guide_mask=None, guide_mask_bkg=None,\r\n                    ):\r\n        default_dtype = torch.float64\r\n        \r\n        max_steps = 10000\r\n        \r\n        denoise, denoise_bkg = guide_weight_scale, guide_weight_bkg_scale\r\n        \r\n        if guide_mode.startswith(\"epsilon_\") and not guide_mode.startswith(\"epsilon_projection\") and guide_bkg == None:\r\n            print(\"Warning: need two latent inputs for guide_mode=\",guide_mode,\" to work. 
Falling back to epsilon.\")\r\n            guide_mode = \"epsilon\"\r\n      \r\n        if guide_weight_scheduler == \"constant\" and guide_weights == None: \r\n            guide_weights = initialize_or_scale(None, 1.0, guide_end_step).to(default_dtype)\r\n            #guide_weights = initialize_or_scale(None, guide_weight, guide_end_step).to(default_dtype)\r\n            guide_weights = F.pad(guide_weights, (0, max_steps), value=0.0)\r\n        \r\n        if guide_weight_scheduler_bkg == \"constant\": \r\n            guide_weights_bkg = initialize_or_scale(None, 0.0, guide_bkg_end_step).to(default_dtype)\r\n            #guide_weights_bkg = initialize_or_scale(None, guide_weight_bkg, guide_bkg_end_step).to(default_dtype)\r\n            guide_weights_bkg = F.pad(guide_weights_bkg, (0, max_steps), value=0.0)\r\n            \r\n        guides = (guide_mode, guide_weight, guide_weight_bkg, guide_weights, guide_weights_bkg, guide, guide_bkg, guide_mask, guide_mask_bkg,\r\n                  guide_weight_scheduler, guide_weight_scheduler_bkg, guide_end_step, guide_bkg_end_step, denoise, denoise_bkg)\r\n        return (guides, )\r\n\r\n\r\n\r\n\r\nclass ClownsharKSamplerGuides:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\":\r\n                    {\"guide_mode\": (GUIDE_MODE_NAMES, {\"default\": 'epsilon_projection', \"tooltip\": \"Recommended: epsilon or mean/mean_std with sampler_mode = standard, and unsample/resample with sampler_mode = unsample/resample. Epsilon_dynamic_mean, etc. are only used with two latent inputs and a mask. Blend/hard_light/mean/mean_std etc. require low strengths, start with 0.01-0.02.\"}),\r\n                     \"guide_weight\": (\"FLOAT\", {\"default\": 0.75, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide.\"}),\r\n                     \"guide_weight_bkg\": (\"FLOAT\", {\"default\": 0.75, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide_bkg.\"}),\r\n                     \"guide_weight_scale\": (\"FLOAT\", {\"default\": 1.0, \"min\": 0.0, \"max\": 1.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Disables the guide for the next step when the denoised image is similar to the guide. Higher values will strengthen the effect.\"}),\r\n                     \"guide_weight_bkg_scale\": (\"FLOAT\", {\"default\": 1.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Disables the guide for the next step when the denoised image is similar to the guide. 
Higher values will strengthen the effect.\"}),\r\n                    \"guide_weight_scheduler\":     ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"beta57\"},),\r\n                    \"guide_weight_scheduler_bkg\": ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"constant\"},),\r\n                    \"guide_end_step\": (\"INT\", {\"default\": 15, \"min\": 1, \"max\": 10000}),\r\n                    \"guide_bkg_end_step\": (\"INT\", {\"default\": 15, \"min\": 1, \"max\": 10000}),\r\n                    },\r\n                    \"optional\": \r\n                    {\r\n                        \"guide\": (\"LATENT\", ),\r\n                        \"guide_bkg\": (\"LATENT\", ),\r\n                        \"guide_mask\": (\"MASK\", ),\r\n                        \"guide_mask_bkg\": (\"MASK\", ),\r\n                        \"guide_weights\": (\"SIGMAS\", ),\r\n                        \"guide_weights_bkg\": (\"SIGMAS\", ),\r\n                    }  \r\n               }\r\n    RETURN_TYPES = (\"GUIDES\",)\r\n    RETURN_NAMES = (\"guides\",)\r\n    CATEGORY     = \"RES4LYF/legacy/sampler_extensions\"\r\n\r\n    FUNCTION     = \"main\"\r\n    DEPRECATED = True\r\n\r\n    def main(self, guide_weight_scheduler=\"constant\", guide_weight_scheduler_bkg=\"constant\", guide_end_step=30, guide_bkg_end_step=30, guide_weight_scale=1.0, guide_weight_bkg_scale=1.0, guide=None, guide_bkg=None, guide_weight=0.0, guide_weight_bkg=0.0, \r\n                    guide_mode=\"blend\", guide_weights=None, guide_weights_bkg=None, guide_mask=None, guide_mask_bkg=None,\r\n                    ):\r\n        default_dtype = torch.float64\r\n        \r\n        max_steps = 10000\r\n        \r\n        denoise, denoise_bkg = guide_weight_scale, guide_weight_bkg_scale\r\n        \r\n        if guide_mode.startswith(\"epsilon_\") and not guide_mode.startswith(\"epsilon_projection\") and guide_bkg == None:\r\n            print(\"Warning: need two latent inputs for guide_mode=\",guide_mode,\" to work. 
Falling back to epsilon.\")\r\n            guide_mode = \"epsilon\"\r\n        \r\n        if guide_weight_scheduler == \"constant\" and guide_weights == None: \r\n            guide_weights = initialize_or_scale(None, 1.0, guide_end_step).to(default_dtype)\r\n            guide_weights = F.pad(guide_weights, (0, max_steps), value=0.0)\r\n        \r\n        if guide_weight_scheduler_bkg == \"constant\" and guide_weights_bkg == None: \r\n            guide_weights_bkg = initialize_or_scale(None, 1.0, guide_bkg_end_step).to(default_dtype)\r\n            guide_weights_bkg = F.pad(guide_weights_bkg, (0, max_steps), value=0.0)\r\n    \r\n        guides = (guide_mode, guide_weight, guide_weight_bkg, guide_weights, guide_weights_bkg, guide, guide_bkg, guide_mask, guide_mask_bkg,\r\n                  guide_weight_scheduler, guide_weight_scheduler_bkg, guide_end_step, guide_bkg_end_step, denoise, denoise_bkg)\r\n        return (guides, )\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nclass ClownsharKSamplerAutomation:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\":\r\n                    {\r\n                    },\r\n                    \"optional\": \r\n                    {\r\n                        \"etas\": (\"SIGMAS\", ),\r\n                        \"s_noises\": (\"SIGMAS\", ),\r\n                        \"unsample_resample_scales\": (\"SIGMAS\", ),\r\n\r\n                    }  \r\n               }\r\n    RETURN_TYPES = (\"AUTOMATION\",)\r\n    RETURN_NAMES = (\"automation\",)\r\n    CATEGORY = \"RES4LYF/legacy/sampler_extensions\"\r\n    \r\n    FUNCTION = \"main\"\r\n    DEPRECATED = True\r\n\r\n    def main(self, etas=None, s_noises=None, unsample_resample_scales=None,):\r\n        automation = (etas, s_noises, unsample_resample_scales)\r\n        return (automation, )\r\n\r\n\r\n\r\n\r\nclass ClownsharKSamplerAutomation_Advanced:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\":\r\n                    {\r\n                    },\r\n                    \"optional\": \r\n                    {\r\n                        \"automation\": (\"AUTOMATION\", ),\r\n                        \"etas\": (\"SIGMAS\", ),\r\n                        \"etas_substep\": (\"SIGMAS\", ),\r\n                        \"s_noises\": (\"SIGMAS\", ),\r\n                        \"unsample_resample_scales\": (\"SIGMAS\", ),\r\n                        \"frame_weights\": (\"SIGMAS\", ),\r\n                        \"frame_weights_bkg\": (\"SIGMAS\", ),\r\n                    }  \r\n               }\r\n    RETURN_TYPES = (\"AUTOMATION\",)\r\n    RETURN_NAMES = (\"automation\",)\r\n    CATEGORY = \"RES4LYF/legacy/sampler_extensions\"\r\n    \r\n    FUNCTION = \"main\"\r\n    DEPRECATED = True\r\n\r\n    def main(self, automation=None, etas=None, etas_substep=None, s_noises=None, unsample_resample_scales=None, frame_weights=None, frame_weights_bkg=None):\r\n        \r\n        if automation is None:\r\n            automation = {}\r\n        \r\n        frame_weights_grp = (frame_weights, frame_weights_bkg)\r\n\r\n        automation['etas'] = etas\r\n        automation['etas_substep'] = etas_substep\r\n        automation['s_noises'] = s_noises\r\n        automation['unsample_resample_scales'] = unsample_resample_scales\r\n        automation['frame_weights_grp'] = frame_weights_grp\r\n\r\n        return (automation, )\r\n\r\n\r\n\r\nclass ClownsharKSamplerOptions:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                
\"noise_init_stdev\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":0.01, \"round\": False, }),\r\n                \"noise_init_mean\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":0.01, \"round\": False, }),\r\n                \"noise_type_init\": (NOISE_GENERATOR_NAMES, {\"default\": \"gaussian\"}),\r\n                \"noise_type_sde\": (NOISE_GENERATOR_NAMES, {\"default\": \"brownian\"}),\r\n                \"noise_mode_sde\": ([\"hard\", \"hard_var\", \"hard_sq\", \"soft\", \"softer\", \"exp\"], {\"default\": 'hard', \"tooltip\": \"How noise scales with the sigma schedule. Hard is the most aggressive, the others start strong and drop rapidly.\"}),\r\n                \"eta\": (\"FLOAT\", {\"default\": 0.25, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False}),\r\n                \"s_noise\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01, \"round\": False}),\r\n                \"d_noise\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01}),\r\n                \"alpha_init\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.1}),\r\n                \"k_init\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 2}),      \r\n                \"alpha_sde\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.1}),\r\n                \"k_sde\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 2}),      \r\n                \"noise_seed\": (\"INT\", {\"default\": -1, \"min\": -1, \"max\": 0xffffffffffffffff, \"tooltip\": \"Seed for the SDE noise that is added after each step if eta or eta_var are non-zero. 
If set to -1, it will increment the seed most recently used by the workflow.\"}),\r\n                \"c1\": (\"FLOAT\", {\"default\": 0.0, \"min\": -1.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"c2\": (\"FLOAT\", {\"default\": 0.5, \"min\": -1.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"c3\": (\"FLOAT\", {\"default\": 1.0, \"min\": -1.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"t_fn_formula\": (\"STRING\", {\"default\": \"\", \"multiline\": True}),\r\n                \"sigma_fn_formula\": (\"STRING\", {\"default\": \"\", \"multiline\": True}),   \r\n                #\"unsampler_type\": (['linear', 'exponential', 'constant'],),\r\n            },\r\n            \"optional\": {\r\n                \"options\": (\"OPTIONS\",),\r\n            }\r\n        }\r\n    \r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    CATEGORY = \"RES4LYF/legacy/sampler_extensions\"\r\n\r\n    FUNCTION = \"main\"\r\n    DEPRECATED = True\r\n\r\n    def main(self, noise_init_stdev, noise_init_mean, c1, c2, c3, eta, s_noise, d_noise, noise_type_init, noise_type_sde, noise_mode_sde, noise_seed,\r\n                    alpha_init, k_init, alpha_sde, k_sde, t_fn_formula=None, sigma_fn_formula=None, unsampler_type=\"linear\",\r\n                    alphas=None, etas=None, s_noises=None, d_noises=None, c2s=None, c3s=None,\r\n                    options=None,\r\n                    ):\r\n    \r\n        if options is None:\r\n            options = {}\r\n\r\n        options['noise_init_stdev'] = noise_init_stdev\r\n        options['noise_init_mean'] = noise_init_mean\r\n        options['noise_type_init'] = noise_type_init\r\n        options['noise_type_sde'] = noise_type_sde\r\n        options['noise_mode_sde'] = noise_mode_sde\r\n        options['eta'] = eta\r\n        options['s_noise'] = s_noise\r\n        options['d_noise'] = d_noise\r\n        options['alpha_init'] = alpha_init\r\n        options['k_init'] = k_init\r\n        options['alpha_sde'] = alpha_sde\r\n        options['k_sde'] = k_sde\r\n        options['noise_seed_sde'] = noise_seed\r\n        options['c1'] = c1\r\n        options['c2'] = c2\r\n        options['c3'] = c3\r\n        options['t_fn_formula'] = t_fn_formula\r\n        options['sigma_fn_formula'] = sigma_fn_formula\r\n        options['unsampler_type'] = unsampler_type\r\n        \r\n        return (options,)\r\n    \r\n\r\n\r\nclass ClownOptions_SDE_Noise:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sde_noise_steps\": (\"INT\", {\"default\": 1, \"min\": 1, \"max\": 10000}),\r\n            },\r\n            \"optional\": {\r\n                \"sde_noise\": (\"LATENT\",),\r\n                \"options\"  : (\"OPTIONS\",),\r\n            }\r\n        }\r\n    \r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    CATEGORY     = \"RES4LYF/legacy/sampler_options\"\r\n    FUNCTION     = \"main\"\r\n    DEPRECATED = True\r\n\r\n    def main(self, sde_noise_steps, sde_noise=None, options=None,):\r\n    \r\n        if options is None:\r\n            options = {}\r\n\r\n        options['sde_noise_steps'] = sde_noise_steps\r\n        options['sde_noise'] = sde_noise\r\n        \r\n        return (options,)\r\n\r\n\r\n\r\nclass ClownOptions_FrameWeights:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"frame_weights\": (\"SIGMAS\", ),\r\n            },\r\n       
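# note: the single frame_weights input above is used for both slots of frame_weights_grp (foreground and background) in main() below\r\n       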
     \"optional\": {\r\n                \"options\": (\"OPTIONS\",),\r\n            }\r\n        }\r\n    \r\n    DEPRECATED = True\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    CATEGORY = \"RES4LYF/legacy/sampler_options\"\r\n\r\n    FUNCTION = \"main\"\r\n    DEPRECATED = True\r\n\r\n    def main(self, frame_weights, options=None,):\r\n\r\n        if options is None:\r\n            options = {}\r\n\r\n        frame_weights_grp = (frame_weights, frame_weights)\r\n        options['frame_weights_grp'] = frame_weights_grp\r\n\r\n        return (options,)\r\n\r\n\r\n"
  },
  {
    "path": "legacy/samplers_tiled.py",
    "content": "# tiled sampler code adapted from https://github.com/BlenderNeko/ComfyUI_TiledKSampler \n# and heavily modified for use with https://github.com/ClownsharkBatwing/UltraCascade\n\nimport sys\nimport os\nimport copy\nfrom functools import partial\n\nfrom tqdm.auto import tqdm\n\nimport torch\n\nsys.path.insert(0, os.path.join(os.path.dirname(os.path.realpath(__file__)), \"comfy\"))\nimport comfy.sd\nimport comfy.controlnet\nimport comfy.model_management\nimport comfy.sample\nimport comfy.sampler_helpers\nimport latent_preview\n\nfrom nodes import MAX_RESOLUTION\n#MAX_RESOLUTION=8192\n\nimport comfy.clip_vision\nimport folder_paths\n\nfrom . import tiling\nfrom .noise_classes import *\n\ndef initialize_or_scale(tensor, value, steps):\n    if tensor is None:\n        return torch.full((steps,), value)\n    else:\n        return value * tensor\n\ndef cv_cond(cv_out, conditioning, strength, noise_augmentation): \n\n    c = []\n    for t in conditioning:\n        o = t[1].copy()\n        x = {\"clip_vision_output\": cv_out, \"strength\": strength, \"noise_augmentation\": noise_augmentation}\n        if \"unclip_conditioning\" in o:\n            o[\"unclip_conditioning\"] = o[\"unclip_conditioning\"][:] + [x]\n        else:\n            o[\"unclip_conditioning\"] = [x]\n        n = [t[0], o]\n        c.append(n)\n    \n    return c\n\n\ndef recursion_to_list(obj, attr):\n    current = obj\n    yield current\n    while True:\n        current = getattr(current, attr, None)\n        if current is not None:\n            yield current\n        else:\n            return\n\ndef copy_cond(cond):\n    return [[c1,c2.copy()] for c1,c2 in cond]\n\ndef slice_cond(tile_h, tile_h_len, tile_w, tile_w_len, cond, area):\n    tile_h_end = tile_h + tile_h_len\n    tile_w_end = tile_w + tile_w_len\n    coords = area[0] #h_len, w_len, h, w,\n    mask = area[1]\n    if coords is not None:\n        h_len, w_len, h, w = coords\n        h_end = h + h_len\n        w_end = w + w_len\n        if h < tile_h_end and h_end > tile_h and w < tile_w_end and w_end > tile_w:\n            new_h = max(0, h - tile_h)\n            new_w = max(0, w - tile_w)\n            new_h_end = min(tile_h_end, h_end - tile_h)\n            new_w_end = min(tile_w_end, w_end - tile_w)\n            cond[1]['area'] = (new_h_end - new_h, new_w_end - new_w, new_h, new_w)\n        else:\n            return (cond, True)\n    if mask is not None:\n        new_mask = tiling.get_slice(mask, tile_h,tile_h_len,tile_w,tile_w_len)\n        if new_mask.sum().cpu() == 0.0 and 'mask' in cond[1]:\n            return (cond, True)\n        else:\n            cond[1]['mask'] = new_mask\n    return (cond, False)\n\ndef slice_gligen(tile_h, tile_h_len, tile_w, tile_w_len, cond, gligen):\n    tile_h_end = tile_h + tile_h_len\n    tile_w_end = tile_w + tile_w_len\n    if gligen is None:\n        return\n    gligen_type = gligen[0]\n    gligen_model = gligen[1]\n    gligen_areas = gligen[2]\n    \n    gligen_areas_new = []\n    for emb, h_len, w_len, h, w in gligen_areas:\n        h_end = h + h_len\n        w_end = w + w_len\n        if h < tile_h_end and h_end > tile_h and w < tile_w_end and w_end > tile_w:\n            new_h = max(0, h - tile_h)\n            new_w = max(0, w - tile_w)\n            new_h_end = min(tile_h_end, h_end - tile_h)\n            new_w_end = min(tile_w_end, w_end - tile_w)\n            gligen_areas_new.append((emb, new_h_end - new_h, new_w_end - new_w, new_h, new_w))\n\n    if len(gligen_areas_new) == 0:\n        del cond['gligen']\n    
else:\n        cond['gligen'] = (gligen_type, gligen_model, gligen_areas_new)\n\ndef slice_cnet(h, h_len, w, w_len, model:comfy.controlnet.ControlBase, img):\n    if img is None:\n        img = model.cond_hint_original\n    hint = tiling.get_slice(img, h*8, h_len*8, w*8, w_len*8)\n    if isinstance(model, comfy.controlnet.ControlLora):\n        model.cond_hint = hint.float().to(model.device)\n    else:\n        model.cond_hint = hint.to(model.control_model.dtype).to(model.device)\n\ndef slices_T2I(h, h_len, w, w_len, model:comfy.controlnet.ControlBase, img):\n    model.control_input = None\n    if img is None:\n        img = model.cond_hint_original\n    model.cond_hint = tiling.get_slice(img, h*8, h_len*8, w*8, w_len*8).float().to(model.device)\n\n# TODO: refactor some of the mess\n\n\ndef cnets_and_cnet_imgs(positive, negative, shape): \n    # cnets\n    cnets =  [c['control'] for (_, c) in positive + negative if 'control' in c]\n    # unroll recursion\n    cnets = list(set([x for m in cnets for x in recursion_to_list(m, \"previous_controlnet\")]))\n    # filter down to only cnets\n    cnets = [x for x in cnets if isinstance(x, comfy.controlnet.ControlNet)]\n    cnet_imgs = [\n        torch.nn.functional.interpolate(m.cond_hint_original, (shape[-2] * 8, shape[-1] * 8), mode='nearest-exact').to('cpu')\n        if m.cond_hint_original.shape[-2] != shape[-2] * 8 or m.cond_hint_original.shape[-1] != shape[-1] * 8 else None\n        for m in cnets]\n    return cnets, cnet_imgs\n\ndef T2Is_and_T2I_imgs(positive, negative, shape): \n    # T2I\n    T2Is =  [c['control'] for (_, c) in positive + negative if 'control' in c]\n    # unroll recursion\n    T2Is = [x for m in T2Is for x in recursion_to_list(m, \"previous_controlnet\")]\n    # filter down to only T2I\n    T2Is = [x for x in T2Is if isinstance(x, comfy.controlnet.T2IAdapter)]\n    T2I_imgs = [\n        torch.nn.functional.interpolate(m.cond_hint_original, (shape[-2] * 8, shape[-1] * 8), mode='nearest-exact').to('cpu')\n        if m.cond_hint_original.shape[-2] != shape[-2] * 8 or m.cond_hint_original.shape[-1] != shape[-1] * 8 or (m.channels_in == 1 and m.cond_hint_original.shape[1] != 1) else None\n        for m in T2Is\n    ]\n    T2I_imgs = [\n        torch.mean(img, 1, keepdim=True) if img is not None and m.channels_in == 1 and m.cond_hint_original.shape[1] != 1 else img\n        for m, img in zip(T2Is, T2I_imgs)\n    ]\n    return T2Is, T2I_imgs\n\ndef spatial_conds_posneg(positive, negative, shape, device): #cond area and mask\n    spatial_conds_pos = [\n        (c[1]['area'] if 'area' in c[1] else None, \n            comfy.sample.prepare_mask(c[1]['mask'], shape, device) if 'mask' in c[1] else None)\n        for c in positive\n    ]\n    spatial_conds_neg = [\n        (c[1]['area'] if 'area' in c[1] else None, \n            comfy.sample.prepare_mask(c[1]['mask'], shape, device) if 'mask' in c[1] else None)\n        for c in negative\n    ]\n    return spatial_conds_pos, spatial_conds_neg \n\ndef gligen_posneg(positive, negative):\n    #gligen\n    gligen_pos = [\n        c[1]['gligen'] if 'gligen' in c[1] else None\n        for c in positive\n    ]\n    gligen_neg = [\n        c[1]['gligen'] if 'gligen' in c[1] else None\n        for c in negative\n    ]\n    return gligen_pos, gligen_neg\n\n\ndef cascade_tiles(x, input_x, tile_h, tile_w, tile_h_len, tile_w_len):\n    h_cascade = input_x.shape[-2]\n    w_cascade = input_x.shape[-1]\n    \n    h_samples = x.shape[-2]\n    w_samples = x.shape[-1]\n    \n    tile_h_cascade = (h_cascade * 
tile_h) // h_samples\n    tile_w_cascade = (w_cascade * tile_w) // w_samples\n    \n    tile_h_len_cascade = (h_cascade * tile_h_len) // h_samples\n    tile_w_len_cascade = (w_cascade * tile_w_len) // w_samples\n    \n    return tile_h_cascade, tile_w_cascade, tile_h_len_cascade, tile_w_len_cascade\n\n\n\ndef sample_common(model, x, noise, noise_mask, noise_seed, tile_width, tile_height, tiling_strategy, cfg, positive, negative,\n                  preview=False, sampler=None, sigmas=None,\n                  clip_name=None, strength=1.0, noise_augment=1.0, image_cv=None, max_tile_batch_size=3,\n                  guide=None, guide_type='residual', guide_weight=1.0, guide_weights=None,\n                  ):\n\n    device = comfy.model_management.get_torch_device()\n    steps = len(sigmas)-1\n    \n    conds0 = \\\n        {\"positive\": comfy.sampler_helpers.convert_cond(positive),\n         \"negative\": comfy.sampler_helpers.convert_cond(negative)}\n\n    conds = {}\n    for k in conds0:\n        conds[k] = list(map(lambda a: a.copy(), conds0[k]))\n\n    modelPatches, inference_memory = comfy.sampler_helpers.get_additional_models(conds, model.model_dtype())\n    comfy.model_management.load_models_gpu([model] + modelPatches, model.memory_required(noise.shape) + inference_memory)\n    \n\n    if model.model.model_config.unet_config['stable_cascade_stage'] == 'up':\n        compression = 1\n        guide_weight = 1.0 if guide_weight is None else guide_weight\n        guide_type = 'residual' if guide_type is None else guide_type\n        guide = guide['samples'] if guide is not None else None\n        guide_weights = initialize_or_scale(guide_weights, guide_weight, 10000)\n\n        patch = model.model_options.get(\"transformer_options\", {}).get(\"patches_replace\", {}).get(\"ultracascade\", {}).get(\"main\")  #CHANGED HERE\n        if patch is not None:\n            patch.update(x_lr=guide, guide_weights=guide_weights, guide_type=guide_type)\n        else:\n            model = model.clone()\n            model.model.diffusion_model.set_sigmas_prev(sigmas_prev=sigmas[:1])\n            model.model.diffusion_model.set_guide_weights(guide_weights=guide_weights)\n            model.model.diffusion_model.set_guide_type(guide_type=guide_type)\n        \n    elif model.model.model_config.unet_config['stable_cascade_stage'] == 'c':\n        compression = 1\n        \n    elif model.model.model_config.unet_config['stable_cascade_stage'] == 'b':\n        compression = 4\n        \n        c_pos, c_neg = [], []\n        for t in positive:\n            d_pos = t[1].copy()\n            d_neg = t[1].copy()\n            \n            d_pos['stable_cascade_prior'] = guide['samples']\n\n            pooled_output = d_neg.get(\"pooled_output\", None)\n            if pooled_output is not None:\n                d_neg[\"pooled_output\"] = torch.zeros_like(pooled_output)\n            \n            c_pos.append([t[0], d_pos])            \n            c_neg.append([torch.zeros_like(t[0]), d_neg])\n        positive = c_pos\n        negative = c_neg\n        effnet_samples = positive[0][1]['stable_cascade_prior'].clone()\n        effnet_interpolated = nn.functional.interpolate(effnet_samples.clone().to(torch.float16).to(device), size=torch.Size((x.shape[-2] // 2, x.shape[-1] // 2,)), mode='bilinear', align_corners=True)\n        effnet_full_map = model.model.diffusion_model.effnet_mapper(effnet_interpolated)\n    else:\n        compression = 8 #sd1.5, sdxl, sd3, flux, etc\n        \n    \n    if image_cv is not None: 
#CLIP VISION LOAD\n        clip_path = folder_paths.get_full_path(\"clip_vision\", clip_name)\n        clip_vision = comfy.clip_vision.load(clip_path)\n        \n\n\n\n    cnets,             cnet_imgs         = cnets_and_cnet_imgs (positive, negative, x.shape)\n    T2Is,              T2I_imgs          = T2Is_and_T2I_imgs   (positive, negative, x.shape)\n    spatial_conds_pos, spatial_conds_neg = spatial_conds_posneg(positive, negative, x.shape, device)\n    gligen_pos,        gligen_neg        = gligen_posneg       (positive, negative)\n    \n    \n    \n    tile_width  = min(x.shape[-1] * compression, tile_width) \n    tile_height = min(x.shape[2]  * compression, tile_height)\n    \n    if tiling_strategy != 'padded':\n        if noise_mask is not None:\n            x += sigmas[0] * noise_mask * model.model.process_latent_out(noise)\n        else:\n            x += sigmas[0] * model.model.process_latent_out(noise)\n    \n\n\n    if tiling_strategy == 'random' or tiling_strategy == 'random strict':\n        tiles = tiling.get_tiles_and_masks_rgrid(steps, x.shape, tile_height, tile_width, torch.manual_seed(noise_seed), compression=compression)\n    elif tiling_strategy == 'padded':\n        tiles = tiling.get_tiles_and_masks_padded(steps, x.shape, tile_height, tile_width, compression=compression)\n    else:\n        tiles = tiling.get_tiles_and_masks_simple(steps, x.shape, tile_height, tile_width, compression=compression)\n\n\n\n    total_steps = sum([num_steps for img_pass in tiles for steps_list in img_pass for _,_,_,_,num_steps,_ in steps_list])\n    current_step = [0]\n    with tqdm(total=total_steps) as pbar_tqdm:\n        pbar = comfy.utils.ProgressBar(total_steps)\n        def callback(step, x0, x, total_steps, step_inc=1):\n            current_step[0] += step_inc\n            preview_bytes = None\n            if preview == True:\n                previewer = latent_preview.get_previewer(device, model.model.latent_format)\n                preview_bytes = previewer.decode_latent_to_preview_image(\"JPEG\", x0)\n            pbar.update_absolute(current_step[0], preview=preview_bytes)\n            pbar_tqdm.update(step_inc)\n            \n            \n            \n        if tiling_strategy == \"random strict\":\n            x_next = x.clone()\n            \n        for img_pass in tiles: # img_pass is a set of non-intersecting tiles\n            effnet_slices, effnet_map_slices, tiled_noise_list, tiled_latent_list, tiled_mask_list, tile_h_list, tile_w_list, tile_h_len_list, tile_w_len_list = [],[],[],[],[],[],[],[],[]\n\n            for i in range(len(img_pass)):     \n                for iteration, (tile_h, tile_h_len, tile_w, tile_w_len, tile_steps, tile_mask) in enumerate(img_pass[i]):\n                    tiled_mask = None\n                    if noise_mask is not None:\n                        tiled_mask = tiling.get_slice(noise_mask, tile_h, tile_h_len, tile_w, tile_w_len).to(device)\n                    if tile_mask is not None:\n                        if tiled_mask is not None:\n                            tiled_mask *= tile_mask.to(device)\n                        else:\n                            tiled_mask  = tile_mask.to(device)\n                    \n                    if tiling_strategy == 'padded' or tiling_strategy == 'random strict':\n                        tile_h, tile_h_len, tile_w, tile_w_len, tiled_mask = tiling.mask_at_boundary(   tile_h, tile_h_len, tile_w, tile_w_len, \n                                                                                          
              tile_height, tile_width, x.shape[-2], x.shape[-1],\n                                                                                                        tiled_mask, device, compression=compression)\n                        \n                    if tiled_mask is not None and tiled_mask.sum().cpu() == 0.0:\n                            continue\n                            \n                    tiled_latent = tiling.get_slice(x, tile_h, tile_h_len, tile_w, tile_w_len).to(device)\n                    \n                    if tiling_strategy == 'padded':\n                        tiled_noise = tiling.get_slice(noise, tile_h, tile_h_len, tile_w, tile_w_len).to(device)\n                    else:\n                        if tiled_mask is None or noise_mask is None:\n                            tiled_noise = torch.zeros_like(tiled_latent)\n                        else:\n                            tiled_noise = tiling.get_slice(noise, tile_h, tile_h_len, tile_w, tile_w_len).to(device) * (1 - tiled_mask)\n                    \n                    #TODO: all other condition based stuff like area sets and GLIGEN should also happen here\n\n                    #cnets\n                    for m, img in zip(cnets, cnet_imgs):\n                        slice_cnet(tile_h, tile_h_len, tile_w, tile_w_len, m, img)\n                    \n                    #T2I\n                    for m, img in zip(T2Is, T2I_imgs):\n                        slices_T2I(tile_h, tile_h_len, tile_w, tile_w_len, m, img)\n\n                    pos = copy.deepcopy(positive)\n                    neg = copy.deepcopy(negative)\n\n                    #cond areas\n                    pos = [slice_cond(tile_h, tile_h_len, tile_w, tile_w_len, c, area) for c, area in zip(pos, spatial_conds_pos)]\n                    pos = [c for c, ignore in pos if not ignore]\n                    neg = [slice_cond(tile_h, tile_h_len, tile_w, tile_w_len, c, area) for c, area in zip(neg, spatial_conds_neg)]\n                    neg = [c for c, ignore in neg if not ignore]\n\n                    #gligen\n                    for cond, gligen in zip(pos, gligen_pos):\n                        slice_gligen(tile_h, tile_h_len, tile_w, tile_w_len, cond, gligen)\n                    for cond, gligen in zip(neg, gligen_neg):\n                        slice_gligen(tile_h, tile_h_len, tile_w, tile_w_len, cond, gligen)\n                    \n                    start_step = i * tile_steps\n                    last_step  = i * tile_steps + tile_steps\n\n                    if last_step is not None and last_step < (len(sigmas) - 1):\n                        sigmas = sigmas[:last_step + 1]\n\n                    if start_step is not None:\n                        if start_step < (len(sigmas) - 1):\n                            sigmas = sigmas[start_step:]\n                        else:\n                            if tiled_latent is not None:\n                                return tiled_latent\n                            else:\n                                return torch.zeros_like(noise)\n                \n                    # SLICE, DICE, AND DENOISE\n                    if image_cv is not None: #slice and dice ClipVision for tiling\n                        image_cv    = image_cv.   
permute(0,3,1,2)\n                        tile_h_cascade, tile_w_cascade, tile_h_len_cascade, tile_w_len_cascade = cascade_tiles(x, image_cv, tile_h, tile_w, tile_h_len, tile_w_len)\n\n                        image_slice = copy.deepcopy(image_cv)\n                        image_slice = tiling.get_slice(image_slice, tile_h_cascade, tile_h_len_cascade, tile_w_cascade, tile_w_len_cascade).to(device)\n                        image_slice = image_slice.permute(0,2,3,1)\n                        image_cv    = image_cv.   permute(0,2,3,1)\n                        \n                        cv_out_slice = clip_vision.encode_image(image_slice)\n                        pos = cv_cond(cv_out_slice, pos, strength, noise_augment) \n                    \n                    if model.model.model_config.unet_config['stable_cascade_stage'] == 'up': #slice and dice stage UP guide\n                        tile_h_cascade, tile_w_cascade, tile_h_len_cascade, tile_w_len_cascade = cascade_tiles(x, guide, tile_h, tile_w, tile_h_len, tile_w_len)\n\n                        guide_slice = copy.deepcopy(guide)\n                        guide_slice = tiling.get_slice(guide_slice.clone(), tile_h_cascade, tile_h_len_cascade, tile_w_cascade, tile_w_len_cascade).to(device)\n                        model.model.diffusion_model.set_x_lr(x_lr=guide_slice)\n                        \n                        tile_result = comfy.sample.sample_custom(model, tiled_noise, cfg, sampler, sigmas, pos, neg, tiled_latent, noise_mask=tiled_mask, callback=callback, disable_pbar=True, seed=noise_seed)  \n\n                    elif model.model.model_config.unet_config['stable_cascade_stage'] == 'b':  #slice and dice stage B conditioning\n                        tile_h_cascade, tile_w_cascade, tile_h_len_cascade, tile_w_len_cascade = cascade_tiles(x, effnet_samples.clone(), tile_h, tile_w, tile_h_len, tile_w_len)\n                        effnet_slice = tiling.get_slice(effnet_samples.clone(), tile_h_cascade, tile_h_len_cascade, tile_w_cascade, tile_w_len_cascade).to(device)\n                        effnet_slices.append(effnet_slice)\n                        \n                        tile_h_cascade, tile_w_cascade, tile_h_len_cascade, tile_w_len_cascade = cascade_tiles(x, effnet_full_map.clone(), tile_h, tile_w, tile_h_len, tile_w_len)\n                        effnet_map_slice = tiling.get_slice(effnet_full_map.clone(), tile_h_cascade, tile_h_len_cascade, tile_w_cascade, tile_w_len_cascade).to(device)\n                        effnet_map_slices.append(effnet_map_slice)\n\n                    else: # not stage UP or stage B, default\n                        tile_result = comfy.sample.sample_custom(model, tiled_noise, cfg, sampler, sigmas, pos, neg, tiled_latent, noise_mask=tiled_mask, callback=callback, disable_pbar=True, seed=noise_seed)  \n\n                    if model.model.model_config.unet_config['stable_cascade_stage'] != 'b':\n                        tile_result = tile_result.cpu()\n                        if tiled_mask is not None:\n                            tiled_mask = tiled_mask.cpu()\n                        if tiling_strategy == \"random strict\":\n                            tiling.set_slice(x_next, tile_result, tile_h, tile_h_len, tile_w, tile_w_len, tiled_mask)\n                        else:\n                            tiling.set_slice(x, tile_result, tile_h, tile_h_len, tile_w, tile_w_len, tiled_mask)\n                        \n\n                    tiled_noise_list .append(tiled_noise)\n                    
tiled_latent_list.append(tiled_latent)\n                    tiled_mask_list  .append(tiled_mask)\n                    tile_h_list      .append(tile_h)\n                    tile_w_list      .append(tile_w)\n                    tile_h_len_list  .append(tile_h_len)\n                    tile_w_len_list  .append(tile_w_len)\n                    \n                    #END OF NON-INTERSECTING SET OF TILES\n                    \n                if tiling_strategy == \"random strict\":   # IS THIS ONE LEVEL OVER??\n                    x = x_next.clone()\n            \n            if model.model.model_config.unet_config['stable_cascade_stage'] == 'b':\n\n                for start_idx in range(0, len(tiled_latent_list), max_tile_batch_size):\n                    \n                    end_idx = start_idx + max_tile_batch_size\n                    \n                    #print(\"Tiled batch size: \", min(max_tile_batch_size, len(tiled_latent_list))) #end_idx - start_idx)\n                    \n                    tiled_noise_batch  = torch.cat(tiled_noise_list [start_idx:end_idx])\n                    tiled_latent_batch = torch.cat(tiled_latent_list[start_idx:end_idx])\n                    tiled_mask_batch   = torch.cat(tiled_mask_list  [start_idx:end_idx])\n                    \n                    print(\"Tiled batch size: \", tiled_latent_batch.shape[0])\n\n                    pos[0][1]['stable_cascade_prior'] = torch.cat(effnet_slices[start_idx:end_idx])\n                    neg[0][1]['stable_cascade_prior'] = torch.cat(effnet_slices[start_idx:end_idx])\n                    \n                    tile_result = comfy.sample.sample_custom(model, tiled_noise_batch, cfg, sampler, sigmas, pos, neg, tiled_latent_batch, noise_mask=tiled_mask_batch, callback=partial(callback, step_inc=tiled_latent_batch.shape[0]), disable_pbar=True, seed=noise_seed)\n                    \n                    for i in range(tile_result.shape[0]):\n                        idx = start_idx + i\n                        \n                        single_tile = tile_result[i].unsqueeze(dim=0)\n                        single_mask = tiled_mask_batch[i].unsqueeze(dim=0)\n                        \n                        tiling.set_slice(x, single_tile, tile_h_list[idx], tile_h_len_list[idx], tile_w_list[idx], tile_w_len_list[idx], single_mask.cpu())\n\n                x = x.to('cpu') \n\n    comfy.sampler_helpers.cleanup_additional_models(modelPatches)\n\n    return x.cpu()\n\n\n\nclass UltraSharkSampler_Tiled: #this is for use with https://github.com/ClownsharkBatwing/UltraCascade\n    @classmethod\n    def INPUT_TYPES(s):\n        return {\"required\":\n                    {\n                    \"add_noise\": (\"BOOLEAN\", {\"default\": True}),\n                    \"noise_is_latent\": (\"BOOLEAN\", {\"default\": False}),\n                    \"noise_type\": (NOISE_GENERATOR_NAMES, ),\n                    \"alpha\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":0.1, \"round\": 0.01}),\n                    \"k\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":2.0, \"round\": 0.01}),\n                    \"noise_seed\": (\"INT\", {\"default\": 0, \"min\": 0, \"max\": 0xffffffffffffffff}),\n                    \"cfg\": (\"FLOAT\", {\"default\": 1.0, \"min\": 0.0, \"max\": 100.0}),\n                    \"guide_type\": (['residual', 'weighted'], ),\n                    \"guide_weight\": (\"FLOAT\", {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": 
0.01}),\n                    \n                    \"tile_width\": (\"INT\", {\"default\": 1024, \"min\": 2, \"max\": MAX_RESOLUTION, \"step\": 1}),\n                    \"tile_height\": (\"INT\", {\"default\": 1024, \"min\": 2, \"max\": MAX_RESOLUTION, \"step\": 1}),\n                    \"tiling_strategy\": ([\"padded\", \"random\", \"random strict\",  'simple'], ),\n                    \"max_tile_batch_size\": (\"INT\", {\"default\": 64, \"min\": 1, \"max\": 256, \"step\": 1}),\n\n                    \"model\": (\"MODEL\",),\n                    \"positive\": (\"CONDITIONING\", ),\n                    \"negative\": (\"CONDITIONING\", ),\n                    \"sampler\": (\"SAMPLER\",),\n                    \"sigmas\": (\"SIGMAS\",),\n                    \"latent_image\": (\"LATENT\", ),\n                    \n                    \"clip_name\":            (folder_paths.get_filename_list(\"clip_vision\"), {'default': \"clip-vit-large-patch14.safetensors\"}),\n                    \"strength\":           (\"FLOAT\", {\"default\": 1.0, \"min\": -10.0, \"max\": 10.0, \"step\": 0.01}),\n                    \"noise_augment\":      (\"FLOAT\", {\"default\": 1.0, \"min\": 0.0, \"max\": 1.0, \"step\": 0.01}),\n\n                    },\n                    \"optional\": {\n                        \"latent_noise\": (\"LATENT\", ),\n                        \"guide\": (\"LATENT\", ),\n                        \"guide_weights\": (\"SIGMAS\",),\n                        \"image_cv\": (\"IMAGE\",),\n\n                    },\n                    \n                    }\n\n    RETURN_TYPES = (\"LATENT\",)\n    FUNCTION = \"sample\"\n\n    CATEGORY = \"RES4LYF/legacy/samplers/ultracascade\"\n    DESCRIPTION = \"For use with UltraCascade.\"\n    DEPRECATED = True\n\n    def sample(self, model, noise_seed, add_noise, noise_is_latent, noise_type, alpha, k, tile_width, tile_height, tiling_strategy, cfg, positive, negative, latent_image, latent_noise=None, sampler=None, sigmas=None, guide=None,\n               clip_name=None, strength=1.0, noise_augment=1.0, image_cv=None, max_tile_batch_size=3,\n               guide_type='residual', guide_weight=1.0, guide_weights=None,\n               ):\n        \n        x = latent_image[\"samples\"].clone()\n\n        torch.manual_seed(noise_seed)\n\n        if not add_noise:\n            noise = torch.zeros(x.size(), dtype=x.dtype, layout=x.layout, device=\"cpu\")\n        elif latent_noise is None:\n            skip = latent_image[\"batch_index\"] if \"batch_index\" in latent_image else None\n            noise = prepare_noise(x, noise_seed, noise_type, skip, alpha, k)\n        else:\n            noise = latent_noise[\"samples\"]\n\n        if noise_is_latent: #add noise and latent together and normalize --> noise\n            noise += x.cpu()\n            noise.sub_(noise.mean()).div_(noise.std())\n\n        noise_mask = latent_image[\"noise_mask\"].clone() if \"noise_mask\" in latent_image else None\n\n        latent_out = latent_image.copy()\n        latent_out['samples'] = sample_common(model, x=x, noise=noise, noise_mask=noise_mask, noise_seed=noise_seed, tile_width=tile_width, tile_height=tile_height, tiling_strategy=tiling_strategy, cfg=cfg, positive=positive, negative=negative, \n                             preview=True, sampler=sampler, sigmas=sigmas,\n                             clip_name=clip_name, strength=strength, noise_augment=noise_augment, image_cv=image_cv, max_tile_batch_size=max_tile_batch_size,\n                             guide=guide, 
guide_type=guide_type, guide_weight=guide_weight, guide_weights=guide_weights,\n                             )\n        return (latent_out,)\n\n\n"
  },
  {
    "path": "legacy/sigmas.py",
    "content": "import torch\r\n\r\nimport numpy as np\r\nfrom math import *\r\nimport builtins\r\nfrom scipy.interpolate import CubicSpline\r\nimport torch.nn.functional as F\r\nimport torch.nn as nn\r\nimport torch.optim as optim\r\n\r\nfrom comfy.k_diffusion.sampling import get_sigmas_polyexponential, get_sigmas_karras\r\nimport comfy.samplers\r\n\r\ndef rescale_linear(input, input_min, input_max, output_min, output_max):\r\n    output = ((input - input_min) / (input_max - input_min)) * (output_max - output_min) + output_min;\r\n    return output\r\n\r\nclass set_precision_sigmas:\r\n    def __init__(self):\r\n        pass\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                    \"sigmas\": (\"SIGMAS\", ),   \r\n                    \"precision\": ([\"16\", \"32\", \"64\"], ),\r\n                    \"set_default\": (\"BOOLEAN\", {\"default\": False})\r\n                     },\r\n                }\r\n\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    RETURN_NAMES = (\"passthrough\",)\r\n    CATEGORY = \"RES4LYF/precision\"\r\n\r\n    FUNCTION = \"main\"\r\n\r\n    def main(self, precision=\"32\", sigmas=None, set_default=False):\r\n        match precision:\r\n            case \"16\":\r\n                if set_default is True:\r\n                    torch.set_default_dtype(torch.float16)\r\n                sigmas = sigmas.to(torch.float16)\r\n            case \"32\":\r\n                if set_default is True:\r\n                    torch.set_default_dtype(torch.float32)\r\n                sigmas = sigmas.to(torch.float32)\r\n            case \"64\":\r\n                if set_default is True:\r\n                    torch.set_default_dtype(torch.float64)\r\n                sigmas = sigmas.to(torch.float64)\r\n        return (sigmas, )\r\n\r\n\r\nclass SimpleInterpolator(nn.Module):\r\n    def __init__(self):\r\n        super(SimpleInterpolator, self).__init__()\r\n        self.net = nn.Sequential(\r\n            nn.Linear(1, 16),\r\n            nn.ReLU(),\r\n            nn.Linear(16, 32),\r\n            nn.ReLU(),\r\n            nn.Linear(32, 1)\r\n        )\r\n\r\n    def forward(self, x):\r\n        return self.net(x)\r\n\r\ndef train_interpolator(model, sigma_schedule, steps, epochs=5000, lr=0.01):\r\n    with torch.inference_mode(False):\r\n        model = SimpleInterpolator()\r\n        sigma_schedule = sigma_schedule.clone()\r\n\r\n        criterion = nn.MSELoss()\r\n        optimizer = optim.Adam(model.parameters(), lr=lr)\r\n        \r\n        x_train = torch.linspace(0, 1, steps=steps).unsqueeze(1)\r\n        y_train = sigma_schedule.unsqueeze(1)\r\n\r\n        # disable inference mode for training\r\n        model.train()\r\n        for epoch in range(epochs):\r\n            optimizer.zero_grad()\r\n\r\n            # fwd pass\r\n            outputs = model(x_train)\r\n            loss = criterion(outputs, y_train)\r\n            loss.backward()\r\n            optimizer.step()\r\n\r\n    return model\r\n\r\ndef interpolate_sigma_schedule_model(sigma_schedule, target_steps):\r\n    model = SimpleInterpolator()\r\n    sigma_schedule = sigma_schedule.float().detach()\r\n\r\n    # train on original sigma schedule\r\n    trained_model = train_interpolator(model, sigma_schedule, len(sigma_schedule))\r\n\r\n    # generate target steps for interpolation\r\n    x_interpolated = torch.linspace(0, 1, target_steps).unsqueeze(1)\r\n\r\n    # inference w/o gradients\r\n    trained_model.eval()\r\n    with torch.no_grad():\r\n        
interpolated_sigma = trained_model(x_interpolated).squeeze()\r\n\r\n    return interpolated_sigma\r\n\r\n\r\n\r\n\r\nclass sigmas_interpolate:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas_0\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"sigmas_1\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"mode\": ([\"linear\", \"nearest\", \"polynomial\", \"exponential\", \"power\", \"model\"],),\r\n                \"order\": (\"INT\", {\"default\": 8, \"min\": 1,\"max\": 64,\"step\": 1}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",\"SIGMAS\",)\r\n    RETURN_NAMES = (\"sigmas_0\", \"sigmas_1\")\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n\r\n\r\n\r\n    def interpolate_sigma_schedule_poly(self, sigma_schedule, target_steps):\r\n        order = self.order\r\n        sigma_schedule_np = sigma_schedule.cpu().numpy()\r\n\r\n        # orig steps (assuming even spacing)\r\n        original_steps = np.linspace(0, 1, len(sigma_schedule_np))\r\n\r\n        # fit polynomial of the given order\r\n        coefficients = np.polyfit(original_steps, sigma_schedule_np, deg=order)\r\n\r\n        # generate new steps where we want to interpolate the data\r\n        target_steps_np = np.linspace(0, 1, target_steps)\r\n\r\n        # eval polynomial at new steps\r\n        interpolated_sigma_np = np.polyval(coefficients, target_steps_np)\r\n\r\n        interpolated_sigma = torch.tensor(interpolated_sigma_np, device=sigma_schedule.device, dtype=sigma_schedule.dtype)\r\n        return interpolated_sigma\r\n\r\n    def interpolate_sigma_schedule_constrained(self, sigma_schedule, target_steps):\r\n        sigma_schedule_np = sigma_schedule.cpu().numpy()\r\n\r\n        # orig steps\r\n        original_steps = np.linspace(0, 1, len(sigma_schedule_np))\r\n\r\n        # target steps for interpolation\r\n        target_steps_np = np.linspace(0, 1, target_steps)\r\n\r\n        # fit cubic spline with fixed start and end values\r\n        cs = CubicSpline(original_steps, sigma_schedule_np, bc_type=((1, 0.0), (1, 0.0)))\r\n\r\n        # eval spline at the target steps\r\n        interpolated_sigma_np = cs(target_steps_np)\r\n\r\n        interpolated_sigma = torch.tensor(interpolated_sigma_np, device=sigma_schedule.device, dtype=sigma_schedule.dtype)\r\n\r\n        return interpolated_sigma\r\n    \r\n    def interpolate_sigma_schedule_exp(self, sigma_schedule, target_steps):\r\n        # transform to log space\r\n        log_sigma_schedule = torch.log(sigma_schedule)\r\n\r\n        # define the original and target step ranges\r\n        original_steps = torch.linspace(0, 1, steps=len(sigma_schedule))\r\n        target_steps = torch.linspace(0, 1, steps=target_steps)\r\n\r\n        # interpolate in log space\r\n        interpolated_log_sigma = F.interpolate(\r\n            log_sigma_schedule.unsqueeze(0).unsqueeze(0),  # Add fake batch and channel dimensions\r\n            size=target_steps.shape[0],\r\n            mode='linear',\r\n            align_corners=True\r\n        ).squeeze()\r\n\r\n        # transform back to exponential space\r\n        interpolated_sigma_schedule = torch.exp(interpolated_log_sigma)\r\n\r\n        return interpolated_sigma_schedule\r\n    \r\n    def interpolate_sigma_schedule_power(self, sigma_schedule, target_steps):\r\n        sigma_schedule_np = sigma_schedule.cpu().numpy()\r\n        
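# the fit below assumes a power law y = a * x**b; taking logs gives log y = log a + b * log x, so a degree-1 polyfit on (log x, log y) recovers the exponent b as the slope and log a as the intercept\r\n        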
original_steps = np.linspace(1, len(sigma_schedule_np), len(sigma_schedule_np))\r\n\r\n        # power regression using a log-log transformation\r\n        log_x = np.log(original_steps)\r\n        log_y = np.log(sigma_schedule_np)\r\n\r\n        # linear regression on log-log data\r\n        coefficients = np.polyfit(log_x, log_y, deg=1)  # degree 1 for linear fit in log-log space\r\n        a = np.exp(coefficients[1])  # intercept is log(a), so exponentiate to recover the scale factor a\r\n        b = coefficients[0]  # slope is the exponent b\r\n\r\n        target_steps_np = np.linspace(1, len(sigma_schedule_np), target_steps)\r\n\r\n        # power law prediction: y = a * x^b\r\n        interpolated_sigma_np = a * (target_steps_np ** b)\r\n\r\n        interpolated_sigma = torch.tensor(interpolated_sigma_np, device=sigma_schedule.device, dtype=sigma_schedule.dtype)\r\n\r\n        return interpolated_sigma\r\n            \r\n    def interpolate_sigma_schedule_linear(self, sigma_schedule, target_steps):\r\n        return F.interpolate(sigma_schedule.unsqueeze(0).unsqueeze(0), target_steps, mode='linear').squeeze(0).squeeze(0)\r\n\r\n    def interpolate_sigma_schedule_nearest(self, sigma_schedule, target_steps):\r\n        return F.interpolate(sigma_schedule.unsqueeze(0).unsqueeze(0), target_steps, mode='nearest').squeeze(0).squeeze(0)    \r\n    \r\n    def interpolate_nearest_neighbor(self, sigma_schedule, target_steps):\r\n        original_steps = torch.linspace(0, 1, steps=len(sigma_schedule))\r\n        target_steps = torch.linspace(0, 1, steps=target_steps)\r\n\r\n        # interpolate original -> target steps using nearest neighbor\r\n        indices = torch.searchsorted(original_steps, target_steps)\r\n        indices = torch.clamp(indices, 0, len(sigma_schedule) - 1)  # clamp indices to valid range\r\n\r\n        # set nearest neighbor via indices\r\n        interpolated_sigma = sigma_schedule[indices]\r\n\r\n        return interpolated_sigma\r\n\r\n\r\n    def main(self, sigmas_0, sigmas_1, mode, order):\r\n\r\n        self.order = order\r\n\r\n        if   mode == \"linear\": \r\n            interpolate = self.interpolate_sigma_schedule_linear\r\n        elif mode == \"nearest\": \r\n            interpolate = self.interpolate_nearest_neighbor\r\n        elif mode == \"polynomial\":\r\n            interpolate = self.interpolate_sigma_schedule_poly\r\n        elif mode == \"exponential\":\r\n            interpolate = self.interpolate_sigma_schedule_exp\r\n        elif mode == \"power\":\r\n            interpolate = self.interpolate_sigma_schedule_power\r\n        elif mode == \"model\":\r\n            with torch.inference_mode(False):\r\n                interpolate = interpolate_sigma_schedule_model\r\n        \r\n        sigmas_0 = interpolate(sigmas_0, len(sigmas_1))\r\n        return (sigmas_0, sigmas_1,)\r\n    \r\nclass sigmas_noise_inversion:\r\n    # flip sigmas for unsampling, and pad both fwd/rev directions with null bytes to disable noise scaling, etc from the model.\r\n    # will cause model to return epsilon prediction instead of calculated denoised latent image.\r\n    def __init__(self):\r\n        pass\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",\"SIGMAS\",)\r\n    RETURN_NAMES = (\"sigmas_fwd\",\"sigmas_rev\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    
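# layout: for input sigmas [s0, ..., sN], main() returns sigmas_fwd = [sN, ..., s0, 0] (flipped, null-padded at the end) and sigmas_rev = [0, s0, ..., sN, 0] (null-padded at both ends)\r\n    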
DESCRIPTION = \"For use with unsampling. Connect sigmas_fwd to the unsampling (first) node, and sigmas_rev to the sampling (second) node.\"\r\n    \r\n    def main(self, sigmas):\r\n        sigmas = sigmas.clone().to(torch.float64)\r\n        \r\n        null = torch.tensor([0.0], device=sigmas.device, dtype=sigmas.dtype)\r\n        sigmas_fwd = torch.flip(sigmas, dims=[0])\r\n        sigmas_fwd = torch.cat([sigmas_fwd, null])\r\n        \r\n        sigmas_rev = torch.cat([null, sigmas])\r\n        sigmas_rev = torch.cat([sigmas_rev, null])\r\n        \r\n        return (sigmas_fwd, sigmas_rev,)\r\n\r\n\r\ndef compute_sigma_next_variance_floor(sigma):\r\n    return (-1 + torch.sqrt(1 + 4 * sigma)) / 2\r\n\r\nclass sigmas_variance_floor:\r\n    def __init__(self):\r\n        pass\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    DESCRIPTION = (\"Process a sigma schedule so that any steps that are too large for variance-locked SDE sampling are replaced with the maximum permissible value.\"\r\n        \"Will be very difficult to approach sigma = 0 due to the nature of the math, as steps become very small much below approximately sigma = 0.15 to 0.2.\")\r\n    \r\n    def main(self, sigmas):\r\n        dtype = sigmas.dtype\r\n        sigmas = sigmas.clone().to(torch.float64)\r\n        for i in range(len(sigmas) - 1):\r\n            sigma_next = (-1 + torch.sqrt(1 + 4 * sigmas[i])) / 2\r\n            \r\n            if sigmas[i+1] < sigma_next and sigmas[i+1] > 0.0:\r\n                print(\"swapped i+1 with sigma_next+0.001: \", sigmas[i+1], sigma_next + 0.001)\r\n                sigmas[i+1] = sigma_next + 0.001\r\n        return (sigmas.to(dtype),)\r\n\r\n\r\nclass sigmas_from_text:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"text\": (\"STRING\", {\"default\": \"\", \"multiline\": True}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    RETURN_NAMES = (\"sigmas\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, text):\r\n        text_list = [float(val) for val in text.replace(\",\", \" \").split()]\r\n        #text_list = [float(val.strip()) for val in text.split(\",\")]\r\n\r\n        sigmas = torch.tensor(text_list).to('cuda').to(torch.float64)\r\n        \r\n        return (sigmas,)\r\n\r\n\r\n\r\nclass sigmas_concatenate:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas_1\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"sigmas_2\": (\"SIGMAS\", {\"forceInput\": True}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas_1, sigmas_2):\r\n        return (torch.cat((sigmas_1, sigmas_2)),)\r\n\r\nclass sigmas_truncate:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"sigmas_until\": 
(\"INT\", {\"default\": 10, \"min\": 0,\"max\": 1000,\"step\": 1}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, sigmas_until):\r\n        return (sigmas[:sigmas_until],)\r\n\r\nclass sigmas_start:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"sigmas_until\": (\"INT\", {\"default\": 10, \"min\": 0,\"max\": 1000,\"step\": 1}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, sigmas_until):\r\n        return (sigmas[sigmas_until:],)\r\n        \r\nclass sigmas_split:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"sigmas_start\": (\"INT\", {\"default\": 0, \"min\": 0,\"max\": 1000,\"step\": 1}),\r\n                \"sigmas_end\": (\"INT\", {\"default\": 1000, \"min\": 0,\"max\": 1000,\"step\": 1}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, sigmas_start, sigmas_end):\r\n        return (sigmas[sigmas_start:sigmas_end],)\r\n\r\n        sigmas_stop_step = sigmas_end - sigmas_start\r\n        return (sigmas[sigmas_start:][:sigmas_stop_step],)\r\n    \r\nclass sigmas_pad:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"value\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000,\"max\": 10000,\"step\": 0.01})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, value):\r\n        return (torch.cat((sigmas, torch.tensor([value], dtype=sigmas.dtype))),)\r\n    \r\nclass sigmas_unpad:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas):\r\n        return (sigmas[:-1],)\r\n\r\nclass sigmas_set_floor:\r\n    def __init__(self):\r\n        pass\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"floor\": (\"FLOAT\", {\"default\": 0.0291675, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"new_floor\": (\"FLOAT\", {\"default\": 0.0291675, \"min\": -10000,\"max\": 10000,\"step\": 0.01})\r\n            }\r\n        }\r\n\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    FUNCTION = \"set_floor\"\r\n\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n\r\n    def set_floor(self, sigmas, floor, new_floor):\r\n        sigmas[sigmas <= floor] = new_floor\r\n        return (sigmas,)    \r\n   
 \r\nclass sigmas_delete_below_floor:\r\n    def __init__(self):\r\n        pass\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"floor\": (\"FLOAT\", {\"default\": 0.0291675, \"min\": -10000,\"max\": 10000,\"step\": 0.01})\r\n            }\r\n        }\r\n\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    FUNCTION = \"delete_below_floor\"\r\n\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n\r\n    def delete_below_floor(self, sigmas, floor):\r\n        return (sigmas[sigmas >= floor],)    \r\n\r\nclass sigmas_delete_value:\r\n    def __init__(self):\r\n        pass\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"value\": (\"FLOAT\", {\"default\": 0.0, \"min\": -1000,\"max\": 1000,\"step\": 0.01})\r\n            }\r\n        }\r\n\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    FUNCTION = \"delete_value\"\r\n\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n\r\n    def delete_value(self, sigmas, value):\r\n        return (sigmas[sigmas != value],) \r\n\r\nclass sigmas_delete_consecutive_duplicates:\r\n    def __init__(self):\r\n        pass\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas_1\": (\"SIGMAS\", {\"forceInput\": True})\r\n            }\r\n        }\r\n\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    FUNCTION = \"delete_consecutive_duplicates\"\r\n\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n\r\n    def delete_consecutive_duplicates(self, sigmas_1):\r\n        mask = sigmas_1[:-1] != sigmas_1[1:]\r\n        mask = torch.cat((mask, torch.tensor([True])))\r\n        return (sigmas_1[mask],) \r\n\r\nclass sigmas_cleanup:\r\n    def __init__(self):\r\n        pass\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"sigmin\": (\"FLOAT\", {\"default\": 0.0291675, \"min\": 0,\"max\": 1000,\"step\": 0.01})\r\n            }\r\n        }\r\n\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    FUNCTION = \"cleanup\"\r\n\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n\r\n    def cleanup(self, sigmas, sigmin):\r\n        sigmas_culled = sigmas[sigmas >= sigmin]\r\n    \r\n        mask = sigmas_culled[:-1] != sigmas_culled[1:]\r\n        mask = torch.cat((mask, torch.tensor([True])))\r\n        filtered_sigmas = sigmas_culled[mask]\r\n        return (torch.cat((filtered_sigmas,torch.tensor([0]))),)\r\n\r\nclass sigmas_mult:\r\n    def __init__(self):\r\n        pass   \r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"multiplier\": (\"FLOAT\", {\"default\": 1, \"min\": -10000,\"max\": 10000,\"step\": 0.01})\r\n            },\r\n            \"optional\": {\r\n                \"sigmas2\": (\"SIGMAS\", {\"forceInput\": False})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, multiplier, sigmas2=None):\r\n        if sigmas2 is not None:\r\n            return (sigmas * sigmas2 * multiplier,)\r\n        else:\r\n            return (sigmas * multiplier,)    \r\n\r\nclass sigmas_modulus:\r\n    
def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"divisor\": (\"FLOAT\", {\"default\": 1, \"min\": -1000,\"max\": 1000,\"step\": 0.01})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, divisor):\r\n        return (sigmas % divisor,)\r\n        \r\nclass sigmas_quotient:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"divisor\": (\"FLOAT\", {\"default\": 1, \"min\": -1000,\"max\": 1000,\"step\": 0.01})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, divisor):\r\n        return (sigmas // divisor,)\r\n\r\nclass sigmas_add:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"addend\": (\"FLOAT\", {\"default\": 1, \"min\": -1000,\"max\": 1000,\"step\": 0.01})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, addend):\r\n        return (sigmas + addend,)\r\n\r\nclass sigmas_power:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"power\": (\"FLOAT\", {\"default\": 1, \"min\": -100,\"max\": 100,\"step\": 0.01})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, power):\r\n        return (sigmas ** power,)\r\n\r\nclass sigmas_abs:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas):\r\n        return (abs(sigmas),)\r\n\r\nclass sigmas2_mult:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas_1\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"sigmas_2\": (\"SIGMAS\", {\"forceInput\": True}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas_1, sigmas_2):\r\n        return (sigmas_1 * sigmas_2,)\r\n\r\nclass sigmas2_add:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas_1\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"sigmas_2\": (\"SIGMAS\", {\"forceInput\": 
True}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas_1, sigmas_2):\r\n        return (sigmas_1 + sigmas_2,)\r\n\r\nclass sigmas_rescale:\r\n    def __init__(self):\r\n        pass\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"start\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"end\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"sigmas\": (\"SIGMAS\", ),\r\n            },\r\n            \"optional\": {\r\n            }\r\n        }\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    RETURN_NAMES = (\"sigmas_rescaled\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    DESCRIPTION = (\"Can be used to set denoise. Results are generally better than with the approach used by KSampler and most nodes with denoise values \"\r\n                   \"(which slice the sigmas schedule according to step count, not the noise level). Will also flip the sigma schedule if the start and end values are reversed.\" \r\n                   )\r\n      \r\n    def main(self, start=0, end=-1, sigmas=None):\r\n\r\n        s_out_1 = ((sigmas - sigmas.min()) * (start - end)) / (sigmas.max() - sigmas.min()) + end     \r\n        \r\n        return (s_out_1,)\r\n\r\n\r\nclass sigmas_math1:\r\n    def __init__(self):\r\n        pass\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"start\": (\"INT\", {\"default\": 0, \"min\": 0,\"max\": 10000,\"step\": 1}),\r\n                \"stop\": (\"INT\", {\"default\": 0, \"min\": 0,\"max\": 10000,\"step\": 1}),\r\n                \"trim\": (\"INT\", {\"default\": 0, \"min\": -10000,\"max\": 0,\"step\": 1}),\r\n                \"x\": (\"FLOAT\", {\"default\": 1, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"y\": (\"FLOAT\", {\"default\": 1, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"z\": (\"FLOAT\", {\"default\": 1, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"f1\": (\"STRING\", {\"default\": \"s\", \"multiline\": True}),\r\n                \"rescale\" : (\"BOOLEAN\", {\"default\": False}),\r\n                \"max1\": (\"FLOAT\", {\"default\": 14.614642, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"min1\": (\"FLOAT\", {\"default\": 0.0291675, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n            },\r\n            \"optional\": {\r\n                \"a\": (\"SIGMAS\", {\"forceInput\": False}),\r\n                \"b\": (\"SIGMAS\", {\"forceInput\": False}),               \r\n                \"c\": (\"SIGMAS\", {\"forceInput\": False}),\r\n            }\r\n        }\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    def main(self, start=0, stop=0, trim=0, a=None, b=None, c=None, x=1.0, y=1.0, z=1.0, f1=\"s\", rescale=False, min1=1.0, max1=1.0):\r\n        if stop == 0:\r\n            t_lens = [len(tensor) for tensor in [a, b, c] if tensor is not None]\r\n            t_len = stop = min(t_lens) if t_lens else 0\r\n        else:\r\n            stop = stop + 1\r\n            t_len = stop - start \r\n            \r\n        stop = stop + trim\r\n        t_len = t_len + trim\r\n        \r\n        t_a = t_b = t_c = None\r\n        if a 
is not None:\r\n            t_a = a[start:stop]\r\n        if b is not None:\r\n            t_b = b[start:stop]\r\n        if c is not None:\r\n            t_c = c[start:stop]               \r\n            \r\n        t_s = torch.arange(0.0, t_len)\r\n    \r\n        t_x = torch.full((t_len,), x)\r\n        t_y = torch.full((t_len,), y)\r\n        t_z = torch.full((t_len,), z)\r\n        eval_namespace = {\"__builtins__\": None, \"round\": builtins.round, \"np\": np, \"a\": t_a, \"b\": t_b, \"c\": t_c, \"x\": t_x, \"y\": t_y, \"z\": t_z, \"s\": t_s, \"torch\": torch}\r\n        eval_namespace.update(np.__dict__)\r\n        \r\n        s_out_1 = eval(f1, eval_namespace)\r\n        \r\n        if rescale == True:\r\n            s_out_1 = ((s_out_1 - min(s_out_1)) * (max1 - min1)) / (max(s_out_1) - min(s_out_1)) + min1     \r\n        \r\n        return (s_out_1,)\r\n\r\nclass sigmas_math3:\r\n    def __init__(self):\r\n        pass\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"start\": (\"INT\", {\"default\": 0, \"min\": 0,\"max\": 10000,\"step\": 1}),\r\n                \"stop\": (\"INT\", {\"default\": 0, \"min\": 0,\"max\": 10000,\"step\": 1}),\r\n                \"trim\": (\"INT\", {\"default\": 0, \"min\": -10000,\"max\": 0,\"step\": 1}),\r\n            },\r\n            \"optional\": {\r\n                \"a\": (\"SIGMAS\", {\"forceInput\": False}),\r\n                \"b\": (\"SIGMAS\", {\"forceInput\": False}),               \r\n                \"c\": (\"SIGMAS\", {\"forceInput\": False}),\r\n                \"x\": (\"FLOAT\", {\"default\": 1, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"y\": (\"FLOAT\", {\"default\": 1, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"z\": (\"FLOAT\", {\"default\": 1, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"f1\": (\"STRING\", {\"default\": \"s\", \"multiline\": True}),\r\n                \"rescale1\" : (\"BOOLEAN\", {\"default\": False}),\r\n                \"max1\": (\"FLOAT\", {\"default\": 14.614642, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"min1\": (\"FLOAT\", {\"default\": 0.0291675, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"f2\": (\"STRING\", {\"default\": \"s\", \"multiline\": True}),\r\n                \"rescale2\" : (\"BOOLEAN\", {\"default\": False}),\r\n                \"max2\": (\"FLOAT\", {\"default\": 14.614642, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"min2\": (\"FLOAT\", {\"default\": 0.0291675, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"f3\": (\"STRING\", {\"default\": \"s\", \"multiline\": True}),\r\n                \"rescale3\" : (\"BOOLEAN\", {\"default\": False}),\r\n                \"max3\": (\"FLOAT\", {\"default\": 14.614642, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"min3\": (\"FLOAT\", {\"default\": 0.0291675, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n            }\r\n        }\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",\"SIGMAS\",\"SIGMAS\")\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    def main(self, start=0, stop=0, trim=0, a=None, b=None, c=None, x=1.0, y=1.0, z=1.0, f1=\"s\", f2=\"s\", f3=\"s\", rescale1=False, rescale2=False, rescale3=False, min1=1.0, max1=1.0, min2=1.0, max2=1.0, min3=1.0, max3=1.0):\r\n        if stop == 0:\r\n            t_lens = [len(tensor) for tensor in [a, b, c] if tensor is not 
None]\r\n            t_len = stop = min(t_lens) if t_lens else 0\r\n        else:\r\n            stop = stop + 1\r\n            t_len = stop - start \r\n            \r\n        stop = stop + trim\r\n        t_len = t_len + trim\r\n        \r\n        t_a = t_b = t_c = None\r\n        if a is not None:\r\n            t_a = a[start:stop]\r\n        if b is not None:\r\n            t_b = b[start:stop]\r\n        if c is not None:\r\n            t_c = c[start:stop]               \r\n            \r\n        t_s = torch.arange(0.0, t_len)\r\n    \r\n        t_x = torch.full((t_len,), x)\r\n        t_y = torch.full((t_len,), y)\r\n        t_z = torch.full((t_len,), z)\r\n        eval_namespace = {\"__builtins__\": None, \"np\": np, \"a\": t_a, \"b\": t_b, \"c\": t_c, \"x\": t_x, \"y\": t_y, \"z\": t_z, \"s\": t_s, \"torch\": torch}\r\n        eval_namespace.update(np.__dict__)\r\n        \r\n        s_out_1 = eval(f1, eval_namespace)\r\n        s_out_2 = eval(f2, eval_namespace)\r\n        s_out_3 = eval(f3, eval_namespace)\r\n        \r\n        if rescale1 == True:\r\n            s_out_1 = ((s_out_1 - min(s_out_1)) * (max1 - min1)) / (max(s_out_1) - min(s_out_1)) + min1\r\n        if rescale2 == True:\r\n            s_out_2 = ((s_out_2 - min(s_out_2)) * (max2 - min2)) / (max(s_out_2) - min(s_out_2)) + min2\r\n        if rescale3 == True:\r\n            s_out_3 = ((s_out_3 - min(s_out_3)) * (max3 - min3)) / (max(s_out_3) - min(s_out_3)) + min3        \r\n        \r\n        return s_out_1, s_out_2, s_out_3\r\n\r\nclass sigmas_iteration_karras:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"steps_up\": (\"INT\", {\"default\": 30, \"min\": 0,\"max\": 10000,\"step\": 1}),\r\n                \"steps_down\": (\"INT\", {\"default\": 30, \"min\": 0,\"max\": 10000,\"step\": 1}),\r\n                \"rho_up\": (\"FLOAT\", {\"default\": 3, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"rho_down\": (\"FLOAT\", {\"default\": 4, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"s_min_start\": (\"FLOAT\", {\"default\":0.0291675, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"s_max\": (\"FLOAT\", {\"default\": 2, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"s_min_end\": (\"FLOAT\", {\"default\": 0.0291675, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n            },\r\n            \"optional\": {\r\n                \"momentums\": (\"SIGMAS\", {\"forceInput\": False}),\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": False}),             \r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",\"SIGMAS\")\r\n    RETURN_NAMES = (\"momentums\",\"sigmas\")\r\n    CATEGORY = \"RES4LYF/schedulers\"\r\n    \r\n    def main(self, steps_up, steps_down, rho_up, rho_down, s_min_start, s_max, s_min_end, sigmas=None, momentums=None):\r\n        s_up = get_sigmas_karras(steps_up, s_min_start, s_max, rho_up)\r\n        s_down = get_sigmas_karras(steps_down, s_min_end, s_max, rho_down) \r\n        s_up = s_up[:-1]\r\n        s_down = s_down[:-1]  \r\n        s_up = torch.flip(s_up, dims=[0])\r\n        sigmas_new = torch.cat((s_up, s_down), dim=0)\r\n        momentums_new = torch.cat((s_up, -1*s_down), dim=0)\r\n        \r\n        if sigmas is not None:\r\n            sigmas = torch.cat([sigmas, sigmas_new])\r\n        else:\r\n            sigmas = 
sigmas_new\r\n            \r\n        if momentums is not None:\r\n            momentums = torch.cat([momentums, momentums_new])\r\n        else:\r\n            momentums = momentums_new\r\n        \r\n        return (momentums,sigmas) \r\n \r\nclass sigmas_iteration_polyexp:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"steps_up\": (\"INT\", {\"default\": 30, \"min\": 0,\"max\": 10000,\"step\": 1}),\r\n                \"steps_down\": (\"INT\", {\"default\": 30, \"min\": 0,\"max\": 10000,\"step\": 1}),\r\n                \"rho_up\": (\"FLOAT\", {\"default\": 0.6, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"rho_down\": (\"FLOAT\", {\"default\": 0.8, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"s_min_start\": (\"FLOAT\", {\"default\":0.0291675, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"s_max\": (\"FLOAT\", {\"default\": 2, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"s_min_end\": (\"FLOAT\", {\"default\": 0.0291675, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n            },\r\n            \"optional\": {\r\n                \"momentums\": (\"SIGMAS\", {\"forceInput\": False}),\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": False}),             \r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",\"SIGMAS\")\r\n    RETURN_NAMES = (\"momentums\",\"sigmas\")\r\n    CATEGORY = \"RES4LYF/schedulers\"\r\n    \r\n    def main(self, steps_up, steps_down, rho_up, rho_down, s_min_start, s_max, s_min_end, sigmas=None, momentums=None):\r\n        s_up = get_sigmas_polyexponential(steps_up, s_min_start, s_max, rho_up)\r\n        s_down = get_sigmas_polyexponential(steps_down, s_min_end, s_max, rho_down) \r\n        s_up = s_up[:-1]\r\n        s_down = s_down[:-1]\r\n        s_up = torch.flip(s_up, dims=[0])\r\n        sigmas_new = torch.cat((s_up, s_down), dim=0)\r\n        momentums_new = torch.cat((s_up, -1*s_down), dim=0)\r\n\r\n        if sigmas is not None:\r\n            sigmas = torch.cat([sigmas, sigmas_new])\r\n        else:\r\n            sigmas = sigmas_new\r\n\r\n        if momentums is not None:\r\n            momentums = torch.cat([momentums, momentums_new])\r\n        else:\r\n            momentums = momentums_new\r\n\r\n        return (momentums,sigmas) \r\n\r\nclass tan_scheduler:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"steps\": (\"INT\", {\"default\": 20, \"min\": 0,\"max\": 100000,\"step\": 1}),\r\n                \"offset\": (\"FLOAT\", {\"default\": 20, \"min\": 0,\"max\": 100000,\"step\": 0.1}),\r\n                \"slope\": (\"FLOAT\", {\"default\": 20, \"min\": -100000,\"max\": 100000,\"step\": 0.1}),\r\n                \"start\": (\"FLOAT\", {\"default\": 20, \"min\": -100000,\"max\": 100000,\"step\": 0.1}),\r\n                \"end\": (\"FLOAT\", {\"default\": 20, \"min\": -100000,\"max\": 100000,\"step\": 0.1}),\r\n                \"sgm\" : (\"BOOLEAN\", {\"default\": False}),\r\n                \"pad\" : (\"BOOLEAN\", {\"default\": False}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/schedulers\"\r\n    \r\n    def main(self, steps, slope, offset, start, end, sgm, pad):\r\n        
smax = ((2/pi)*atan(-slope*(0-offset))+1)/2\r\n        smin = ((2/pi)*atan(-slope*((steps-1)-offset))+1)/2\r\n\r\n        srange = smax-smin\r\n        sscale = start - end\r\n        \r\n        if sgm:\r\n            steps+=1\r\n\r\n        sigmas = [  ( (((2/pi)*atan(-slope*(x-offset))+1)/2) - smin) * (1/srange) * sscale + end    for x in range(steps)]\r\n        \r\n        if sgm:\r\n            sigmas = sigmas[:-1]\r\n        if pad:\r\n            sigmas = torch.tensor(sigmas+[0])\r\n        else:\r\n            sigmas = torch.tensor(sigmas)\r\n        return (sigmas,)\r\n\r\nclass tan_scheduler_2stage:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"steps\": (\"INT\", {\"default\": 40, \"min\": 0,\"max\": 100000,\"step\": 1}),\r\n                \"midpoint\": (\"INT\", {\"default\": 20, \"min\": 0,\"max\": 100000,\"step\": 1}),\r\n                \"pivot_1\": (\"INT\", {\"default\": 10, \"min\": 0,\"max\": 100000,\"step\": 1}),\r\n                \"pivot_2\": (\"INT\", {\"default\": 30, \"min\": 0,\"max\": 100000,\"step\": 1}),\r\n                \"slope_1\": (\"FLOAT\", {\"default\": 1, \"min\": -100000,\"max\": 100000,\"step\": 0.1}),\r\n                \"slope_2\": (\"FLOAT\", {\"default\": 1, \"min\": -100000,\"max\": 100000,\"step\": 0.1}),\r\n                \"start\": (\"FLOAT\", {\"default\": 1.0, \"min\": -100000,\"max\": 100000,\"step\": 0.1}),\r\n                \"middle\": (\"FLOAT\", {\"default\": 0.5, \"min\": -100000,\"max\": 100000,\"step\": 0.1}),\r\n                \"end\": (\"FLOAT\", {\"default\": 0.0, \"min\": -100000,\"max\": 100000,\"step\": 0.1}),\r\n                \"pad\" : (\"BOOLEAN\", {\"default\": False}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    RETURN_NAMES = (\"sigmas\",)\r\n    CATEGORY = \"RES4LYF/schedulers\"\r\n\r\n    def get_tan_sigmas(self, steps, slope, pivot, start, end):\r\n        smax = ((2/pi)*atan(-slope*(0-pivot))+1)/2\r\n        smin = ((2/pi)*atan(-slope*((steps-1)-pivot))+1)/2\r\n\r\n        srange = smax-smin\r\n        sscale = start - end\r\n\r\n        sigmas = [  ( (((2/pi)*atan(-slope*(x-pivot))+1)/2) - smin) * (1/srange) * sscale + end    for x in range(steps)]\r\n        \r\n        return sigmas\r\n\r\n    def main(self, steps, midpoint, start, middle, end, pivot_1, pivot_2, slope_1, slope_2, pad):\r\n        steps += 2\r\n        stage_2_len = steps - midpoint\r\n        stage_1_len = steps - stage_2_len\r\n\r\n        tan_sigmas_1 = self.get_tan_sigmas(stage_1_len, slope_1, pivot_1, start, middle)\r\n        tan_sigmas_2 = self.get_tan_sigmas(stage_2_len, slope_2, pivot_2 - stage_1_len, middle, end)\r\n        \r\n        tan_sigmas_1 = tan_sigmas_1[:-1]\r\n        if pad:\r\n            tan_sigmas_2 = tan_sigmas_2+[0]\r\n\r\n        tan_sigmas = torch.tensor(tan_sigmas_1 + tan_sigmas_2)\r\n\r\n        return (tan_sigmas,)\r\n\r\nclass tan_scheduler_2stage_simple:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"steps\": (\"INT\", {\"default\": 40, \"min\": 0,\"max\": 100000,\"step\": 1}),\r\n                \"pivot_1\": (\"FLOAT\", {\"default\": 1, \"min\": -100000,\"max\": 100000,\"step\": 0.01}),\r\n                \"pivot_2\": (\"FLOAT\", {\"default\": 1, \"min\": -100000,\"max\": 100000,\"step\": 0.01}),\r\n         
       \"slope_1\": (\"FLOAT\", {\"default\": 1, \"min\": -100000,\"max\": 100000,\"step\": 0.01}),\r\n                \"slope_2\": (\"FLOAT\", {\"default\": 1, \"min\": -100000,\"max\": 100000,\"step\": 0.01}),\r\n                \"start\": (\"FLOAT\", {\"default\": 1.0, \"min\": -100000,\"max\": 100000,\"step\": 0.01}),\r\n                \"middle\": (\"FLOAT\", {\"default\": 0.5, \"min\": -100000,\"max\": 100000,\"step\": 0.01}),\r\n                \"end\": (\"FLOAT\", {\"default\": 0.0, \"min\": -100000,\"max\": 100000,\"step\": 0.01}),\r\n                \"pad\" : (\"BOOLEAN\", {\"default\": False}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    RETURN_NAMES = (\"sigmas\",)\r\n    CATEGORY = \"RES4LYF/schedulers\"\r\n\r\n    def get_tan_sigmas(self, steps, slope, pivot, start, end):\r\n        smax = ((2/pi)*atan(-slope*(0-pivot))+1)/2\r\n        smin = ((2/pi)*atan(-slope*((steps-1)-pivot))+1)/2\r\n\r\n        srange = smax-smin\r\n        sscale = start - end\r\n\r\n        sigmas = [  ( (((2/pi)*atan(-slope*(x-pivot))+1)/2) - smin) * (1/srange) * sscale + end    for x in range(steps)]\r\n        \r\n        return sigmas\r\n\r\n    def main(self, steps, start, middle, end, pivot_1, pivot_2, slope_1, slope_2, pad):\r\n        steps += 2\r\n\r\n        midpoint = int( (steps*pivot_1 + steps*pivot_2) / 2 )\r\n        pivot_1 = int(steps * pivot_1)\r\n        pivot_2 = int(steps * pivot_2)\r\n\r\n        slope_1 = slope_1 / (steps/40)\r\n        slope_2 = slope_2 / (steps/40)\r\n\r\n        stage_2_len = steps - midpoint\r\n        stage_1_len = steps - stage_2_len\r\n\r\n        tan_sigmas_1 = self.get_tan_sigmas(stage_1_len, slope_1, pivot_1, start, middle)\r\n        tan_sigmas_2 = self.get_tan_sigmas(stage_2_len, slope_2, pivot_2 - stage_1_len, middle, end)\r\n        \r\n        tan_sigmas_1 = tan_sigmas_1[:-1]\r\n        if pad:\r\n            tan_sigmas_2 = tan_sigmas_2+[0]\r\n\r\n        tan_sigmas = torch.tensor(tan_sigmas_1 + tan_sigmas_2)\r\n\r\n        return (tan_sigmas,)\r\n    \r\nclass linear_quadratic_advanced:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"model\": (\"MODEL\",),\r\n                \"steps\": (\"INT\", {\"default\": 40, \"min\": 0,\"max\": 100000,\"step\": 1}),\r\n                \"denoise\": (\"FLOAT\", {\"default\": 1.0, \"min\": -100000,\"max\": 100000,\"step\": 0.01}),\r\n                \"inflection_percent\": (\"FLOAT\", {\"default\": 0.5, \"min\": 0,\"max\": 1,\"step\": 0.01}),\r\n            },\r\n            # \"optional\": {\r\n            # }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    RETURN_NAMES = (\"sigmas\",)\r\n    CATEGORY = \"RES4LYF/schedulers\"\r\n\r\n    def main(self, steps, denoise, inflection_percent, model=None):\r\n        sigmas = get_sigmas(model, \"linear_quadratic\", steps, denoise, inflection_percent)\r\n\r\n        return (sigmas, )\r\n\r\n\r\nclass constant_scheduler:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"steps\": (\"INT\", {\"default\": 40, \"min\": 0,\"max\": 100000,\"step\": 1}),\r\n                \"value_start\": (\"FLOAT\", {\"default\": 1.0, \"min\": -100000,\"max\": 100000,\"step\": 0.01}),\r\n                \"value_end\": (\"FLOAT\", {\"default\": 0.0, 
\"min\": -100000,\"max\": 100000,\"step\": 0.01}),\r\n                \"cutoff_percent\": (\"FLOAT\", {\"default\": 1.0, \"min\": 0,\"max\": 1,\"step\": 0.01}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    RETURN_NAMES = (\"sigmas\",)\r\n    CATEGORY = \"RES4LYF/schedulers\"\r\n\r\n    def main(self, steps, value_start, value_end, cutoff_percent):\r\n        sigmas = torch.ones(steps + 1) * value_start\r\n        cutoff_step = int(round(steps * cutoff_percent)) + 1\r\n        sigmas = torch.concat((sigmas[:cutoff_step], torch.ones(steps + 1 - cutoff_step) * value_end), dim=0)\r\n\r\n        return (sigmas,)\r\n\r\n\r\ndef get_sigmas_simple_exponential(model, steps):\r\n    s = model.model_sampling\r\n    sigs = []\r\n    ss = len(s.sigmas) / steps\r\n    for x in range(steps):\r\n        sigs += [float(s.sigmas[-(1 + int(x * ss))])]\r\n    sigs += [0.0]\r\n    sigs = torch.FloatTensor(sigs)\r\n    exp = torch.exp(torch.log(torch.linspace(1, 0, steps + 1)))\r\n    return sigs * exp\r\n\r\nextra_schedulers = {\r\n    \"simple_exponential\": get_sigmas_simple_exponential\r\n}\r\n\r\n\r\n\r\n\r\ndef get_sigmas(model, scheduler, steps, denoise, lq_inflection_percent=0.5): #adapted from comfyui\r\n    total_steps = steps\r\n    if denoise < 1.0:\r\n        if denoise <= 0.0:\r\n            return (torch.FloatTensor([]),)\r\n        total_steps = int(steps/denoise)\r\n\r\n    #model_sampling = model.get_model_object(\"model_sampling\")\r\n    if hasattr(model, \"model\"):\r\n        model_sampling = model.model.model_sampling\r\n    elif hasattr(model, \"inner_model\"):\r\n        model_sampling = model.inner_model.inner_model.model_sampling\r\n    if scheduler == \"beta57\":\r\n        sigmas = comfy.samplers.beta_scheduler(model_sampling, total_steps, alpha=0.5, beta=0.7)\r\n    elif scheduler == \"linear_quadratic\":\r\n        linear_steps = int(total_steps * lq_inflection_percent)\r\n        sigmas = comfy.samplers.linear_quadratic_schedule(model_sampling, total_steps, threshold_noise=0.025, linear_steps=linear_steps)\r\n    else:\r\n        sigmas = comfy.samplers.calculate_sigmas(model_sampling, scheduler, total_steps).cpu()\r\n    \r\n    sigmas = sigmas[-(steps + 1):]\r\n    return sigmas\r\n\r\n\r\n\r\n"
  },
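  {
    "path": "examples/sigmas_rescale_example.py",
    "content": "# Hypothetical standalone sketch (not part of RES4LYF, added for illustration):\n# reproduces the affine remap that sigmas_rescale applies, assuming a sigma\n# schedule is any 1-D float tensor.\nimport torch\n\ndef rescale(sigmas: torch.Tensor, start: float, end: float) -> torch.Tensor:\n    # Map the schedule so sigmas.max() -> start and sigmas.min() -> end;\n    # reversing start/end flips the schedule, as the node's DESCRIPTION notes.\n    return ((sigmas - sigmas.min()) * (start - end)) / (sigmas.max() - sigmas.min()) + end\n\nif __name__ == \"__main__\":\n    s = torch.linspace(14.614642, 0.0291675, 21)\n    print(rescale(s, 0.5, 0.0))  # denoise-like: same step count, lower noise ceiling\n"
  },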
  {
    "path": "legacy/tiling.py",
    "content": "import torch\nimport itertools\nimport numpy as np\n\n# tiled sampler code adapted from https://github.com/BlenderNeko/ComfyUI_TiledKSampler \n# for use with https://github.com/ClownsharkBatwing/UltraCascade\n\ndef grouper(n, iterable):\n    it = iter(iterable)\n    while True:\n        chunk = list(itertools.islice(it, n))\n        if not chunk:\n            return\n        yield chunk\n\ndef create_batches(n, iterable):\n    groups = itertools.groupby(iterable, key= lambda x: (x[1], x[3]))\n    for _, x in groups:\n        for y in grouper(n, x):\n            yield y\n\n\ndef get_slice(tensor, h, h_len, w, w_len):\n    t = tensor.narrow(-2, h, h_len)\n    t = t.narrow(-1, w, w_len)\n    return t\n\ndef set_slice(tensor1,tensor2,  h, h_len, w, w_len, mask=None):\n    if mask is not None:\n        tensor1[:,:,h:h+h_len,w:w+w_len] = tensor1[:,:,h:h+h_len,w:w+w_len] * (1 - mask) +  tensor2 * mask\n    else:\n        tensor1[:,:,h:h+h_len,w:w+w_len] = tensor2\n\ndef get_tiles_and_masks_simple(steps, latent_shape, tile_height, tile_width, compression=4):\n    latent_size_h = latent_shape[-2]\n    latent_size_w = latent_shape[-1]\n    tile_size_h = int(tile_height // compression)   #CHANGED FROM 8\n    tile_size_w = int(tile_width // compression)   #CHANGED FROM 8\n\n    h = np.arange(0,latent_size_h, tile_size_h)\n    w = np.arange(0,latent_size_w, tile_size_w)\n\n    def create_tile(hs, ws, i, j):\n        h = int(hs[i])\n        w = int(ws[j])\n        h_len = min(tile_size_h, latent_size_h - h)\n        w_len = min(tile_size_w, latent_size_w - w)\n        return (h, h_len, w, w_len, steps, None)\n\n    passes = [\n        [[create_tile(h, w, i, j) for i in range(len(h)) for j in range(len(w))]],\n    ]\n    return passes\n\ndef get_tiles_and_masks_padded(steps, latent_shape, tile_height, tile_width, compression=4):\n    batch_size = latent_shape[0]\n    latent_size_h = latent_shape[-2]\n    latent_size_w = latent_shape[-1]\n\n    tile_size_h = int(tile_height // compression)   #CHANGED FROM 8\n    tile_size_w = int(tile_width // compression)    #CHANGED FROM 8\n    #if compression > 1:\n    tile_size_h = int((tile_size_h // 4) * 4)       #MIGHT BE A PROBLEM WITH STAGE C?\n    tile_size_w = int((tile_size_w // 4) * 4)\n\n    #masks\n    mask_h = [0,tile_size_h // 4, tile_size_h - tile_size_h // 4, tile_size_h]\n    mask_w = [0,tile_size_w // 4, tile_size_w - tile_size_w // 4, tile_size_w]\n    masks = [[] for _ in range(3)]\n    for i in range(3):\n        for j in range(3):\n            mask = torch.zeros((batch_size,1,tile_size_h, tile_size_w), dtype=torch.float32, device='cpu')\n            mask[:,:, mask_h[i]:mask_h[i+1], \n                      mask_w[j]:mask_w[j+1]] = 1.0\n            masks[i].append(mask)\n    \n    def create_mask(h_ind, w_ind, h_ind_max, w_ind_max, mask_h, mask_w, h_len, w_len):\n        mask = masks[1][1]\n        if not (h_ind == 0 or h_ind == h_ind_max or w_ind == 0 or w_ind == w_ind_max):\n            return get_slice(mask, 0, h_len, 0, w_len)\n        mask = mask.clone()\n        if h_ind == 0 and mask_h:\n            mask += masks[0][1]\n        if h_ind == h_ind_max and mask_h:\n            mask += masks[2][1]\n        if w_ind == 0 and mask_w:\n            mask += masks[1][0]\n        if w_ind == w_ind_max and mask_w:\n            mask += masks[1][2]\n        if h_ind == 0 and w_ind == 0 and mask_h and mask_w:\n            mask += masks[0][0]\n        if h_ind == 0 and w_ind == w_ind_max and mask_h and mask_w:\n            mask += 
masks[0][2]\n        if h_ind == h_ind_max and w_ind == 0 and mask_h and mask_w:\n            mask += masks[2][0]\n        if h_ind == h_ind_max and w_ind == w_ind_max and mask_h and mask_w:\n            mask += masks[2][2]\n        return get_slice(mask, 0, h_len, 0, w_len)\n\n    h = np.arange(0,latent_size_h, tile_size_h)\n    h_shift = np.arange(tile_size_h // 2, latent_size_h - tile_size_h // 2, tile_size_h)\n    w = np.arange(0,latent_size_w, tile_size_w)\n    w_shift = np.arange(tile_size_w // 2, latent_size_w - tile_size_w // 2, tile_size_w)\n    \n\n    def create_tile(hs, ws, mask_h, mask_w, i, j):\n        h = int(hs[i])\n        w = int(ws[j])\n        h_len = min(tile_size_h, latent_size_h - h)\n        w_len = min(tile_size_w, latent_size_w - w)\n        mask = create_mask(i,j,len(hs)-1, len(ws)-1, mask_h, mask_w, h_len, w_len)\n        return (h, h_len, w, w_len, steps, mask)\n    \n    passes = [\n        [[create_tile(h,       w,       True,  True,  i, j) for i in range(len(h))       for j in range(len(w))]],\n        [[create_tile(h_shift, w,       False, True,  i, j) for i in range(len(h_shift)) for j in range(len(w))]],\n        [[create_tile(h,       w_shift, True,  False, i, j) for i in range(len(h))       for j in range(len(w_shift))]],\n        [[create_tile(h_shift, w_shift, False, False, i,j) for i in range(len(h_shift)) for j in range(len(w_shift))]],\n    ]\n    \n    return passes\n\ndef mask_at_boundary(h, h_len, w, w_len, tile_size_h, tile_size_w, latent_size_h, latent_size_w, mask, device='cpu', compression=4):\n    tile_size_h = int(tile_size_h // compression)   #CHANGED FROM 8\n    tile_size_w = int(tile_size_w // compression)   #CHANGED FROM 8\n    \n    if (h_len == tile_size_h or h_len == latent_size_h) and (w_len == tile_size_w or w_len == latent_size_w):\n        return h, h_len, w, w_len, mask\n    h_offset = min(0, latent_size_h - (h + tile_size_h))\n    w_offset = min(0, latent_size_w - (w + tile_size_w))\n    new_mask = torch.zeros((1,1,tile_size_h, tile_size_w), dtype=torch.float32, device=device)\n    new_mask[:,:,-h_offset:h_len if h_offset == 0 else tile_size_h, -w_offset:w_len if w_offset == 0 else tile_size_w] =  1.0 if mask is None else mask\n    return h + h_offset, tile_size_h, w + w_offset, tile_size_w, new_mask\n\ndef get_tiles_and_masks_rgrid(steps, latent_shape, tile_height, tile_width, generator, compression=4):\n\n    def calc_coords(latent_size, tile_size, jitter):\n        tile_coords = int((latent_size + jitter - 1) // tile_size + 1)\n        tile_coords = [np.clip(tile_size * c - jitter, 0, latent_size) for c in range(tile_coords + 1)]\n        tile_coords = [(c1, c2-c1) for c1, c2 in zip(tile_coords, tile_coords[1:])]\n        return tile_coords\n    \n    #calc stuff\n    batch_size = latent_shape[0]\n    latent_size_h = latent_shape[-2]\n    latent_size_w = latent_shape[-1]\n    tile_size_h = int(tile_height // compression)   #CHANGED FROM 8\n    tile_size_w = int(tile_width // compression)   #CHANGED FROM 8\n\n    tiles_all = []\n\n    for s in range(steps):\n        rands = torch.rand((2,), dtype=torch.float32, generator=generator, device='cpu').numpy()\n\n        jitter_w1 = int(rands[0] * tile_size_w)\n        jitter_w2 = int(((rands[0] + .5) % 1.0) * tile_size_w)\n        jitter_h1 = int(rands[1] * tile_size_h)\n        jitter_h2 = int(((rands[1] + .5) % 1.0) * tile_size_h)\n\n        #calc number of tiles\n        tiles_h = [\n            calc_coords(latent_size_h, tile_size_h, jitter_h1),\n            
calc_coords(latent_size_h, tile_size_h, jitter_h2)\n        ]\n        tiles_w = [\n            calc_coords(latent_size_w, tile_size_w, jitter_w1),\n            calc_coords(latent_size_w, tile_size_w, jitter_w2)\n        ]\n\n        tiles = []\n        if s % 2 == 0:\n            for i, h in enumerate(tiles_h[0]):\n                for w in tiles_w[i%2]:\n                    tiles.append((int(h[0]), int(h[1]), int(w[0]), int(w[1]), 1, None))\n        else:\n            for i, w in enumerate(tiles_w[0]):\n                for h in tiles_h[i%2]:\n                    tiles.append((int(h[0]), int(h[1]), int(w[0]), int(w[1]), 1, None))\n        tiles_all.append(tiles)\n    return [tiles_all]\n"
  },
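  {
    "path": "examples/tiling_example.py",
    "content": "# Hypothetical sketch (not shipped with the repo): exercises the get_slice /\n# set_slice pair from legacy/tiling.py on a dummy latent, assuming the repo\n# root is on sys.path.\nimport torch\n\nfrom legacy.tiling import get_slice, set_slice\n\nlatent = torch.zeros(1, 4, 32, 32)\ntile = get_slice(latent, 8, 16, 8, 16)        # a view via narrow(), not a copy\nprocessed = torch.ones_like(tile)\nmask = torch.full((1, 1, 16, 16), 0.5)\n# Blended write-back: old * (1 - mask) + new * mask\nset_slice(latent, processed, 8, 16, 8, 16, mask=mask)\nprint(latent[0, 0, 8, 8])  # tensor(0.5000)\n"
  },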
  {
    "path": "lightricks/model.py",
    "content": "import torch\nfrom torch import nn\nimport torch.nn.functional as F\n\nimport comfy.ldm.modules.attention\nimport comfy.ldm.common_dit\nfrom einops import rearrange\nimport math\nfrom typing import Dict, Optional, Tuple, List\n\nfrom .symmetric_patchifier import SymmetricPatchifier, latent_to_pixel_coords\nfrom ..helper  import ExtraOptions\n\n\ndef get_timestep_embedding(\n    timesteps: torch.Tensor,\n    embedding_dim: int,\n    flip_sin_to_cos: bool = False,\n    downscale_freq_shift: float = 1,\n    scale: float = 1,\n    max_period: int = 10000,\n):\n    \"\"\"\n    This matches the implementation in Denoising Diffusion Probabilistic Models: Create sinusoidal timestep embeddings.\n\n    Args\n        timesteps (torch.Tensor):\n            a 1-D Tensor of N indices, one per batch element. These may be fractional.\n        embedding_dim (int):\n            the dimension of the output.\n        flip_sin_to_cos (bool):\n            Whether the embedding order should be `cos, sin` (if True) or `sin, cos` (if False)\n        downscale_freq_shift (float):\n            Controls the delta between frequencies between dimensions\n        scale (float):\n            Scaling factor applied to the embeddings.\n        max_period (int):\n            Controls the maximum frequency of the embeddings\n    Returns\n        torch.Tensor: an [N x dim] Tensor of positional embeddings.\n    \"\"\"\n    assert len(timesteps.shape) == 1, \"Timesteps should be a 1d-array\"\n\n    half_dim = embedding_dim // 2\n    exponent = -math.log(max_period) * torch.arange(\n        start=0, end=half_dim, dtype=torch.float32, device=timesteps.device\n    )\n    exponent = exponent / (half_dim - downscale_freq_shift)\n\n    emb = torch.exp(exponent)\n    emb = timesteps[:, None].float() * emb[None, :]\n\n    # scale embeddings\n    emb = scale * emb\n\n    # concat sine and cosine embeddings\n    emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=-1)\n\n    # flip sine and cosine embeddings\n    if flip_sin_to_cos:\n        emb = torch.cat([emb[:, half_dim:], emb[:, :half_dim]], dim=-1)\n\n    # zero pad\n    if embedding_dim % 2 == 1:\n        emb = torch.nn.functional.pad(emb, (0, 1, 0, 0))\n    return emb\n\n\nclass TimestepEmbedding(nn.Module):\n    def __init__(\n        self,\n        in_channels: int,\n        time_embed_dim: int,\n        act_fn: str = \"silu\",\n        out_dim: int = None,\n        post_act_fn: Optional[str] = None,\n        cond_proj_dim=None,\n        sample_proj_bias=True,\n        dtype=None, device=None, operations=None,\n    ):\n        super().__init__()\n\n        self.linear_1 = operations.Linear(in_channels, time_embed_dim, sample_proj_bias, dtype=dtype, device=device)\n\n        if cond_proj_dim is not None:\n            self.cond_proj = operations.Linear(cond_proj_dim, in_channels, bias=False, dtype=dtype, device=device)\n        else:\n            self.cond_proj = None\n\n        self.act = nn.SiLU()\n\n        if out_dim is not None:\n            time_embed_dim_out = out_dim\n        else:\n            time_embed_dim_out = time_embed_dim\n        self.linear_2 = operations.Linear(time_embed_dim, time_embed_dim_out, sample_proj_bias, dtype=dtype, device=device)\n\n        if post_act_fn is None:\n            self.post_act = None\n        # else:\n        #     self.post_act = get_activation(post_act_fn)\n\n    def forward(self, sample, condition=None):\n        if condition is not None:\n            sample = sample + self.cond_proj(condition)\n        sample = 
self.linear_1(sample)\n\n        if self.act is not None:\n            sample = self.act(sample)\n\n        sample = self.linear_2(sample)\n\n        if self.post_act is not None:\n            sample = self.post_act(sample)\n        return sample\n\n\nclass Timesteps(nn.Module):\n    def __init__(self, num_channels: int, flip_sin_to_cos: bool, downscale_freq_shift: float, scale: int = 1):\n        super().__init__()\n        self.num_channels = num_channels\n        self.flip_sin_to_cos = flip_sin_to_cos\n        self.downscale_freq_shift = downscale_freq_shift\n        self.scale = scale\n\n    def forward(self, timesteps):\n        t_emb = get_timestep_embedding(\n            timesteps,\n            self.num_channels,\n            flip_sin_to_cos=self.flip_sin_to_cos,\n            downscale_freq_shift=self.downscale_freq_shift,\n            scale=self.scale,\n        )\n        return t_emb\n\n\nclass PixArtAlphaCombinedTimestepSizeEmbeddings(nn.Module):\n    \"\"\"\n    For PixArt-Alpha.\n\n    Reference:\n    https://github.com/PixArt-alpha/PixArt-alpha/blob/0f55e922376d8b797edd44d25d0e7464b260dcab/diffusion/model/nets/PixArtMS.py#L164C9-L168C29\n    \"\"\"\n\n    def __init__(self, embedding_dim, size_emb_dim, use_additional_conditions: bool = False, dtype=None, device=None, operations=None):\n        super().__init__()\n\n        self.outdim = size_emb_dim\n        self.time_proj = Timesteps(num_channels=256, flip_sin_to_cos=True, downscale_freq_shift=0)\n        self.timestep_embedder = TimestepEmbedding(in_channels=256, time_embed_dim=embedding_dim, dtype=dtype, device=device, operations=operations)\n\n    def forward(self, timestep, resolution, aspect_ratio, batch_size, hidden_dtype):\n        timesteps_proj = self.time_proj(timestep)\n        timesteps_emb = self.timestep_embedder(timesteps_proj.to(dtype=hidden_dtype))  # (N, D)\n        return timesteps_emb\n\n\nclass AdaLayerNormSingle(nn.Module):\n    r\"\"\"\n    Norm layer adaptive layer norm single (adaLN-single).\n\n    As proposed in PixArt-Alpha (see: https://arxiv.org/abs/2310.00426; Section 2.3).\n\n    Parameters:\n        embedding_dim (`int`): The size of each embedding vector.\n        use_additional_conditions (`bool`): To use additional conditions for normalization or not.\n    \"\"\"\n\n    def __init__(self, embedding_dim: int, use_additional_conditions: bool = False, dtype=None, device=None, operations=None):\n        super().__init__()\n\n        self.emb = PixArtAlphaCombinedTimestepSizeEmbeddings(\n            embedding_dim, size_emb_dim=embedding_dim // 3, use_additional_conditions=use_additional_conditions, dtype=dtype, device=device, operations=operations\n        )\n\n        self.silu = nn.SiLU()\n        self.linear = operations.Linear(embedding_dim, 6 * embedding_dim, bias=True, dtype=dtype, device=device)\n\n    def forward(\n        self,\n        timestep: torch.Tensor,\n        added_cond_kwargs: Optional[Dict[str, torch.Tensor]] = None,\n        batch_size: Optional[int] = None,\n        hidden_dtype: Optional[torch.dtype] = None,\n    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:\n        # No modulation happening here.\n        added_cond_kwargs = added_cond_kwargs or {\"resolution\": None, \"aspect_ratio\": None}\n        embedded_timestep = self.emb(timestep, **added_cond_kwargs, batch_size=batch_size, hidden_dtype=hidden_dtype)\n        return self.linear(self.silu(embedded_timestep)), embedded_timestep\n\nclass PixArtAlphaTextProjection(nn.Module):\n    
\"\"\"\n    Projects caption embeddings. Also handles dropout for classifier-free guidance.\n\n    Adapted from https://github.com/PixArt-alpha/PixArt-alpha/blob/master/diffusion/model/nets/PixArt_blocks.py\n    \"\"\"\n\n    def __init__(self, in_features, hidden_size, out_features=None, act_fn=\"gelu_tanh\", dtype=None, device=None, operations=None):\n        super().__init__()\n        if out_features is None:\n            out_features = hidden_size\n        self.linear_1 = operations.Linear(in_features=in_features, out_features=hidden_size, bias=True, dtype=dtype, device=device)\n        if act_fn == \"gelu_tanh\":\n            self.act_1 = nn.GELU(approximate=\"tanh\")\n        elif act_fn == \"silu\":\n            self.act_1 = nn.SiLU()\n        else:\n            raise ValueError(f\"Unknown activation function: {act_fn}\")\n        self.linear_2 = operations.Linear(in_features=hidden_size, out_features=out_features, bias=True, dtype=dtype, device=device)\n\n    def forward(self, caption):\n        hidden_states = self.linear_1(caption)\n        hidden_states = self.act_1(hidden_states)\n        hidden_states = self.linear_2(hidden_states)\n        return hidden_states\n\n\nclass GELU_approx(nn.Module):\n    def __init__(self, dim_in, dim_out, dtype=None, device=None, operations=None):\n        super().__init__()\n        self.proj = operations.Linear(dim_in, dim_out, dtype=dtype, device=device)\n\n    def forward(self, x):\n        return torch.nn.functional.gelu(self.proj(x), approximate=\"tanh\")\n\n\nclass FeedForward(nn.Module):\n    def __init__(self, dim, dim_out, mult=4, glu=False, dropout=0., dtype=None, device=None, operations=None):\n        super().__init__()\n        inner_dim = int(dim * mult)\n        project_in = GELU_approx(dim, inner_dim, dtype=dtype, device=device, operations=operations)\n\n        self.net = nn.Sequential(\n            project_in,\n            nn.Dropout(dropout),\n            operations.Linear(inner_dim, dim_out, dtype=dtype, device=device)\n        )\n\n    def forward(self, x):\n        return self.net(x)\n\n\ndef apply_rotary_emb(input_tensor, freqs_cis): #TODO: remove duplicate funcs and pick the best/fastest one\n    cos_freqs = freqs_cis[0]\n    sin_freqs = freqs_cis[1]\n\n    t_dup = rearrange(input_tensor, \"... (d r) -> ... d r\", r=2)\n    t1, t2 = t_dup.unbind(dim=-1)\n    t_dup = torch.stack((-t2, t1), dim=-1)\n    input_tensor_rot = rearrange(t_dup, \"... d r -> ... 
(d r)\")\n\n    out = input_tensor * cos_freqs + input_tensor_rot * sin_freqs\n\n    return out\n\n\nclass CrossAttention(nn.Module):\n    def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0., attn_precision=None, dtype=None, device=None, operations=None):\n        super().__init__()\n        inner_dim = dim_head * heads\n        context_dim = query_dim if context_dim is None else context_dim\n        self.attn_precision = attn_precision\n\n        self.heads = heads\n        self.dim_head = dim_head\n\n        self.q_norm = operations.RMSNorm(inner_dim, dtype=dtype, device=device)\n        self.k_norm = operations.RMSNorm(inner_dim, dtype=dtype, device=device)\n\n        self.to_q = operations.Linear(query_dim, inner_dim, bias=True, dtype=dtype, device=device)\n        self.to_k = operations.Linear(context_dim, inner_dim, bias=True, dtype=dtype, device=device)\n        self.to_v = operations.Linear(context_dim, inner_dim, bias=True, dtype=dtype, device=device)\n\n        self.to_out = nn.Sequential(operations.Linear(inner_dim, query_dim, dtype=dtype, device=device), nn.Dropout(dropout))\n\n    def forward(self, x, context=None, mask=None, pe=None):\n        q = self.to_q(x)\n        context = x if context is None else context\n        k = self.to_k(context)\n        v = self.to_v(context)\n\n        q = self.q_norm(q)\n        k = self.k_norm(k)\n\n        if pe is not None:\n            q = apply_rotary_emb(q, pe)\n            k = apply_rotary_emb(k, pe)\n\n        if mask is None:\n            out = comfy.ldm.modules.attention.optimized_attention(q, k, v, self.heads, attn_precision=self.attn_precision)\n        else:\n            out = comfy.ldm.modules.attention.optimized_attention_masked(q, k, v, self.heads, mask, attn_precision=self.attn_precision)\n        return self.to_out(out)\n\n\nclass BasicTransformerBlock(nn.Module):\n    def __init__(self, dim, n_heads, d_head, context_dim=None, attn_precision=None, dtype=None, device=None, operations=None):\n        super().__init__()\n\n        self.attn_precision = attn_precision\n        self.attn1 = CrossAttention(query_dim=dim, heads=n_heads, dim_head=d_head, context_dim=None, attn_precision=self.attn_precision, dtype=dtype, device=device, operations=operations)\n        self.ff = FeedForward(dim, dim_out=dim, glu=True, dtype=dtype, device=device, operations=operations)\n\n        self.attn2 = CrossAttention(query_dim=dim, context_dim=context_dim, heads=n_heads, dim_head=d_head, attn_precision=self.attn_precision, dtype=dtype, device=device, operations=operations)\n\n        self.scale_shift_table = nn.Parameter(torch.empty(6, dim, device=device, dtype=dtype))\n\n    def forward(self, x, context=None, attention_mask=None, timestep=None, pe=None):\n        shift_msa, scale_msa, gate_msa, shift_mlp, scale_mlp, gate_mlp = (self.scale_shift_table[None, None].to(device=x.device, dtype=x.dtype) + timestep.reshape(x.shape[0], timestep.shape[1], self.scale_shift_table.shape[0], -1)).unbind(dim=2)\n\n        x += self.attn1(comfy.ldm.common_dit.rms_norm(x) * (1 + scale_msa) + shift_msa, pe=pe) * gate_msa\n\n        x += self.attn2(x, context=context, mask=attention_mask)\n\n        y = comfy.ldm.common_dit.rms_norm(x) * (1 + scale_mlp) + shift_mlp\n        x += self.ff(y) * gate_mlp\n\n        return x\n\ndef get_fractional_positions(indices_grid, max_pos):\n    fractional_positions = torch.stack(\n        [\n            indices_grid[:, i] / max_pos[i]\n            for i in range(3)\n        ],\n        dim=-1,\n    
)\n    return fractional_positions\n\n\ndef precompute_freqs_cis(indices_grid, dim, out_dtype, theta=10000.0, max_pos=[20, 2048, 2048]):\n    dtype = torch.float32 #self.dtype\n\n    fractional_positions = get_fractional_positions(indices_grid, max_pos)\n\n    start = 1\n    end = theta\n    device = fractional_positions.device\n\n    indices = theta ** (\n        torch.linspace(\n            math.log(start, theta),\n            math.log(end, theta),\n            dim // 6,\n            device=device,\n            dtype=dtype,\n        )\n    )\n    indices = indices.to(dtype=dtype)\n\n    indices = indices * math.pi / 2\n\n    freqs = (\n        (indices * (fractional_positions.unsqueeze(-1) * 2 - 1))\n        .transpose(-1, -2)\n        .flatten(2)\n    )\n\n    cos_freq = freqs.cos().repeat_interleave(2, dim=-1)\n    sin_freq = freqs.sin().repeat_interleave(2, dim=-1)\n    if dim % 6 != 0:\n        cos_padding = torch.ones_like(cos_freq[:, :, : dim % 6])\n        sin_padding = torch.zeros_like(cos_freq[:, :, : dim % 6])\n        cos_freq = torch.cat([cos_padding, cos_freq], dim=-1)\n        sin_freq = torch.cat([sin_padding, sin_freq], dim=-1)\n    return cos_freq.to(out_dtype), sin_freq.to(out_dtype)\n\n\nclass ReLTXVModel(torch.nn.Module):\n    def __init__(self,\n                 in_channels=128,\n                 cross_attention_dim=2048,\n                 attention_head_dim=64,\n                 num_attention_heads=32,\n\n                 caption_channels=4096,\n                 num_layers=28,\n\n\n                 positional_embedding_theta=10000.0,\n                 positional_embedding_max_pos=[20, 2048, 2048],\n                 causal_temporal_positioning=False,\n                 vae_scale_factors=(8, 32, 32),\n                 dtype=None, device=None, operations=None, **kwargs):\n        super().__init__()\n        self.generator = None\n        self.vae_scale_factors = vae_scale_factors\n        self.dtype = dtype\n        self.out_channels = in_channels\n        self.inner_dim = num_attention_heads * attention_head_dim\n        self.causal_temporal_positioning = causal_temporal_positioning\n\n        self.patchify_proj = operations.Linear(in_channels, self.inner_dim, bias=True, dtype=dtype, device=device)\n\n        self.adaln_single = AdaLayerNormSingle(\n            self.inner_dim, use_additional_conditions=False, dtype=dtype, device=device, operations=operations\n        )\n\n        # self.adaln_single.linear = operations.Linear(self.inner_dim, 4 * self.inner_dim, bias=True, dtype=dtype, device=device)\n\n        self.caption_projection = PixArtAlphaTextProjection(\n            in_features=caption_channels, hidden_size=self.inner_dim, dtype=dtype, device=device, operations=operations\n        )\n\n        self.transformer_blocks = nn.ModuleList(\n            [\n                BasicTransformerBlock(\n                    self.inner_dim,\n                    num_attention_heads,\n                    attention_head_dim,\n                    context_dim=cross_attention_dim,\n                    # attn_precision=attn_precision,\n                    dtype=dtype, device=device, operations=operations\n                )\n                for d in range(num_layers)\n            ]\n        )\n\n        self.scale_shift_table = nn.Parameter(torch.empty(2, self.inner_dim, dtype=dtype, device=device))\n        self.norm_out = operations.LayerNorm(self.inner_dim, elementwise_affine=False, eps=1e-6, dtype=dtype, device=device)\n        self.proj_out = operations.Linear(self.inner_dim, 
self.out_channels, dtype=dtype, device=device)\n\n        self.patchifier = SymmetricPatchifier(1)\n\n    def forward(self, x, timestep, context, attention_mask, frame_rate=25, transformer_options={}, keyframe_idxs=None, **kwargs):\n        patches_replace = transformer_options.get(\"patches_replace\", {})\n        \n        SIGMA = timestep[0].unsqueeze(0) #/ 1000\n        EO = transformer_options.get(\"ExtraOptions\", ExtraOptions(\"\"))\n        \n        y0_style_pos        = transformer_options.get(\"y0_style_pos\")\n        y0_style_neg        = transformer_options.get(\"y0_style_neg\")\n\n        y0_style_pos_weight    = transformer_options.get(\"y0_style_pos_weight\", 0.0)\n        y0_style_pos_synweight = transformer_options.get(\"y0_style_pos_synweight\", 0.0)\n        y0_style_pos_synweight *= y0_style_pos_weight\n\n        y0_style_neg_weight    = transformer_options.get(\"y0_style_neg_weight\", 0.0)\n        y0_style_neg_synweight = transformer_options.get(\"y0_style_neg_synweight\", 0.0)\n        y0_style_neg_synweight *= y0_style_neg_weight\n        \n        x_orig = x.clone()\n\n        orig_shape = list(x.shape)\n\n        x, latent_coords = self.patchifier.patchify(x)\n        pixel_coords = latent_to_pixel_coords(\n            latent_coords=latent_coords,\n            scale_factors=self.vae_scale_factors,\n            causal_fix=self.causal_temporal_positioning,\n        )\n\n        if keyframe_idxs is not None:\n            pixel_coords[:, :, -keyframe_idxs.shape[2]:] = keyframe_idxs\n\n        fractional_coords = pixel_coords.to(torch.float32)\n        fractional_coords[:, 0] = fractional_coords[:, 0] * (1.0 / frame_rate)\n\n        x = self.patchify_proj(x)\n        timestep = timestep * 1000.0\n\n        if attention_mask is not None and not torch.is_floating_point(attention_mask):\n            attention_mask = (attention_mask - 1).to(x.dtype).reshape((attention_mask.shape[0], 1, -1, attention_mask.shape[-1])) * torch.finfo(x.dtype).max\n\n        pe = precompute_freqs_cis(fractional_coords, dim=self.inner_dim, out_dtype=x.dtype)\n\n        batch_size = x.shape[0]\n        timestep, embedded_timestep = self.adaln_single(\n            timestep.flatten(),\n            {\"resolution\": None, \"aspect_ratio\": None},\n            batch_size=batch_size,\n            hidden_dtype=x.dtype,\n        )\n        # Second dimension is 1 or number of tokens (if timestep_per_token)\n        timestep = timestep.view(batch_size, -1, timestep.shape[-1])\n        embedded_timestep = embedded_timestep.view(\n            batch_size, -1, embedded_timestep.shape[-1]\n        )\n\n        # 2. 
Blocks\n        if self.caption_projection is not None:\n            batch_size = x.shape[0]\n            context = self.caption_projection(context)\n            context = context.view(\n                batch_size, -1, x.shape[-1]\n            )\n\n        blocks_replace = patches_replace.get(\"dit\", {})\n        for i, block in enumerate(self.transformer_blocks):\n            if (\"double_block\", i) in blocks_replace:\n                def block_wrap(args):\n                    out = {}\n                    out[\"img\"] = block(args[\"img\"], context=args[\"txt\"], attention_mask=args[\"attention_mask\"], timestep=args[\"vec\"], pe=args[\"pe\"])\n                    return out\n\n                out = blocks_replace[(\"double_block\", i)]({\"img\": x, \"txt\": context, \"attention_mask\": attention_mask, \"vec\": timestep, \"pe\": pe}, {\"original_block\": block_wrap})\n                x = out[\"img\"]\n            else:\n                x = block(\n                    x,\n                    context=context,\n                    attention_mask=attention_mask,\n                    timestep=timestep,\n                    pe=pe\n                )\n\n        # 3. Output\n        scale_shift_values = (\n            self.scale_shift_table[None, None].to(device=x.device, dtype=x.dtype) + embedded_timestep[:, :, None]\n        )\n        shift, scale = scale_shift_values[:, :, 0], scale_shift_values[:, :, 1]\n        x = self.norm_out(x)\n        # Modulation\n        x = x * (1 + scale) + shift\n        x = self.proj_out(x)\n\n        x = self.patchifier.unpatchify(\n            latents=x,\n            output_height=orig_shape[3],\n            output_width=orig_shape[4],\n            output_num_frames=orig_shape[2],\n            out_channels=orig_shape[1] // math.prod(self.patchifier.patch_size),\n        )\n\n        eps = x\n\n        dtype = eps.dtype if self.style_dtype is None else self.style_dtype\n        pinv_dtype = torch.float32 if dtype != torch.float64 else dtype\n\n        if y0_style_pos is not None and y0_style_pos_weight != 0.0:\n            y0_style_pos = y0_style_pos.to(torch.float32)\n            x   = x_orig.clone().to(torch.float32)\n            eps = eps.to(torch.float32)\n            eps_orig = eps.clone()\n\n            sigma = SIGMA\n            denoised = x - sigma * eps\n\n            img,          img_latent_coords          = self.patchifier.patchify(denoised)\n            img_y0_adain, img_y0_adain_latent_coords = self.patchifier.patchify(y0_style_pos)\n\n            W = self.patchify_proj.weight.data.to(torch.float32)   # [inner_dim, in_channels]\n            b = self.patchify_proj.bias  .data.to(torch.float32)   # [inner_dim]\n\n            denoised_embed = F.linear(img         .to(W), W, b).to(img)\n            y0_adain_embed = F.linear(img_y0_adain.to(W), W, b).to(img_y0_adain)\n\n            if transformer_options['y0_style_method'] == \"AdaIN\":\n                denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                for adain_iter in range(EO(\"style_iter\", 0)):\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    denoised_embed = (denoised_embed - b) @ torch.linalg.pinv(W.to(pinv_dtype)).T.to(dtype)\n                    denoised_embed = F.linear(denoised_embed.to(W), W, 
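# each style_iter round-trips through latent space: unproject via pinv(W), re-embed, re-match stats\n                                     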
b).to(img)\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    \n            elif transformer_options['y0_style_method'] == \"WCT\":\n                if self.y0_adain_embed is None or self.y0_adain_embed.shape != y0_adain_embed.shape or torch.norm(self.y0_adain_embed - y0_adain_embed) > 0:\n                    self.y0_adain_embed = y0_adain_embed\n                    \n                    f_s          = y0_adain_embed[0].clone()\n                    self.mu_s    = f_s.mean(dim=0, keepdim=True)\n                    f_s_centered = f_s - self.mu_s\n                    \n                    cov = (f_s_centered.T.double() @ f_s_centered.double()) / (f_s_centered.size(0) - 1)\n\n                    S_eig, U_eig = torch.linalg.eigh(cov + 1e-5 * torch.eye(cov.size(0), dtype=cov.dtype, device=cov.device))\n                    S_eig_sqrt    = S_eig.clamp(min=0).sqrt() # eigenvalues -> singular values\n                    \n                    whiten = U_eig @ torch.diag(S_eig_sqrt) @ U_eig.T\n                    self.y0_color  = whiten.to(f_s_centered)\n\n                for wct_i in range(eps.shape[0]):\n                    f_c          = denoised_embed[wct_i].clone()\n                    mu_c         = f_c.mean(dim=0, keepdim=True)\n                    f_c_centered = f_c - mu_c\n                    \n                    cov = (f_c_centered.T.double() @ f_c_centered.double()) / (f_c_centered.size(0) - 1)\n\n                    S_eig, U_eig  = torch.linalg.eigh(cov + 1e-5 * torch.eye(cov.size(0), dtype=cov.dtype, device=cov.device))\n                    inv_sqrt_eig  = S_eig.clamp(min=0).rsqrt() \n                    \n                    whiten = U_eig @ torch.diag(inv_sqrt_eig) @ U_eig.T\n                    whiten = whiten.to(f_c_centered)\n\n                    f_c_whitened = f_c_centered @ whiten.T\n                    f_cs         = f_c_whitened @ self.y0_color.T + self.mu_s\n                    \n                    denoised_embed[wct_i] = f_cs\n\n            \n            denoised_approx = (denoised_embed - b.to(denoised_embed)) @ torch.linalg.pinv(W).T.to(denoised_embed)\n            denoised_approx = denoised_approx.to(eps)\n            \n\n            denoised_approx = self.patchifier.unpatchify(\n                latents=denoised_approx,\n                output_height=orig_shape[3],\n                output_width=orig_shape[4],\n                output_num_frames=orig_shape[2],\n                out_channels=orig_shape[1] // math.prod(self.patchifier.patch_size),\n            )\n            \n            eps = (x - denoised_approx) / sigma\n            \n            #UNCOND = transformer_options['cond_or_uncond'][cond_iter] == 1\n\n            if eps.shape[0] == 1 and transformer_options['cond_or_uncond'][0] == 1:\n                eps[0] = eps_orig[0] + y0_style_pos_synweight * (eps[0] - eps_orig[0])\n                #if eps.shape[0] == 2:\n                #    eps[1] = eps_orig[1] + y0_style_neg_synweight * (eps[1] - eps_orig[1])\n            else: #if not UNCOND:\n                if eps.shape[0] == 2:\n                    eps[1] = eps_orig[1] + y0_style_pos_weight * (eps[1] - eps_orig[1])\n                    eps[0] = eps_orig[0] + y0_style_pos_synweight * (eps[0] - eps_orig[0])\n                else:\n                    eps[0] = eps_orig[0] + y0_style_pos_weight * (eps[0] - eps_orig[0])\n            \n            eps = eps.float()\n        \n        #if eps.shape[0] == 2 or (eps.shape[0] == 1): # and UNCOND):\n        if 
y0_style_neg is not None and y0_style_neg_weight != 0.0:\n            y0_style_neg = y0_style_neg.to(torch.float32)\n            x   = x_orig.clone().to(torch.float32)\n            eps = eps.to(torch.float32)\n            eps_orig = eps.clone()\n            \n            sigma = SIGMA #t_orig[0].to(torch.float32) / 1000\n            denoised = x - sigma * eps\n            \n            img,          img_latent_coords          = self.patchifier.patchify(denoised)\n            img_y0_adain, img_y0_adain_latent_coords = self.patchifier.patchify(y0_style_neg)\n\n            W = self.patchify_proj.weight.data.to(torch.float32)   # shape [2560, 64]\n            b = self.patchify_proj.bias  .data.to(torch.float32)     # shape [2560]\n            \n            denoised_embed = F.linear(img         .to(W), W, b).to(img)\n            y0_adain_embed = F.linear(img_y0_adain.to(W), W, b).to(img_y0_adain)\n            \n            if transformer_options['y0_style_method'] == \"AdaIN\":\n                denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                for adain_iter in range(EO(\"style_iter\", 0)):\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    denoised_embed = (denoised_embed - b) @ torch.linalg.pinv(W.to(pinv_dtype)).T.to(dtype)\n                    denoised_embed = F.linear(denoised_embed.to(W), W, b).to(img)\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    \n            elif transformer_options['y0_style_method'] == \"WCT\":\n                if self.y0_adain_embed is None or self.y0_adain_embed.shape != y0_adain_embed.shape or torch.norm(self.y0_adain_embed - y0_adain_embed) > 0:\n                    self.y0_adain_embed = y0_adain_embed\n                    \n                    f_s          = y0_adain_embed[0].clone()\n                    self.mu_s    = f_s.mean(dim=0, keepdim=True)\n                    f_s_centered = f_s - self.mu_s\n                    \n                    cov = (f_s_centered.T.double() @ f_s_centered.double()) / (f_s_centered.size(0) - 1)\n\n                    S_eig, U_eig = torch.linalg.eigh(cov + 1e-5 * torch.eye(cov.size(0), dtype=cov.dtype, device=cov.device))\n                    S_eig_sqrt    = S_eig.clamp(min=0).sqrt() # eigenvalues -> singular values\n                    \n                    whiten = U_eig @ torch.diag(S_eig_sqrt) @ U_eig.T\n                    self.y0_color  = whiten.to(f_s_centered)\n\n                for wct_i in range(eps.shape[0]):\n                    f_c          = denoised_embed[wct_i].clone()\n                    mu_c         = f_c.mean(dim=0, keepdim=True)\n                    f_c_centered = f_c - mu_c\n                    \n                    cov = (f_c_centered.T.double() @ f_c_centered.double()) / (f_c_centered.size(0) - 1)\n\n                    S_eig, U_eig  = torch.linalg.eigh(cov + 1e-5 * torch.eye(cov.size(0), dtype=cov.dtype, device=cov.device))\n                    inv_sqrt_eig  = S_eig.clamp(min=0).rsqrt() \n                    \n                    whiten = U_eig @ torch.diag(inv_sqrt_eig) @ U_eig.T\n                    whiten = whiten.to(f_c_centered)\n\n                    f_c_whitened = f_c_centered @ whiten.T\n                    f_cs         = f_c_whitened @ self.y0_color.T + self.mu_s\n                    \n                    denoised_embed[wct_i] = f_cs\n\n            denoised_approx = (denoised_embed - b.to(denoised_embed)) @ 
torch.linalg.pinv(W).T.to(denoised_embed)\n            denoised_approx = denoised_approx.to(eps)\n\n            denoised_approx = self.patchifier.unpatchify(\n                latents=denoised_approx,\n                output_height=orig_shape[3],\n                output_width=orig_shape[4],\n                output_num_frames=orig_shape[2],\n                out_channels=orig_shape[1] // math.prod(self.patchifier.patch_size),\n            )\n\n            # recompute eps from the styled denoised estimate before blending,\n            # mirroring the positive-style branch above\n            eps = (x - denoised_approx) / sigma\n\n            if eps.shape[0] == 1 and not transformer_options['cond_or_uncond'][0] == 1:\n                eps[0] = eps_orig[0] + y0_style_neg_synweight * (eps[0] - eps_orig[0])\n            else:\n                eps[0] = eps_orig[0] + y0_style_neg_weight * (eps[0] - eps_orig[0])\n                if eps.shape[0] == 2:\n                    eps[1] = eps_orig[1] + y0_style_neg_synweight * (eps[1] - eps_orig[1])\n\n            eps = eps.float()\n\n        return eps\n\n\ndef adain_seq_inplace(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:\n    mean_c = content.mean(1, keepdim=True)\n    std_c  = content.std (1, keepdim=True).add_(eps)  # in-place add\n    mean_s = style.mean  (1, keepdim=True)\n    std_s  = style.std   (1, keepdim=True).add_(eps)\n\n    content.sub_(mean_c).div_(std_c).mul_(std_s).add_(mean_s)  # in-place chain\n    return content\n\n\ndef adain_seq(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:\n    return ((content - content.mean(1, keepdim=True)) / (content.std(1, keepdim=True) + eps)) * (style.std(1, keepdim=True) + eps) + style.mean(1, keepdim=True)\n
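\n\n# Minimal usage sketch for adain_seq (illustrative shapes, not tied to the model above):\n# the token-dim mean/std of `content` are replaced with those of `style`.\nif __name__ == \"__main__\":\n    _content = torch.randn(2, 77, 64)          # [batch, tokens, channels]\n    _style   = torch.randn(2, 77, 64) * 3 + 1\n    _out = adain_seq(_content, _style)\n    assert torch.allclose(_out.mean(1), _style.mean(1), atol=1e-4)\n    assert torch.allclose(_out.std(1),  _style.std(1),  atol=1e-3)\n"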
  },
  {
    "path": "lightricks/symmetric_patchifier.py",
    "content": "from abc import ABC, abstractmethod\nfrom typing import Tuple\n\nimport torch\nfrom einops import rearrange\nfrom torch import Tensor\n\n\ndef latent_to_pixel_coords(\n    latent_coords: Tensor, scale_factors: Tuple[int, int, int], causal_fix: bool = False\n) -> Tensor:\n    \"\"\"\n    Converts latent coordinates to pixel coordinates by scaling them according to the VAE's\n    configuration.\n    Args:\n        latent_coords (Tensor): A tensor of shape [batch_size, 3, num_latents]\n        containing the latent corner coordinates of each token.\n        scale_factors (Tuple[int, int, int]): The scale factors of the VAE's latent space.\n        causal_fix (bool): Whether to take into account the different temporal scale\n            of the first frame. Default = False for backwards compatibility.\n    Returns:\n        Tensor: A tensor of pixel coordinates corresponding to the input latent coordinates.\n    \"\"\"\n    pixel_coords = (\n        latent_coords\n        * torch.tensor(scale_factors, device=latent_coords.device)[None, :, None]\n    )\n    if causal_fix:\n        # Fix temporal scale for first frame to 1 due to causality\n        pixel_coords[:, 0] = (pixel_coords[:, 0] + 1 - scale_factors[0]).clamp(min=0)\n    return pixel_coords\n\n\nclass Patchifier(ABC):\n    def __init__(self, patch_size: int):\n        super().__init__()\n        self._patch_size = (1, patch_size, patch_size)\n\n    @abstractmethod\n    def patchify(\n        self, latents: Tensor, frame_rates: Tensor, scale_grid: bool\n    ) -> Tuple[Tensor, Tensor]:\n        pass\n\n    @abstractmethod\n    def unpatchify(\n        self,\n        latents: Tensor,\n        output_height: int,\n        output_width: int,\n        output_num_frames: int,\n        out_channels: int,\n    ) -> Tuple[Tensor, Tensor]:\n        pass\n\n    @property\n    def patch_size(self):\n        return self._patch_size\n\n    def get_latent_coords(\n        self, latent_num_frames, latent_height, latent_width, batch_size, device\n    ):\n        \"\"\"\n        Return a tensor of shape [batch_size, 3, num_patches] containing the\n            top-left corner latent coordinates of each latent patch.\n        The tensor is repeated for each batch element.\n        \"\"\"\n        latent_sample_coords = torch.meshgrid(\n            torch.arange(0, latent_num_frames, self._patch_size[0], device=device),\n            torch.arange(0, latent_height, self._patch_size[1], device=device),\n            torch.arange(0, latent_width, self._patch_size[2], device=device),\n            indexing=\"ij\",\n        )\n        latent_sample_coords = torch.stack(latent_sample_coords, dim=0)\n        latent_coords = latent_sample_coords.unsqueeze(0).repeat(batch_size, 1, 1, 1, 1)\n        latent_coords = rearrange(\n            latent_coords, \"b c f h w -> b c (f h w)\", b=batch_size\n        )\n        return latent_coords\n\n\nclass SymmetricPatchifier(Patchifier):\n    def patchify(\n        self,\n        latents: Tensor,\n    ) -> Tuple[Tensor, Tensor]:\n        b, _, f, h, w = latents.shape\n        latent_coords = self.get_latent_coords(f, h, w, b, latents.device)\n        latents = rearrange(\n            latents,\n            \"b c (f p1) (h p2) (w p3) -> b (f h w) (c p1 p2 p3)\",\n            p1=self._patch_size[0],\n            p2=self._patch_size[1],\n            p3=self._patch_size[2],\n        )\n        return latents, latent_coords\n\n    def unpatchify(\n        self,\n        latents: Tensor,\n        output_height: int,\n      
  output_width: int,\n        output_num_frames: int,\n        out_channels: int,\n    ) -> Tuple[Tensor, Tensor]:\n        output_height = output_height // self._patch_size[1]\n        output_width = output_width // self._patch_size[2]\n        latents = rearrange(\n            latents,\n            \"b (f h w) (c p q) -> b c f (h p) (w q) \",\n            f=output_num_frames,\n            h=output_height,\n            w=output_width,\n            p=self._patch_size[1],\n            q=self._patch_size[2],\n        )\n        return latents\n"
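\n\n# Minimal round-trip sketch (illustrative shapes; patch size 1 as used by the transformer):\nif __name__ == \"__main__\":\n    _p = SymmetricPatchifier(1)\n    _x = torch.randn(1, 128, 2, 4, 4)  # [b, c, f, h, w]\n    _tokens, _coords = _p.patchify(_x)\n    assert _tokens.shape == (1, 2 * 4 * 4, 128)  # [b, f*h*w, c]\n    assert _coords.shape == (1, 3, 2 * 4 * 4)    # per-token (f, h, w) corners\n    _y = _p.unpatchify(_tokens, output_height=4, output_width=4, output_num_frames=2, out_channels=128)\n    assert torch.equal(_x, _y)\n"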
  },
  {
    "path": "lightricks/vae/causal_conv3d.py",
    "content": "from typing import Tuple, Union\n\nimport torch\nimport torch.nn as nn\nimport comfy.ops\nops = comfy.ops.disable_weight_init\n\n\nclass CausalConv3d(nn.Module):\n    def __init__(\n        self,\n        in_channels,\n        out_channels,\n        kernel_size: int = 3,\n        stride: Union[int, Tuple[int]] = 1,\n        dilation: int = 1,\n        groups: int = 1,\n        spatial_padding_mode: str = \"zeros\",\n        **kwargs,\n    ):\n        super().__init__()\n\n        self.in_channels = in_channels\n        self.out_channels = out_channels\n\n        kernel_size = (kernel_size, kernel_size, kernel_size)\n        self.time_kernel_size = kernel_size[0]\n\n        dilation = (dilation, 1, 1)\n\n        height_pad = kernel_size[1] // 2\n        width_pad = kernel_size[2] // 2\n        padding = (0, height_pad, width_pad)\n\n        self.conv = ops.Conv3d(\n            in_channels,\n            out_channels,\n            kernel_size,\n            stride=stride,\n            dilation=dilation,\n            padding=padding,\n            padding_mode=spatial_padding_mode,\n            groups=groups,\n        )\n\n    def forward(self, x, causal: bool = True):\n        if causal:\n            first_frame_pad = x[:, :, :1, :, :].repeat(\n                (1, 1, self.time_kernel_size - 1, 1, 1)\n            )\n            x = torch.concatenate((first_frame_pad, x), dim=2)\n        else:\n            first_frame_pad = x[:, :, :1, :, :].repeat(\n                (1, 1, (self.time_kernel_size - 1) // 2, 1, 1)\n            )\n            last_frame_pad = x[:, :, -1:, :, :].repeat(\n                (1, 1, (self.time_kernel_size - 1) // 2, 1, 1)\n            )\n            x = torch.concatenate((first_frame_pad, x, last_frame_pad), dim=2)\n        x = self.conv(x)\n        return x\n\n    @property\n    def weight(self):\n        return self.conv.weight\n"
  },
  {
    "path": "lightricks/vae/causal_video_autoencoder.py",
    "content": "from __future__ import annotations\nimport torch\nfrom torch import nn\nfrom functools import partial\nimport math\nfrom einops import rearrange\nfrom typing import List, Optional, Tuple, Union\nfrom .conv_nd_factory import make_conv_nd, make_linear_nd\nfrom .pixel_norm import PixelNorm\nfrom ..model import PixArtAlphaCombinedTimestepSizeEmbeddings\nimport comfy.ops\n\nops = comfy.ops.disable_weight_init\n\nclass Encoder(nn.Module):\n    r\"\"\"\n    The `Encoder` layer of a variational autoencoder that encodes its input into a latent representation.\n\n    Args:\n        dims (`int` or `Tuple[int, int]`, *optional*, defaults to 3):\n            The number of dimensions to use in convolutions.\n        in_channels (`int`, *optional*, defaults to 3):\n            The number of input channels.\n        out_channels (`int`, *optional*, defaults to 3):\n            The number of output channels.\n        blocks (`List[Tuple[str, int]]`, *optional*, defaults to `[(\"res_x\", 1)]`):\n            The blocks to use. Each block is a tuple of the block name and the number of layers.\n        base_channels (`int`, *optional*, defaults to 128):\n            The number of output channels for the first convolutional layer.\n        norm_num_groups (`int`, *optional*, defaults to 32):\n            The number of groups for normalization.\n        patch_size (`int`, *optional*, defaults to 1):\n            The patch size to use. Should be a power of 2.\n        norm_layer (`str`, *optional*, defaults to `group_norm`):\n            The normalization layer to use. Can be either `group_norm` or `pixel_norm`.\n        latent_log_var (`str`, *optional*, defaults to `per_channel`):\n            The number of channels for the log variance. Can be either `per_channel`, `uniform`, `constant` or `none`.\n    \"\"\"\n\n    def __init__(\n        self,\n        dims: Union[int, Tuple[int, int]] = 3,\n        in_channels: int = 3,\n        out_channels: int = 3,\n        blocks: List[Tuple[str, int | dict]] = [(\"res_x\", 1)],\n        base_channels: int = 128,\n        norm_num_groups: int = 32,\n        patch_size: Union[int, Tuple[int]] = 1,\n        norm_layer: str = \"group_norm\",  # group_norm, pixel_norm\n        latent_log_var: str = \"per_channel\",\n        spatial_padding_mode: str = \"zeros\",\n    ):\n        super().__init__()\n        self.patch_size = patch_size\n        self.norm_layer = norm_layer\n        self.latent_channels = out_channels\n        self.latent_log_var = latent_log_var\n        self.blocks_desc = blocks\n\n        in_channels = in_channels * patch_size**2\n        output_channel = base_channels\n\n        self.conv_in = make_conv_nd(\n            dims=dims,\n            in_channels=in_channels,\n            out_channels=output_channel,\n            kernel_size=3,\n            stride=1,\n            padding=1,\n            causal=True,\n            spatial_padding_mode=spatial_padding_mode,\n        )\n\n        self.down_blocks = nn.ModuleList([])\n\n        for block_name, block_params in blocks:\n            input_channel = output_channel\n            if isinstance(block_params, int):\n                block_params = {\"num_layers\": block_params}\n\n            if block_name == \"res_x\":\n                block = UNetMidBlock3D(\n                    dims=dims,\n                    in_channels=input_channel,\n                    num_layers=block_params[\"num_layers\"],\n                    resnet_eps=1e-6,\n                    resnet_groups=norm_num_groups,\n        
            norm_layer=norm_layer,\n                    spatial_padding_mode=spatial_padding_mode,\n                )\n            elif block_name == \"res_x_y\":\n                output_channel = block_params.get(\"multiplier\", 2) * output_channel\n                block = ResnetBlock3D(\n                    dims=dims,\n                    in_channels=input_channel,\n                    out_channels=output_channel,\n                    eps=1e-6,\n                    groups=norm_num_groups,\n                    norm_layer=norm_layer,\n                    spatial_padding_mode=spatial_padding_mode,\n                )\n            elif block_name == \"compress_time\":\n                block = make_conv_nd(\n                    dims=dims,\n                    in_channels=input_channel,\n                    out_channels=output_channel,\n                    kernel_size=3,\n                    stride=(2, 1, 1),\n                    causal=True,\n                    spatial_padding_mode=spatial_padding_mode,\n                )\n            elif block_name == \"compress_space\":\n                block = make_conv_nd(\n                    dims=dims,\n                    in_channels=input_channel,\n                    out_channels=output_channel,\n                    kernel_size=3,\n                    stride=(1, 2, 2),\n                    causal=True,\n                    spatial_padding_mode=spatial_padding_mode,\n                )\n            elif block_name == \"compress_all\":\n                block = make_conv_nd(\n                    dims=dims,\n                    in_channels=input_channel,\n                    out_channels=output_channel,\n                    kernel_size=3,\n                    stride=(2, 2, 2),\n                    causal=True,\n                    spatial_padding_mode=spatial_padding_mode,\n                )\n            elif block_name == \"compress_all_x_y\":\n                output_channel = block_params.get(\"multiplier\", 2) * output_channel\n                block = make_conv_nd(\n                    dims=dims,\n                    in_channels=input_channel,\n                    out_channels=output_channel,\n                    kernel_size=3,\n                    stride=(2, 2, 2),\n                    causal=True,\n                    spatial_padding_mode=spatial_padding_mode,\n                )\n            elif block_name == \"compress_all_res\":\n                output_channel = block_params.get(\"multiplier\", 2) * output_channel\n                block = SpaceToDepthDownsample(\n                    dims=dims,\n                    in_channels=input_channel,\n                    out_channels=output_channel,\n                    stride=(2, 2, 2),\n                    spatial_padding_mode=spatial_padding_mode,\n                )\n            elif block_name == \"compress_space_res\":\n                output_channel = block_params.get(\"multiplier\", 2) * output_channel\n                block = SpaceToDepthDownsample(\n                    dims=dims,\n                    in_channels=input_channel,\n                    out_channels=output_channel,\n                    stride=(1, 2, 2),\n                    spatial_padding_mode=spatial_padding_mode,\n                )\n            elif block_name == \"compress_time_res\":\n                output_channel = block_params.get(\"multiplier\", 2) * output_channel\n                block = SpaceToDepthDownsample(\n                    dims=dims,\n                    in_channels=input_channel,\n                    
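# space-to-depth residual block: downsamples time by 2 and folds it into the channels\n                    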
out_channels=output_channel,\n                    stride=(2, 1, 1),\n                    spatial_padding_mode=spatial_padding_mode,\n                )\n            else:\n                raise ValueError(f\"unknown block: {block_name}\")\n\n            self.down_blocks.append(block)\n\n        # out\n        if norm_layer == \"group_norm\":\n            self.conv_norm_out = nn.GroupNorm(\n                num_channels=output_channel, num_groups=norm_num_groups, eps=1e-6\n            )\n        elif norm_layer == \"pixel_norm\":\n            self.conv_norm_out = PixelNorm()\n        elif norm_layer == \"layer_norm\":\n            self.conv_norm_out = LayerNorm(output_channel, eps=1e-6)\n\n        self.conv_act = nn.SiLU()\n\n        conv_out_channels = out_channels\n        if latent_log_var == \"per_channel\":\n            conv_out_channels *= 2\n        elif latent_log_var == \"uniform\":\n            conv_out_channels += 1\n        elif latent_log_var == \"constant\":\n            conv_out_channels += 1\n        elif latent_log_var != \"none\":\n            raise ValueError(f\"Invalid latent_log_var: {latent_log_var}\")\n        self.conv_out = make_conv_nd(\n            dims,\n            output_channel,\n            conv_out_channels,\n            3,\n            padding=1,\n            causal=True,\n            spatial_padding_mode=spatial_padding_mode,\n        )\n\n        self.gradient_checkpointing = False\n\n    def forward(self, sample: torch.FloatTensor) -> torch.FloatTensor:\n        r\"\"\"The forward method of the `Encoder` class.\"\"\"\n\n        sample = patchify(sample, patch_size_hw=self.patch_size, patch_size_t=1)\n        sample = self.conv_in(sample)\n\n        checkpoint_fn = (\n            partial(torch.utils.checkpoint.checkpoint, use_reentrant=False)\n            if self.gradient_checkpointing and self.training\n            else lambda x: x\n        )\n\n        for down_block in self.down_blocks:\n            sample = checkpoint_fn(down_block)(sample)\n\n        sample = self.conv_norm_out(sample)\n        sample = self.conv_act(sample)\n        sample = self.conv_out(sample)\n\n        if self.latent_log_var == \"uniform\":\n            last_channel = sample[:, -1:, ...]\n            num_dims = sample.dim()\n\n            if num_dims == 4:\n                # For shape (B, C, H, W)\n                repeated_last_channel = last_channel.repeat(\n                    1, sample.shape[1] - 2, 1, 1\n                )\n                sample = torch.cat([sample, repeated_last_channel], dim=1)\n            elif num_dims == 5:\n                # For shape (B, C, F, H, W)\n                repeated_last_channel = last_channel.repeat(\n                    1, sample.shape[1] - 2, 1, 1, 1\n                )\n                sample = torch.cat([sample, repeated_last_channel], dim=1)\n            else:\n                raise ValueError(f\"Invalid input shape: {sample.shape}\")\n        elif self.latent_log_var == \"constant\":\n            sample = sample[:, :-1, ...]\n            approx_ln_0 = (\n                -30\n            )  # this is the minimal clamp value in DiagonalGaussianDistribution objects\n            sample = torch.cat(\n                [sample, torch.ones_like(sample, device=sample.device) * approx_ln_0],\n                dim=1,\n            )\n\n        return sample\n\n\nclass Decoder(nn.Module):\n    r\"\"\"\n    The `Decoder` layer of a variational autoencoder that decodes its latent representation into an output sample.\n\n    Args:\n        dims (`int` or 
`Tuple[int, int]`, *optional*, defaults to 3):\n            The number of dimensions to use in convolutions.\n        in_channels (`int`, *optional*, defaults to 3):\n            The number of input channels.\n        out_channels (`int`, *optional*, defaults to 3):\n            The number of output channels.\n        blocks (`List[Tuple[str, int]]`, *optional*, defaults to `[(\"res_x\", 1)]`):\n            The blocks to use. Each block is a tuple of the block name and the number of layers.\n        base_channels (`int`, *optional*, defaults to 128):\n            The number of output channels for the first convolutional layer.\n        norm_num_groups (`int`, *optional*, defaults to 32):\n            The number of groups for normalization.\n        patch_size (`int`, *optional*, defaults to 1):\n            The patch size to use. Should be a power of 2.\n        norm_layer (`str`, *optional*, defaults to `group_norm`):\n            The normalization layer to use. Can be either `group_norm` or `pixel_norm`.\n        causal (`bool`, *optional*, defaults to `True`):\n            Whether to use causal convolutions or not.\n    \"\"\"\n\n    def __init__(\n        self,\n        dims,\n        in_channels: int = 3,\n        out_channels: int = 3,\n        blocks: List[Tuple[str, int | dict]] = [(\"res_x\", 1)],\n        base_channels: int = 128,\n        layers_per_block: int = 2,\n        norm_num_groups: int = 32,\n        patch_size: int = 1,\n        norm_layer: str = \"group_norm\",\n        causal: bool = True,\n        timestep_conditioning: bool = False,\n        spatial_padding_mode: str = \"zeros\",\n    ):\n        super().__init__()\n        self.patch_size = patch_size\n        self.layers_per_block = layers_per_block\n        out_channels = out_channels * patch_size**2\n        self.causal = causal\n        self.blocks_desc = blocks\n\n        # Compute output channel to be product of all channel-multiplier blocks\n        output_channel = base_channels\n        for block_name, block_params in list(reversed(blocks)):\n            block_params = block_params if isinstance(block_params, dict) else {}\n            if block_name == \"res_x_y\":\n                output_channel = output_channel * block_params.get(\"multiplier\", 2)\n            if block_name == \"compress_all\":\n                output_channel = output_channel * block_params.get(\"multiplier\", 1)\n\n        self.conv_in = make_conv_nd(\n            dims,\n            in_channels,\n            output_channel,\n            kernel_size=3,\n            stride=1,\n            padding=1,\n            causal=True,\n            spatial_padding_mode=spatial_padding_mode,\n        )\n\n        self.up_blocks = nn.ModuleList([])\n\n        for block_name, block_params in list(reversed(blocks)):\n            input_channel = output_channel\n            if isinstance(block_params, int):\n                block_params = {\"num_layers\": block_params}\n\n            if block_name == \"res_x\":\n                block = UNetMidBlock3D(\n                    dims=dims,\n                    in_channels=input_channel,\n                    num_layers=block_params[\"num_layers\"],\n                    resnet_eps=1e-6,\n                    resnet_groups=norm_num_groups,\n                    norm_layer=norm_layer,\n                    inject_noise=block_params.get(\"inject_noise\", False),\n                    timestep_conditioning=timestep_conditioning,\n                    spatial_padding_mode=spatial_padding_mode,\n                )\n         
   elif block_name == \"attn_res_x\":\n                block = UNetMidBlock3D(\n                    dims=dims,\n                    in_channels=input_channel,\n                    num_layers=block_params[\"num_layers\"],\n                    resnet_groups=norm_num_groups,\n                    norm_layer=norm_layer,\n                    inject_noise=block_params.get(\"inject_noise\", False),\n                    timestep_conditioning=timestep_conditioning,\n                    attention_head_dim=block_params[\"attention_head_dim\"],\n                    spatial_padding_mode=spatial_padding_mode,\n                )\n            elif block_name == \"res_x_y\":\n                output_channel = output_channel // block_params.get(\"multiplier\", 2)\n                block = ResnetBlock3D(\n                    dims=dims,\n                    in_channels=input_channel,\n                    out_channels=output_channel,\n                    eps=1e-6,\n                    groups=norm_num_groups,\n                    norm_layer=norm_layer,\n                    inject_noise=block_params.get(\"inject_noise\", False),\n                    timestep_conditioning=False,\n                    spatial_padding_mode=spatial_padding_mode,\n                )\n            elif block_name == \"compress_time\":\n                block = DepthToSpaceUpsample(\n                    dims=dims,\n                    in_channels=input_channel,\n                    stride=(2, 1, 1),\n                    spatial_padding_mode=spatial_padding_mode,\n                )\n            elif block_name == \"compress_space\":\n                block = DepthToSpaceUpsample(\n                    dims=dims,\n                    in_channels=input_channel,\n                    stride=(1, 2, 2),\n                    spatial_padding_mode=spatial_padding_mode,\n                )\n            elif block_name == \"compress_all\":\n                output_channel = output_channel // block_params.get(\"multiplier\", 1)\n                block = DepthToSpaceUpsample(\n                    dims=dims,\n                    in_channels=input_channel,\n                    stride=(2, 2, 2),\n                    residual=block_params.get(\"residual\", False),\n                    out_channels_reduction_factor=block_params.get(\"multiplier\", 1),\n                    spatial_padding_mode=spatial_padding_mode,\n                )\n            else:\n                raise ValueError(f\"unknown layer: {block_name}\")\n\n            self.up_blocks.append(block)\n\n        if norm_layer == \"group_norm\":\n            self.conv_norm_out = nn.GroupNorm(\n                num_channels=output_channel, num_groups=norm_num_groups, eps=1e-6\n            )\n        elif norm_layer == \"pixel_norm\":\n            self.conv_norm_out = PixelNorm()\n        elif norm_layer == \"layer_norm\":\n            self.conv_norm_out = LayerNorm(output_channel, eps=1e-6)\n\n        self.conv_act = nn.SiLU()\n        self.conv_out = make_conv_nd(\n            dims,\n            output_channel,\n            out_channels,\n            3,\n            padding=1,\n            causal=True,\n            spatial_padding_mode=spatial_padding_mode,\n        )\n\n        self.gradient_checkpointing = False\n\n        self.timestep_conditioning = timestep_conditioning\n\n        if timestep_conditioning:\n            self.timestep_scale_multiplier = nn.Parameter(\n                torch.tensor(1000.0, dtype=torch.float32)\n            )\n            self.last_time_embedder = 
PixArtAlphaCombinedTimestepSizeEmbeddings(\n                output_channel * 2, 0, operations=ops,\n            )\n            self.last_scale_shift_table = nn.Parameter(torch.empty(2, output_channel))\n\n    # def forward(self, sample: torch.FloatTensor, target_shape) -> torch.FloatTensor:\n    def forward(\n        self,\n        sample: torch.FloatTensor,\n        timestep: Optional[torch.Tensor] = None,\n    ) -> torch.FloatTensor:\n        r\"\"\"The forward method of the `Decoder` class.\"\"\"\n        batch_size = sample.shape[0]\n\n        sample = self.conv_in(sample, causal=self.causal)\n\n        checkpoint_fn = (\n            partial(torch.utils.checkpoint.checkpoint, use_reentrant=False)\n            if self.gradient_checkpointing and self.training\n            else lambda x: x\n        )\n\n        scaled_timestep = None\n        if self.timestep_conditioning:\n            assert (\n                timestep is not None\n            ), \"should pass timestep with timestep_conditioning=True\"\n            scaled_timestep = timestep * self.timestep_scale_multiplier.to(dtype=sample.dtype, device=sample.device)\n\n        for up_block in self.up_blocks:\n            if self.timestep_conditioning and isinstance(up_block, UNetMidBlock3D):\n                sample = checkpoint_fn(up_block)(\n                    sample, causal=self.causal, timestep=scaled_timestep\n                )\n            else:\n                sample = checkpoint_fn(up_block)(sample, causal=self.causal)\n\n        sample = self.conv_norm_out(sample)\n\n        if self.timestep_conditioning:\n            embedded_timestep = self.last_time_embedder(\n                timestep=scaled_timestep.flatten(),\n                resolution=None,\n                aspect_ratio=None,\n                batch_size=sample.shape[0],\n                hidden_dtype=sample.dtype,\n            )\n            embedded_timestep = embedded_timestep.view(\n                batch_size, embedded_timestep.shape[-1], 1, 1, 1\n            )\n            ada_values = self.last_scale_shift_table[\n                None, ..., None, None, None\n            ].to(device=sample.device, dtype=sample.dtype) + embedded_timestep.reshape(\n                batch_size,\n                2,\n                -1,\n                embedded_timestep.shape[-3],\n                embedded_timestep.shape[-2],\n                embedded_timestep.shape[-1],\n            )\n            shift, scale = ada_values.unbind(dim=1)\n            sample = sample * (1 + scale) + shift\n\n        sample = self.conv_act(sample)\n        sample = self.conv_out(sample, causal=self.causal)\n\n        sample = unpatchify(sample, patch_size_hw=self.patch_size, patch_size_t=1)\n\n        return sample\n\n\nclass UNetMidBlock3D(nn.Module):\n    \"\"\"\n    A 3D UNet mid-block [`UNetMidBlock3D`] with multiple residual blocks.\n\n    Args:\n        in_channels (`int`): The number of input channels.\n        dropout (`float`, *optional*, defaults to 0.0): The dropout rate.\n        num_layers (`int`, *optional*, defaults to 1): The number of residual blocks.\n        resnet_eps (`float`, *optional*, 1e-6 ): The epsilon value for the resnet blocks.\n        resnet_groups (`int`, *optional*, defaults to 32):\n            The number of groups to use in the group normalization layers of the resnet blocks.\n        norm_layer (`str`, *optional*, defaults to `group_norm`):\n            The normalization layer to use. 
Can be either `group_norm` or `pixel_norm`.\n        inject_noise (`bool`, *optional*, defaults to `False`):\n            Whether to inject noise into the hidden states.\n        timestep_conditioning (`bool`, *optional*, defaults to `False`):\n            Whether to condition the hidden states on the timestep.\n\n    Returns:\n        `torch.FloatTensor`: The output of the last residual block, which is a tensor of shape `(batch_size,\n        in_channels, height, width)`.\n\n    \"\"\"\n\n    def __init__(\n        self,\n        dims: Union[int, Tuple[int, int]],\n        in_channels: int,\n        dropout: float = 0.0,\n        num_layers: int = 1,\n        resnet_eps: float = 1e-6,\n        resnet_groups: int = 32,\n        norm_layer: str = \"group_norm\",\n        inject_noise: bool = False,\n        timestep_conditioning: bool = False,\n        spatial_padding_mode: str = \"zeros\",\n    ):\n        super().__init__()\n        resnet_groups = (\n            resnet_groups if resnet_groups is not None else min(in_channels // 4, 32)\n        )\n\n        self.timestep_conditioning = timestep_conditioning\n\n        if timestep_conditioning:\n            self.time_embedder = PixArtAlphaCombinedTimestepSizeEmbeddings(\n                in_channels * 4, 0, operations=ops,\n            )\n\n        self.res_blocks = nn.ModuleList(\n            [\n                ResnetBlock3D(\n                    dims=dims,\n                    in_channels=in_channels,\n                    out_channels=in_channels,\n                    eps=resnet_eps,\n                    groups=resnet_groups,\n                    dropout=dropout,\n                    norm_layer=norm_layer,\n                    inject_noise=inject_noise,\n                    timestep_conditioning=timestep_conditioning,\n                    spatial_padding_mode=spatial_padding_mode,\n                )\n                for _ in range(num_layers)\n            ]\n        )\n\n    def forward(\n        self,\n        hidden_states: torch.FloatTensor,\n        causal: bool = True,\n        timestep: Optional[torch.Tensor] = None,\n    ) -> torch.FloatTensor:\n        timestep_embed = None\n        if self.timestep_conditioning:\n            assert (\n                timestep is not None\n            ), \"should pass timestep with timestep_conditioning=True\"\n            batch_size = hidden_states.shape[0]\n            timestep_embed = self.time_embedder(\n                timestep=timestep.flatten(),\n                resolution=None,\n                aspect_ratio=None,\n                batch_size=batch_size,\n                hidden_dtype=hidden_states.dtype,\n            )\n            timestep_embed = timestep_embed.view(\n                batch_size, timestep_embed.shape[-1], 1, 1, 1\n            )\n\n        for resnet in self.res_blocks:\n            hidden_states = resnet(hidden_states, causal=causal, timestep=timestep_embed)\n\n        return hidden_states\n\n\nclass SpaceToDepthDownsample(nn.Module):\n    def __init__(self, dims, in_channels, out_channels, stride, spatial_padding_mode):\n        super().__init__()\n        self.stride = stride\n        self.group_size = in_channels * math.prod(stride) // out_channels\n        self.conv = make_conv_nd(\n            dims=dims,\n            in_channels=in_channels,\n            out_channels=out_channels // math.prod(stride),\n            kernel_size=3,\n            stride=1,\n            causal=True,\n            spatial_padding_mode=spatial_padding_mode,\n        )\n\n    def forward(self, 
x, causal: bool = True):\n        if self.stride[0] == 2:\n            x = torch.cat(\n                [x[:, :, :1, :, :], x], dim=2\n            )  # duplicate first frames for padding\n\n        # skip connection\n        x_in = rearrange(\n            x,\n            \"b c (d p1) (h p2) (w p3) -> b (c p1 p2 p3) d h w\",\n            p1=self.stride[0],\n            p2=self.stride[1],\n            p3=self.stride[2],\n        )\n        x_in = rearrange(x_in, \"b (c g) d h w -> b c g d h w\", g=self.group_size)\n        x_in = x_in.mean(dim=2)\n\n        # conv\n        x = self.conv(x, causal=causal)\n        x = rearrange(\n            x,\n            \"b c (d p1) (h p2) (w p3) -> b (c p1 p2 p3) d h w\",\n            p1=self.stride[0],\n            p2=self.stride[1],\n            p3=self.stride[2],\n        )\n\n        x = x + x_in\n\n        return x\n\n\nclass DepthToSpaceUpsample(nn.Module):\n    def __init__(\n        self,\n        dims,\n        in_channels,\n        stride,\n        residual=False,\n        out_channels_reduction_factor=1,\n        spatial_padding_mode=\"zeros\",\n    ):\n        super().__init__()\n        self.stride = stride\n        self.out_channels = (\n            math.prod(stride) * in_channels // out_channels_reduction_factor\n        )\n        self.conv = make_conv_nd(\n            dims=dims,\n            in_channels=in_channels,\n            out_channels=self.out_channels,\n            kernel_size=3,\n            stride=1,\n            causal=True,\n            spatial_padding_mode=spatial_padding_mode,\n        )\n        self.residual = residual\n        self.out_channels_reduction_factor = out_channels_reduction_factor\n\n    def forward(self, x, causal: bool = True, timestep: Optional[torch.Tensor] = None):\n        if self.residual:\n            # Reshape and duplicate the input to match the output shape\n            x_in = rearrange(\n                x,\n                \"b (c p1 p2 p3) d h w -> b c (d p1) (h p2) (w p3)\",\n                p1=self.stride[0],\n                p2=self.stride[1],\n                p3=self.stride[2],\n            )\n            num_repeat = math.prod(self.stride) // self.out_channels_reduction_factor\n            x_in = x_in.repeat(1, num_repeat, 1, 1, 1)\n            if self.stride[0] == 2:\n                x_in = x_in[:, :, 1:, :, :]\n        x = self.conv(x, causal=causal)\n        x = rearrange(\n            x,\n            \"b (c p1 p2 p3) d h w -> b c (d p1) (h p2) (w p3)\",\n            p1=self.stride[0],\n            p2=self.stride[1],\n            p3=self.stride[2],\n        )\n        if self.stride[0] == 2:\n            x = x[:, :, 1:, :, :]\n        if self.residual:\n            x = x + x_in\n        return x\n\nclass LayerNorm(nn.Module):\n    def __init__(self, dim, eps, elementwise_affine=True) -> None:\n        super().__init__()\n        self.norm = ops.LayerNorm(dim, eps=eps, elementwise_affine=elementwise_affine)\n\n    def forward(self, x):\n        x = rearrange(x, \"b c d h w -> b d h w c\")\n        x = self.norm(x)\n        x = rearrange(x, \"b d h w c -> b c d h w\")\n        return x\n\n\nclass ResnetBlock3D(nn.Module):\n    r\"\"\"\n    A Resnet block.\n\n    Parameters:\n        in_channels (`int`): The number of channels in the input.\n        out_channels (`int`, *optional*, default to be `None`):\n            The number of output channels for the first conv layer. 
If None, same as `in_channels`.\n        dropout (`float`, *optional*, defaults to `0.0`): The dropout probability to use.\n        groups (`int`, *optional*, default to `32`): The number of groups to use for the first normalization layer.\n        eps (`float`, *optional*, defaults to `1e-6`): The epsilon to use for the normalization.\n    \"\"\"\n\n    def __init__(\n        self,\n        dims: Union[int, Tuple[int, int]],\n        in_channels: int,\n        out_channels: Optional[int] = None,\n        dropout: float = 0.0,\n        groups: int = 32,\n        eps: float = 1e-6,\n        norm_layer: str = \"group_norm\",\n        inject_noise: bool = False,\n        timestep_conditioning: bool = False,\n        spatial_padding_mode: str = \"zeros\",\n    ):\n        super().__init__()\n        self.in_channels = in_channels\n        out_channels = in_channels if out_channels is None else out_channels\n        self.out_channels = out_channels\n        self.inject_noise = inject_noise\n\n        if norm_layer == \"group_norm\":\n            self.norm1 = nn.GroupNorm(\n                num_groups=groups, num_channels=in_channels, eps=eps, affine=True\n            )\n        elif norm_layer == \"pixel_norm\":\n            self.norm1 = PixelNorm()\n        elif norm_layer == \"layer_norm\":\n            self.norm1 = LayerNorm(in_channels, eps=eps, elementwise_affine=True)\n\n        self.non_linearity = nn.SiLU()\n\n        self.conv1 = make_conv_nd(\n            dims,\n            in_channels,\n            out_channels,\n            kernel_size=3,\n            stride=1,\n            padding=1,\n            causal=True,\n            spatial_padding_mode=spatial_padding_mode,\n        )\n\n        if inject_noise:\n            self.per_channel_scale1 = nn.Parameter(torch.zeros((in_channels, 1, 1)))\n\n        if norm_layer == \"group_norm\":\n            self.norm2 = nn.GroupNorm(\n                num_groups=groups, num_channels=out_channels, eps=eps, affine=True\n            )\n        elif norm_layer == \"pixel_norm\":\n            self.norm2 = PixelNorm()\n        elif norm_layer == \"layer_norm\":\n            self.norm2 = LayerNorm(out_channels, eps=eps, elementwise_affine=True)\n\n        self.dropout = torch.nn.Dropout(dropout)\n\n        self.conv2 = make_conv_nd(\n            dims,\n            out_channels,\n            out_channels,\n            kernel_size=3,\n            stride=1,\n            padding=1,\n            causal=True,\n            spatial_padding_mode=spatial_padding_mode,\n        )\n\n        if inject_noise:\n            self.per_channel_scale2 = nn.Parameter(torch.zeros((in_channels, 1, 1)))\n\n        self.conv_shortcut = (\n            make_linear_nd(\n                dims=dims, in_channels=in_channels, out_channels=out_channels\n            )\n            if in_channels != out_channels\n            else nn.Identity()\n        )\n\n        self.norm3 = (\n            LayerNorm(in_channels, eps=eps, elementwise_affine=True)\n            if in_channels != out_channels\n            else nn.Identity()\n        )\n\n        self.timestep_conditioning = timestep_conditioning\n\n        if timestep_conditioning:\n            self.scale_shift_table = nn.Parameter(\n                torch.randn(4, in_channels) / in_channels**0.5\n            )\n\n    def _feed_spatial_noise(\n        self, hidden_states: torch.FloatTensor, per_channel_scale: torch.FloatTensor\n    ) -> torch.FloatTensor:\n        spatial_shape = hidden_states.shape[-2:]\n        device = 
hidden_states.device\n        dtype = hidden_states.dtype\n\n        # similar to the \"explicit noise inputs\" method in style-gan\n        spatial_noise = torch.randn(spatial_shape, device=device, dtype=dtype)[None]\n        scaled_noise = (spatial_noise * per_channel_scale)[None, :, None, ...]\n        hidden_states = hidden_states + scaled_noise\n\n        return hidden_states\n\n    def forward(\n        self,\n        input_tensor: torch.FloatTensor,\n        causal: bool = True,\n        timestep: Optional[torch.Tensor] = None,\n    ) -> torch.FloatTensor:\n        hidden_states = input_tensor\n        batch_size = hidden_states.shape[0]\n\n        hidden_states = self.norm1(hidden_states)\n        if self.timestep_conditioning:\n            assert (\n                timestep is not None\n            ), \"should pass timestep with timestep_conditioning=True\"\n            ada_values = self.scale_shift_table[\n                None, ..., None, None, None\n            ].to(device=hidden_states.device, dtype=hidden_states.dtype) + timestep.reshape(\n                batch_size,\n                4,\n                -1,\n                timestep.shape[-3],\n                timestep.shape[-2],\n                timestep.shape[-1],\n            )\n            shift1, scale1, shift2, scale2 = ada_values.unbind(dim=1)\n\n            hidden_states = hidden_states * (1 + scale1) + shift1\n\n        hidden_states = self.non_linearity(hidden_states)\n\n        hidden_states = self.conv1(hidden_states, causal=causal)\n\n        if self.inject_noise:\n            hidden_states = self._feed_spatial_noise(\n                hidden_states, self.per_channel_scale1.to(device=hidden_states.device, dtype=hidden_states.dtype)\n            )\n\n        hidden_states = self.norm2(hidden_states)\n\n        if self.timestep_conditioning:\n            hidden_states = hidden_states * (1 + scale2) + shift2\n\n        hidden_states = self.non_linearity(hidden_states)\n\n        hidden_states = self.dropout(hidden_states)\n\n        hidden_states = self.conv2(hidden_states, causal=causal)\n\n        if self.inject_noise:\n            hidden_states = self._feed_spatial_noise(\n                hidden_states, self.per_channel_scale2.to(device=hidden_states.device, dtype=hidden_states.dtype)\n            )\n\n        input_tensor = self.norm3(input_tensor)\n\n        batch_size = input_tensor.shape[0]\n\n        input_tensor = self.conv_shortcut(input_tensor)\n\n        output_tensor = input_tensor + hidden_states\n\n        return output_tensor\n\n\ndef patchify(x, patch_size_hw, patch_size_t=1):\n    if patch_size_hw == 1 and patch_size_t == 1:\n        return x\n    if x.dim() == 4:\n        x = rearrange(\n            x, \"b c (h q) (w r) -> b (c r q) h w\", q=patch_size_hw, r=patch_size_hw\n        )\n    elif x.dim() == 5:\n        x = rearrange(\n            x,\n            \"b c (f p) (h q) (w r) -> b (c p r q) f h w\",\n            p=patch_size_t,\n            q=patch_size_hw,\n            r=patch_size_hw,\n        )\n    else:\n        raise ValueError(f\"Invalid input shape: {x.shape}\")\n\n    return x\n\n\ndef unpatchify(x, patch_size_hw, patch_size_t=1):\n    if patch_size_hw == 1 and patch_size_t == 1:\n        return x\n\n    if x.dim() == 4:\n        x = rearrange(\n            x, \"b (c r q) h w -> b c (h q) (w r)\", q=patch_size_hw, r=patch_size_hw\n        )\n    elif x.dim() == 5:\n        x = rearrange(\n            x,\n            \"b (c p r q) f h w -> b c (f p) (h q) (w r)\",\n            
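# inverse of patchify(): unfold the channel-packed patches back onto (f, h, w)\n            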
p=patch_size_t,\n            q=patch_size_hw,\n            r=patch_size_hw,\n        )\n\n    return x\n\nclass processor(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.register_buffer(\"std-of-means\", torch.empty(128))\n        self.register_buffer(\"mean-of-means\", torch.empty(128))\n        self.register_buffer(\"mean-of-stds\", torch.empty(128))\n        self.register_buffer(\"mean-of-stds_over_std-of-means\", torch.empty(128))\n        self.register_buffer(\"channel\", torch.empty(128))\n\n    def un_normalize(self, x):\n        return (x * self.get_buffer(\"std-of-means\").view(1, -1, 1, 1, 1).to(x)) + self.get_buffer(\"mean-of-means\").view(1, -1, 1, 1, 1).to(x)\n\n    def normalize(self, x):\n        return (x - self.get_buffer(\"mean-of-means\").view(1, -1, 1, 1, 1).to(x)) / self.get_buffer(\"std-of-means\").view(1, -1, 1, 1, 1).to(x)\n\nclass VideoVAE(nn.Module):\n    def __init__(self, version=0, config=None):\n        super().__init__()\n\n        if config is None:\n            config = self.guess_config(version)\n\n        self.timestep_conditioning = config.get(\"timestep_conditioning\", False)\n        double_z = config.get(\"double_z\", True)\n        latent_log_var = config.get(\n            \"latent_log_var\", \"per_channel\" if double_z else \"none\"\n        )\n\n        self.encoder = Encoder(\n            dims=config[\"dims\"],\n            in_channels=config.get(\"in_channels\", 3),\n            out_channels=config[\"latent_channels\"],\n            blocks=config.get(\"encoder_blocks\", config.get(\"encoder_blocks\", config.get(\"blocks\"))),\n            patch_size=config.get(\"patch_size\", 1),\n            latent_log_var=latent_log_var,\n            norm_layer=config.get(\"norm_layer\", \"group_norm\"),\n            spatial_padding_mode=config.get(\"spatial_padding_mode\", \"zeros\"),\n        )\n\n        self.decoder = Decoder(\n            dims=config[\"dims\"],\n            in_channels=config[\"latent_channels\"],\n            out_channels=config.get(\"out_channels\", 3),\n            blocks=config.get(\"decoder_blocks\", config.get(\"decoder_blocks\", config.get(\"blocks\"))),\n            patch_size=config.get(\"patch_size\", 1),\n            norm_layer=config.get(\"norm_layer\", \"group_norm\"),\n            causal=config.get(\"causal_decoder\", False),\n            timestep_conditioning=self.timestep_conditioning,\n            spatial_padding_mode=config.get(\"spatial_padding_mode\", \"zeros\"),\n        )\n\n        self.per_channel_statistics = processor()\n\n    def guess_config(self, version):\n        if version == 0:\n            config = {\n                \"_class_name\": \"CausalVideoAutoencoder\",\n                \"dims\": 3,\n                \"in_channels\": 3,\n                \"out_channels\": 3,\n                \"latent_channels\": 128,\n                \"blocks\": [\n                    [\"res_x\", 4],\n                    [\"compress_all\", 1],\n                    [\"res_x_y\", 1],\n                    [\"res_x\", 3],\n                    [\"compress_all\", 1],\n                    [\"res_x_y\", 1],\n                    [\"res_x\", 3],\n                    [\"compress_all\", 1],\n                    [\"res_x\", 3],\n                    [\"res_x\", 4],\n                ],\n                \"scaling_factor\": 1.0,\n                \"norm_layer\": \"pixel_norm\",\n                \"patch_size\": 4,\n                \"latent_log_var\": \"uniform\",\n                \"use_quant_conv\": False,\n               
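# version 0 predates timestep conditioning (compare version 1 below)\n               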
 \"causal_decoder\": False,\n            }\n        elif version == 1:\n            config = {\n                \"_class_name\": \"CausalVideoAutoencoder\",\n                \"dims\": 3,\n                \"in_channels\": 3,\n                \"out_channels\": 3,\n                \"latent_channels\": 128,\n                \"decoder_blocks\": [\n                    [\"res_x\", {\"num_layers\": 5, \"inject_noise\": True}],\n                    [\"compress_all\", {\"residual\": True, \"multiplier\": 2}],\n                    [\"res_x\", {\"num_layers\": 6, \"inject_noise\": True}],\n                    [\"compress_all\", {\"residual\": True, \"multiplier\": 2}],\n                    [\"res_x\", {\"num_layers\": 7, \"inject_noise\": True}],\n                    [\"compress_all\", {\"residual\": True, \"multiplier\": 2}],\n                    [\"res_x\", {\"num_layers\": 8, \"inject_noise\": False}]\n                ],\n                \"encoder_blocks\": [\n                    [\"res_x\", {\"num_layers\": 4}],\n                    [\"compress_all\", {}],\n                    [\"res_x_y\", 1],\n                    [\"res_x\", {\"num_layers\": 3}],\n                    [\"compress_all\", {}],\n                    [\"res_x_y\", 1],\n                    [\"res_x\", {\"num_layers\": 3}],\n                    [\"compress_all\", {}],\n                    [\"res_x\", {\"num_layers\": 3}],\n                    [\"res_x\", {\"num_layers\": 4}]\n                ],\n                \"scaling_factor\": 1.0,\n                \"norm_layer\": \"pixel_norm\",\n                \"patch_size\": 4,\n                \"latent_log_var\": \"uniform\",\n                \"use_quant_conv\": False,\n                \"causal_decoder\": False,\n                \"timestep_conditioning\": True,\n            }\n        else:\n            config = {\n                \"_class_name\": \"CausalVideoAutoencoder\",\n                \"dims\": 3,\n                \"in_channels\": 3,\n                \"out_channels\": 3,\n                \"latent_channels\": 128,\n                \"encoder_blocks\": [\n                    [\"res_x\", {\"num_layers\": 4}],\n                    [\"compress_space_res\", {\"multiplier\": 2}],\n                    [\"res_x\", {\"num_layers\": 6}],\n                    [\"compress_time_res\", {\"multiplier\": 2}],\n                    [\"res_x\", {\"num_layers\": 6}],\n                    [\"compress_all_res\", {\"multiplier\": 2}],\n                    [\"res_x\", {\"num_layers\": 2}],\n                    [\"compress_all_res\", {\"multiplier\": 2}],\n                    [\"res_x\", {\"num_layers\": 2}]\n                ],\n                \"decoder_blocks\": [\n                    [\"res_x\", {\"num_layers\": 5, \"inject_noise\": False}],\n                    [\"compress_all\", {\"residual\": True, \"multiplier\": 2}],\n                    [\"res_x\", {\"num_layers\": 5, \"inject_noise\": False}],\n                    [\"compress_all\", {\"residual\": True, \"multiplier\": 2}],\n                    [\"res_x\", {\"num_layers\": 5, \"inject_noise\": False}],\n                    [\"compress_all\", {\"residual\": True, \"multiplier\": 2}],\n                    [\"res_x\", {\"num_layers\": 5, \"inject_noise\": False}]\n                ],\n                \"scaling_factor\": 1.0,\n                \"norm_layer\": \"pixel_norm\",\n                \"patch_size\": 4,\n                \"latent_log_var\": \"uniform\",\n                \"use_quant_conv\": False,\n                \"causal_decoder\": False,\n              
  \"timestep_conditioning\": True\n            }\n        return config\n\n    def encode(self, x):\n        frames_count = x.shape[2]\n        if ((frames_count - 1) % 8) != 0:\n            raise ValueError(\"Invalid number of frames: Encode input must have 1 + 8 * x frames (e.g., 1, 9, 17, ...). Please check your input.\")\n        means, logvar = torch.chunk(self.encoder(x), 2, dim=1)\n        return self.per_channel_statistics.normalize(means)\n\n    def decode(self, x, timestep=0.05, noise_scale=0.025):\n        if self.timestep_conditioning: #TODO: seed\n            x = torch.randn_like(x) * noise_scale + (1.0 - noise_scale) * x\n        return self.decoder(self.per_channel_statistics.un_normalize(x), timestep=timestep)\n\n"
  },
  {
    "path": "lightricks/vae/conv_nd_factory.py",
    "content": "from typing import Tuple, Union\n\n\nfrom .dual_conv3d import DualConv3d\nfrom .causal_conv3d import CausalConv3d\nimport comfy.ops\nops = comfy.ops.disable_weight_init\n\ndef make_conv_nd(\n    dims: Union[int, Tuple[int, int]],\n    in_channels: int,\n    out_channels: int,\n    kernel_size: int,\n    stride=1,\n    padding=0,\n    dilation=1,\n    groups=1,\n    bias=True,\n    causal=False,\n    spatial_padding_mode=\"zeros\",\n    temporal_padding_mode=\"zeros\",\n):\n    if not (spatial_padding_mode == temporal_padding_mode or causal):\n        raise NotImplementedError(\"spatial and temporal padding modes must be equal\")\n    if dims == 2:\n        return ops.Conv2d(\n            in_channels=in_channels,\n            out_channels=out_channels,\n            kernel_size=kernel_size,\n            stride=stride,\n            padding=padding,\n            dilation=dilation,\n            groups=groups,\n            bias=bias,\n            padding_mode=spatial_padding_mode,\n        )\n    elif dims == 3:\n        if causal:\n            return CausalConv3d(\n                in_channels=in_channels,\n                out_channels=out_channels,\n                kernel_size=kernel_size,\n                stride=stride,\n                padding=padding,\n                dilation=dilation,\n                groups=groups,\n                bias=bias,\n                spatial_padding_mode=spatial_padding_mode,\n            )\n        return ops.Conv3d(\n            in_channels=in_channels,\n            out_channels=out_channels,\n            kernel_size=kernel_size,\n            stride=stride,\n            padding=padding,\n            dilation=dilation,\n            groups=groups,\n            bias=bias,\n            padding_mode=spatial_padding_mode,\n        )\n    elif dims == (2, 1):\n        return DualConv3d(\n            in_channels=in_channels,\n            out_channels=out_channels,\n            kernel_size=kernel_size,\n            stride=stride,\n            padding=padding,\n            bias=bias,\n            padding_mode=spatial_padding_mode,\n        )\n    else:\n        raise ValueError(f\"unsupported dimensions: {dims}\")\n\n\ndef make_linear_nd(\n    dims: int,\n    in_channels: int,\n    out_channels: int,\n    bias=True,\n):\n    if dims == 2:\n        return ops.Conv2d(\n            in_channels=in_channels, out_channels=out_channels, kernel_size=1, bias=bias\n        )\n    elif dims == 3 or dims == (2, 1):\n        return ops.Conv3d(\n            in_channels=in_channels, out_channels=out_channels, kernel_size=1, bias=bias\n        )\n    else:\n        raise ValueError(f\"unsupported dimensions: {dims}\")\n"
  },
  {
    "path": "lightricks/vae/dual_conv3d.py",
    "content": "import math\nfrom typing import Tuple, Union\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom einops import rearrange\n\n\nclass DualConv3d(nn.Module):\n    def __init__(\n        self,\n        in_channels,\n        out_channels,\n        kernel_size,\n        stride: Union[int, Tuple[int, int, int]] = 1,\n        padding: Union[int, Tuple[int, int, int]] = 0,\n        dilation: Union[int, Tuple[int, int, int]] = 1,\n        groups=1,\n        bias=True,\n        padding_mode=\"zeros\",\n    ):\n        super(DualConv3d, self).__init__()\n\n        self.in_channels = in_channels\n        self.out_channels = out_channels\n        self.padding_mode = padding_mode\n        # Ensure kernel_size, stride, padding, and dilation are tuples of length 3\n        if isinstance(kernel_size, int):\n            kernel_size = (kernel_size, kernel_size, kernel_size)\n        if kernel_size == (1, 1, 1):\n            raise ValueError(\n                \"kernel_size must be greater than 1. Use make_linear_nd instead.\"\n            )\n        if isinstance(stride, int):\n            stride = (stride, stride, stride)\n        if isinstance(padding, int):\n            padding = (padding, padding, padding)\n        if isinstance(dilation, int):\n            dilation = (dilation, dilation, dilation)\n\n        # Set parameters for convolutions\n        self.groups = groups\n        self.bias = bias\n\n        # Define the size of the channels after the first convolution\n        intermediate_channels = (\n            out_channels if in_channels < out_channels else in_channels\n        )\n\n        # Define parameters for the first convolution\n        self.weight1 = nn.Parameter(\n            torch.Tensor(\n                intermediate_channels,\n                in_channels // groups,\n                1,\n                kernel_size[1],\n                kernel_size[2],\n            )\n        )\n        self.stride1 = (1, stride[1], stride[2])\n        self.padding1 = (0, padding[1], padding[2])\n        self.dilation1 = (1, dilation[1], dilation[2])\n        if bias:\n            self.bias1 = nn.Parameter(torch.Tensor(intermediate_channels))\n        else:\n            self.register_parameter(\"bias1\", None)\n\n        # Define parameters for the second convolution\n        self.weight2 = nn.Parameter(\n            torch.Tensor(\n                out_channels, intermediate_channels // groups, kernel_size[0], 1, 1\n            )\n        )\n        self.stride2 = (stride[0], 1, 1)\n        self.padding2 = (padding[0], 0, 0)\n        self.dilation2 = (dilation[0], 1, 1)\n        if bias:\n            self.bias2 = nn.Parameter(torch.Tensor(out_channels))\n        else:\n            self.register_parameter(\"bias2\", None)\n\n        # Initialize weights and biases\n        self.reset_parameters()\n\n    def reset_parameters(self):\n        nn.init.kaiming_uniform_(self.weight1, a=math.sqrt(5))\n        nn.init.kaiming_uniform_(self.weight2, a=math.sqrt(5))\n        if self.bias:\n            fan_in1, _ = nn.init._calculate_fan_in_and_fan_out(self.weight1)\n            bound1 = 1 / math.sqrt(fan_in1)\n            nn.init.uniform_(self.bias1, -bound1, bound1)\n            fan_in2, _ = nn.init._calculate_fan_in_and_fan_out(self.weight2)\n            bound2 = 1 / math.sqrt(fan_in2)\n            nn.init.uniform_(self.bias2, -bound2, bound2)\n\n    def forward(self, x, use_conv3d=False, skip_time_conv=False):\n        if use_conv3d:\n            return 
self.forward_with_3d(x=x, skip_time_conv=skip_time_conv)\n        else:\n            return self.forward_with_2d(x=x, skip_time_conv=skip_time_conv)\n\n    def forward_with_3d(self, x, skip_time_conv):\n        # F.conv3d has no padding_mode argument, so only zero padding is supported here\n        if self.padding_mode != \"zeros\":\n            raise NotImplementedError(\"DualConv3d only supports padding_mode='zeros'\")\n        # First convolution\n        x = F.conv3d(\n            x,\n            self.weight1,\n            self.bias1,\n            self.stride1,\n            self.padding1,\n            self.dilation1,\n            self.groups,\n        )\n\n        if skip_time_conv:\n            return x\n\n        # Second convolution\n        x = F.conv3d(\n            x,\n            self.weight2,\n            self.bias2,\n            self.stride2,\n            self.padding2,\n            self.dilation2,\n            self.groups,\n        )\n\n        return x\n\n    def forward_with_2d(self, x, skip_time_conv):\n        # F.conv2d/F.conv1d likewise have no padding_mode argument; integer padding is zero padding\n        if self.padding_mode != \"zeros\":\n            raise NotImplementedError(\"DualConv3d only supports padding_mode='zeros'\")\n        b, c, d, h, w = x.shape\n\n        # First 2D convolution\n        x = rearrange(x, \"b c d h w -> (b d) c h w\")\n        # Squeeze the depth dimension out of weight1 since it's 1\n        weight1 = self.weight1.squeeze(2)\n        # Select stride, padding, and dilation for the 2D convolution\n        stride1 = (self.stride1[1], self.stride1[2])\n        padding1 = (self.padding1[1], self.padding1[2])\n        dilation1 = (self.dilation1[1], self.dilation1[2])\n        x = F.conv2d(\n            x,\n            weight1,\n            self.bias1,\n            stride1,\n            padding1,\n            dilation1,\n            self.groups,\n        )\n\n        _, _, h, w = x.shape\n\n        if skip_time_conv:\n            x = rearrange(x, \"(b d) c h w -> b c d h w\", b=b)\n            return x\n\n        # Second convolution which is essentially treated as a 1D convolution across the 'd' dimension\n        x = rearrange(x, \"(b d) c h w -> (b h w) c d\", b=b)\n\n        # Reshape weight2 to match the expected dimensions for conv1d\n        weight2 = self.weight2.squeeze(-1).squeeze(-1)\n        # Use only the relevant dimension for stride, padding, and dilation for the 1D convolution\n        stride2 = self.stride2[0]\n        padding2 = self.padding2[0]\n        dilation2 = self.dilation2[0]\n        x = F.conv1d(\n            x,\n            weight2,\n            self.bias2,\n            stride2,\n            padding2,\n            dilation2,\n            self.groups,\n        )\n        x = rearrange(x, \"(b h w) c d -> b c d h w\", b=b, h=h, w=w)\n\n        return x\n\n    @property\n    def weight(self):\n        return self.weight2\n\n\ndef test_dual_conv3d_consistency():\n    # Initialize parameters\n    in_channels = 3\n    out_channels = 5\n    kernel_size = (3, 3, 3)\n    stride = (2, 2, 2)\n    padding = (1, 1, 1)\n\n    # Create an instance of the DualConv3d class\n    dual_conv3d = DualConv3d(\n        in_channels=in_channels,\n        out_channels=out_channels,\n        kernel_size=kernel_size,\n        stride=stride,\n        padding=padding,\n        bias=True,\n    )\n\n    # Example input tensor\n    test_input = torch.randn(1, 3, 10, 10, 10)\n\n    # Perform forward passes with both 3D and 2D settings\n    output_conv3d = dual_conv3d(test_input, use_conv3d=True)\n    output_2d = dual_conv3d(test_input, use_conv3d=False)\n\n    # Assert that the outputs from both methods are sufficiently close\n    assert torch.allclose(\n        output_conv3d, output_2d, atol=1e-6\n    ), \"Outputs are 
not consistent between 3D and 2D convolutions.\"\n"
  },
  {
    "path": "lightricks/vae/pixel_norm.py",
    "content": "import torch\nfrom torch import nn\n\n\nclass PixelNorm(nn.Module):\n    def __init__(self, dim=1, eps=1e-8):\n        super(PixelNorm, self).__init__()\n        self.dim = dim\n        self.eps = eps\n\n    def forward(self, x):\n        return x / torch.sqrt(torch.mean(x**2, dim=self.dim, keepdim=True) + self.eps)\n"
  },
  {
    "path": "loaders.py",
    "content": "import folder_paths\r\nimport torch\r\nimport comfy.samplers\r\nimport comfy.sample\r\nimport comfy.sampler_helpers\r\nimport comfy.model_sampling\r\nimport comfy.latent_formats\r\nimport comfy.sd\r\nimport comfy.clip_vision\r\nimport comfy.supported_models\r\nfrom comfy.utils import load_torch_file\r\n\r\n# Documentation: Self-documenting code\r\n# Instructions for use: Obvious\r\n# Expected results: Fork desync\r\n# adapted from https://github.com/comfyanonymous/ComfyUI/blob/master/nodes.py\r\n\r\nclip_types = [\"stable_diffusion\", \"stable_cascade\", \"sd3\", \"stable_audio\", \"hunyuan_dit\", \"flux\", \"mochi\", \"ltxv\", \"hunyuan_video\", \"pixart\", \"cosmos\", \"lumina2\", \"wan\", \"hidream\", \"chroma\", \"ace\"]\r\n\r\nclass BaseModelLoader:\r\n    @staticmethod\r\n    def load_taesd(name):\r\n        sd = {}\r\n        approx_vaes = folder_paths.get_filename_list(\"vae_approx\")\r\n\r\n        encoder = next(filter(lambda a: a.startswith(f\"{name}_encoder.\"), approx_vaes))\r\n        decoder = next(filter(lambda a: a.startswith(f\"{name}_decoder.\"), approx_vaes))\r\n\r\n        enc = comfy.utils.load_torch_file(folder_paths.get_full_path_or_raise(\"vae_approx\", encoder))\r\n        for k in enc:\r\n            sd[f\"taesd_encoder.{k}\"] = enc[k]\r\n\r\n        dec = comfy.utils.load_torch_file(folder_paths.get_full_path_or_raise(\"vae_approx\", decoder))\r\n        for k in dec:\r\n            sd[f\"taesd_decoder.{k}\"] = dec[k]\r\n\r\n        # VAE scale and shift mapping\r\n        vae_params = {\r\n            \"taesd\": (0.18215, 0.0),\r\n            \"taesdxl\": (0.13025, 0.0),\r\n            \"taesd3\": (1.5305, 0.0609),\r\n            \"taef1\": (0.3611, 0.1159)\r\n        }\r\n        \r\n        if name in vae_params:\r\n            scale, shift = vae_params[name]\r\n            sd[\"vae_scale\"] = torch.tensor(scale)\r\n            sd[\"vae_shift\"] = torch.tensor(shift)\r\n            \r\n        return sd\r\n    \r\n    @staticmethod\r\n    def guess_clip_type(model):\r\n        import comfy.model_base as mb\r\n        \r\n        type_map = [\r\n            (mb.SDXLRefiner, \"sdxl\"),\r\n            (mb.SDXL, \"sdxl\"),\r\n            (mb.SD15_instructpix2pix, \"stable_diffusion\"),\r\n            (mb.SDXL_instructpix2pix, \"sdxl\"),\r\n            (mb.StableCascade_C, \"stable_cascade\"),\r\n            (mb.StableCascade_B, \"stable_cascade\"),\r\n            (mb.Flux, \"flux\"),\r\n            (mb.LTXV, \"ltxv\"),\r\n            (mb.HunyuanDiT, \"hunyuan_dit\"),\r\n            (mb.HunyuanVideo, \"hunyuan_video\"),\r\n            (mb.HunyuanVideoI2V, \"hunyuan_video\"),\r\n            (mb.HunyuanVideoSkyreelsI2V, \"hunyuan_video\"),\r\n            (mb.PixArt, \"pixart\"),\r\n            (mb.CosmosVideo, \"cosmos\"),\r\n            (mb.Lumina2, \"lumina2\"),\r\n            (mb.WAN21, \"wan\"),\r\n            (mb.WAN21_Vace, \"wan\"),\r\n            (mb.WAN21_Camera, \"wan\"),\r\n            (mb.HiDream, \"hidream\"),\r\n            (mb.Chroma, \"chroma\"),\r\n            (mb.ACEStep, \"ace\"),\r\n            (mb.SD3, \"sd3\"),\r\n            (mb.GenmoMochi, \"mochi\"),\r\n        ]\r\n\r\n        for cls, clip_type in type_map:\r\n            if isinstance(model, cls):\r\n                return clip_type.upper()\r\n\r\n        # fallback\r\n        known_types = {\r\n            \"stable_diffusion\", \"stable_cascade\", \"sd3\", \"stable_audio\", \"hunyuan_dit\", \"flux\", \"mochi\", \"ltxv\",\r\n            \"hunyuan_video\", \"pixart\", 
\"cosmos\", \"lumina2\", \"wan\", \"hidream\", \"chroma\", \"ace\"\r\n        }\r\n\r\n        class_name = model.__class__.__name__.lower()\r\n        for t in known_types:\r\n            if t in class_name:\r\n                return t.upper()\r\n\r\n        default_clip_type = \"stable_diffusion\"\r\n        return default_clip_type.upper()\r\n\r\n    @staticmethod\r\n    def get_model_files():\r\n        return [f for f in folder_paths.get_filename_list(\"checkpoints\") + \r\n                folder_paths.get_filename_list(\"diffusion_models\") \r\n                if f.endswith((\".ckpt\", \".safetensors\", \".sft\", \".pt\"))]\r\n\r\n    @staticmethod\r\n    def get_weight_options():\r\n        return [\"default\", \"fp8_e4m3fn\", \"fp8_e4m3fn_fast\", \"fp8_e5m2\"]\r\n\r\n    @staticmethod\r\n    def get_clip_options():\r\n        return [\".use_ckpt_clip\"] + folder_paths.get_filename_list(\"text_encoders\")\r\n\r\n    @staticmethod\r\n    def vae_list():\r\n        vaes = folder_paths.get_filename_list(\"vae\")\r\n        approx_vaes = folder_paths.get_filename_list(\"vae_approx\")\r\n        sdxl_taesd_enc = False\r\n        sdxl_taesd_dec = False\r\n        sd1_taesd_enc = False\r\n        sd1_taesd_dec = False\r\n        sd3_taesd_enc = False\r\n        sd3_taesd_dec = False\r\n        f1_taesd_enc = False\r\n        f1_taesd_dec = False\r\n\r\n        for v in approx_vaes:\r\n            if v.startswith(\"taesd_decoder.\"):\r\n                sd1_taesd_dec = True\r\n            elif v.startswith(\"taesd_encoder.\"):\r\n                sd1_taesd_enc = True\r\n            elif v.startswith(\"taesdxl_decoder.\"):\r\n                sdxl_taesd_dec = True\r\n            elif v.startswith(\"taesdxl_encoder.\"):\r\n                sdxl_taesd_enc = True\r\n            elif v.startswith(\"taesd3_decoder.\"):\r\n                sd3_taesd_dec = True\r\n            elif v.startswith(\"taesd3_encoder.\"):\r\n                sd3_taesd_enc = True\r\n            elif v.startswith(\"taef1_encoder.\"):\r\n                f1_taesd_enc = True\r\n            elif v.startswith(\"taef1_decoder.\"):\r\n                f1_taesd_dec = True\r\n\r\n        if sd1_taesd_dec and sd1_taesd_enc:\r\n            vaes.append(\"taesd\")\r\n        if sdxl_taesd_dec and sdxl_taesd_enc:\r\n            vaes.append(\"taesdxl\")\r\n        if sd3_taesd_dec and sd3_taesd_enc:\r\n            vaes.append(\"taesd3\")\r\n        if f1_taesd_dec and f1_taesd_enc:\r\n            vaes.append(\"taef1\")\r\n        return vaes\r\n\r\n    def process_weight_dtype(self, weight_dtype):\r\n        model_options = {}\r\n        if weight_dtype == \"fp8_e4m3fn\":\r\n            model_options[\"dtype\"] = torch.float8_e4m3fn\r\n        elif weight_dtype == \"fp8_e4m3fn_fast\":\r\n            model_options[\"dtype\"] = torch.float8_e4m3fn\r\n            model_options[\"fp8_optimizations\"] = True\r\n        elif weight_dtype == \"fp8_e5m2\":\r\n            model_options[\"dtype\"] = torch.float8_e5m2\r\n        return model_options\r\n\r\n    def load_checkpoint(self, model_name, output_vae, output_clip, model_options):\r\n        try:\r\n            ckpt_path = folder_paths.get_full_path_or_raise(\"checkpoints\", model_name)\r\n            \r\n            out = None\r\n            try:\r\n                out = comfy.sd.load_checkpoint_guess_config(\r\n                    ckpt_path, \r\n                    output_vae=output_vae,\r\n                    output_clip=output_clip,\r\n                    
embedding_directory=folder_paths.get_folder_paths(\"embeddings\"),\r\n                    model_options=model_options\r\n                )\r\n            except RuntimeError as e:\r\n                if \"ERROR: Could not detect model type of:\" in str(e):\r\n                    error_msg = \"\"\r\n                    if output_vae is True:\r\n                        error_msg += \"Model/Checkpoint file does not contain a VAE\\n\"\r\n                    if output_clip is True:\r\n                        error_msg += \"Model/Checkpoint file does not contain a CLIP\\n\"\r\n                    if error_msg != \"\":\r\n                        raise ValueError(error_msg)\r\n                    else:\r\n                        out = (comfy.sd.load_diffusion_model(ckpt_path, model_options),)\r\n                else:\r\n                    raise\r\n            \r\n            return out\r\n            \r\n        except FileNotFoundError:\r\n            ckpt_path = folder_paths.get_full_path_or_raise(\"diffusion_models\", model_name)\r\n\r\n            model = comfy.sd.load_diffusion_model(ckpt_path, model_options=model_options)\r\n            return (model, )\r\n\r\n\r\n\r\n    def load_vae(self, vae_name, ckpt_out):\r\n        if vae_name == \".use_ckpt_vae\":\r\n            if ckpt_out[2] is None:\r\n                raise ValueError(\"Model does not have a VAE\")\r\n            return ckpt_out[2]\r\n        elif vae_name in [\"taesd\", \"taesdxl\", \"taesd3\", \"taef1\"]:\r\n            sd = self.load_taesd(vae_name)\r\n            return comfy.sd.VAE(sd=sd)\r\n        elif vae_name == \".none\":\r\n            return None\r\n        else:\r\n            vae_path = folder_paths.get_full_path_or_raise(\"vae\", vae_name)\r\n            sd = comfy.utils.load_torch_file(vae_path)\r\n            return comfy.sd.VAE(sd=sd)\r\n\r\n\r\ndef load_clipvision(ckpt_path):\r\n    # comfy.clip_vision.load reads the file itself; no need to load the state dict separately\r\n    clip_vision = comfy.clip_vision.load(ckpt_path)\r\n    return clip_vision\r\n    \r\nclass FluxLoader(BaseModelLoader):\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": {\r\n            \"model_name\":       (s.get_model_files(),),\r\n            \"weight_dtype\":     (s.get_weight_options(),),\r\n            \"clip_name1\":       (s.get_clip_options(),),\r\n            \"clip_name2_opt\":   ([\".none\"]         + folder_paths.get_filename_list(\"text_encoders\"),),\r\n            \"vae_name\":         ([\".use_ckpt_vae\"] + s.vae_list(),),\r\n            \"clip_vision_name\": ([\".none\"]         + folder_paths.get_filename_list(\"clip_vision\"),),\r\n            \"style_model_name\": ([\".none\"]         + folder_paths.get_filename_list(\"style_models\"),),\r\n        }}\r\n\r\n    RETURN_TYPES = (\"MODEL\", \"CLIP\", \"VAE\", \"CLIP_VISION\", \"STYLE_MODEL\")\r\n    RETURN_NAMES = (\"model\", \"clip\", \"vae\", \"clip_vision\", \"style_model\")\r\n    FUNCTION = \"main\"\r\n    CATEGORY = \"RES4LYF/loaders\"\r\n\r\n    def main(self, model_name, weight_dtype, clip_name1, clip_name2_opt, vae_name, clip_vision_name, style_model_name):\r\n        model_options = self.process_weight_dtype(weight_dtype)\r\n        \r\n        torch.manual_seed(42)\r\n        torch.cuda.manual_seed_all(42)\r\n\r\n        if clip_name1 == \".use_ckpt_clip\" and clip_name2_opt != \".none\":\r\n            raise ValueError(\"Cannot specify both \\\".use_ckpt_clip\\\" and another clip\")\r\n        \r\n        output_vae = vae_name == \".use_ckpt_vae\"\r\n        
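# pull the embedded VAE/CLIP out of the checkpoint only when they were explicitly selected above\r\n        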
output_clip = clip_name1 == \".use_ckpt_clip\"\r\n        ckpt_out = self.load_checkpoint(model_name, output_vae, output_clip, model_options)\r\n\r\n        if clip_name1 == \".use_ckpt_clip\":\r\n            if ckpt_out[1] is None:\r\n                raise ValueError(\"Model does not have a clip\")\r\n            clip = ckpt_out[1]\r\n        else:\r\n            clip_paths = [folder_paths.get_full_path_or_raise(\"text_encoders\", clip_name1)]\r\n            if clip_name2_opt != \".none\":\r\n                clip_paths.append(folder_paths.get_full_path_or_raise(\"text_encoders\", clip_name2_opt))\r\n            clip = comfy.sd.load_clip(clip_paths, \r\n                                    embedding_directory=folder_paths.get_folder_paths(\"embeddings\"),\r\n                                    clip_type=comfy.sd.CLIPType.FLUX)\r\n\r\n        clip_vision = None if clip_vision_name == \".none\" else \\\r\n            load_clipvision(folder_paths.get_full_path_or_raise(\"clip_vision\", clip_vision_name))\r\n\r\n        style_model = None if style_model_name == \".none\" else \\\r\n            comfy.sd.load_style_model(folder_paths.get_full_path_or_raise(\"style_models\", style_model_name))\r\n            \r\n        vae = self.load_vae(vae_name, ckpt_out)\r\n        \r\n        return (ckpt_out[0], clip, vae, clip_vision, style_model)\r\n\r\nclass SD35Loader(BaseModelLoader):\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": {\r\n            \"model_name\": (s.get_model_files(),),\r\n            \"weight_dtype\": (s.get_weight_options(),),\r\n            \"clip_name1\": (s.get_clip_options(),),\r\n            \"clip_name2_opt\": ([\".none\"] + folder_paths.get_filename_list(\"text_encoders\"),),\r\n            \"clip_name3_opt\": ([\".none\"] + folder_paths.get_filename_list(\"text_encoders\"),),\r\n            \"vae_name\": ([\".use_ckpt_vae\"] + folder_paths.get_filename_list(\"vae\") + [\"taesd\", \"taesdxl\", \"taesd3\", \"taef1\"],),\r\n        }}\r\n\r\n    RETURN_TYPES = (\"MODEL\", \"CLIP\", \"VAE\")\r\n    RETURN_NAMES = (\"model\", \"clip\", \"vae\")\r\n    FUNCTION = \"main\"\r\n    CATEGORY = \"RES4LYF/loaders\"\r\n    \r\n    def main(self, model_name, weight_dtype, clip_name1, clip_name2_opt, clip_name3_opt, vae_name):\r\n        model_options = self.process_weight_dtype(weight_dtype)\r\n        \r\n        torch.manual_seed(42)\r\n        torch.cuda.manual_seed_all(42)\r\n        \r\n        if clip_name1 == \".use_ckpt_clip\" and (clip_name2_opt != \".none\" or clip_name3_opt != \".none\"):\r\n            raise ValueError(\"Cannot specify both \\\".use_ckpt_clip\\\" and another clip\")\r\n\r\n        output_vae = vae_name == \".use_ckpt_vae\"\r\n        output_clip = clip_name1 == \".use_ckpt_clip\"\r\n        ckpt_out = self.load_checkpoint(model_name, output_vae, output_clip, model_options)\r\n\r\n        if clip_name1 == \".use_ckpt_clip\":\r\n            if ckpt_out[1] is None:\r\n                raise ValueError(\"Model does not have a clip\")\r\n            clip = ckpt_out[1]\r\n        else:\r\n            clip_paths = [folder_paths.get_full_path_or_raise(\"text_encoders\", clip_name1)]\r\n            for clip_name in [clip_name2_opt, clip_name3_opt]:\r\n                if clip_name != \".none\":\r\n                    clip_paths.append(folder_paths.get_full_path_or_raise(\"text_encoders\", clip_name))\r\n            clip = comfy.sd.load_clip(clip_paths,\r\n                                    
embedding_directory=folder_paths.get_folder_paths(\"embeddings\"),\r\n                                    clip_type=comfy.sd.CLIPType.SD3)\r\n\r\n        vae = self.load_vae(vae_name, ckpt_out)\r\n\r\n        return (ckpt_out[0], clip, vae)\r\n    \r\nclass RES4LYFModelLoader(BaseModelLoader):\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": {\r\n            \"model_name\":     (s.get_model_files(),),\r\n            \"weight_dtype\":   (s.get_weight_options(),),\r\n            \"clip_name1_opt\": ([\".none\"] + s.get_clip_options(),),\r\n            \"clip_name2_opt\": ([\".none\"] + folder_paths.get_filename_list(\"text_encoders\"),),\r\n            \"clip_name3_opt\": ([\".none\"] + folder_paths.get_filename_list(\"text_encoders\"),),\r\n            \"clip_name4_opt\": ([\".none\"] + folder_paths.get_filename_list(\"text_encoders\"),),\r\n            \"clip_type\":      ([\".auto\"] + clip_types,),\r\n            \"vae_name\":       ([\".none\", \".use_ckpt_vae\"] + folder_paths.get_filename_list(\"vae\") + [\"taesd\", \"taesdxl\", \"taesd3\", \"taef1\"],),\r\n        }}\r\n\r\n    RETURN_TYPES = (\"MODEL\", \"CLIP\", \"VAE\")\r\n    RETURN_NAMES = (\"model\", \"clip\", \"vae\")\r\n    FUNCTION = \"main\"\r\n    CATEGORY = \"RES4LYF/loaders\"\r\n    \r\n    def main(self, model_name, weight_dtype, clip_name1_opt, clip_name2_opt, clip_name3_opt, clip_name4_opt, clip_type, vae_name):\r\n        model_options = self.process_weight_dtype(weight_dtype)\r\n        \r\n        torch.manual_seed(42)\r\n        torch.cuda.manual_seed_all(42)\r\n\r\n        if clip_name1_opt == \".use_ckpt_clip\" and (clip_name2_opt != \".none\" or clip_name3_opt != \".none\" or clip_name4_opt != \".none\"):\r\n            raise ValueError(\"Cannot specify both \\\".use_ckpt_clip\\\" and another clip\")\r\n\r\n        output_vae = vae_name == \".use_ckpt_vae\"\r\n        output_clip = clip_name1_opt == \".use_ckpt_clip\"\r\n        ckpt_out = self.load_checkpoint(model_name, output_vae, output_clip, model_options)\r\n\r\n        if clip_name1_opt == \".use_ckpt_clip\":\r\n            if ckpt_out[1] is None:\r\n                raise ValueError(\"Model does not have a clip\")\r\n            clip = ckpt_out[1]\r\n        elif clip_name1_opt == \".none\":\r\n            clip = None\r\n        else:\r\n            clip_paths = [folder_paths.get_full_path_or_raise(\"text_encoders\", clip_name1_opt)]\r\n            for clip_name in [clip_name2_opt, clip_name3_opt, clip_name4_opt]:\r\n                if clip_name != \".none\":\r\n                    clip_paths.append(folder_paths.get_full_path_or_raise(\"text_encoders\", clip_name))\r\n            if \"auto\" in clip_type and ckpt_out[0].model is not None:\r\n                sdCLIPType = getattr(comfy.sd.CLIPType, self.guess_clip_type(ckpt_out[0].model), comfy.sd.CLIPType.STABLE_DIFFUSION)\r\n            else:\r\n                sdCLIPType = getattr(comfy.sd.CLIPType, clip_type.upper(), comfy.sd.CLIPType.STABLE_DIFFUSION)\r\n            clip = comfy.sd.load_clip(clip_paths,\r\n                                    embedding_directory=folder_paths.get_folder_paths(\"embeddings\"),\r\n                                    clip_type=sdCLIPType)\r\n\r\n        vae = self.load_vae(vae_name, ckpt_out)\r\n\r\n        return (ckpt_out[0], clip, vae)\r\n\r\nfrom .style_transfer import Retrojector\r\nimport torch.nn as nn\r\n\r\nclass LayerPatcher:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": {\r\n            \"model\":  
     (\"MODEL\",),\r\n            \"embedder\":    (s.get_model_patches(),),\r\n            \"gates\":       (s.get_model_patches(),),\r\n            \"last_layer\":  (s.get_model_patches(),),\r\n            \"dtype\":       ([\"bfloat16\", \"float16\", \"float32\", \"float64\"],  {\"default\": \"float64\"}),\r\n            #\"retrojector\": (s.get_model_patches(),),\r\n        }}\r\n\r\n    RETURN_TYPES = (\"MODEL\",)\r\n    RETURN_NAMES = (\"model\",)\r\n    FUNCTION = \"main\"\r\n    CATEGORY = \"RES4LYF/patchers\"\r\n    \r\n    @staticmethod\r\n    def get_model_patches():\r\n        return [f for f in folder_paths.get_filename_list(\"diffusion_models\") if f.endswith((\".safetensors\", \".sft\"))]\r\n    \r\n    def main(self, model, embedder, gates, last_layer, retrojector=None, dtype=\"float64\"):\r\n        \r\n        dtype = getattr(torch, dtype)\r\n        \r\n        embedder    = comfy.utils.load_torch_file(folder_paths.get_full_path_or_raise(\"diffusion_models\", embedder))\r\n        last_layer  = comfy.utils.load_torch_file(folder_paths.get_full_path_or_raise(\"diffusion_models\", last_layer))\r\n        #retrojector = comfy.utils.load_torch_file(folder_paths.get_full_path_or_raise(\"diffusion_models\", retrojector))\r\n        \r\n        gates  = comfy.utils.load_torch_file(folder_paths.get_full_path_or_raise(\"diffusion_models\", gates))\r\n        m = model.model.diffusion_model\r\n        \r\n        if embedder:\r\n            m.x_embedder.proj = nn.Linear(\r\n                m.x_embedder.proj.in_features, \r\n                m.x_embedder.proj.out_features, \r\n                bias=True,\r\n                device=m.x_embedder.proj.weight.data.device,\r\n                dtype=dtype\r\n                )\r\n            m.x_embedder.proj.weight.data = embedder['x_embedder.proj.weight'].to(dtype).cuda()\r\n            m.x_embedder.proj.bias.data   = embedder['x_embedder.proj.bias'].to(dtype).cuda()\r\n        \r\n        if gates:\r\n            for key, tensor in gates.items():\r\n                #print(f\"Patching {key} with shape {tensor.shape}\")\r\n                set_nested_attr(model=m, key=key, value=tensor, dtype=dtype)\r\n        \r\n        if last_layer:\r\n            m.final_layer.linear.weight.data = last_layer['final_layer.linear.weight'].to(dtype).cuda()\r\n            m.final_layer.linear.bias.data   = last_layer['final_layer.linear.bias'].to(dtype).cuda()\r\n            m.final_layer.adaLN_modulation[1].weight.data = last_layer['final_layer.adaLN_modulation.1.weight'].to(dtype).cuda()\r\n            m.final_layer.adaLN_modulation[1].bias.data = last_layer['final_layer.adaLN_modulation.1.bias'].to(dtype).cuda()\r\n\r\n        #if retrojector:\r\n        #    m.Retrojector = Retrojector(model.model.diffusion_model.img_in, pinv_dtype=style_dtype, dtype=style_dtype)\r\n        #    m.final_layer.linear.weight.data = last_layer['final_layer.linear.weight']\r\n        #    m.final_layer.linear.bias.data   = last_layer['final_layer.linear.bias']\r\n        #    m.final_layer.adaLN_modulation[1].weight.data = last_layer['final_layer.adaLN_modulation.1.weight']\r\n        #    m.final_layer.adaLN_modulation[1].bias.data = last_layer['final_layer.adaLN_modulation.1.bias']\r\n\r\n        return (model,)\r\n\r\n\r\n\r\ndef set_nested_attr(model, key, value, dtype):\r\n    parts = key.split(\".\")\r\n    attr = model\r\n    for p in parts[:-1]:\r\n        if p.isdigit():\r\n            attr = attr[int(p)]\r\n        else:\r\n            attr = getattr(attr, p)\r\n   
 getattr(attr, parts[-1]).data.copy_(value.to(getattr(attr, parts[-1]).device, dtype=dtype))\r\n\r\n\r\n\r\n"
  },
  {
    "path": "misc_scripts/replace_metadata.py",
    "content": "#!/usr/bin/env python3\n\nimport argparse\nfrom PIL import Image\nfrom PIL.PngImagePlugin import PngInfo\n\ndef extract_metadata(image_path):\n    image = Image.open(image_path)\n    metadata = image.info\n    return metadata\n\ndef replace_metadata(source_image_path, target_image_path, output_image_path):\n    metadata = extract_metadata(source_image_path)\n\n    target_image = Image.open(target_image_path)\n    \n    png_info = PngInfo()\n    for key, value in metadata.items():\n        png_info.add_text(key, str(value))\n    \n    target_image.save(output_image_path, pnginfo=png_info)\n\ndef main():\n    parser = argparse.ArgumentParser(description=\"Copy metadata from one PNG image to another.\")\n    parser.add_argument('source', type=str, help=\"Path to the source PNG image with the metadata.\")\n    parser.add_argument('target', type=str, help=\"Path to the target PNG image to replace metadata.\")\n    parser.add_argument('output', type=str, help=\"Path for the output PNG image with replaced metadata.\")\n\n    args = parser.parse_args()\n\n    replace_metadata(args.source, args.target, args.output)\n\n    print(f\"Metadata from '{args.source}' has been copied to '{args.output}'.\")\n\nif __name__ == \"__main__\":\n    main()\n\n\n\n"
  },
  {
    "path": "models.py",
    "content": "import torch\r\nimport types\r\nfrom typing import Optional, Callable, Tuple, Dict, Any, Union, TYPE_CHECKING, TypeVar\r\nimport re\r\n\r\nimport folder_paths\r\nimport os\r\nimport json\r\nimport math\r\n\r\nimport comfy.samplers\r\nimport comfy.sample\r\nimport comfy.sampler_helpers\r\nimport comfy.utils\r\nimport comfy.model_management\r\n\r\nfrom comfy.cli_args import args\r\n\r\nfrom .flux.redux import ReReduxImageEncoder\r\nfrom comfy.ldm.flux.redux import ReduxImageEncoder\r\n\r\nfrom comfy.ldm.flux.model  import Flux\r\nfrom comfy.ldm.flux.layers import SingleStreamBlock, DoubleStreamBlock\r\n\r\nfrom .flux.model  import ReFlux\r\nfrom .flux.layers import SingleStreamBlock as ReSingleStreamBlock, DoubleStreamBlock as ReDoubleStreamBlock\r\n\r\nfrom comfy.ldm.flux.model  import Flux\r\nfrom comfy.ldm.flux.layers import SingleStreamBlock, DoubleStreamBlock\r\n\r\nfrom comfy.ldm.hidream.model import HiDreamImageTransformer2DModel\r\nfrom comfy.ldm.hidream.model import HiDreamImageBlock, HiDreamImageSingleTransformerBlock, HiDreamImageTransformerBlock, HiDreamAttention\r\n\r\nfrom .hidream.model import HDModel\r\nfrom .hidream.model import HDBlock, HDBlockDouble, HDBlockSingle, HDAttention, HDMoEGate, HDMOEFeedForwardSwiGLU, HDFeedForwardSwiGLU, HDLastLayer\r\n\r\nfrom comfy.ldm.modules.diffusionmodules.mmdit import OpenAISignatureMMDITWrapper, JointBlock\r\nfrom .sd35.mmdit import ReOpenAISignatureMMDITWrapper, ReJointBlock\r\n\r\nfrom comfy.ldm.aura.mmdit import MMDiT, DiTBlock, MMDiTBlock, SingleAttention, DoubleAttention\r\nfrom .aura.mmdit import ReMMDiT, ReDiTBlock, ReMMDiTBlock, ReSingleAttention, ReDoubleAttention\r\n\r\nfrom comfy.ldm.wan.model import WanAttentionBlock, WanI2VCrossAttention, WanModel, WanSelfAttention, WanT2VCrossAttention\r\nfrom .wan.model import ReWanAttentionBlock, ReWanI2VCrossAttention, ReWanModel, ReWanRawSelfAttention, ReWanSelfAttention, ReWanSlidingSelfAttention, ReWanT2VSlidingCrossAttention, ReWanT2VCrossAttention, ReWanT2VRawCrossAttention\r\n\r\nfrom comfy.ldm.chroma.model import Chroma\r\nfrom comfy.ldm.chroma.layers import SingleStreamBlock as ChromaSingleStreamBlock, DoubleStreamBlock as ChromaDoubleStreamBlock\r\n\r\nfrom .chroma.model import ReChroma\r\nfrom .chroma.layers import ReChromaSingleStreamBlock, ReChromaDoubleStreamBlock\r\n\r\nfrom comfy.ldm.lightricks.model import LTXVModel\r\n#from comfy.ldm.chroma.layers import SingleStreamBlock as ChromaSingleStreamBlock, DoubleStreamBlock as ChromaDoubleStreamBlock\r\n\r\nfrom .lightricks.model import ReLTXVModel\r\n#from .chroma.layers import ReChromaSingleStreamBlock, ReChromaDoubleStreamBlock\r\n\r\nfrom comfy.ldm.modules.diffusionmodules.openaimodel import UNetModel, ResBlock\r\nfrom comfy.ldm.modules.attention import SpatialTransformer, BasicTransformerBlock, CrossAttention\r\nfrom .sd.openaimodel import ReUNetModel, ReResBlock\r\nfrom .sd.attention import ReBasicTransformerBlock, ReCrossAttention, ReSpatialTransformer\r\n\r\nfrom .latents import get_orthogonal, get_cosine_similarity\r\nfrom .style_transfer import StyleWCT, WaveletStyleWCT, Retrojector, StyleMMDiT_Model\r\nfrom .res4lyf import RESplain\r\n\r\nfrom .helper import parse_range_string\r\n\r\nfrom comfy.model_sampling import *\r\n\r\nclass PRED:\r\n    TYPE_VP    = {CONST}\r\n    TYPE_VE    = {EPS}\r\n    TYPE_VPRED = {V_PREDICTION, EDM}\r\n    TYPE_X0    = {X0, IMG_TO_IMG}\r\n    \r\n    TYPE_ALL   = TYPE_VP | TYPE_VE | TYPE_VPRED | TYPE_X0\r\n    \r\n    @classmethod\r\n    def get_type(cls, 
model_sampling):\r\n        bases = type(model_sampling).__mro__\r\n        return next((v_type for v_type in bases if v_type in cls.TYPE_ALL), None)\r\n\r\n\r\ndef time_snr_shift_exponential(alpha, t):\r\n    return math.exp(alpha) / (math.exp(alpha) + (1 / t - 1) ** 1.0)\r\n\r\ndef time_snr_shift_linear(alpha, t):\r\n    if alpha == 1.0:\r\n        return t\r\n    return alpha * t / (1 + (alpha - 1) * t)\r\n\r\n\r\nCOMPILE_MODES = [\"default\", \"max-autotune\", \"max-autotune-no-cudagraphs\", \"reduce-overhead\"]\r\n\r\n\r\nclass TorchCompileModels: \r\n    def __init__(self):\r\n        self._compiled = False\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": { \r\n                    \"model\"                   : (\"MODEL\",),\r\n                    \"backend\"                 : ([\"inductor\", \"cudagraphs\"],),\r\n                    \"fullgraph\"               : (\"BOOLEAN\",                    {\"default\": False, \"tooltip\": \"Enable full graph mode\"}),\r\n                    \"mode\"                    : (COMPILE_MODES,                {\"default\": \"default\"}),\r\n                    \"dynamic\"                 : (\"BOOLEAN\",                    {\"default\": False, \"tooltip\": \"Enable dynamic mode\"}),\r\n                    \"dynamo_cache_size_limit\" : (\"INT\",                        {\"default\": 64, \"min\": 0, \"max\": 1024,       \"step\": 1, \"tooltip\": \"torch._dynamo.config.cache_size_limit\"}),\r\n                    \"triton_max_block_x\"      : (\"INT\",                        {\"default\": 0,  \"min\": 0, \"max\": 4294967296, \"step\": 1})\r\n                }}\r\n        \r\n    RETURN_TYPES = (\"MODEL\",)\r\n    RETURN_NAMES = (\"model\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/model_patches\"\r\n\r\n    def main(self,\r\n            model,\r\n            backend       = \"inductor\",\r\n            mode          = \"default\",\r\n            fullgraph     = False,\r\n            dynamic       = False,\r\n            dynamo_cache_size_limit = 64,\r\n            triton_max_block_x = 0,\r\n            ):\r\n        \r\n        m = model.clone()\r\n        diffusion_model = m.get_model_object(\"diffusion_model\")\r\n        torch._dynamo.config.cache_size_limit = dynamo_cache_size_limit\r\n        \r\n        if triton_max_block_x > 0:\r\n            os.environ[\"TRITON_MAX_BLOCK_X\"] = str(triton_max_block_x)\r\n        \r\n        if not self._compiled:\r\n            try:\r\n                if hasattr(diffusion_model, \"double_blocks\"):\r\n                    for i, block in enumerate(diffusion_model.double_blocks):\r\n                        m.add_object_patch(f\"diffusion_model.double_blocks.{i}\", torch.compile(block, mode=mode, dynamic=dynamic, fullgraph=fullgraph, backend=backend))\r\n                    self._compiled = True\r\n                    \r\n                if hasattr(diffusion_model, \"single_blocks\"):\r\n                    for i, block in enumerate(diffusion_model.single_blocks):\r\n                        m.add_object_patch(f\"diffusion_model.single_blocks.{i}\", torch.compile(block, mode=mode, dynamic=dynamic, fullgraph=fullgraph, backend=backend))\r\n                    self._compiled = True\r\n                    \r\n                if hasattr(diffusion_model, \"double_layers\"):\r\n                    for i, block in enumerate(diffusion_model.double_layers):\r\n                        m.add_object_patch(f\"diffusion_model.double_layers.{i}\", torch.compile(block, 
mode=mode, dynamic=dynamic, fullgraph=fullgraph, backend=backend))\r\n                    self._compiled = True\r\n                    \r\n                if hasattr(diffusion_model, \"single_layers\"):\r\n                    for i, block in enumerate(diffusion_model.single_layers):\r\n                        m.add_object_patch(f\"diffusion_model.single_layers.{i}\", torch.compile(block, mode=mode, dynamic=dynamic, fullgraph=fullgraph, backend=backend))\r\n                    self._compiled = True\r\n                    \r\n                    \r\n                    \r\n                if hasattr(diffusion_model, \"double_stream_blocks\"):\r\n                    for i, block in enumerate(diffusion_model.double_stream_blocks):\r\n                        m.add_object_patch(f\"diffusion_model.double_stream_blocks.{i}\", torch.compile(block, mode=mode, dynamic=dynamic, fullgraph=fullgraph, backend=backend))\r\n                    self._compiled = True\r\n                    \r\n                if hasattr(diffusion_model, \"single_stream_blocks\"):\r\n                    for i, block in enumerate(diffusion_model.single_stream_blocks):\r\n                        m.add_object_patch(f\"diffusion_model.single_stream_blocks.{i}\", torch.compile(block, mode=mode, dynamic=dynamic, fullgraph=fullgraph, backend=backend))\r\n                    self._compiled = True\r\n                                        \r\n                                        \r\n                                        \r\n                if hasattr(diffusion_model, \"joint_blocks\"):\r\n                    for i, block in enumerate(diffusion_model.joint_blocks):\r\n                        m.add_object_patch(f\"diffusion_model.joint_blocks.{i}\", torch.compile(block, mode=mode, dynamic=dynamic, fullgraph=fullgraph, backend=backend))\r\n                    self._compiled = True\r\n                    \r\n                if hasattr(diffusion_model, \"blocks\"):\r\n                    for i, block in enumerate(diffusion_model.blocks):\r\n                        m.add_object_patch(f\"diffusion_model.blocks.{i}\", torch.compile(block, mode=mode, dynamic=dynamic, fullgraph=fullgraph, backend=backend))\r\n                    self._compiled = True\r\n                    \r\n                if not self._compiled:\r\n                    raise RuntimeError(\"Model not compiled. Verify that this is a Flux, SD3.5, HiDream, WAN, or Aura model!\")\r\n                \r\n                compile_settings = {\r\n                    \"backend\": backend,\r\n                    \"mode\": mode,\r\n                    \"fullgraph\": fullgraph,\r\n                    \"dynamic\": dynamic,\r\n                }\r\n                \r\n                setattr(m.model, \"compile_settings\", compile_settings)\r\n            except Exception as e:\r\n                raise RuntimeError(\"Failed to compile model. 
Verify that this is a Flux, SD3.5, HiDream, WAN, or Aura model!\") from e\r\n        \r\n        return (m, )\r\n\r\n\r\nclass ReWanPatcherAdvanced:\r\n    def __init__(self):\r\n        self.sliding_window_size = 0\r\n        self.sliding_window_self_attn = \"false\"\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": { \r\n                \"model\"                    : (\"MODEL\",),\r\n                #\"self_attn_blocks\" : (\"STRING\",  {\"default\": \"0,1,2,3,4,5,6,7,8,9,\", \"multiline\": True}),\r\n                \"self_attn_blocks\"         : (\"STRING\",  {\"default\": \"all\", \"multiline\": True}),\r\n                \"cross_attn_blocks\"        : (\"STRING\",  {\"default\": \"all\", \"multiline\": True}),\r\n                \"enable\"                   : (\"BOOLEAN\", {\"default\": True}),\r\n                \"sliding_window_self_attn\" : (['false', 'standard', 'circular'], {\"default\": \"false\"}),\r\n                \"sliding_window_frames\"    : (\"INT\",   {\"default\": 60,   \"min\": 4,    \"max\": 0xffffffffffffffff, \"step\": 4, \"tooltip\": \"Sliding attention window size in video frames; divided by 4 internally to get the window in latent frames.\"}),\r\n            }\r\n        }\r\n    RETURN_TYPES = (\"MODEL\",)\r\n    RETURN_NAMES = (\"model\",)\r\n    CATEGORY     = \"RES4LYF/model_patches\"\r\n    FUNCTION     = \"main\"\r\n\r\n    def main(self, model, self_attn_blocks, cross_attn_blocks, sliding_window_self_attn=\"false\", sliding_window_frames=60, style_dtype=\"float32\", enable=True, force=False):\r\n        \r\n        style_dtype = getattr(torch, style_dtype) if style_dtype != \"default\" else None\r\n        model.model.diffusion_model.style_dtype = style_dtype\r\n        model.model.diffusion_model.proj_weights = None\r\n        model.model.diffusion_model.y0_adain_embed = None\r\n        \r\n        sliding_window_size = sliding_window_frames // 4\r\n        \r\n        self_attn_blocks  = parse_range_string(self_attn_blocks)\r\n        cross_attn_blocks = parse_range_string(cross_attn_blocks)\r\n        \r\n        T2V = type(model.model.model_config) is comfy.supported_models.WAN21_T2V\r\n        \r\n        if (enable or force) and model.model.diffusion_model.__class__ == WanModel:\r\n            m = model.clone()\r\n            m.model.diffusion_model.__class__     = ReWanModel\r\n            m.model.diffusion_model.threshold_inv = False\r\n            \r\n            for i, block in enumerate(m.model.diffusion_model.blocks):\r\n                block.__class__            = ReWanAttentionBlock\r\n                if i in self_attn_blocks:\r\n                    if sliding_window_self_attn != \"false\":\r\n                        block.self_attn.__class__ = ReWanSlidingSelfAttention\r\n                        block.self_attn.winderz = sliding_window_size\r\n                        block.self_attn.winderz_type = sliding_window_self_attn\r\n                    else:\r\n                        block.self_attn.__class__  = ReWanSelfAttention\r\n                        block.self_attn.winderz_type = \"false\"\r\n                else:\r\n                    block.self_attn.__class__  = ReWanRawSelfAttention\r\n                if i in cross_attn_blocks:\r\n                    if T2V:\r\n                        if False: #sliding_window_self_attn != \"false\":\r\n                            block.cross_attn.__class__ = ReWanT2VSlidingCrossAttention\r\n                            block.cross_attn.winderz = sliding_window_size\r\n                     
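       # winderz / winderz_type configure the sliding attention window: size in latent frames, type 'standard' or 'circular'\r\n                     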
       block.cross_attn.winderz_type = sliding_window_self_attn\r\n                        else:\r\n                            block.cross_attn.__class__ = ReWanT2VCrossAttention\r\n\r\n                    else:\r\n                        block.cross_attn.__class__ = ReWanI2VCrossAttention\r\n                block.idx            = i\r\n                block.self_attn.idx  = i\r\n                block.cross_attn.idx = i # 40 total blocks (i == 39)\r\n                \r\n        elif enable and (sliding_window_self_attn != self.sliding_window_self_attn or sliding_window_size != self.sliding_window_size) and model.model.diffusion_model.__class__ == ReWanModel:\r\n            m = model.clone()\r\n            \r\n            for i, block in enumerate(m.model.diffusion_model.blocks):\r\n                if i in self_attn_blocks:\r\n                    block.self_attn.winderz = sliding_window_size\r\n                    block.self_attn.winderz_type = sliding_window_self_attn\r\n        \r\n        elif not enable and model.model.diffusion_model.__class__ == ReWanModel:\r\n            m = model.clone()\r\n            m.model.diffusion_model.__class__ = WanModel\r\n            \r\n            for i, block in enumerate(m.model.diffusion_model.blocks):\r\n                block.__class__            = WanAttentionBlock\r\n                block.self_attn.__class__  = WanSelfAttention\r\n                block.cross_attn.__class__ = WanT2VCrossAttention if T2V else WanI2VCrossAttention\r\n                block.idx       = i\r\n\r\n        elif model.model.diffusion_model.__class__ not in {ReWanModel, WanModel}:\r\n            raise ValueError(\"This node is for enabling regional conditioning for WAN only!\")\r\n        else:\r\n            m = model\r\n        \r\n        return (m,)\r\n    \r\nclass ReWanPatcher(ReWanPatcherAdvanced):\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": {\r\n                \"model\"  : (\"MODEL\",),\r\n                \"enable\" : (\"BOOLEAN\", {\"default\": True}),\r\n            }\r\n        }\r\n\r\n    def main(self, model, enable=True, force=False):\r\n        return super().main(\r\n            model             = model,\r\n            self_attn_blocks  = \"all\",\r\n            cross_attn_blocks = \"all\",\r\n            enable            = enable,\r\n            force             = force\r\n        )    \r\n\r\nclass ReDoubleStreamBlockNoMask(ReDoubleStreamBlock):\r\n    def forward(self, c, mask=None):\r\n        return super().forward(c, mask=None)\r\n    \r\nclass ReSingleStreamBlockNoMask(ReSingleStreamBlock):\r\n    def forward(self, c, mask=None):\r\n        return super().forward(c, mask=None)\r\n\r\nclass ReFluxPatcherAdvanced:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": { \r\n                \"model\"               : (\"MODEL\",),\r\n                \"doublestream_blocks\" : (\"STRING\",  {\"default\": \"all\", \"multiline\": True}),\r\n                \"singlestream_blocks\" : (\"STRING\",  {\"default\": \"all\", \"multiline\": True}),\r\n                \"style_dtype\"         : ([\"default\", \"bfloat16\", \"float16\", \"float32\", \"float64\"],  {\"default\": \"float64\"}),\r\n                \"enable\"              : (\"BOOLEAN\", {\"default\": True}),\r\n            }\r\n        }\r\n    RETURN_TYPES = (\"MODEL\",)\r\n    RETURN_NAMES = (\"model\",)\r\n    CATEGORY     = \"RES4LYF/model_patches\"\r\n    FUNCTION     = \"main\"\r\n\r\n    def main(self, model, doublestream_blocks, singlestream_blocks, 
style_dtype, enable=True, force=False):\r\n        \r\n        doublestream_blocks = parse_range_string(doublestream_blocks)\r\n        singlestream_blocks = parse_range_string(singlestream_blocks)\r\n        \r\n        style_dtype = getattr(torch, style_dtype) if style_dtype != \"default\" else None\r\n        \r\n        model.model.diffusion_model.style_dtype = style_dtype\r\n        model.model.diffusion_model.proj_weights = None\r\n        model.model.diffusion_model.y0_adain_embed = None\r\n        model.model.diffusion_model.adain_pw_cache = None\r\n        \r\n        model.model.diffusion_model.StyleWCT = StyleWCT()\r\n        model.model.diffusion_model.Retrojector = Retrojector(model.model.diffusion_model.img_in, pinv_dtype=style_dtype, dtype=style_dtype)\r\n        \r\n        if (enable or force) and model.model.diffusion_model.__class__ == Flux:\r\n            m = model.clone()\r\n            m.model.diffusion_model.__class__     = ReFlux\r\n            m.model.diffusion_model.threshold_inv = False\r\n            \r\n            for i, block in enumerate(m.model.diffusion_model.double_blocks):\r\n                if i in doublestream_blocks:\r\n                    block.__class__ = ReDoubleStreamBlock\r\n                else:\r\n                    block.__class__ = ReDoubleStreamBlockNoMask\r\n                block.idx       = i\r\n\r\n            for i, block in enumerate(m.model.diffusion_model.single_blocks):\r\n                if i in singlestream_blocks:\r\n                    block.__class__ = ReSingleStreamBlock\r\n                else:\r\n                    block.__class__ = ReSingleStreamBlockNoMask\r\n                block.idx       = i\r\n                \r\n        \r\n        elif not enable and model.model.diffusion_model.__class__ == ReFlux:\r\n            m = model.clone()\r\n            m.model.diffusion_model.__class__ = Flux\r\n            \r\n            for i, block in enumerate(m.model.diffusion_model.double_blocks):\r\n                block.__class__ = DoubleStreamBlock\r\n                block.idx       = i\r\n\r\n            for i, block in enumerate(m.model.diffusion_model.single_blocks):\r\n                block.__class__ = SingleStreamBlock\r\n                block.idx       = i\r\n                \r\n        #elif model.model.diffusion_model.__class__ != Flux and model.model.diffusion_model.__class__ != ReFlux:\r\n        elif model.model.diffusion_model.__class__ not in {ReFlux, Flux}:\r\n            raise ValueError(\"This node is for enabling regional conditioning for Flux only!\")\r\n        else:\r\n            m = model\r\n        \r\n        return (m,)\r\n    \r\nclass ReFluxPatcher(ReFluxPatcherAdvanced):\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": {\r\n                \"model\"       : (\"MODEL\",),\r\n                \"style_dtype\" : ([\"default\", \"bfloat16\", \"float16\", \"float32\", \"float64\"],  {\"default\": \"float64\"}),\r\n                \"enable\"      : (\"BOOLEAN\", {\"default\": True}),\r\n            }\r\n        }\r\n\r\n    def main(self, model, style_dtype=\"float32\", enable=True, force=False):\r\n        return super().main(\r\n            model               = model,\r\n            doublestream_blocks = \"all\",\r\n            singlestream_blocks = \"all\",\r\n            style_dtype         = style_dtype,\r\n            enable              = enable,\r\n            force               = force\r\n        )    \r\n\r\n\r\n\r\n\r\n\r\n\r\nclass 
ReReduxPatcher:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": { \r\n                \"style_model\" : (\"STYLE_MODEL\",),\r\n                \"style_dtype\" : ([\"default\", \"bfloat16\", \"float16\", \"float32\", \"float64\"],  {\"default\": \"float64\"}),\r\n                \"enable\"      : (\"BOOLEAN\", {\"default\": True}),\r\n            }\r\n        }\r\n    RETURN_TYPES = (\"STYLE_MODEL\",)\r\n    RETURN_NAMES = (\"style_model\",)\r\n    CATEGORY     = \"RES4LYF/model_patches\"\r\n    FUNCTION     = \"main\"\r\n    EXPERIMENTAL = True\r\n\r\n    def main(self, style_model, style_dtype, enable=True, force=False):\r\n        \r\n        style_model.model.style_dtype = getattr(torch, style_dtype) if style_dtype != \"default\" else None\r\n        style_model.model.proj_weights = None\r\n        style_model.model.y0_adain_embed = None\r\n                \r\n        if (enable or force) and style_model.model.__class__ == ReduxImageEncoder:\r\n            m = style_model#.clone()\r\n            m.model.__class__     = ReReduxImageEncoder\r\n            m.model.threshold_inv = False\r\n        \r\n        elif not enable and style_model.model.__class__ == ReReduxImageEncoder:\r\n            m = style_model#.clone()\r\n            m.model.__class__ = ReduxImageEncoder\r\n            \r\n        elif style_model.model.__class__ not in {ReReduxImageEncoder, ReduxImageEncoder}:\r\n            raise ValueError(\"This node is for enabling style conditioning for Redux only!\")\r\n        else:\r\n            m = style_model\r\n        \r\n        return (m,)\r\n\r\n\r\n\r\nclass ReChromaDoubleStreamBlockNoMask(ReChromaDoubleStreamBlock):\r\n    def forward(self, c, mask=None):\r\n        return super().forward(c, mask=None)\r\n    \r\nclass ReChromaSingleStreamBlockNoMask(ReChromaSingleStreamBlock):\r\n    def forward(self, c, mask=None):\r\n        return super().forward(c, mask=None)\r\n\r\nclass ReChromaPatcherAdvanced:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": { \r\n                \"model\"               : (\"MODEL\",),\r\n                \"doublestream_blocks\" : (\"STRING\",  {\"default\": \"all\", \"multiline\": True}),\r\n                \"singlestream_blocks\" : (\"STRING\",  {\"default\": \"all\", \"multiline\": True}),\r\n                \"style_dtype\"         : ([\"default\", \"bfloat16\", \"float16\", \"float32\", \"float64\"],  {\"default\": \"float64\"}),\r\n                \"enable\"              : (\"BOOLEAN\", {\"default\": True}),\r\n            }\r\n        }\r\n    RETURN_TYPES = (\"MODEL\",)\r\n    RETURN_NAMES = (\"model\",)\r\n    CATEGORY     = \"RES4LYF/model_patches\"\r\n    FUNCTION     = \"main\"\r\n\r\n    def main(self, model, doublestream_blocks, singlestream_blocks, style_dtype, enable=True, force=False):\r\n        \r\n        doublestream_blocks = parse_range_string(doublestream_blocks)\r\n        singlestream_blocks = parse_range_string(singlestream_blocks)\r\n        \r\n        style_dtype = getattr(torch, style_dtype) if style_dtype != \"default\" else None\r\n        model.model.diffusion_model.style_dtype = style_dtype\r\n        model.model.diffusion_model.proj_weights = None\r\n        model.model.diffusion_model.y0_adain_embed = None\r\n        \r\n        model.model.diffusion_model.StyleWCT = StyleWCT()\r\n        model.model.diffusion_model.Retrojector = Retrojector(model.model.diffusion_model.img_in, pinv_dtype=style_dtype, 
dtype=style_dtype)\r\n        \r\n        if (enable or force) and model.model.diffusion_model.__class__ == Chroma:\r\n            m = model.clone()\r\n            m.model.diffusion_model.__class__     = ReChroma\r\n            m.model.diffusion_model.threshold_inv = False\r\n            \r\n            for i, block in enumerate(m.model.diffusion_model.double_blocks):\r\n                if i in doublestream_blocks:\r\n                    block.__class__ = ReChromaDoubleStreamBlock\r\n                else:\r\n                    block.__class__ = ReChromaDoubleStreamBlockNoMask\r\n                block.idx       = i\r\n\r\n            for i, block in enumerate(m.model.diffusion_model.single_blocks):\r\n                if i in singlestream_blocks:\r\n                    block.__class__ = ReChromaSingleStreamBlock\r\n                else:\r\n                    block.__class__ = ReChromaSingleStreamBlockNoMask\r\n                block.idx       = i\r\n                \r\n        \r\n        elif not enable and model.model.diffusion_model.__class__ == ReChroma:\r\n            m = model.clone()\r\n            m.model.diffusion_model.__class__ = Chroma\r\n            \r\n            for i, block in enumerate(m.model.diffusion_model.double_blocks):\r\n                block.__class__ = DoubleStreamBlock\r\n                block.idx       = i\r\n\r\n            for i, block in enumerate(m.model.diffusion_model.single_blocks):\r\n                block.__class__ = SingleStreamBlock\r\n                block.idx       = i\r\n                \r\n        #elif model.model.diffusion_model.__class__ != Chroma and model.model.diffusion_model.__class__ != ReChroma:\r\n        elif model.model.diffusion_model.__class__ not in {ReChroma, Chroma}:\r\n            raise ValueError(\"This node is for enabling regional conditioning for Chroma only!\")\r\n        else:\r\n            m = model\r\n        \r\n        return (m,)\r\n    \r\nclass ReChromaPatcher(ReChromaPatcherAdvanced):\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": {\r\n                \"model\"       : (\"MODEL\",),\r\n                \"style_dtype\" : ([\"default\", \"bfloat16\", \"float16\", \"float32\", \"float64\"],  {\"default\": \"float64\"}),\r\n                \"enable\"      : (\"BOOLEAN\", {\"default\": True}),\r\n            }\r\n        }\r\n\r\n    def main(self, model, style_dtype=\"float32\", enable=True, force=False):\r\n        return super().main(\r\n            model               = model,\r\n            doublestream_blocks = \"all\",\r\n            singlestream_blocks = \"all\",\r\n            style_dtype         = style_dtype,\r\n            enable              = enable,\r\n            force               = force\r\n        )    \r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\"\"\"class ReLTXVDoubleStreamBlockNoMask(ReLTXVDoubleStreamBlock):\r\n    def forward(self, c, mask=None):\r\n        return super().forward(c, mask=None)\r\n    \r\nclass ReLTXVSingleStreamBlockNoMask(ReLTXVSingleStreamBlock):\r\n    def forward(self, c, mask=None):\r\n        return super().forward(c, mask=None)\"\"\"\r\n\r\nclass ReLTXVPatcherAdvanced:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": { \r\n                \"model\"               : (\"MODEL\",),\r\n                \"doublestream_blocks\" : (\"STRING\",  {\"default\": \"all\", \"multiline\": True}),\r\n                \"singlestream_blocks\" : (\"STRING\",  {\"default\": \"all\", \"multiline\": True}),\r\n  
              \"style_dtype\"         : ([\"default\", \"bfloat16\", \"float16\", \"float32\", \"float64\"],  {\"default\": \"float64\"}),\r\n                \"enable\"              : (\"BOOLEAN\", {\"default\": True}),\r\n            }\r\n        }\r\n    RETURN_TYPES = (\"MODEL\",)\r\n    RETURN_NAMES = (\"model\",)\r\n    CATEGORY     = \"RES4LYF/model_patches\"\r\n    FUNCTION     = \"main\"\r\n\r\n    def main(self, model, doublestream_blocks, singlestream_blocks, style_dtype, enable=True, force=False):\r\n        \r\n        doublestream_blocks = parse_range_string(doublestream_blocks)\r\n        singlestream_blocks = parse_range_string(singlestream_blocks)\r\n        \r\n        style_dtype = getattr(torch, style_dtype) if style_dtype != \"default\" else None\r\n        model.model.diffusion_model.style_dtype = style_dtype\r\n        model.model.diffusion_model.proj_weights = None\r\n        model.model.diffusion_model.y0_adain_embed = None\r\n        \r\n        model.model.diffusion_model.StyleWCT = StyleWCT()\r\n        model.model.diffusion_model.Retrojector = Retrojector(model.model.diffusion_model.patchify_proj, pinv_dtype=style_dtype, dtype=style_dtype)\r\n        \r\n        if (enable or force) and model.model.diffusion_model.__class__ == LTXVModel:\r\n            m = model.clone()\r\n            m.model.diffusion_model.__class__     = ReLTXVModel\r\n            m.model.diffusion_model.threshold_inv = False\r\n            \r\n            \"\"\"for i, block in enumerate(m.model.diffusion_model.double_blocks):\r\n                if i in doublestream_blocks:\r\n                    block.__class__ = ReChromaDoubleStreamBlock\r\n                else:\r\n                    block.__class__ = ReChromaDoubleStreamBlockNoMask\r\n                block.idx       = i\r\n\r\n            for i, block in enumerate(m.model.diffusion_model.single_blocks):\r\n                if i in singlestream_blocks:\r\n                    block.__class__ = ReChromaSingleStreamBlock\r\n                else:\r\n                    block.__class__ = ReChromaSingleStreamBlockNoMask\r\n                block.idx       = i\"\"\"\r\n                \r\n        \r\n        elif not enable and model.model.diffusion_model.__class__ == ReLTXVModel:\r\n            m = model.clone()\r\n            m.model.diffusion_model.__class__ = LTXVModel\r\n            \r\n            \"\"\"for i, block in enumerate(m.model.diffusion_model.double_blocks):\r\n                block.__class__ = DoubleStreamBlock\r\n                block.idx       = i\r\n\r\n            for i, block in enumerate(m.model.diffusion_model.single_blocks):\r\n                block.__class__ = SingleStreamBlock\r\n                block.idx       = i\"\"\"\r\n                \r\n        #elif model.model.diffusion_model.__class__ != LTXVModel and model.model.diffusion_model.__class__ != ReLTXVModel:\r\n        elif model.model.diffusion_model.__class__ not in {ReLTXVModel, LTXVModel}:\r\n            raise ValueError(\"This node is for enabling regional conditioning for LTXV only!\")\r\n        else:\r\n            m = model\r\n        \r\n        return (m,)\r\n    \r\nclass ReLTXVPatcher(ReLTXVPatcherAdvanced):\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": {\r\n                \"model\"       : (\"MODEL\",),\r\n                \"style_dtype\" : ([\"default\", \"bfloat16\", \"float16\", \"float32\", \"float64\"],  {\"default\": \"float64\"}),\r\n                \"enable\"      : (\"BOOLEAN\", {\"default\": 
True}),\r\n            }\r\n        }\r\n\r\n    def main(self, model, style_dtype=\"float32\", enable=True, force=False):\r\n        return super().main(\r\n            model               = model,\r\n            doublestream_blocks = \"all\",\r\n            singlestream_blocks = \"all\",\r\n            style_dtype         = style_dtype,\r\n            enable              = enable,\r\n            force               = force\r\n        )    \r\n\r\n\r\n\r\n\r\n\r\nclass ReSDPatcherAdvanced:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": { \r\n                \"model\"               : (\"MODEL\",),\r\n                \"doublestream_blocks\" : (\"STRING\",  {\"default\": \"all\", \"multiline\": True}),\r\n                \"singlestream_blocks\" : (\"STRING\",  {\"default\": \"all\", \"multiline\": True}),\r\n                \"style_dtype\"         : ([\"default\", \"bfloat16\", \"float16\", \"float32\", \"float64\"],  {\"default\": \"float64\"}),\r\n                \"enable\"              : (\"BOOLEAN\", {\"default\": True}),\r\n            }\r\n        }\r\n    RETURN_TYPES = (\"MODEL\",)\r\n    RETURN_NAMES = (\"model\",)\r\n    CATEGORY     = \"RES4LYF/model_patches\"\r\n    FUNCTION     = \"main\"\r\n    #EXPERIMENTAL = True\r\n\r\n    def main(self, model, doublestream_blocks, singlestream_blocks, style_dtype, enable=True, force=False):\r\n        \r\n        doublestream_blocks = parse_range_string(doublestream_blocks)\r\n        singlestream_blocks = parse_range_string(singlestream_blocks)\r\n        \r\n        style_dtype = getattr(torch, style_dtype) if style_dtype != \"default\" else None\r\n        model.model.diffusion_model.style_dtype = style_dtype\r\n        model.model.diffusion_model.proj_weights = None\r\n        model.model.diffusion_model.y0_adain_embed = None\r\n        \r\n        model.model.diffusion_model.StyleWCT    = StyleWCT()\r\n        model.model.diffusion_model.Retrojector = Retrojector(model.model.diffusion_model.input_blocks[0][0], pinv_dtype=style_dtype, dtype=style_dtype, patch_size=1)\r\n        \r\n        if (enable or force) and model.model.diffusion_model.__class__ == UNetModel:\r\n            m = model.clone()\r\n            m.model.diffusion_model.__class__     = ReUNetModel\r\n            m.model.diffusion_model.threshold_inv = False\r\n                \r\n            for i in range(len(m.model.diffusion_model.input_blocks)):\r\n                for j in range(len(m.model.diffusion_model.input_blocks[i])):\r\n                    if isinstance(m.model.diffusion_model.input_blocks[i][j], ResBlock):\r\n                        m.model.diffusion_model.input_blocks[i][j].__class__ = ReResBlock\r\n                    if isinstance(m.model.diffusion_model.input_blocks[i][j], SpatialTransformer):\r\n                        m.model.diffusion_model.input_blocks[i][j].__class__ = ReSpatialTransformer\r\n                        for k in range(len(m.model.diffusion_model.input_blocks[i][j].transformer_blocks)):\r\n                            m.model.diffusion_model.input_blocks[i][j].transformer_blocks[k].__class__ = ReBasicTransformerBlock\r\n                            m.model.diffusion_model.input_blocks[i][j].transformer_blocks[k].attn1.__class__ = ReCrossAttention\r\n                            m.model.diffusion_model.input_blocks[i][j].transformer_blocks[k].attn2.__class__ = ReCrossAttention\r\n        \r\n            #m.model.diffusion_model.middle_block[1].transformer_blocks[0].__class__ = 
ReBasicTransformerBlock\r\n            for i in range(len(m.model.diffusion_model.middle_block)):\r\n                if isinstance(m.model.diffusion_model.middle_block[i], ResBlock):\r\n                    m.model.diffusion_model.middle_block[i].__class__ = ReResBlock\r\n                if isinstance(m.model.diffusion_model.middle_block[i], SpatialTransformer):\r\n                    m.model.diffusion_model.middle_block[i].__class__ = ReSpatialTransformer\r\n                    for k in range(len(m.model.diffusion_model.middle_block[i].transformer_blocks)):\r\n                        m.model.diffusion_model.middle_block[i].transformer_blocks[k].__class__ = ReBasicTransformerBlock\r\n                        m.model.diffusion_model.middle_block[i].transformer_blocks[k].attn1.__class__ = ReCrossAttention\r\n                        m.model.diffusion_model.middle_block[i].transformer_blocks[k].attn2.__class__ = ReCrossAttention\r\n\r\n            for i in range(len(m.model.diffusion_model.output_blocks)):\r\n                for j in range(len(m.model.diffusion_model.output_blocks[i])):\r\n                    if isinstance(m.model.diffusion_model.output_blocks[i][j], ResBlock):\r\n                        m.model.diffusion_model.output_blocks[i][j].__class__ = ReResBlock\r\n                    if isinstance(m.model.diffusion_model.output_blocks[i][j], SpatialTransformer):\r\n                        m.model.diffusion_model.output_blocks[i][j].__class__ = ReSpatialTransformer\r\n                        for k in range(len(m.model.diffusion_model.output_blocks[i][j].transformer_blocks)):\r\n                            m.model.diffusion_model.output_blocks[i][j].transformer_blocks[k].__class__ = ReBasicTransformerBlock\r\n                            m.model.diffusion_model.output_blocks[i][j].transformer_blocks[k].attn1.__class__ = ReCrossAttention\r\n                            m.model.diffusion_model.output_blocks[i][j].transformer_blocks[k].attn2.__class__ = ReCrossAttention\r\n\r\n        elif not enable and model.model.diffusion_model.__class__ == ReUNetModel:\r\n            m = model.clone()\r\n            m.model.diffusion_model.__class__ = UNetModel\r\n            \r\n            for i in range(len(m.model.diffusion_model.input_blocks)):\r\n                for j in range(len(m.model.diffusion_model.input_blocks[i])):\r\n                    if isinstance(m.model.diffusion_model.input_blocks[i][j], ReResBlock):\r\n                        m.model.diffusion_model.input_blocks[i][j].__class__ = ResBlock\r\n                    if isinstance(m.model.diffusion_model.input_blocks[i][j], ReSpatialTransformer):\r\n                        m.model.diffusion_model.input_blocks[i][j].__class__ = SpatialTransformer\r\n                        for k in range(len(m.model.diffusion_model.input_blocks[i][j].transformer_blocks)):\r\n                            m.model.diffusion_model.input_blocks[i][j].transformer_blocks[k].__class__ = BasicTransformerBlock\r\n                            m.model.diffusion_model.input_blocks[i][j].transformer_blocks[k].attn1.__class__ = CrossAttention\r\n                            m.model.diffusion_model.input_blocks[i][j].transformer_blocks[k].attn2.__class__ = CrossAttention\r\n        \r\n            #m.model.diffusion_model.middle_block[1].transformer_blocks[0].__class__ = BasicTransformerBlock\r\n            for i in range(len(m.model.diffusion_model.middle_block)):\r\n                if isinstance(m.model.diffusion_model.middle_block[i], ReResBlock):\r\n                    
m.model.diffusion_model.middle_block[i].__class__ = ResBlock\r\n                if isinstance(m.model.diffusion_model.middle_block[i], ReSpatialTransformer):\r\n                    m.model.diffusion_model.middle_block[i].__class__ = SpatialTransformer\r\n                    for k in range(len(m.model.diffusion_model.middle_block[i].transformer_blocks)):\r\n                        m.model.diffusion_model.middle_block[i].transformer_blocks[k].__class__ = BasicTransformerBlock\r\n                        m.model.diffusion_model.middle_block[i].transformer_blocks[k].attn1.__class__ = CrossAttention\r\n                        m.model.diffusion_model.middle_block[i].transformer_blocks[k].attn2.__class__ = CrossAttention\r\n\r\n            for i in range(len(m.model.diffusion_model.output_blocks)):\r\n                for j in range(len(m.model.diffusion_model.output_blocks[i])):\r\n                    if isinstance(m.model.diffusion_model.output_blocks[i][j], ReResBlock):\r\n                        m.model.diffusion_model.output_blocks[i][j].__class__ = ResBlock\r\n                    if isinstance(m.model.diffusion_model.output_blocks[i][j], ReSpatialTransformer):\r\n                        m.model.diffusion_model.output_blocks[i][j].__class__ = SpatialTransformer\r\n                        for k in range(len(m.model.diffusion_model.output_blocks[i][j].transformer_blocks)):\r\n                            m.model.diffusion_model.output_blocks[i][j].transformer_blocks[k].__class__ = BasicTransformerBlock\r\n                            m.model.diffusion_model.output_blocks[i][j].transformer_blocks[k].attn1.__class__ = CrossAttention\r\n                            m.model.diffusion_model.output_blocks[i][j].transformer_blocks[k].attn2.__class__ = CrossAttention\r\n\r\n        #elif model.model.diffusion_model.__class__ != UNetModel and model.model.diffusion_model.__class__ != ReUNetModel:\r\n        elif model.model.diffusion_model.__class__ not in {ReUNetModel, UNetModel}:\r\n            raise ValueError(\"This node is for enabling regional conditioning for SD1.5 and SDXL only!\")\r\n        else:\r\n            m = model\r\n        \r\n        return (m,)\r\n    \r\nclass ReSDPatcher(ReSDPatcherAdvanced):\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": {\r\n                \"model\"       : (\"MODEL\",),\r\n                \"style_dtype\" : ([\"default\", \"bfloat16\", \"float16\", \"float32\", \"float64\"],  {\"default\": \"float64\"}),\r\n                \"enable\"      : (\"BOOLEAN\", {\"default\": True}),\r\n            }\r\n        }\r\n\r\n    def main(self, model, style_dtype=\"float32\", enable=True, force=False):\r\n        return super().main(\r\n            model               = model,\r\n            doublestream_blocks = \"all\",\r\n            singlestream_blocks = \"all\",\r\n            style_dtype         = style_dtype,\r\n            enable              = enable,\r\n            force               = force\r\n        )    \r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nclass HDBlockDoubleNoMask(HDBlockDouble):\r\n    def forward(self, c, mask=None):\r\n        return super().forward(c, mask=None)\r\n    \r\nclass HDBlockSingleNoMask(HDBlockSingle):\r\n    def forward(self, c, mask=None):\r\n        return super().forward(c, mask=None)\r\n\r\n\r\nclass ReHiDreamPatcherAdvanced:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": { \r\n                \"model\"                : (\"MODEL\",),\r\n          
      \"double_stream_blocks\" : (\"STRING\",  {\"default\": \"all\", \"multiline\": True}),\r\n                \"single_stream_blocks\" : (\"STRING\",  {\"default\": \"all\", \"multiline\": True}),\r\n                \"style_dtype\"          : ([\"default\", \"bfloat16\", \"float16\", \"float32\", \"float64\"],  {\"default\": \"float64\"}),\r\n                \"enable\"               : (\"BOOLEAN\", {\"default\": True}),\r\n            }\r\n        }\r\n    RETURN_TYPES = (\"MODEL\",)\r\n    RETURN_NAMES = (\"model\",)\r\n    CATEGORY     = \"RES4LYF/model_patches\"\r\n    FUNCTION     = \"main\"\r\n\r\n    def main(self, model, double_stream_blocks, single_stream_blocks, style_dtype, enable=True, force=False):\r\n        \r\n        double_stream_blocks = parse_range_string(double_stream_blocks)\r\n        single_stream_blocks = parse_range_string(single_stream_blocks)\r\n        \r\n        style_dtype = getattr(torch, style_dtype) if style_dtype != \"default\" else None\r\n        model.model.diffusion_model.style_dtype = style_dtype\r\n        model.model.diffusion_model.proj_weights = None\r\n        model.model.diffusion_model.y0_adain_embed = None\r\n        \r\n        model.model.diffusion_model.StyleWCT    = StyleWCT()\r\n        model.model.diffusion_model.WaveletStyleWCT = WaveletStyleWCT()\r\n        model.model.diffusion_model.Retrojector = Retrojector(model.model.diffusion_model.x_embedder.proj, pinv_dtype=style_dtype, dtype=style_dtype)\r\n        #model.model.diffusion_model.Endojector  = Retrojector(model.model.diffusion_model.final_layer.linear, pinv_dtype=style_dtype, dtype=style_dtype, ENDO=True)\r\n        \r\n        #model.model.diffusion_model.Style = StyleMMDiT_HiDream()\r\n        #model.model.diffusion_model.Style.Retrojector = Retrojector(model.model.diffusion_model.x_embedder.proj, pinv_dtype=style_dtype, dtype=style_dtype)\r\n        \r\n        sort_buffer = {}\r\n        \r\n        if (enable or force) and model.model.diffusion_model.__class__ == HiDreamImageTransformer2DModel:\r\n            m = model.clone()\r\n            m.model.diffusion_model.__class__     = HDModel\r\n            m.model.diffusion_model.threshold_inv = False\r\n            m.model.diffusion_model.final_layer.__class__ = HDLastLayer\r\n            \r\n            m.model.diffusion_model.final_layer.linear.weight.data = m.model.diffusion_model.final_layer.linear.weight.data.to(torch.bfloat16)\r\n            m.model.diffusion_model.final_layer.linear.bias.data = m.model.diffusion_model.final_layer.linear.bias.data.to(torch.bfloat16)\r\n            \r\n            for i, block in enumerate(m.model.diffusion_model.double_stream_blocks):\r\n                block.__class__             = HDBlock\r\n\r\n                if i in double_stream_blocks:\r\n                    block.block.__class__   = HDBlockDouble\r\n                else:\r\n                    block.block.__class__   = HDBlockDoubleNoMask\r\n                    \r\n                block.block.attn1.__class__ = HDAttention\r\n                    \r\n                block.block.ff_i.__class__  = HDMOEFeedForwardSwiGLU\r\n                block.block.ff_i.shared_experts.__class__ = HDFeedForwardSwiGLU\r\n                for j in range(len(block.block.ff_i.experts)):\r\n                    block.block.ff_i.experts[j].__class__ = HDFeedForwardSwiGLU\r\n                block.block.ff_i.gate.__class__ = HDMoEGate\r\n                block.block.ff_t.__class__  = HDFeedForwardSwiGLU\r\n                    \r\n                
block.block.attn1.single_stream = False\r\n                block.block.attn1.double_stream = True\r\n                \r\n                block.block.sort_buffer       = sort_buffer\r\n                block.block.attn1.sort_buffer = sort_buffer\r\n                \r\n                block.idx             = i\r\n                block.block.idx       = i\r\n                block.block.attn1.idx = i\r\n\r\n            for i, block in enumerate(m.model.diffusion_model.single_stream_blocks):\r\n                block.__class__             = HDBlock\r\n\r\n                if i in single_stream_blocks:\r\n                    block.block.__class__       = HDBlockSingle\r\n                else:\r\n                    block.block.__class__       = HDBlockSingleNoMask\r\n\r\n                block.block.attn1.__class__ = HDAttention\r\n                block.block.ff_i.__class__  = HDMOEFeedForwardSwiGLU\r\n                block.block.ff_i.shared_experts.__class__ = HDFeedForwardSwiGLU\r\n                for j in range(len(block.block.ff_i.experts)):\r\n                    block.block.ff_i.experts[j].__class__ = HDFeedForwardSwiGLU\r\n                block.block.ff_i.gate.__class__ = HDMoEGate\r\n                \r\n                block.block.attn1.single_stream = True\r\n                block.block.attn1.double_stream = False\r\n                \r\n                block.block.sort_buffer       = sort_buffer\r\n                block.block.attn1.sort_buffer = sort_buffer\r\n                \r\n                block.idx             = i\r\n                block.block.idx       = i\r\n                block.block.attn1.idx = i\r\n\r\n        elif not enable and model.model.diffusion_model.__class__ == HDModel:\r\n            m = model.clone()\r\n            m.model.diffusion_model.__class__ = HiDreamImageTransformer2DModel\r\n            \r\n            for i, block in enumerate(m.model.diffusion_model.double_stream_blocks):\r\n                if i in double_stream_blocks:\r\n                    block.__class__             = HiDreamImageBlock\r\n                    block.block.__class__       = HiDreamImageTransformerBlock\r\n                    block.block.attn1.__class__ = HiDreamAttention\r\n                block.idx       = i\r\n            \r\n            for i, block in enumerate(m.model.diffusion_model.single_stream_blocks):\r\n                if i in single_stream_blocks:\r\n                    block.__class__             = HiDreamImageBlock\r\n                    block.block.__class__       = HiDreamImageSingleTransformerBlock\r\n                    block.block.attn1.__class__ = HiDreamAttention\r\n                block.idx       = i\r\n                \r\n        #elif model.model.diffusion_model.__class__ != HDModel and model.model.diffusion_model.__class__ != HiDreamImageTransformer2DModel:\r\n        elif model.model.diffusion_model.__class__ not in {HDModel, HiDreamImageTransformer2DModel}:\r\n            raise ValueError(\"This node is for enabling regional conditioning for HiDream only!\")\r\n        else:\r\n            m = model\r\n        \r\n        return (m,)\r\n    \r\nclass ReHiDreamPatcher(ReHiDreamPatcherAdvanced):\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": {\r\n                \"model\"       : (\"MODEL\",),\r\n                \"style_dtype\" : ([\"default\", \"bfloat16\", \"float16\", \"float32\", \"float64\"],  {\"default\": \"float64\"}),\r\n                \"enable\"      : (\"BOOLEAN\", {\"default\": True}),\r\n            
}\r\n        }\r\n\r\n    def main(self, model, style_dtype=\"default\", enable=True, force=False):\r\n        return super().main(\r\n            model                = model,\r\n            double_stream_blocks = \"all\",\r\n            single_stream_blocks = \"all\",\r\n            style_dtype          = style_dtype,\r\n            enable               = enable,\r\n            force                = force\r\n        )    \r\n\r\n\r\n\r\nclass ReJointBlockNoMask(ReJointBlock):\r\n    def forward(self, c, mask=None):\r\n        return super().forward(c, mask=None)\r\n\r\nclass ReSD35PatcherAdvanced:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": { \r\n                \"model\"        : (\"MODEL\",),\r\n                \"joint_blocks\" : (\"STRING\",  {\"default\": \"all\", \"multiline\": True}),\r\n                \"style_dtype\"  : ([\"default\", \"bfloat16\", \"float16\", \"float32\", \"float64\"],  {\"default\": \"float64\"}),\r\n                \"enable\"       : (\"BOOLEAN\", {\"default\": True}),\r\n            }\r\n        }\r\n    RETURN_TYPES = (\"MODEL\",)\r\n    RETURN_NAMES = (\"model\",)\r\n    CATEGORY     = \"RES4LYF/model_patches\"\r\n    FUNCTION     = \"main\"\r\n\r\n    def main(self, model, joint_blocks, style_dtype, enable=True, force=False):\r\n        \r\n        style_dtype = getattr(torch, style_dtype) if style_dtype != \"default\" else None\r\n        model.model.diffusion_model.style_dtype = style_dtype\r\n        model.model.diffusion_model.proj_weights = None\r\n        model.model.diffusion_model.y0_adain_embed = None\r\n        \r\n        model.model.diffusion_model.StyleWCT    = StyleWCT()\r\n        model.model.diffusion_model.Retrojector = Retrojector(model.model.diffusion_model.x_embedder.proj, pinv_dtype=style_dtype, dtype=style_dtype)\r\n        \r\n        joint_blocks = parse_range_string(joint_blocks)\r\n        \r\n        if (enable or force) and model.model.diffusion_model.__class__ == OpenAISignatureMMDITWrapper:\r\n            m = model.clone()\r\n            m.model.diffusion_model.__class__     = ReOpenAISignatureMMDITWrapper\r\n            m.model.diffusion_model.threshold_inv = False\r\n            \r\n            for i, block in enumerate(m.model.diffusion_model.joint_blocks):\r\n                if i in joint_blocks:\r\n                    block.__class__ = ReJointBlock\r\n                else:\r\n                    block.__class__ = ReJointBlockNoMask\r\n                block.idx       = i\r\n\r\n        elif not enable and model.model.diffusion_model.__class__ == ReOpenAISignatureMMDITWrapper:\r\n            m = model.clone()\r\n            m.model.diffusion_model.__class__ = OpenAISignatureMMDITWrapper\r\n            \r\n            for i, block in enumerate(m.model.diffusion_model.joint_blocks):\r\n                block.__class__ = JointBlock\r\n                block.idx       = i\r\n\r\n        elif model.model.diffusion_model.__class__ not in {ReOpenAISignatureMMDITWrapper, OpenAISignatureMMDITWrapper}:\r\n            raise ValueError(\"This node is for enabling regional conditioning for SD3.5 only!\")\r\n        else:\r\n            m = model\r\n        \r\n        return (m,)\r\n    \r\nclass ReSD35Patcher(ReSD35PatcherAdvanced):\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": {\r\n                \"model\"       : (\"MODEL\",),\r\n                \"style_dtype\" : ([\"default\", \"bfloat16\", \"float16\", \"float32\", \"float64\"],  {\"default\": \"float64\"}),\r\n  
              \"enable\"      : (\"BOOLEAN\", {\"default\": True}),\r\n            }\r\n        }\r\n\r\n    def main(self, model, style_dtype=\"float32\", enable=True, force=False):\r\n        return super().main(\r\n            model        = model,\r\n            joint_blocks = \"all\",\r\n            style_dtype  = style_dtype,\r\n            enable       = enable,\r\n            force        = force\r\n        )    \r\n\r\nclass ReDoubleAttentionNoMask(ReDoubleAttention):\r\n    def forward(self, c, mask=None):\r\n        return super().forward(c, mask=None)\r\n    \r\nclass ReSingleAttentionNoMask(ReSingleAttention):\r\n    def forward(self, c, mask=None):\r\n        return super().forward(c, mask=None)\r\n\r\nclass ReAuraPatcherAdvanced:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": { \r\n                \"model\"              : (\"MODEL\",),\r\n                \"doublelayer_blocks\" : (\"STRING\",  {\"default\": \"all\", \"multiline\": True}),\r\n                \"singlelayer_blocks\" : (\"STRING\",  {\"default\": \"all\", \"multiline\": True}),\r\n                \"style_dtype\"        : ([\"default\", \"bfloat16\", \"float16\", \"float32\", \"float64\"],  {\"default\": \"float64\"}),\r\n                \"enable\"             : (\"BOOLEAN\", {\"default\": True}),\r\n            }\r\n        }\r\n    RETURN_TYPES = (\"MODEL\",)\r\n    RETURN_NAMES = (\"model\",)\r\n    CATEGORY     = \"RES4LYF/model_patches\"\r\n    FUNCTION     = \"main\"\r\n\r\n    def main(self, model, doublelayer_blocks, singlelayer_blocks, style_dtype, enable=True, force=False):\r\n        \r\n        doublelayer_blocks = parse_range_string(doublelayer_blocks)\r\n        singlelayer_blocks = parse_range_string(singlelayer_blocks)\r\n        \r\n        style_dtype = getattr(torch, style_dtype) if style_dtype != \"default\" else None\r\n        model.model.diffusion_model.style_dtype = style_dtype\r\n        model.model.diffusion_model.proj_weights = None\r\n        model.model.diffusion_model.y0_adain_embed = None\r\n        \r\n        model.model.diffusion_model.StyleWCT    = StyleWCT()\r\n        model.model.diffusion_model.Retrojector = Retrojector(model.model.diffusion_model.init_x_linear, pinv_dtype=style_dtype, dtype=style_dtype)\r\n        \r\n        if (enable or force) and model.model.diffusion_model.__class__ == MMDiT:\r\n            m = model.clone()\r\n            m.model.diffusion_model.__class__     = ReMMDiT\r\n            m.model.diffusion_model.threshold_inv = False\r\n            \r\n            for i, block in enumerate(m.model.diffusion_model.double_layers):\r\n                block.__class__ = ReMMDiTBlock\r\n                if i in doublelayer_blocks:\r\n                    block.attn.__class__ = ReDoubleAttention\r\n                else:\r\n                    block.attn.__class__ = ReDoubleAttentionNoMask\r\n                block.idx       = i\r\n                \r\n            for i, block in enumerate(m.model.diffusion_model.single_layers):\r\n                block.__class__ = ReDiTBlock\r\n                if i in singlelayer_blocks:\r\n                    block.attn.__class__ = ReSingleAttention\r\n                else:\r\n                    block.attn.__class__ = ReSingleAttentionNoMask\r\n                block.idx       = i\r\n\r\n        elif not enable and model.model.diffusion_model.__class__ == ReMMDiT:\r\n            m = model.clone()\r\n            m.model.diffusion_model.__class__ = MMDiT\r\n            \r\n            for 
i, block in enumerate(m.model.diffusion_model.double_layers):\r\n                block.__class__ = MMDiTBlock\r\n                block.attn.__class__ = DoubleAttention\r\n                block.idx       = i\r\n                \r\n            for i, block in enumerate(m.model.diffusion_model.single_layers):\r\n                block.__class__ = DiTBlock\r\n                block.attn.__class__ = SingleAttention\r\n                block.idx       = i\r\n\r\n        elif model.model.diffusion_model.__class__ not in {ReMMDiT, MMDiT}:\r\n            raise ValueError(\"This node is for enabling regional conditioning for AuraFlow only!\")\r\n        else:\r\n            m = model\r\n        \r\n        return (m,)\r\n\r\nclass ReAuraPatcher(ReAuraPatcherAdvanced):\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": {\r\n                \"model\"       : (\"MODEL\",),\r\n                \"style_dtype\" : ([\"default\", \"bfloat16\", \"float16\", \"float32\", \"float64\"],  {\"default\": \"float64\"}),\r\n                \"enable\"      : (\"BOOLEAN\", {\"default\": True}),\r\n            }\r\n        }\r\n\r\n    def main(self, model, style_dtype=\"float32\", enable=True, force=False):\r\n        return super().main(\r\n            model              = model,\r\n            doublelayer_blocks = \"all\",\r\n            singlelayer_blocks = \"all\",\r\n            style_dtype        = style_dtype,\r\n            enable             = enable,\r\n            force              = force\r\n        )    \r\n\r\n\r\nclass FluxOrthoCFGPatcher:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": { \r\n            \"model\":        (\"MODEL\",),\r\n            \"enable\":       (\"BOOLEAN\", {\"default\": True}),\r\n            \"ortho_T5\":     (\"BOOLEAN\", {\"default\": True}),\r\n            \"ortho_clip_L\": (\"BOOLEAN\", {\"default\": True}),\r\n            \"zero_clip_L\":  (\"BOOLEAN\", {\"default\": True}),\r\n            }\r\n        }\r\n        \r\n    RETURN_TYPES = (\"MODEL\",)\r\n    RETURN_NAMES = (\"model\",)\r\n    CATEGORY     = \"RES4LYF/model_patches\"\r\n    FUNCTION     = \"main\"\r\n    EXPERIMENTAL = True\r\n    \r\n    original_forward = Flux.forward\r\n\r\n    @staticmethod\r\n    def new_forward(self, x, timestep, context, y, guidance, control=None, transformer_options={}, **kwargs):\r\n\r\n        for _ in range(500):\r\n            if self.ortho_T5 and get_cosine_similarity(context[0], context[1]) != 0:\r\n                context[0] = get_orthogonal(context[0], context[1])\r\n            if self.ortho_clip_L and get_cosine_similarity(y[0], y[1]) != 0:\r\n                y[0] = get_orthogonal(y[0].unsqueeze(0), y[1].unsqueeze(0)).squeeze(0)\r\n                \r\n        RESplain(\"postcossim1: \", get_cosine_similarity(context[0], context[1]))\r\n        RESplain(\"postcossim2: \", get_cosine_similarity(y[0], y[1]))\r\n        \r\n        if self.zero_clip_L:\r\n            y[0] = torch.zeros_like(y[0])\r\n        \r\n        return FluxOrthoCFGPatcher.original_forward(self, x, timestep, context, y, guidance, control, transformer_options, **kwargs)\r\n\r\n    def main(self, model, enable=True, ortho_T5=True, ortho_clip_L=True, zero_clip_L=True):\r\n        m = model.clone()\r\n\r\n        if enable:\r\n            m.model.diffusion_model.ortho_T5     = ortho_T5\r\n            m.model.diffusion_model.ortho_clip_L = ortho_clip_L\r\n            m.model.diffusion_model.zero_clip_L  = zero_clip_L\r\n            Flux.forward = 
types.MethodType(FluxOrthoCFGPatcher.new_forward, m.model.diffusion_model)\r\n        else:\r\n            Flux.forward = FluxOrthoCFGPatcher.original_forward\r\n\r\n        return (m,)\r\n    \r\n    \r\n    \r\n    \r\nclass FluxGuidanceDisable:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": { \r\n                \"model\":       (\"MODEL\",),\r\n                \"disable\":     (\"BOOLEAN\", {\"default\": True}),\r\n                \"zero_clip_L\": (\"BOOLEAN\", {\"default\": True}),\r\n            }\r\n        }\r\n        \r\n    RETURN_TYPES = (\"MODEL\",)\r\n    RETURN_NAMES = (\"model\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/model_patches\"\r\n\r\n    original_forward = Flux.forward\r\n\r\n    @staticmethod\r\n    def new_forward(self, x, timestep, context, y, guidance, control=None, transformer_options={}, **kwargs):\r\n\r\n        y = torch.zeros_like(y)\r\n        \r\n        return FluxGuidanceDisable.original_forward(self, x, timestep, context, y, guidance, control, transformer_options, **kwargs)\r\n\r\n    def main(self, model, disable=True, zero_clip_L=True):\r\n        m = model.clone()\r\n        if disable:\r\n            m.model.diffusion_model.params.guidance_embed = False\r\n        else:\r\n            m.model.diffusion_model.params.guidance_embed = True\r\n            \r\n        #m.model.diffusion_model.zero_clip_L = zero_clip_L\r\n        if zero_clip_L:\r\n            Flux.forward = types.MethodType(FluxGuidanceDisable.new_forward, m.model.diffusion_model)\r\n\r\n        return (m,)\r\n\r\n\r\n\r\nclass ModelSamplingAdvanced:\r\n    # this is used to set the \"shift\" using either exponential scaling (default for SD3.5M and Flux) or linear scaling (default for SD3.5L and SD3 2B beta)\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": { \r\n                    \"model\":   (\"MODEL\",),\r\n                    \"scaling\": ([\"exponential\", \"linear\"], {\"default\": 'exponential'}), \r\n                    \"shift\":   (\"FLOAT\",                   {\"default\": 3.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False}),\r\n                    }\r\n                }\r\n    \r\n    RETURN_TYPES = (\"MODEL\",)\r\n    RETURN_NAMES = (\"model\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/model_shift\"\r\n\r\n    def sigma_exponential(self, timestep):\r\n        return time_snr_shift_exponential(self.timestep_shift, timestep / self.multiplier)\r\n\r\n    def sigma_linear(self, timestep):\r\n        return time_snr_shift_linear(self.timestep_shift, timestep / self.multiplier)\r\n\r\n    def main(self, model, scaling, shift):\r\n        m = model.clone()\r\n        \r\n        self.timestep_shift = shift\r\n        self.multiplier     = 1000\r\n        timesteps           = 1000\r\n        sampling_base       = None\r\n        \r\n        if isinstance(m.model.model_config, comfy.supported_models.Flux) or isinstance(m.model.model_config, comfy.supported_models.FluxSchnell) or isinstance(m.model.model_config, comfy.supported_models.Chroma):\r\n            self.multiplier = 1\r\n            timesteps = 10000\r\n            sampling_base = comfy.model_sampling.ModelSamplingFlux\r\n            sampling_type = comfy.model_sampling.CONST\r\n            \r\n        elif isinstance(m.model.model_config, comfy.supported_models.AuraFlow):\r\n            self.multiplier = 1\r\n            timesteps = 1000\r\n            sampling_base = 
comfy.model_sampling.ModelSamplingDiscreteFlow\r\n            sampling_type = comfy.model_sampling.CONST\r\n        \r\n        elif isinstance(m.model.model_config, comfy.supported_models.SD3):\r\n            self.multiplier = 1000\r\n            timesteps = 1000\r\n            sampling_base = comfy.model_sampling.ModelSamplingDiscreteFlow\r\n            sampling_type = comfy.model_sampling.CONST\r\n            \r\n        elif isinstance(m.model.model_config, comfy.supported_models.HiDream):\r\n            self.multiplier = 1000\r\n            timesteps = 1000\r\n            sampling_base = comfy.model_sampling.ModelSamplingDiscreteFlow\r\n            sampling_type = comfy.model_sampling.CONST\r\n            \r\n        elif isinstance(m.model.model_config, comfy.supported_models.HunyuanVideo):\r\n            self.multiplier = 1000\r\n            timesteps = 1000\r\n            sampling_base = comfy.model_sampling.ModelSamplingDiscreteFlow\r\n            sampling_type = comfy.model_sampling.CONST\r\n            \r\n        elif isinstance(m.model.model_config, comfy.supported_models.WAN21_T2V) or isinstance(m.model.model_config, comfy.supported_models.WAN21_I2V):\r\n            self.multiplier = 1000\r\n            timesteps = 1000\r\n            sampling_base = comfy.model_sampling.ModelSamplingDiscreteFlow\r\n            sampling_type = comfy.model_sampling.CONST\r\n            \r\n        elif isinstance(m.model.model_config, comfy.supported_models.CosmosT2V) or isinstance(m.model.model_config, comfy.supported_models.CosmosI2V):\r\n            self.multiplier = 1\r\n            timesteps = 1000\r\n            sampling_base = comfy.model_sampling.ModelSamplingContinuousEDM\r\n            sampling_type = comfy.model_sampling.CONST\r\n\r\n        elif isinstance(m.model.model_config, comfy.supported_models.LTXV):\r\n            self.multiplier = 1000 # incorrect?\r\n            timesteps = 1000\r\n            sampling_base = comfy.model_sampling.ModelSamplingFlux\r\n            sampling_type = comfy.model_sampling.CONST\r\n            \r\n        if sampling_base is None:\r\n            raise ValueError(\"Model not supported by ModelSamplingAdvanced\")\r\n\r\n        class ModelSamplingAdvanced(sampling_base, sampling_type):\r\n            pass\r\n\r\n        m.object_patches['model_sampling'] = m.model.model_sampling = ModelSamplingAdvanced(m.model.model_config)\r\n\r\n        m.model.model_sampling.__dict__['shift']      = self.timestep_shift\r\n        m.model.model_sampling.__dict__['multiplier'] = self.multiplier\r\n\r\n        s_range = torch.arange(1, timesteps + 1, 1).to(torch.float64)\r\n        if scaling == \"exponential\": \r\n            ts = self.sigma_exponential((s_range / timesteps) * self.multiplier)\r\n        elif scaling == \"linear\": \r\n            ts = self.sigma_linear((s_range / timesteps) * self.multiplier)\r\n\r\n        m.model.model_sampling.register_buffer('sigmas', ts)\r\n        m.object_patches['model_sampling'].sigmas = m.model.model_sampling.sigmas\r\n        \r\n        return (m,)\r\n\r\n\r\n\r\nclass ModelSamplingAdvancedResolution:\r\n    # this is used to set the \"shift\" using either exponential scaling (default for SD3.5M and Flux) or linear scaling (default for SD3.5L and SD3 2B beta)\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": { \r\n                    \"model\":        (\"MODEL\",),\r\n                    \"scaling\":      ([\"exponential\", \"linear\"], {\"default\": 'exponential'}), \r\n                   
 \"max_shift\":    (\"FLOAT\",                   {\"default\": 1.35, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False}),\r\n                    \"base_shift\":   (\"FLOAT\",                   {\"default\": 0.85, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False}),\r\n                    \"latent_image\": (\"LATENT\",),\r\n                }\r\n                }\r\n    \r\n    RETURN_TYPES = (\"MODEL\",)\r\n    RETURN_NAMES = (\"model\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/model_shift\"\r\n\r\n    def sigma_exponential(self, timestep):\r\n        return time_snr_shift_exponential(self.timestep_shift, timestep / self.multiplier)\r\n\r\n    def sigma_linear(self, timestep):\r\n        return time_snr_shift_linear(self.timestep_shift, timestep / self.multiplier)\r\n\r\n    def main(self, model, scaling, max_shift, base_shift, latent_image):\r\n        m = model.clone()\r\n        \r\n        height, width = latent_image['samples'].shape[-2:]\r\n        frames = latent_image['samples'].shape[-3] if latent_image['samples'].ndim == 5 else 1\r\n        \r\n        x1    = 256\r\n        x2    = 4096\r\n        mm    = (max_shift - base_shift) / (x2 - x1)\r\n        b     = base_shift - mm * x1\r\n        shift = (1 * width * height / (8 * 8 * 2 * 2)) * mm + b\r\n        \r\n        self.timestep_shift = shift\r\n        self.multiplier     = 1000\r\n        timesteps           = 1000\r\n        \r\n        if isinstance(m.model.model_config, comfy.supported_models.Flux) or isinstance(m.model.model_config, comfy.supported_models.FluxSchnell) or isinstance(m.model.model_config, comfy.supported_models.Chroma):\r\n            self.multiplier = 1\r\n            timesteps = 10000\r\n            sampling_base = comfy.model_sampling.ModelSamplingFlux\r\n            sampling_type = comfy.model_sampling.CONST\r\n            \r\n        elif isinstance(m.model.model_config, comfy.supported_models.AuraFlow):\r\n            self.multiplier = 1\r\n            timesteps = 1000\r\n            sampling_base = comfy.model_sampling.ModelSamplingDiscreteFlow\r\n            sampling_type = comfy.model_sampling.CONST\r\n            \r\n        elif isinstance(m.model.model_config, comfy.supported_models.SD3):\r\n            self.multiplier = 1000\r\n            timesteps = 1000\r\n            sampling_base = comfy.model_sampling.ModelSamplingDiscreteFlow\r\n            sampling_type = comfy.model_sampling.CONST\r\n            \r\n        elif isinstance(m.model.model_config, comfy.supported_models.HiDream):\r\n            self.multiplier = 1000\r\n            timesteps = 1000\r\n            sampling_base = comfy.model_sampling.ModelSamplingDiscreteFlow\r\n            sampling_type = comfy.model_sampling.CONST\r\n            \r\n        elif isinstance(m.model.model_config, comfy.supported_models.HunyuanVideo):\r\n            self.multiplier = 1000\r\n            timesteps = 1000\r\n            sampling_base = comfy.model_sampling.ModelSamplingDiscreteFlow\r\n            sampling_type = comfy.model_sampling.CONST\r\n            \r\n        if isinstance(m.model.model_config, comfy.supported_models.WAN21_T2V) or isinstance(m.model.model_config, comfy.supported_models.WAN21_I2V):\r\n            self.multiplier = 1000\r\n            timesteps = 1000\r\n            sampling_base = comfy.model_sampling.ModelSamplingDiscreteFlow\r\n            sampling_type = comfy.model_sampling.CONST\r\n            \r\n        elif isinstance(m.model.model_config, 
comfy.supported_models.CosmosT2V) or isinstance(m.model.model_config, comfy.supported_models.CosmosI2V):\r\n            self.multiplier = 1\r\n            timesteps = 1000\r\n            sampling_base = comfy.model_sampling.ModelSamplingContinuousEDM\r\n            sampling_type = comfy.model_sampling.CONST\r\n\r\n        elif isinstance(m.model.model_config, comfy.supported_models.LTXV):\r\n            self.multiplier = 1000\r\n            timesteps = 1000\r\n            sampling_base = comfy.model_sampling.ModelSamplingFlux\r\n            sampling_type = comfy.model_sampling.CONST\r\n\r\n        if sampling_base is None:\r\n            raise ValueError(\"Model not supported by ModelSamplingAdvancedResolution\")\r\n\r\n        class ModelSamplingAdvanced(sampling_base, sampling_type):\r\n            pass\r\n\r\n        m.object_patches['model_sampling'] = m.model.model_sampling = ModelSamplingAdvanced(m.model.model_config)\r\n\r\n        m.model.model_sampling.__dict__['shift'] = self.timestep_shift\r\n        m.model.model_sampling.__dict__['multiplier'] = self.multiplier\r\n\r\n        s_range = torch.arange(1, timesteps + 1, 1).to(torch.float64)\r\n        if scaling == \"exponential\": \r\n            ts = self.sigma_exponential((s_range / timesteps) * self.multiplier)\r\n        elif scaling == \"linear\": \r\n            ts = self.sigma_linear((s_range / timesteps) * self.multiplier)\r\n\r\n        m.model.model_sampling.register_buffer('sigmas', ts)\r\n        m.object_patches['model_sampling'].sigmas = m.model.model_sampling.sigmas\r\n        \r\n        return (m,)\r\n    \r\n# Code adapted from https://github.com/comfyanonymous/ComfyUI/\r\nclass UNetSave:\r\n    def __init__(self):\r\n        self.output_dir = folder_paths.get_output_directory()\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"model\":           (\"MODEL\",),\r\n                \"filename_prefix\": (\"STRING\", {\"default\": \"models/ComfyUI\"}),\r\n                },\r\n            \"hidden\": {\r\n                \"prompt\": \"PROMPT\", \"extra_pnginfo\": \"EXTRA_PNGINFO\"\r\n                },\r\n            }\r\n        \r\n    RETURN_TYPES = ()\r\n    FUNCTION = \"save\"\r\n    OUTPUT_NODE = True\r\n\r\n    CATEGORY = \"RES4LYF/model_merging\"\r\n    DESCRIPTION = \"Save a .safetensors containing only the model data.\"\r\n\r\n    def save(self, model, filename_prefix, prompt=None, extra_pnginfo=None):\r\n        save_checkpoint(\r\n            model, \r\n            clip            = None,\r\n            vae             = None,\r\n            filename_prefix = filename_prefix,\r\n            output_dir      = self.output_dir,\r\n            prompt          = prompt,\r\n            extra_pnginfo   = extra_pnginfo,\r\n            )\r\n        \r\n        return {}\r\n\r\n\r\ndef save_checkpoint(\r\n        model,\r\n        clip            = None,\r\n        vae             = None,\r\n        clip_vision     = None,\r\n        filename_prefix = None,\r\n        output_dir      = None,\r\n        prompt          = None,\r\n        extra_pnginfo   = None,\r\n        ):\r\n    \r\n    full_output_folder, filename, counter, subfolder, filename_prefix = folder_paths.get_save_image_path(filename_prefix, output_dir)\r\n    prompt_info = \"\"\r\n    if prompt is not None:\r\n        prompt_info = json.dumps(prompt)\r\n\r\n    metadata = {}\r\n\r\n    enable_modelspec = True\r\n    if isinstance(model.model, comfy.model_base.SDXL):\r\n        if isinstance(model.model, comfy.model_base.SDXL_instructpix2pix):\r\n            metadata[\"modelspec.architecture\"] = 
\"stable-diffusion-xl-v1-edit\"\r\n        else:\r\n            metadata[\"modelspec.architecture\"] = \"stable-diffusion-xl-v1-base\"\r\n    elif isinstance(model.model, comfy.model_base.SDXLRefiner):\r\n        metadata[\"modelspec.architecture\"] = \"stable-diffusion-xl-v1-refiner\"\r\n    elif isinstance(model.model, comfy.model_base.SVD_img2vid):\r\n        metadata[\"modelspec.architecture\"] = \"stable-video-diffusion-img2vid-v1\"\r\n    elif isinstance(model.model, comfy.model_base.SD3):\r\n        metadata[\"modelspec.architecture\"] = \"stable-diffusion-v3-medium\" #TODO: other SD3 variants\r\n    else:\r\n        enable_modelspec = False\r\n\r\n    if enable_modelspec:\r\n        metadata[\"modelspec.sai_model_spec\"] = \"1.0.0\"\r\n        metadata[\"modelspec.implementation\"] = \"sgm\"\r\n        metadata[\"modelspec.title\"] = \"{} {}\".format(filename, counter)\r\n\r\n    #TODO:\r\n    # \"stable-diffusion-v1\", \"stable-diffusion-v1-inpainting\", \"stable-diffusion-v2-512\",\r\n    # \"stable-diffusion-v2-768-v\", \"stable-diffusion-v2-unclip-l\", \"stable-diffusion-v2-unclip-h\",\r\n    # \"v2-inpainting\"\r\n\r\n    extra_keys = {}\r\n    model_sampling = model.get_model_object(\"model_sampling\")\r\n    if isinstance(model_sampling, comfy.model_sampling.ModelSamplingContinuousEDM):\r\n        if isinstance(model_sampling, comfy.model_sampling.V_PREDICTION):\r\n            extra_keys[\"edm_vpred.sigma_max\"] = torch.tensor(model_sampling.sigma_max).float()\r\n            extra_keys[\"edm_vpred.sigma_min\"] = torch.tensor(model_sampling.sigma_min).float()\r\n\r\n    if model.model.model_type == comfy.model_base.ModelType.EPS:\r\n        metadata[\"modelspec.predict_key\"] = \"epsilon\"\r\n    elif model.model.model_type == comfy.model_base.ModelType.V_PREDICTION:\r\n        metadata[\"modelspec.predict_key\"] = \"v\"\r\n\r\n    if not args.disable_metadata:\r\n        metadata[\"prompt\"] = prompt_info\r\n        if extra_pnginfo is not None:\r\n            for x in extra_pnginfo:\r\n                metadata[x] = json.dumps(extra_pnginfo[x])\r\n\r\n    output_checkpoint = f\"{filename}_{counter:05}_.safetensors\"\r\n    output_checkpoint = os.path.join(full_output_folder, output_checkpoint)\r\n\r\n    sd_save_checkpoint(output_checkpoint, model, clip, vae, clip_vision, metadata=metadata, extra_keys=extra_keys)\r\n\r\n\r\ndef sd_save_checkpoint(output_path, model, clip=None, vae=None, clip_vision=None, metadata=None, extra_keys={}):\r\n    clip_sd = None\r\n    load_models = [model]\r\n    if clip is not None:\r\n        load_models.append(clip.load_model())\r\n        clip_sd = clip.get_sd()\r\n\r\n    comfy.model_management.load_models_gpu(load_models, force_patch_weights=True)\r\n    clip_vision_sd = clip_vision.get_sd() if clip_vision is not None else None\r\n    vae_sd = vae.get_sd() if vae is not None else None                             #THIS ALLOWS SAVING UNET ONLY\r\n    sd = model.model.state_dict_for_saving(clip_sd, vae_sd, clip_vision_sd)\r\n    for k in extra_keys:\r\n        sd[k] = extra_keys[k]\r\n\r\n    for k in sd:\r\n        t = sd[k]\r\n        if not t.is_contiguous():\r\n            sd[k] = t.contiguous()\r\n\r\n    comfy.utils.save_torch_file(sd, output_path, metadata=metadata)\r\n\r\n\r\n\r\n# Code adapted from https://github.com/kijai/ComfyUI-KJNodes\r\nclass TorchCompileModelFluxAdvanced: \r\n    def __init__(self):\r\n        self._compiled = False\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": { \r\n        
            \"model\":         (\"MODEL\",),\r\n                    \"backend\":       ([\"inductor\", \"cudagraphs\"],),\r\n                    \"fullgraph\":     (\"BOOLEAN\",                                                                    {\"default\": False, \"tooltip\": \"Enable full graph mode\"}),\r\n                    \"mode\":          ([\"default\", \"max-autotune\", \"max-autotune-no-cudagraphs\", \"reduce-overhead\"], {\"default\": \"default\"}),\r\n                    \"double_blocks\": (\"STRING\",                                                                     {\"default\": \"0-18\", \"multiline\": True}),\r\n                    \"single_blocks\": (\"STRING\",                                                                     {\"default\": \"0-37\", \"multiline\": True}),\r\n                    \"dynamic\":       (\"BOOLEAN\",                                                                    {\"default\": False, \"tooltip\": \"Enable dynamic mode\"}),\r\n                }}\r\n        \r\n    RETURN_TYPES = (\"MODEL\",)\r\n    RETURN_NAMES = (\"model\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/model_patches\"\r\n\r\n    def parse_blocks(self, blocks_str):\r\n        blocks = []\r\n        for part in blocks_str.split(','):\r\n            part = part.strip()\r\n            if '-' in part:\r\n                start, end = map(int, part.split('-'))\r\n                blocks.extend(range(start, end + 1))\r\n            else:\r\n                blocks.append(int(part))\r\n        return blocks\r\n\r\n    def main(self,\r\n            model,\r\n            backend       = \"inductor\",\r\n            mode          = \"default\",\r\n            fullgraph     = False,\r\n            single_blocks = \"0-37\",\r\n            double_blocks = \"0-18\",\r\n            dynamic       = False,\r\n            ):\r\n        \r\n        single_block_list = self.parse_blocks(single_blocks)\r\n        double_block_list = self.parse_blocks(double_blocks)\r\n        m = model.clone()\r\n        diffusion_model = m.get_model_object(\"diffusion_model\")\r\n        \r\n        if not self._compiled:\r\n            try:\r\n                for i, block in enumerate(diffusion_model.double_blocks):\r\n                    if i in double_block_list:\r\n                        m.add_object_patch(f\"diffusion_model.double_blocks.{i}\", torch.compile(block, mode=mode, dynamic=dynamic, fullgraph=fullgraph, backend=backend))\r\n                for i, block in enumerate(diffusion_model.single_blocks):\r\n                    if i in single_block_list:\r\n                        m.add_object_patch(f\"diffusion_model.single_blocks.{i}\", torch.compile(block, mode=mode, dynamic=dynamic, fullgraph=fullgraph, backend=backend))\r\n                self._compiled = True\r\n                compile_settings = {\r\n                    \"backend\": backend,\r\n                    \"mode\": mode,\r\n                    \"fullgraph\": fullgraph,\r\n                    \"dynamic\": dynamic,\r\n                }\r\n                setattr(m.model, \"compile_settings\", compile_settings)\r\n            except:\r\n                raise RuntimeError(\"Failed to compile model. 
Verify that this is a Flux model!\")\r\n        \r\n        return (m, )\r\n        # rest of the layers that are not patched\r\n        # diffusion_model.final_layer = torch.compile(diffusion_model.final_layer, mode=mode, fullgraph=fullgraph, backend=backend)\r\n        # diffusion_model.guidance_in = torch.compile(diffusion_model.guidance_in, mode=mode, fullgraph=fullgraph, backend=backend)\r\n        # diffusion_model.img_in = torch.compile(diffusion_model.img_in, mode=mode, fullgraph=fullgraph, backend=backend)\r\n        # diffusion_model.time_in = torch.compile(diffusion_model.time_in, mode=mode, fullgraph=fullgraph, backend=backend)\r\n        # diffusion_model.txt_in = torch.compile(diffusion_model.txt_in, mode=mode, fullgraph=fullgraph, backend=backend)\r\n        # diffusion_model.vector_in = torch.compile(diffusion_model.vector_in, mode=mode, fullgraph=fullgraph, backend=backend)\r\n        \r\n        #   @torch.compile(mode=\"default\", dynamic=False, fullgraph=False, backend=\"inductor\")\r\n        \r\n\r\nclass TorchCompileModelAura:\r\n    def __init__(self):\r\n        self._compiled = False\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": { \r\n                    \"model\":                   (\"MODEL\",),\r\n                    \"backend\":                 ([\"inductor\", \"cudagraphs\"],),\r\n                    \"fullgraph\":               (\"BOOLEAN\",                    {\"default\": False,                                \"tooltip\": \"Enable full graph mode\"}),\r\n                    \"mode\":                    (COMPILE_MODES               , {\"default\": \"default\"}),\r\n                    \"dynamic\":                 (\"BOOLEAN\",                    {\"default\": False,                                \"tooltip\": \"Enable dynamic mode\"}),\r\n                    \"dynamo_cache_size_limit\": (\"INT\",                        {\"default\": 64, \"min\": 0, \"max\": 1024, \"step\": 1, \"tooltip\": \"torch._dynamo.config.cache_size_limit\"}),\r\n                }}\r\n        \r\n    RETURN_TYPES = (\"MODEL\",)\r\n    RETURN_NAMES = (\"model\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/model_patches\"\r\n\r\n    def main(self,\r\n            model,\r\n            backend       = \"inductor\",\r\n            mode          = \"default\",\r\n            fullgraph     = False,\r\n            dynamic       = False,\r\n            dynamo_cache_size_limit = 64,\r\n            ):\r\n\r\n        m = model.clone()\r\n        diffusion_model = m.get_model_object(\"diffusion_model\")\r\n        torch._dynamo.config.cache_size_limit = dynamo_cache_size_limit\r\n        \r\n        if not self._compiled:\r\n            try:\r\n                for i, block in enumerate(diffusion_model.double_layers):\r\n                    m.add_object_patch(f\"diffusion_model.double_layers.{i}\", torch.compile(block, mode=mode, dynamic=dynamic, fullgraph=fullgraph, backend=backend))\r\n                for i, block in enumerate(diffusion_model.single_layers):\r\n                    m.add_object_patch(f\"diffusion_model.single_layers.{i}\", torch.compile(block, mode=mode, dynamic=dynamic, fullgraph=fullgraph, backend=backend))\r\n                self._compiled = True\r\n                compile_settings = {\r\n                    \"backend\": backend,\r\n                    \"mode\": mode,\r\n                    \"fullgraph\": fullgraph,\r\n                    \"dynamic\": dynamic,\r\n                }\r\n                setattr(m.model, 
\"compile_settings\", compile_settings)\r\n            except:\r\n                raise RuntimeError(\"Failed to compile model. Verify that this is an AuraFlow model!\")\r\n        \r\n        return (m, )\r\n\r\nclass TorchCompileModelSD35:\r\n    def __init__(self):\r\n        self._compiled = False\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\": { \r\n                    \"model\":                   (\"MODEL\",),\r\n                    \"backend\":                 ([\"inductor\", \"cudagraphs\"],),\r\n                    \"fullgraph\":               (\"BOOLEAN\",                    {\"default\": False,                                \"tooltip\": \"Enable full graph mode\"}),\r\n                    \"mode\":                    (COMPILE_MODES               , {\"default\": \"default\"}),\r\n                    \"dynamic\":                 (\"BOOLEAN\",                    {\"default\": False,                                \"tooltip\": \"Enable dynamic mode\"}),\r\n                    \"dynamo_cache_size_limit\": (\"INT\",                        {\"default\": 64, \"min\": 0, \"max\": 1024, \"step\": 1, \"tooltip\": \"torch._dynamo.config.cache_size_limit\"}),\r\n                }}\r\n        \r\n    RETURN_TYPES = (\"MODEL\",)\r\n    RETURN_NAMES = (\"model\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/model_patches\"\r\n\r\n    def main(self,\r\n            model,\r\n            backend       = \"inductor\",\r\n            mode          = \"default\",\r\n            fullgraph     = False,\r\n            dynamic       = False,\r\n            dynamo_cache_size_limit = 64,\r\n            ):\r\n        \r\n        m = model.clone()\r\n        diffusion_model = m.get_model_object(\"diffusion_model\")\r\n        torch._dynamo.config.cache_size_limit = dynamo_cache_size_limit\r\n        \r\n        if not self._compiled:\r\n            try:\r\n                for i, block in enumerate(diffusion_model.joint_blocks):\r\n                    m.add_object_patch(f\"diffusion_model.joint_blocks.{i}\", torch.compile(block, mode=mode, dynamic=dynamic, fullgraph=fullgraph, backend=backend))\r\n                self._compiled = True\r\n                compile_settings = {\r\n                    \"backend\"  : backend,\r\n                    \"mode\"     : mode,\r\n                    \"fullgraph\": fullgraph,\r\n                    \"dynamic\"  : dynamic,\r\n                }\r\n                setattr(m.model, \"compile_settings\", compile_settings)\r\n            except:\r\n                raise RuntimeError(\"Failed to compile model. 
Verify that this is a SD3.5 model!\")\r\n        \r\n        return (m, )\r\n\r\n\r\nclass ClownpileModelWanVideo:\r\n    def __init__(self):\r\n        self._compiled = False\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"model\"                     : (\"MODEL\",),\r\n                \"backend\"                   : ([\"inductor\",\"cudagraphs\"], {\"default\" : \"inductor\"}),\r\n                \"fullgraph\"                 : (\"BOOLEAN\",                 {\"default\"                : False, \"tooltip\"                   : \"Enable full graph mode\"}),\r\n                \"mode\"                      : (COMPILE_MODES,             {\"default\": \"default\"}),\r\n                \"dynamic\"                   : (\"BOOLEAN\",                 {\"default\"                : False, \"tooltip\"                   : \"Enable dynamic mode\"}),\r\n                \"dynamo_cache_size_limit\"   : (\"INT\",                     {\"default\"                : 64, \"min\"                          : 0, \"max\": 1024, \"step\": 1, \"tooltip\": \"torch._dynamo.config.cache_size_limit\"}),\r\n                #\"compile_self_attn_blocks\" : (\"INT\",                     {\"default\"                : 0, \"min\"                           : 0, \"max\": 100, \"step\" : 1, \"tooltip\": \"Maximum blocks to compile. These use huge amounts of VRAM with large attention masks.\"}),\r\n                \"skip_self_attn_blocks\"     : (\"STRING\",                  {\"default\"                 : \"0,1,2,3,4,5,6,7,8,9,\", \"multiline\": True, \"tooltip\": \"For WAN only: select self-attn blocks to disable. Due to the size of the self-attn masks, VRAM required to compile blocks using regional WAN is excessive. List any blocks selected in the ReWanPatcher node.\"}),\r\n                \"compile_transformer_blocks\": (\"BOOLEAN\",                 {\"default\"                : True,  \"tooltip\"                    : \"Compile all transformer blocks\"}),\r\n                \"force_recompile\"           : (\"BOOLEAN\",                 {\"default\": False, \"tooltip\": \"Force recompile.\"}),\r\n            },\r\n        }\r\n    RETURN_TYPES = (\"MODEL\",)\r\n    FUNCTION = \"patch\"\r\n\r\n    CATEGORY = \"RES4LYF/model\"\r\n    EXPERIMENTAL = True\r\n\r\n    def patch(self, model, backend, fullgraph, mode, dynamic, dynamo_cache_size_limit, skip_self_attn_blocks, compile_transformer_blocks, force_recompile):\r\n        m = model.clone()\r\n        diffusion_model = m.get_model_object(\"diffusion_model\")\r\n        torch._dynamo.config.cache_size_limit = dynamo_cache_size_limit\r\n        \r\n        skip_self_attn_blocks = parse_range_string(skip_self_attn_blocks)\r\n        \r\n        if force_recompile:\r\n            self._compiled = False\r\n        \r\n        if not self._compiled:\r\n            try:\r\n                if compile_transformer_blocks:\r\n                    for i, block in enumerate(diffusion_model.blocks):\r\n                        #if i % 2 == 1:\r\n                        if i not in skip_self_attn_blocks:\r\n                            compiled_block = torch.compile(block, fullgraph=fullgraph, dynamic=dynamic, backend=backend, mode=mode)\r\n                            m.add_object_patch(f\"diffusion_model.blocks.{i}\", compiled_block)\r\n                        #block.self_attn = torch.compile(block.self_attn, fullgraph=fullgraph, dynamic=dynamic, backend=backend, mode=mode)\r\n                        #block.cross_attn = 
torch.compile(block.cross_attn, fullgraph=fullgraph, dynamic=dynamic, backend=backend, mode=mode)\r\n                        #if i < compile_self_attn_blocks:\r\n                        #    block.self_attn = torch.compile(block.self_attn, fullgraph=fullgraph, dynamic=dynamic, backend=backend, mode=mode)\r\n                        #    #compiled_block = torch.compile(block, fullgraph=fullgraph, dynamic=dynamic, backend=backend, mode=mode)\r\n                        #    #m.add_object_patch(f\"diffusion_model.blocks.{i}\", compiled_block)\r\n                        #block.cross_attn = torch.compile(block.cross_attn, fullgraph=fullgraph, dynamic=dynamic, backend=backend, mode=mode)\r\n                self._compiled = True\r\n                compile_settings = {\r\n                    \"backend\": backend,\r\n                    \"mode\": mode,\r\n                    \"fullgraph\": fullgraph,\r\n                    \"dynamic\": dynamic,\r\n                }\r\n                setattr(m.model, \"compile_settings\", compile_settings)\r\n            except:\r\n                raise RuntimeError(\"Failed to compile model. Verify that this is a WAN model!\")\r\n        return (m, )\r\n\r\n"
  },
  {
    "path": "nodes_latents.py",
    "content": "import torch.nn.functional as F\n\nimport copy\n\nimport comfy.samplers\nimport comfy.sample\nimport comfy.sampler_helpers\nimport comfy.utils\n    \nimport itertools\n\nimport torch\nimport math\n\nfrom nodes import MAX_RESOLUTION\n#MAX_RESOLUTION=8192\n\nfrom .helper             import ExtraOptions, initialize_or_scale, extra_options_flag, get_extra_options_list\nfrom .latents            import latent_meancenter_channels, latent_stdize_channels, get_edge_mask, apply_to_state_info_tensors\nfrom .beta.noise_classes import NOISE_GENERATOR_NAMES, NOISE_GENERATOR_CLASSES, prepare_noise\n\ndef fp_or(tensor1, tensor2):\n    return torch.maximum(tensor1, tensor2)\n\ndef fp_and(tensor1, tensor2):\n    return torch.minimum(tensor1, tensor2)\n\n\nclass AdvancedNoise:\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\":{\n                \"alpha\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":0.1, \"round\": 0.01}),\n                \"k\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":2.0, \"round\": 0.01}),\n                \"noise_seed\": (\"INT\", {\"default\": 0, \"min\": 0, \"max\": 0xffffffffffffffff}),\n                \"noise_type\": (NOISE_GENERATOR_NAMES, ),\n            },\n        }\n\n    RETURN_TYPES = (\"NOISE\",)\n    FUNCTION = \"get_noise\"\n    CATEGORY = \"RES4LYF/noise\"\n\n    def get_noise(self, noise_seed, noise_type, alpha, k):\n        return (Noise_RandomNoise(noise_seed, noise_type, alpha, k),)\n\n\n\nclass Noise_RandomNoise:\n    def __init__(self, seed, noise_type, alpha, k):\n        self.seed = seed\n        self.noise_type = noise_type\n        self.alpha = alpha\n        self.k = k\n\n    def generate_noise(self, input_latent):\n        latent_image = input_latent[\"samples\"]\n        batch_inds = input_latent[\"batch_index\"] if \"batch_index\" in input_latent else None\n        return prepare_noise(latent_image, self.seed, self.noise_type, batch_inds, self.alpha, self.k)\n\n\n\nclass LatentNoised:\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\"required\":\n                    {\n                    \"add_noise\": (\"BOOLEAN\", {\"default\": True}),\n                    \"noise_is_latent\": (\"BOOLEAN\", {\"default\": False}),\n                    \"noise_type\": (NOISE_GENERATOR_NAMES, ),\n                    \"alpha\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":0.1, \"round\": 0.01}),\n                    \"k\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":2.0, \"round\": 0.01}),\n                    \"noise_seed\": (\"INT\", {\"default\": 0, \"min\": 0, \"max\": 0xffffffffffffffff}),\n                    \"latent_image\": (\"LATENT\", ),\n                    \"noise_strength\": (\"FLOAT\", {\"default\": 1.0, \"min\": -20.0, \"max\": 20.0, \"step\": 0.01, \"round\": 0.01}),\n                    \"normalize\": ([\"false\", \"true\"], {\"default\": \"false\"}),\n                     },\n                \"optional\": \n                    {\n                    \"latent_noise\": (\"LATENT\", ),\n                    \"mask\": (\"MASK\", ),\n                    }\n                }\n\n    RETURN_TYPES = (\"LATENT\",)\n    RETURN_NAMES = (\"latent_noised\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/noise\"\n    \n    def main(self,\n            add_noise,\n            noise_is_latent,\n            noise_type,\n            noise_seed,\n            alpha,\n 
           k,\n            latent_image,\n            noise_strength,\n            normalize,\n            latent_noise = None,\n            mask         = None\n            ):\n        \n        latent_out = latent_image.copy()\n        samples = latent_out[\"samples\"].clone()\n\n        torch.manual_seed(noise_seed)\n\n        if not add_noise:\n            noise = torch.zeros(samples.size(), dtype=samples.dtype, layout=samples.layout, device=\"cpu\")\n        elif latent_noise is None:\n            batch_inds = latent_out[\"batch_index\"] if \"batch_index\" in latent_out else None\n            noise = prepare_noise(samples, noise_seed, noise_type, batch_inds, alpha, k)\n        else:\n            noise = latent_noise[\"samples\"]\n\n        if normalize == \"true\":\n            latent_mean = samples.mean()\n            latent_std = samples.std()\n            noise = noise * latent_std + latent_mean\n\n        if noise_is_latent:\n            noise += samples.cpu()\n            noise.sub_(noise.mean()).div_(noise.std())\n        \n        noise = noise * noise_strength\n\n        if mask is not None:\n            if len(samples.shape) == 5:\n                b, c, t, h, w = samples.shape\n                mask_resized = F.interpolate(mask.reshape((-1, 1, mask.shape[-2], mask.shape[-1])), \n                                size=(h, w), \n                                mode=\"bilinear\")\n                if mask_resized.shape[0] < b:\n                    mask_resized = mask_resized.repeat((b - 1) // mask_resized.shape[0] + 1, 1, 1, 1)[:b]\n                elif mask_resized.shape[0] > b:\n                    mask_resized = mask_resized[:b]\n                mask_expanded = mask_resized.expand((-1, c, -1, -1))\n                mask_temporal = mask_expanded.unsqueeze(2).expand(-1, -1, t, -1, -1).to(samples.device)\n                noise = mask_temporal * noise + (1 - mask_temporal) * torch.zeros_like(noise)\n            else:\n                mask = F.interpolate(mask.reshape((-1, 1, mask.shape[-2], mask.shape[-1])), \n                                    size=(samples.shape[2], samples.shape[3]), \n                                    mode=\"bilinear\")\n                mask = mask.expand((-1, samples.shape[1], -1, -1)).to(samples.device)\n                if mask.shape[0] < samples.shape[0]:\n                    mask = mask.repeat((samples.shape[0] - 1) // mask.shape[0] + 1, 1, 1, 1)[:samples.shape[0]]\n                elif mask.shape[0] > samples.shape[0]:\n                    mask = mask[:samples.shape[0]]\n                \n                noise = mask * noise + (1 - mask) * torch.zeros_like(noise)\n\n        latent_out[\"samples\"] = samples.cpu() + noise\n\n        return (latent_out,)\n\n\n\n\nclass LatentNoiseList:\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                \"latent\": (\"LATENT\",),\n                \"alpha\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"k_flip\": (\"BOOLEAN\", {\"default\": False}),\n                \"steps\": (\"INT\", {\"default\": 0, \"min\": -10000, \"max\": 10000}),\n                \"seed\": (\"INT\", {\"default\": 0, \"min\": 0, \"max\": 0xffffffffffffffff}),\n            },\n            \"optional\": {\n                \"alphas\": (\"SIGMAS\", ),\n                \"ks\": (\"SIGMAS\", ),\n            }\n        }\n\n    RETURN_TYPES   = (\"LATENT\",)\n    RETURN_NAMES   = (\"latent_list\",)\n    OUTPUT_IS_LIST = (True,)\n    FUNCTION     
  = \"main\"\n    CATEGORY       = \"RES4LYF/noise\"\n\n    def main(self,\n            seed,\n            latent,\n            alpha,\n            k_flip,\n            steps,\n            alphas = None,\n            ks     = None\n            ):\n        \n        alphas = initialize_or_scale(alphas, alpha, steps)\n        k_flip = -1 if k_flip else 1\n        ks = initialize_or_scale(ks, k_flip, steps)    \n\n        latent_samples = latent[\"samples\"]\n        latents = []\n        size = latent_samples.shape\n\n        steps = len(alphas) if steps == 0 else steps\n\n        noise_sampler = NOISE_GENERATOR_CLASSES.get('fractal')(x=latent_samples, seed=seed)\n\n        for i in range(steps):\n            noise = noise_sampler(alpha=alphas[i].item(), k=ks[i].item(), scale=0.1)\n            noisy_latent = latent_samples + noise\n            new_latent = {\"samples\": noisy_latent}\n            latents.append(new_latent)\n\n        return (latents, )\n\n\n\nclass MaskToggle:\n    def __init__(self):\n        pass\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                    \"enable\": (\"BOOLEAN\", {\"default\": True}),    \n                    \"mask\":   (\"MASK\", ),\n                     },\n                }\n\n    RETURN_TYPES = (\"MASK\",)\n    RETURN_NAMES = (\"mask\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/masks\"\n\n\n    def main(self, enable=True, mask=None):\n        if enable == False:\n            mask = None\n        return (mask, )\n\n\n\nclass latent_to_raw_x:\n    def __init__(self):\n        pass\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                    \"latent\": (\"LATENT\", ),      \n                     },\n                }\n\n    RETURN_TYPES = (\"LATENT\",)\n    RETURN_NAMES = (\"latent_raw_x\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/latents\"\n\n    def main(self, latent,):\n        if 'state_info' not in latent:\n            latent['state_info'] = {}\n        \n        latent['state_info']['raw_x'] = latent['samples'].to(torch.float64)\n        return (latent,)\n\n\n# Adapted from https://github.com/comfyanonymous/ComfyUI/blob/5ee381c058d606209dcafb568af20196e7884fc8/comfy_extras/nodes_wan.py\nclass TrimVideoLatent_state_info:\n    @classmethod\n    def INPUT_TYPES(s):\n        return {\"required\": {\"samples\": (\"LATENT\",),\n                             \"trim_amount\": (\"INT\", {\"default\": 0, \"min\": 0, \"max\": 99999}),\n                            }}\n\n    RETURN_TYPES = (\"LATENT\",)\n    FUNCTION = \"op\"\n    CATEGORY = \"RES4LYF/latents\"\n    EXPERIMENTAL = True\n\n    @staticmethod\n    def _trim_tensor(tensor, trim_amount):\n        \"\"\"Trim frames from beginning of tensor along temporal dimension (-3)\"\"\"\n        if tensor.shape[-3] > trim_amount:\n            return tensor.narrow(-3, trim_amount, tensor.shape[-3] - trim_amount)\n        return tensor\n    \n    def op(self, samples, trim_amount):\n        ref_shape = samples[\"samples\"].shape\n        samples_out = apply_to_state_info_tensors(samples, ref_shape, self._trim_tensor, trim_amount)\n        return (samples_out,)\n\n# Adapted from https://github.com/comfyanonymous/ComfyUI/blob/05df2df489f6b237f63c5f7d42a943ae2be417e9/nodes.py\nclass LatentUpscaleBy_state_info:\n    upscale_methods = [\"nearest-exact\", \"bilinear\", \"area\", \"bicubic\", \"bislerp\"]\n\n    @classmethod\n    def INPUT_TYPES(s):\n        return {\"required\": 
{ \"samples\": (\"LATENT\",), \"upscale_method\": (s.upscale_methods,),\n                              \"scale_by\": (\"FLOAT\", {\"default\": 1.5, \"min\": 0.01, \"max\": 8.0, \"step\": 0.01}),}}\n    RETURN_TYPES = (\"LATENT\",)\n    FUNCTION = \"op\"\n\n    CATEGORY = \"latent\"\n\n    def _upscale_tensor(tensor, upscale_method, scale_by):\n        width = round(tensor.shape[-1] * scale_by)\n        height = round(tensor.shape[-2] * scale_by)\n        tensor = comfy.utils.common_upscale(tensor, width, height, upscale_method, \"disabled\")\n        return tensor\n    \n    def op(self, samples, upscale_method, scale_by):\n        ref_shape = samples[\"samples\"].shape\n        samples_out = apply_to_state_info_tensors(samples, ref_shape, self._upscale_tensor, upscale_method, scale_by)\n        return (samples_out,)\n\nclass latent_clear_state_info:\n    def __init__(self):\n        pass\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                    \"latent\": (\"LATENT\", ),      \n                     },\n                }\n\n    RETURN_TYPES = (\"LATENT\",)\n    RETURN_NAMES = (\"latent\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/latents\"\n\n    def main(self, latent,):\n        latent_out = {}\n        if 'samples' in latent:\n            latent_out['samples'] = latent['samples']\n        return (latent_out,)\n\n\nclass latent_replace_state_info:\n    def __init__(self):\n        pass\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                    \"latent\": (\"LATENT\", ),\n                    \"clear_raw_x\": (\"BOOLEAN\", {\"default\": False}),\n                    \"replace_end_step\": (\"INT\", {\"default\": 0, \"min\": -10000, \"max\": 10000}),\n                     },\n                }\n\n    RETURN_TYPES = (\"LATENT\",)\n    RETURN_NAMES = (\"latent\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/latents\"\n\n    def main(self, latent, clear_raw_x, replace_end_step):\n        latent_out = copy.deepcopy(latent)\n        if 'state_info' not in latent_out:\n            latent_out['state_info'] = {}\n        if clear_raw_x:\n            latent_out['state_info']['raw_x'] = None\n        if replace_end_step != 0:\n            latent_out['state_info']['end_step'] = replace_end_step\n        return (latent_out,)\n\n\nclass latent_display_state_info:\n    def __init__(self):\n        pass\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                    \"latent\": (\"LATENT\", ),      \n                     },\n                }\n\n    RETURN_TYPES = (\"STRING\",)\n    FUNCTION     = \"execute\"\n    CATEGORY     = \"RES4LYF/latents\"\n    OUTPUT_NODE  = True\n\n    def execute(self, latent):\n        text = \"\"\n        if 'state_info' in latent:\n            for key, value in latent['state_info'].items():\n                if isinstance(value, torch.Tensor):\n                    if value.numel() == 0:\n                        value_text = \"empty tensor\"\n                    elif value.numel() == 1:\n                        if value.dtype == torch.bool:\n                            value_text = f\"bool({value.item()})\"\n                        else:\n                            value_text = f\"str({value.item():.3f}), dtype: {value.dtype}\"\n                    else:\n                        shape_str = str(list(value.shape)).replace(\" \", \"\")\n                        dtype = value.dtype\n\n       
                 if torch.is_floating_point(value) is False:\n                            if value.dtype == torch.bool:\n                                value_text = f\"shape: {shape_str}, dtype: {dtype}, true: {value.sum().item()}, false: {(~value).sum().item()}\"\n                            else:\n                                max_val = value.float().max().item()\n                                min_val = value.float().min().item()\n                                value_text = f\"shape: {shape_str}, dtype: {dtype}, max: {max_val}, min: {min_val}\"\n                        else:\n                            mean = value.float().mean().item()\n                            std = value.float().std().item()\n                            value_text = f\"shape: {shape_str}, dtype: {dtype}, mean: {mean:.3f}, std: {std:.3f}\"\n                else:\n                    value_text = str(value)\n\n                text += f\"{key}: {value_text}\\n\"\n        else:\n            text = \"No state info in latent\"\n\n        return {\"ui\": {\"text\": text}, \"result\": (text,)}\n\n\n\n\n\nclass latent_transfer_state_info:\n    def __init__(self):\n        pass\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                    \"latent_to\":   (\"LATENT\", ),      \n                    \"latent_from\": (\"LATENT\", ),      \n                    },\n                }\n\n    RETURN_TYPES = (\"LATENT\",)\n    RETURN_NAMES = (\"latent\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/latents\"\n\n    def main(self, latent_to, latent_from):\n        #if 'state_info' not in latent:\n        #    latent['state_info'] = {}\n        \n        latent_to['state_info'] = copy.deepcopy(latent_from['state_info'])\n        return (latent_to,)\n\n\n\n\nclass latent_mean_channels_from_to:\n    def __init__(self):\n        pass\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                    \"latent_to\":   (\"LATENT\", ),      \n                    \"latent_from\": (\"LATENT\", ),      \n                    },\n                }\n\n    RETURN_TYPES = (\"LATENT\",)\n    RETURN_NAMES = (\"latent\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/latents\"\n\n    def main(self, latent_to, latent_from):\n        latent_to['samples'] = latent_to['samples'] - latent_to['samples'].mean(dim=(-2,-1), keepdim=True) + latent_from['samples'].mean(dim=(-2,-1), keepdim=True)\n        return (latent_to,)\n\n\n\nclass latent_get_channel_means:\n    def __init__(self):\n        pass\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                    \"latent\":   (\"LATENT\", ),      \n                    },\n                }\n\n    RETURN_TYPES = (\"SIGMAS\",)\n    RETURN_NAMES = (\"channel_means\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/latents\"\n\n    def main(self, latent):\n        channel_means = latent['samples'].mean(dim=(-2,-1)).squeeze(0)\n        return (channel_means,)\n\n\n\n\n\n\nclass latent_to_cuda:\n    def __init__(self):\n        pass\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                    \"latent\": (\"LATENT\", ),      \n                    \"to_cuda\": (\"BOOLEAN\", {\"default\": True}),\n                     },\n                }\n\n    RETURN_TYPES = (\"LATENT\",)\n    RETURN_NAMES = (\"passthrough\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = 
\"RES4LYF/latents\"\n\n    def main(self, latent, to_cuda):\n        match to_cuda:\n            case \"True\":\n                latent = latent.to('cuda')\n            case \"False\":\n                latent = latent.to('cpu')\n        return (latent,)\n\n\n\nclass latent_batch:\n    def __init__(self):\n        pass\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                    \"latent\":     (\"LATENT\", ),      \n                    \"batch_size\": (\"INT\", {\"default\": 0, \"min\": -10000, \"max\": 10000}),\n                    },\n                }\n\n    RETURN_TYPES = (\"LATENT\",)\n    RETURN_NAMES = (\"latent_batch\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/latents\"\n\n    def main(self, latent, batch_size):\n        latent = latent[\"samples\"]\n        b, c, h, w = latent.shape\n        batch_latents = torch.zeros([batch_size, 4, h, w], device=latent.device)\n        for i in range(batch_size):\n            batch_latents[i] = latent\n        return ({\"samples\": batch_latents}, )\n\n\n\nclass MaskFloatToBoolean:\n    def __init__(self):\n        pass\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n\n            \"required\": {\n                \"mask\": (\"MASK\",),\n            },\n            \"optional\": {\n            },\n        }\n        \n    RETURN_TYPES = (\"MASK\",)\n    RETURN_NAMES = (\"binary_mask\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/masks\"\n\n    def main(self, mask=None,):\n        return (mask.bool().to(mask.dtype),)\n    \n\n\n\n\nclass MaskEdge:\n    def __init__(self):\n        pass\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n\n            \"required\": {\n                \"dilation\": (\"INT\", {\"default\": 20, \"min\": -10000, \"max\": 10000}),\n                \"mode\": [[\"percent\", \"absolute\"], {\"default\": \"percent\"}],\n                \"internal\": (\"FLOAT\", {\"default\": 1.0, \"min\": -1.0, \"max\": 10000.0, \"step\": 0.01}),\n                \"external\": (\"FLOAT\", {\"default\": 1.0, \"min\": -1.0, \"max\": 10000.0, \"step\": 0.01}),\n                #\"blur\": (\"BOOLEAN\", {\"default\": False}),\n                \"mask\": (\"MASK\",),\n            },\n            \"optional\": {\n            },\n        }\n    \n    RETURN_TYPES = (\"MASK\",)\n    RETURN_NAMES = (\"edge_mask\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/masks\"\n\n    def main(self, dilation=20, mode=\"percent\", internal=1.0, external=1.0, blur=False, mask=None,):\n        \n        mask_dtype = mask.dtype\n        mask = mask.float()\n        \n        if mode == \"percent\":\n            dilation = (dilation/100) * int(mask.sum() ** 0.5)\n        \n        #if not blur:\n        if int(internal * dilation) > 0:\n            edge_mask_internal = get_edge_mask(mask, int(internal * dilation))\n            edge_mask_internal = fp_and(edge_mask_internal,   mask)\n        else:\n            edge_mask_internal = mask\n        \n        if int(external * dilation) > 0:\n            edge_mask_external = get_edge_mask(mask, int(external * dilation))\n            edge_mask_external = fp_and(edge_mask_external, 1-mask)\n        else:\n            edge_mask_external = 1-mask\n        \n        edge_mask = fp_or(edge_mask_internal, edge_mask_external)\n\n        return (edge_mask.to(mask_dtype),)\n    \n    \n    \n\n\nclass Frame_Select_Latent_Raw:\n    def __init__(self):\n        pass\n\n    @classmethod\n    def 
INPUT_TYPES(cls):\n        return {\n\n            \"required\": {\n                \"frames\": (\"IMAGE\",),\n                \"select\": (\"INT\",  {\"default\": 0, \"min\": 0, \"max\": 10000}),\n                \n            },\n            \"optional\": {\n            },\n        }\n\n    RETURN_TYPES = (\"LATENT\",)\n    RETURN_NAMES = (\"latent\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/latents\"\n\n    def main(self, frames=None, select=0):\n        frame = frames['state_info']['raw_x'][:,:,select,:,:].clone().unsqueeze(dim=2)\n        return (frame,)\n    \n\nclass Frames_Slice_Latent_Raw:\n    def __init__(self):\n        pass\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n\n            \"required\": {\n                \"frames\": (\"LATENT\",),\n                \"start\":  (\"INT\",  {\"default\": 0, \"min\": 0, \"max\": 10000}),\n                \"stop\":   (\"INT\",  {\"default\": 1, \"min\": 1, \"max\": 10000}),\n            },\n            \"optional\": {\n            },\n        }\n        \n    RETURN_TYPES = (\"LATENT\",)\n    RETURN_NAMES = (\"latent\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/latents\"\n\n    def main(self, frames=None, start=0, stop=1):\n        frames_slice = frames['state_info']['raw_x'][:,:,start:stop,:,:].clone()\n        return (frames_slice,)\n\n\nclass Frames_Concat_Latent_Raw:\n    def __init__(self):\n        pass\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n\n            \"required\": {\n                \"frames_0\": (\"LATENT\",),\n                \"frames_1\": (\"LATENT\",),\n            },\n            \"optional\": {\n            },\n        }\n        \n    RETURN_TYPES = (\"LATENT\",)\n    RETURN_NAMES = (\"latent\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/latents\"\n\n    def main(self, frames_0, frames_1):\n        frames_concat = torch.cat((frames_0, frames_1), dim=2).clone()\n        return (frames_concat,)\n    \n\n\nclass Frame_Select_Latent:\n    def __init__(self):\n        pass\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n\n            \"required\": {\n                \"frames\": (\"IMAGE\",),\n                \"select\": (\"INT\",  {\"default\": 0, \"min\": 0, \"max\": 10000}),\n                \n            },\n            \"optional\": {\n            },\n        }\n        \n    RETURN_TYPES = (\"LATENT\",)\n    RETURN_NAMES = (\"latent\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/latents\"\n\n    def main(self, frames=None, select=0):\n        frame = frames['samples'][:,:,select,:,:].clone().unsqueeze(dim=2)\n        return ({\"samples\": frame},)\n    \n\nclass Frames_Slice_Latent:\n    def __init__(self):\n        pass\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n\n            \"required\": {\n                \"frames\": (\"LATENT\",),\n                \"start\":  (\"INT\",  {\"default\": 0, \"min\": 0, \"max\": 10000}),\n                \"stop\":   (\"INT\",  {\"default\": 1, \"min\": 1, \"max\": 10000}),\n            },\n            \"optional\": {\n            },\n        }\n        \n    RETURN_TYPES = (\"LATENT\",)\n    RETURN_NAMES = (\"latent\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/latents\"\n\n    def main(self, frames=None, start=0, stop=1):\n        frames_slice = frames['samples'][:,:,start:stop,:,:].clone()\n        return ({\"samples\": frames_slice},)\n\n\nclass Frames_Concat_Latent:\n    def __init__(self):\n        pass\n\n    
@classmethod\n    def INPUT_TYPES(cls):\n        return {\n\n            \"required\": {\n                \"frames_0\": (\"LATENT\",),\n                \"frames_1\": (\"LATENT\",),\n            },\n            \"optional\": {\n            },\n        }\n        \n    RETURN_TYPES = (\"LATENT\",)\n    RETURN_NAMES = (\"latent\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/latents\"\n\n    def main(self, frames_0, frames_1):\n        frames_concat = torch.cat((frames_0['samples'], frames_1['samples']), dim=2).clone()\n        return ({\"samples\": frames_concat},)\n    \n\n\n\n\n
class Frames_Concat_Masks:\n    def __init__(self):\n        pass\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n\n            \"required\": {\n                \"frames_0\": (\"MASK\",),\n                \"frames_1\": (\"MASK\",),\n\n            },\n            \"optional\": {\n                \"frames_2\": (\"MASK\",),\n                \"frames_3\": (\"MASK\",),\n                \"frames_4\": (\"MASK\",),\n                \"frames_5\": (\"MASK\",),\n                \"frames_6\": (\"MASK\",),\n                \"frames_7\": (\"MASK\",),\n                \"frames_8\": (\"MASK\",),\n                \"frames_9\": (\"MASK\",),\n            },\n        }\n        \n    RETURN_TYPES = (\"MASK\",)\n    RETURN_NAMES = (\"temporal_mask\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/masks\"\n\n
    def main(self, frames_0, frames_1, frames_2=None, frames_3=None, frames_4=None, frames_5=None, frames_6=None, frames_7=None, frames_8=None, frames_9=None):\n        frames_concat = torch.cat((frames_0, frames_1), dim=-3).clone()\n        \n        # append any connected optional mask inputs in order, skipping the unconnected ones\n        for frames_extra in (frames_2, frames_3, frames_4, frames_5, frames_6, frames_7, frames_8, frames_9):\n            if frames_extra is not None:\n                frames_concat = torch.cat((frames_concat, frames_extra), dim=-3)\n        \n        if frames_concat.ndim == 3:\n            frames_concat.unsqueeze_(0)\n\n        return (frames_concat,)\n    \n\n\n\n\n
class Frames_Masks_Uninterpolate:\n    def __init__(self):\n        pass\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n\n            \"required\": {\n                \"raw_temporal_mask\": (\"MASK\",),\n                \"frame_chunk_size\" : (\"INT\", {\"default\": 4, \"min\": 1, \"max\": 10000, \"step\": 1}),\n            },\n            \"optional\": {\n            },\n        }\n        \n    RETURN_TYPES = (\"MASK\",)\n    RETURN_NAMES = (\"temporal_mask\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/masks\"\n\n    def main(self, raw_temporal_mask, frame_chunk_size):\n        #assert raw_temporal_mask.ndim == 3, \"Not a raw temporal mask!\"\n        \n        raw_frames = raw_temporal_mask.shape[-3]\n        
raw_frames_offset = raw_frames - 1\n        frames = raw_frames_offset // frame_chunk_size + 1\n        indices = torch.linspace(0, raw_frames_offset, steps=frames).long()\n        \n        temporal_mask = raw_temporal_mask[...,indices,:,:].unsqueeze(0)\n        return (temporal_mask,)\n\n\n\nclass Frames_Masks_ZeroOut:\n    def __init__(self):\n        pass\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n\n            \"required\": {\n                \"temporal_mask\": (\"MASK\",),\n                \"zero_out_frame\" : (\"INT\", {\"default\": 0, \"min\": 0, \"max\": 10000, \"step\": 1}),\n            },\n            \"optional\": {\n            },\n        }\n        \n    RETURN_TYPES = (\"MASK\",)\n    RETURN_NAMES = (\"temporal_mask\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/masks\"\n\n    def main(self, temporal_mask, zero_out_frame):\n        temporal_mask[...,zero_out_frame:zero_out_frame+1,:,:] = 1.0\n        return (temporal_mask,)\n\n\nclass Frames_Latent_ReverseOrder:\n    def __init__(self):\n        pass\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n\n            \"required\": {\n                \"frames\": (\"LATENT\",),\n            },\n            \"optional\": {\n            },\n        }\n        \n    RETURN_TYPES = (\"LATENT\",)\n    RETURN_NAMES = (\"frames_reversed\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/masks\"\n\n    def main(self, frames,):\n        samples = frames['samples']\n        flipped_frames = torch.zeros_like(samples)\n        \n        t_len = samples.shape[-3]\n        \n        for i in range(t_len):\n            flipped_frames[:,:,t_len-i-1,:,:] = samples[:,:,i,:,:]\n        return (  {\"samples\": flipped_frames },)\n        \n        #return (  {\"samples\": torch.flip(frames['samples'], dims=[-3]) },)\n\n\n\nclass LatentPhaseMagnitude:\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                \"latent_0_batch\":               (\"LATENT\",),\n                \"latent_1_batch\":               (\"LATENT\",),\n\n                \"phase_mix_power\":              (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"magnitude_mix_power\":          (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n\n                \"phase_luminosity\":             (\"FLOAT\",   {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"phase_cyan_red\":               (\"FLOAT\",   {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"phase_lime_purple\":            (\"FLOAT\",   {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"phase_pattern_structure\":      (\"FLOAT\",   {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n\n                \"magnitude_luminosity\":         (\"FLOAT\",   {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"magnitude_cyan_red\":           (\"FLOAT\",   {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"magnitude_lime_purple\":        (\"FLOAT\",   {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"magnitude_pattern_structure\":  (\"FLOAT\",   {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n\n                
\"latent_0_normal\":              (\"BOOLEAN\", {\"default\": True}),\n                \"latent_1_normal\":              (\"BOOLEAN\", {\"default\": True}),\n                \"latent_out_normal\":            (\"BOOLEAN\", {\"default\": True}),\n                \"latent_0_stdize\":              (\"BOOLEAN\", {\"default\": True}),\n                \"latent_1_stdize\":              (\"BOOLEAN\", {\"default\": True}),\n                \"latent_out_stdize\":            (\"BOOLEAN\", {\"default\": True}),\n                \"latent_0_meancenter\":          (\"BOOLEAN\", {\"default\": True}),\n                \"latent_1_meancenter\":          (\"BOOLEAN\", {\"default\": True}),\n                \"latent_out_meancenter\":        (\"BOOLEAN\", {\"default\": True}),\n            },\n            \"optional\": {\n                \"phase_mix_powers\":             (\"SIGMAS\", ),\n                \"magnitude_mix_powers\":         (\"SIGMAS\", ),\n\n                \"phase_luminositys\":            (\"SIGMAS\", ),\n                \"phase_cyan_reds\":              (\"SIGMAS\", ),\n                \"phase_lime_purples\":           (\"SIGMAS\", ),\n                \"phase_pattern_structures\":     (\"SIGMAS\", ),\n\n                \"magnitude_luminositys\":        (\"SIGMAS\", ),\n                \"magnitude_cyan_reds\":          (\"SIGMAS\", ),\n                \"magnitude_lime_purples\":       (\"SIGMAS\", ),\n                \"magnitude_pattern_structures\": (\"SIGMAS\", ),\n            }\n        }\n\n    RETURN_TYPES = (\"LATENT\",)\n    RETURN_NAMES = (\"latent\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/latents\"\n    \n    @staticmethod\n    def latent_repeat(latent, batch_size):\n        b, c, h, w = latent.shape\n        batch_latents = torch.zeros((batch_size, c, h, w), dtype=latent.dtype, layout=latent.layout, device=latent.device)\n        for i in range(batch_size):\n            batch_latents[i] = latent\n        return batch_latents\n\n    @staticmethod\n    def mix_latent_phase_magnitude(latent_0,\n                                    latent_1,\n                                    power_phase,\n                                    power_magnitude,\n                                    phase_luminosity,\n                                    phase_cyan_red,\n                                    phase_lime_purple,\n                                    phase_pattern_structure,\n                                    magnitude_luminosity,\n                                    magnitude_cyan_red,\n                                    magnitude_lime_purple,\n                                    magnitude_pattern_structure,\n                                    ):\n        \n        dtype = torch.promote_types(latent_0.dtype, latent_1.dtype)\n        # big accuracy problems with fp32 FFT! let's avoid that\n        latent_0 = latent_0.double()\n        latent_1 = latent_1.double()\n\n        latent_0_fft = torch.fft.fft2(latent_0)\n        latent_1_fft = torch.fft.fft2(latent_1)\n\n        latent_0_phase = torch.angle(latent_0_fft)\n        latent_1_phase = torch.angle(latent_1_fft)\n        latent_0_magnitude = torch.abs(latent_0_fft)\n        latent_1_magnitude = torch.abs(latent_1_fft)\n\n        # DC corruption...? 
handle separately??\n        #dc_index = (0, 0)\n        #dc_0 = latent_0_fft[:, :, dc_index[0], dc_index[1]]\n        #dc_1 = latent_1_fft[:, :, dc_index[0], dc_index[1]]\n        #mixed_dc = dc_0 * 0.5 + dc_1 * 0.5\n        #mixed_dc = dc_0 * (1 - phase_weight) + dc_1 * phase_weight\n\n
        # create complex FFT using a weighted mix of phases\n        chan_weights_phase     = [w for w in [phase_luminosity,     phase_cyan_red,     phase_lime_purple,     phase_pattern_structure    ]]\n        chan_weights_magnitude = [w for w in [magnitude_luminosity, magnitude_cyan_red, magnitude_lime_purple, magnitude_pattern_structure]]\n        mixed_phase     = torch.zeros_like(latent_0, dtype=latent_0.dtype, layout=latent_0.layout, device=latent_0.device)\n        mixed_magnitude = torch.zeros_like(latent_0, dtype=latent_0.dtype, layout=latent_0.layout, device=latent_0.device)\n\n
        for i in range(4):\n            mixed_phase[:, i]     = ( (latent_0_phase[:,i]     * (1-chan_weights_phase[i]))     ** power_phase     + (latent_1_phase[:,i]     * chan_weights_phase[i])     ** power_phase)     ** (1/power_phase)\n            mixed_magnitude[:, i] = ( (latent_0_magnitude[:,i] * (1-chan_weights_magnitude[i])) ** power_magnitude + (latent_1_magnitude[:,i] * chan_weights_magnitude[i]) ** power_magnitude) ** (1/power_magnitude)\n\n
        new_fft = mixed_magnitude * torch.exp(1j * mixed_phase)\n\n        #new_fft[:, :, dc_index[0], dc_index[1]] = mixed_dc\n\n        # inverse FFT to convert back to spatial domain\n        mixed_phase_magnitude = torch.fft.ifft2(new_fft).real\n\n        return mixed_phase_magnitude.to(dtype)\n    \n
    def main(self,\n            #batch_size,\n            latent_0_batch,\n            latent_1_batch,\n            latent_0_normal,\n            latent_1_normal,\n            latent_out_normal,\n            latent_0_stdize,\n            latent_1_stdize,\n            latent_out_stdize,\n\n            latent_0_meancenter,\n            latent_1_meancenter,\n            latent_out_meancenter,\n\n            phase_mix_power,\n            magnitude_mix_power,\n\n            phase_luminosity,\n            phase_cyan_red,\n            phase_lime_purple,\n            phase_pattern_structure,\n\n            magnitude_luminosity,\n            magnitude_cyan_red,\n            magnitude_lime_purple,\n            magnitude_pattern_structure,\n\n            phase_mix_powers             = None, \n            magnitude_mix_powers         = None,\n            phase_luminositys            = None,\n            phase_cyan_reds              = None,\n            phase_lime_purples           = None,\n            phase_pattern_structures     = None,\n            magnitude_luminositys        = None,\n            magnitude_cyan_reds          = None,\n            magnitude_lime_purples       = None,\n            magnitude_pattern_structures = None\n            ):\n        \n
        latent_0_batch = latent_0_batch[\"samples\"].double()\n        latent_1_batch = latent_1_batch[\"samples\"].double().to(latent_0_batch.device)\n\n        #if batch_size == 0:\n        batch_size = latent_0_batch.shape[0]\n        if latent_1_batch.shape[0] == 1:\n            latent_1_batch = self.latent_repeat(latent_1_batch, batch_size)\n\n
        magnitude_mix_powers         = initialize_or_scale(magnitude_mix_powers,         magnitude_mix_power,         batch_size)\n        phase_mix_powers             = initialize_or_scale(phase_mix_powers,             phase_mix_power,             
batch_size)\n\n        phase_luminositys            = initialize_or_scale(phase_luminositys,            phase_luminosity,            batch_size)\n        phase_cyan_reds              = initialize_or_scale(phase_cyan_reds,              phase_cyan_red,              batch_size)\n        phase_lime_purples           = initialize_or_scale(phase_lime_purples,           phase_lime_purple,           batch_size)\n        phase_pattern_structures     = initialize_or_scale(phase_pattern_structures,     phase_pattern_structure,     batch_size)\n\n        magnitude_luminositys        = initialize_or_scale(magnitude_luminositys,        magnitude_luminosity,        batch_size)\n        magnitude_cyan_reds          = initialize_or_scale(magnitude_cyan_reds,          magnitude_cyan_red,          batch_size)\n        magnitude_lime_purples       = initialize_or_scale(magnitude_lime_purples,       magnitude_lime_purple,       batch_size)\n        magnitude_pattern_structures = initialize_or_scale(magnitude_pattern_structures, magnitude_pattern_structure, batch_size)    \n\n        mixed_phase_magnitude_batch = torch.zeros(latent_0_batch.shape, device=latent_0_batch.device)\n\n        if latent_0_normal == True:\n            latent_0_batch = latent_normalize_channels(latent_0_batch)\n        if latent_1_normal == True:\n            latent_1_batch = latent_normalize_channels(latent_1_batch)\n        if latent_0_meancenter == True:\n            latent_0_batch = latent_meancenter_channels(latent_0_batch)\n        if latent_1_meancenter == True:\n            latent_1_batch = latent_meancenter_channels(latent_1_batch)\n        if latent_0_stdize == True:\n            latent_0_batch = latent_stdize_channels(latent_0_batch)\n        if latent_1_stdize == True:\n            latent_1_batch = latent_stdize_channels(latent_1_batch)\n \n        for i in range(batch_size):\n            mixed_phase_magnitude = self.mix_latent_phase_magnitude(latent_0_batch[i:i+1],\n                                                                    latent_1_batch[i:i+1],\n                                                                    \n                                                                    phase_mix_powers[i]            .item(),\n                                                                    magnitude_mix_powers[i]        .item(),\n\n                                                                    phase_luminositys[i]           .item(),\n                                                                    phase_cyan_reds[i]             .item(),\n                                                                    phase_lime_purples[i]          .item(),\n                                                                    phase_pattern_structures[i]    .item(),\n\n                                                                    magnitude_luminositys[i]       .item(),\n                                                                    magnitude_cyan_reds[i]         .item(),\n                                                                    magnitude_lime_purples[i]      .item(),\n                                                                    magnitude_pattern_structures[i].item()\n                                                                    )\n            \n            if latent_out_normal == True:\n                mixed_phase_magnitude = latent_normalize_channels(mixed_phase_magnitude)\n            if latent_out_stdize == True:\n                mixed_phase_magnitude = 
latent_stdize_channels(mixed_phase_magnitude)\n            if latent_out_meancenter == True:\n                mixed_phase_magnitude = latent_meancenter_channels(mixed_phase_magnitude)                                \n\n            mixed_phase_magnitude_batch[i, :, :, :] = mixed_phase_magnitude\n\n        return ({\"samples\": mixed_phase_magnitude_batch}, )\n\nclass LatentPhaseMagnitudeMultiply:\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                \"latent_0_batch\":               (\"LATENT\",),\n\n                \"phase_luminosity\":             (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"phase_cyan_red\":               (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"phase_lime_purple\":            (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"phase_pattern_structure\":      (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n\n                \"magnitude_luminosity\":         (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"magnitude_cyan_red\":           (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"magnitude_lime_purple\":        (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"magnitude_pattern_structure\":  (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n\n                \"latent_0_normal\":              (\"BOOLEAN\", {\"default\": False}),\n                \"latent_out_normal\":            (\"BOOLEAN\", {\"default\": False}),\n            },\n            \"optional\": {\n                \"phase_luminositys\":            (\"SIGMAS\", ),\n                \"phase_cyan_reds\":              (\"SIGMAS\", ),\n                \"phase_lime_purples\":           (\"SIGMAS\", ),\n                \"phase_pattern_structures\":     (\"SIGMAS\", ),\n\n                \"magnitude_luminositys\":        (\"SIGMAS\", ),\n                \"magnitude_cyan_reds\":          (\"SIGMAS\", ),\n                \"magnitude_lime_purples\":       (\"SIGMAS\", ),\n                \"magnitude_pattern_structures\": (\"SIGMAS\", ),\n            }\n        }\n\n    RETURN_TYPES = (\"LATENT\",)\n    FUNCTION = \"main\"\n    CATEGORY = \"RES4LYF/latents\"\n\n    @staticmethod\n    def latent_repeat(latent, batch_size):\n        b, c, h, w = latent.shape\n        batch_latents = torch.zeros((batch_size, c, h, w), dtype=latent.dtype, layout=latent.layout, device=latent.device)\n        for i in range(batch_size):\n            batch_latents[i] = latent\n        return batch_latents\n\n    @staticmethod\n    def mix_latent_phase_magnitude(latent_0,\n                                    \n                                    phase_luminosity,\n                                    phase_cyan_red,\n                                    phase_lime_purple,\n                                    phase_pattern_structure,\n                                    \n                                    magnitude_luminosity,\n                                    magnitude_cyan_red,\n                                    magnitude_lime_purple,\n                                    magnitude_pattern_structure\n                                    ):\n        
dtype = latent_0.dtype\n        # avoid big accuracy problems with fp32 FFT!\n        latent_0 = latent_0.double()\n\n        latent_0_fft = torch.fft.fft2(latent_0)\n\n        latent_0_phase     = torch.angle(latent_0_fft)\n        latent_0_magnitude = torch.abs  (latent_0_fft)\n\n        # create new complex FFT using weighted mix of phases\n        chan_weights_phase     = [w for w in [phase_luminosity,     phase_cyan_red,     phase_lime_purple,     phase_pattern_structure    ]]\n        chan_weights_magnitude = [w for w in [magnitude_luminosity, magnitude_cyan_red, magnitude_lime_purple, magnitude_pattern_structure]]\n        mixed_phase     = torch.zeros_like(latent_0, dtype=latent_0.dtype, layout=latent_0.layout, device=latent_0.device)\n        mixed_magnitude = torch.zeros_like(latent_0, dtype=latent_0.dtype, layout=latent_0.layout, device=latent_0.device)\n\n        for i in range(4):\n            mixed_phase[:, i]     = latent_0_phase[:,i]     * chan_weights_phase[i]\n            mixed_magnitude[:, i] = latent_0_magnitude[:,i] * chan_weights_magnitude[i]\n\n        new_fft = mixed_magnitude * torch.exp(1j * mixed_phase)\n        \n        # inverse FFT to convert back to spatial domain\n        mixed_phase_magnitude = torch.fft.ifft2(new_fft).real\n\n        return mixed_phase_magnitude.to(dtype)\n    \n    def main(self,\n             latent_0_batch, latent_0_normal, latent_out_normal,\n             phase_luminosity,           phase_cyan_red,           phase_lime_purple,           phase_pattern_structure, \n             magnitude_luminosity,       magnitude_cyan_red,       magnitude_lime_purple,       magnitude_pattern_structure, \n             phase_luminositys=None,     phase_cyan_reds=None,     phase_lime_purples=None,     phase_pattern_structures=None,\n             magnitude_luminositys=None, magnitude_cyan_reds=None, magnitude_lime_purples=None, magnitude_pattern_structures=None\n             ):\n        latent_0_batch = latent_0_batch[\"samples\"].double()\n\n        batch_size = latent_0_batch.shape[0]\n\n        phase_luminositys            = initialize_or_scale(phase_luminositys,            phase_luminosity,            batch_size)\n        phase_cyan_reds              = initialize_or_scale(phase_cyan_reds,              phase_cyan_red,              batch_size)\n        phase_lime_purples           = initialize_or_scale(phase_lime_purples,           phase_lime_purple,           batch_size)\n        phase_pattern_structures     = initialize_or_scale(phase_pattern_structures,     phase_pattern_structure,     batch_size)\n\n        magnitude_luminositys        = initialize_or_scale(magnitude_luminositys,        magnitude_luminosity,        batch_size)\n        magnitude_cyan_reds          = initialize_or_scale(magnitude_cyan_reds,          magnitude_cyan_red,          batch_size)\n        magnitude_lime_purples       = initialize_or_scale(magnitude_lime_purples,       magnitude_lime_purple,       batch_size)\n        magnitude_pattern_structures = initialize_or_scale(magnitude_pattern_structures, magnitude_pattern_structure, batch_size)    \n\n        mixed_phase_magnitude_batch = torch.zeros(latent_0_batch.shape, device=latent_0_batch.device)\n\n        if latent_0_normal == True:\n            latent_0_batch = latent_normalize_channels(latent_0_batch)\n \n        for i in range(batch_size):\n            mixed_phase_magnitude = self.mix_latent_phase_magnitude(latent_0_batch[i:i+1],\n\n                                                                    
phase_luminositys[i].item(),\n                                                                    phase_cyan_reds[i].item(),\n                                                                    phase_lime_purples[i].item(),\n                                                                    phase_pattern_structures[i].item(),\n\n                                                                    magnitude_luminositys[i].item(),\n                                                                    magnitude_cyan_reds[i].item(),\n                                                                    magnitude_lime_purples[i].item(),\n                                                                    magnitude_pattern_structures[i].item()\n                                                                    )\n            if latent_out_normal == True:\n                mixed_phase_magnitude = latent_normalize_channels(mixed_phase_magnitude)\n\n            mixed_phase_magnitude_batch[i, :, :, :] = mixed_phase_magnitude\n\n        return ({\"samples\": mixed_phase_magnitude_batch}, )\n\n\n\nclass LatentPhaseMagnitudeOffset:\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                \"latent_0_batch\":               (\"LATENT\",),\n\n                \"phase_luminosity\":             (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"phase_cyan_red\":               (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"phase_lime_purple\":            (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"phase_pattern_structure\":      (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n\n                \"magnitude_luminosity\":         (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"magnitude_cyan_red\":           (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"magnitude_lime_purple\":        (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"magnitude_pattern_structure\":  (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n\n                \"latent_0_normal\":              (\"BOOLEAN\", {\"default\": False}),\n                \"latent_out_normal\":            (\"BOOLEAN\", {\"default\": False}),\n            },\n            \"optional\": {\n                \"phase_luminositys\":            (\"SIGMAS\", ),\n                \"phase_cyan_reds\":              (\"SIGMAS\", ),\n                \"phase_lime_purples\":           (\"SIGMAS\", ),\n                \"phase_pattern_structures\":     (\"SIGMAS\", ),\n\n                \"magnitude_luminositys\":        (\"SIGMAS\", ),\n                \"magnitude_cyan_reds\":          (\"SIGMAS\", ),\n                \"magnitude_lime_purples\":       (\"SIGMAS\", ),\n                \"magnitude_pattern_structures\": (\"SIGMAS\", ),\n            }\n        }\n\n    RETURN_TYPES = (\"LATENT\",)\n    FUNCTION = \"main\"\n    CATEGORY = \"RES4LYF/latents\"\n    \n    @staticmethod\n    def latent_repeat(latent, batch_size):\n        b, c, h, w = latent.shape\n        batch_latents = torch.zeros((batch_size, c, h, w), dtype=latent.dtype, layout=latent.layout, device=latent.device)\n        for i in 
range(batch_size):\n            batch_latents[i] = latent\n        return batch_latents\n\n    @staticmethod\n    def mix_latent_phase_magnitude(latent_0,\n     \n                                    phase_luminosity,\n                                    phase_cyan_red,\n                                    phase_lime_purple,\n                                    phase_pattern_structure,\n                                                                        \n                                    magnitude_luminosity,\n                                    magnitude_cyan_red,\n                                    magnitude_lime_purple,\n                                    magnitude_pattern_structure\n                                    ):\n        dtype = latent_0.dtype\n        # avoid big accuracy problems with fp32 FFT!\n        latent_0 = latent_0.double()\n\n        latent_0_fft = torch.fft.fft2(latent_0)\n\n        latent_0_phase = torch.angle(latent_0_fft)\n        latent_0_magnitude = torch.abs(latent_0_fft)\n\n        # create new complex FFT using a weighted mix of phases\n        chan_weights_phase     = [w for w in [phase_luminosity,     phase_cyan_red,     phase_lime_purple,     phase_pattern_structure    ]]\n        chan_weights_magnitude = [w for w in [magnitude_luminosity, magnitude_cyan_red, magnitude_lime_purple, magnitude_pattern_structure]]\n        mixed_phase     = torch.zeros_like(latent_0, dtype=latent_0.dtype, layout=latent_0.layout, device=latent_0.device)\n        mixed_magnitude = torch.zeros_like(latent_0, dtype=latent_0.dtype, layout=latent_0.layout, device=latent_0.device)\n\n        for i in range(4):\n            mixed_phase[:, i]     = latent_0_phase[:,i]     + chan_weights_phase[i]\n            mixed_magnitude[:, i] = latent_0_magnitude[:,i] + chan_weights_magnitude[i]\n\n        new_fft = mixed_magnitude * torch.exp(1j * mixed_phase)\n        \n        # inverse FFT to convert back to spatial domain\n        mixed_phase_magnitude = torch.fft.ifft2(new_fft).real\n\n        return mixed_phase_magnitude.to(dtype)\n    \n    def main(self,\n             latent_0_batch, latent_0_normal, latent_out_normal,\n             phase_luminosity,           phase_cyan_red,           phase_lime_purple,           phase_pattern_structure, \n             magnitude_luminosity,       magnitude_cyan_red,       magnitude_lime_purple,       magnitude_pattern_structure, \n             phase_luminositys=None,     phase_cyan_reds=None,     phase_lime_purples=None,     phase_pattern_structures=None,\n             magnitude_luminositys=None, magnitude_cyan_reds=None, magnitude_lime_purples=None, magnitude_pattern_structures=None\n             ):\n        latent_0_batch = latent_0_batch[\"samples\"].double()\n\n        batch_size = latent_0_batch.shape[0]\n\n        phase_luminositys            = initialize_or_scale(phase_luminositys,            phase_luminosity,            batch_size)\n        phase_cyan_reds              = initialize_or_scale(phase_cyan_reds,              phase_cyan_red,              batch_size)\n        phase_lime_purples           = initialize_or_scale(phase_lime_purples,           phase_lime_purple,           batch_size)\n        phase_pattern_structures     = initialize_or_scale(phase_pattern_structures,     phase_pattern_structure,     batch_size)\n\n        magnitude_luminositys        = initialize_or_scale(magnitude_luminositys,        magnitude_luminosity,        batch_size)\n        magnitude_cyan_reds          = initialize_or_scale(magnitude_cyan_reds,          
magnitude_cyan_red,          batch_size)\n        magnitude_lime_purples       = initialize_or_scale(magnitude_lime_purples,       magnitude_lime_purple,       batch_size)\n        magnitude_pattern_structures = initialize_or_scale(magnitude_pattern_structures, magnitude_pattern_structure, batch_size)    \n\n        mixed_phase_magnitude_batch = torch.zeros(latent_0_batch.shape, device=latent_0_batch.device)\n\n        if latent_0_normal == True:\n            latent_0_batch = latent_normalize_channels(latent_0_batch)\n \n        for i in range(batch_size):\n            mixed_phase_magnitude = self.mix_latent_phase_magnitude(latent_0_batch[i:i+1],\n\n                                                                    phase_luminositys[i]           .item(),\n                                                                    phase_cyan_reds[i]             .item(),\n                                                                    phase_lime_purples[i]          .item(),\n                                                                    phase_pattern_structures[i]    .item(),\n\n                                                                    magnitude_luminositys[i]       .item(),\n                                                                    magnitude_cyan_reds[i]         .item(),\n                                                                    magnitude_lime_purples[i]      .item(),\n                                                                    magnitude_pattern_structures[i].item()\n                                                                    )\n            if latent_out_normal == True:\n                mixed_phase_magnitude = latent_normalize_channels(mixed_phase_magnitude)\n\n            mixed_phase_magnitude_batch[i, :, :, :] = mixed_phase_magnitude\n\n        return ({\"samples\": mixed_phase_magnitude_batch}, )\n\n\n\nclass LatentPhaseMagnitudePower:\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                \"latent_0_batch\":               (\"LATENT\",),\n\n                \"phase_luminosity\":             (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"phase_cyan_red\":               (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"phase_lime_purple\":            (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"phase_pattern_structure\":      (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n\n                \"magnitude_luminosity\":         (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"magnitude_cyan_red\":           (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"magnitude_lime_purple\":        (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"magnitude_pattern_structure\":  (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n\n                \"latent_0_normal\":              (\"BOOLEAN\", {\"default\": False}),\n                \"latent_out_normal\":            (\"BOOLEAN\", {\"default\": False}),\n            },\n            \"optional\": {\n                \"phase_luminositys\":            (\"SIGMAS\", ),\n                \"phase_cyan_reds\":              (\"SIGMAS\", 
),\n                \"phase_lime_purples\":           (\"SIGMAS\", ),\n                \"phase_pattern_structures\":     (\"SIGMAS\", ),\n\n                \"magnitude_luminositys\":        (\"SIGMAS\", ),\n                \"magnitude_cyan_reds\":          (\"SIGMAS\", ),\n                \"magnitude_lime_purples\":       (\"SIGMAS\", ),\n                \"magnitude_pattern_structures\": (\"SIGMAS\", ),\n            }\n        }\n\n    RETURN_TYPES = (\"LATENT\",)\n    FUNCTION = \"main\"\n    CATEGORY = \"RES4LYF/latents\"\n    \n    @staticmethod\n    def latent_repeat(latent, batch_size):\n        b, c, h, w = latent.shape\n        batch_latents = torch.zeros((batch_size, c, h, w), dtype=latent.dtype, layout=latent.layout, device=latent.device)\n        for i in range(batch_size):\n            batch_latents[i] = latent\n        return batch_latents\n\n    @staticmethod\n    def mix_latent_phase_magnitude(latent_0,  \n                                    phase_luminosity,\n                                    phase_cyan_red,\n                                    phase_lime_purple,\n                                    phase_pattern_structure,\n                                                                        \n                                    magnitude_luminosity,\n                                    magnitude_cyan_red,\n                                    magnitude_lime_purple,\n                                    magnitude_pattern_structure\n                                    ):\n        dtype = latent_0.dtype\n        # avoid big accuracy problems with fp32 FFT!\n        latent_0 = latent_0.double()\n\n        latent_0_fft = torch.fft.fft2(latent_0)\n\n        latent_0_phase = torch.angle(latent_0_fft)\n        latent_0_magnitude = torch.abs(latent_0_fft)\n\n        # create new complex FFT using a weighted mix of phases\n        chan_weights_phase     = [w for w in [phase_luminosity,     phase_cyan_red,     phase_lime_purple,     phase_pattern_structure    ]]\n        chan_weights_magnitude = [w for w in [magnitude_luminosity, magnitude_cyan_red, magnitude_lime_purple, magnitude_pattern_structure]]\n        mixed_phase     = torch.zeros_like(latent_0, dtype=latent_0.dtype, layout=latent_0.layout, device=latent_0.device)\n        mixed_magnitude = torch.zeros_like(latent_0, dtype=latent_0.dtype, layout=latent_0.layout, device=latent_0.device)\n\n        for i in range(4):\n            mixed_phase[:, i]     = latent_0_phase[:,i]     ** chan_weights_phase[i]\n            mixed_magnitude[:, i] = latent_0_magnitude[:,i] ** chan_weights_magnitude[i]\n\n        new_fft = mixed_magnitude * torch.exp(1j * mixed_phase)\n        \n        # inverse FFT to convert back to spatial domain\n        mixed_phase_magnitude = torch.fft.ifft2(new_fft).real\n\n        return mixed_phase_magnitude.to(dtype)\n    \n    def main(self,\n            latent_0_batch, latent_0_normal, latent_out_normal,\n            phase_luminosity,           phase_cyan_red,           phase_lime_purple,           phase_pattern_structure, \n            magnitude_luminosity,       magnitude_cyan_red,       magnitude_lime_purple,       magnitude_pattern_structure, \n            phase_luminositys=None,     phase_cyan_reds=None,     phase_lime_purples=None,     phase_pattern_structures=None,\n            magnitude_luminositys=None, magnitude_cyan_reds=None, magnitude_lime_purples=None, magnitude_pattern_structures=None\n            ):\n        latent_0_batch = latent_0_batch[\"samples\"].double()\n\n        batch_size = 
latent_0_batch.shape[0]\n\n        phase_luminositys            = initialize_or_scale(phase_luminositys,            phase_luminosity,            batch_size)\n        phase_cyan_reds              = initialize_or_scale(phase_cyan_reds,              phase_cyan_red,              batch_size)\n        phase_lime_purples           = initialize_or_scale(phase_lime_purples,           phase_lime_purple,           batch_size)\n        phase_pattern_structures     = initialize_or_scale(phase_pattern_structures,     phase_pattern_structure,     batch_size)\n\n        magnitude_luminositys        = initialize_or_scale(magnitude_luminositys,        magnitude_luminosity,        batch_size)\n        magnitude_cyan_reds          = initialize_or_scale(magnitude_cyan_reds,          magnitude_cyan_red,          batch_size)\n        magnitude_lime_purples       = initialize_or_scale(magnitude_lime_purples,       magnitude_lime_purple,       batch_size)\n        magnitude_pattern_structures = initialize_or_scale(magnitude_pattern_structures, magnitude_pattern_structure, batch_size)    \n\n        mixed_phase_magnitude_batch = torch.zeros(latent_0_batch.shape, device=latent_0_batch.device)\n\n        if latent_0_normal == True:\n            latent_0_batch = latent_normalize_channels(latent_0_batch)\n\n        for i in range(batch_size):\n            mixed_phase_magnitude = self.mix_latent_phase_magnitude(latent_0_batch[i:i+1],\n\n                                                                    phase_luminositys[i]           .item(),\n                                                                    phase_cyan_reds[i]             .item(),\n                                                                    phase_lime_purples[i]          .item(),\n                                                                    phase_pattern_structures[i]    .item(),\n\n                                                                    magnitude_luminositys[i]       .item(),\n                                                                    magnitude_cyan_reds[i]         .item(),\n                                                                    magnitude_lime_purples[i]      .item(),\n                                                                    magnitude_pattern_structures[i].item()\n                                                                    )\n            if latent_out_normal == True:\n                mixed_phase_magnitude = latent_normalize_channels(mixed_phase_magnitude)\n\n            mixed_phase_magnitude_batch[i, :, :, :] = mixed_phase_magnitude\n\n        return ({\"samples\": mixed_phase_magnitude_batch}, )\n\n\n\nclass StableCascade_StageC_VAEEncode_Exact:\n    def __init__(self, device=\"cpu\"):\n        self.device = device\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                \"image\": (\"IMAGE\",),\n                \"vae\": (\"VAE\", ),\n                \"width\": (\"INT\", {\"default\": 24, \"min\": 1, \"max\": 1024, \"step\": 1}),\n                \"height\": (\"INT\", {\"default\": 24, \"min\": 1, \"max\": 1024, \"step\": 1}),\n            }\n        }\n        \n    RETURN_TYPES = (\"LATENT\",)\n    RETURN_NAMES = (\"stage_c\",)\n    FUNCTION     = \"generate\"\n    CATEGORY     = \"RES4LYF/vae\"\n    \n    def generate(self, image, vae, width, height):\n        out_width  = (width)  * vae.downscale_ratio #downscale_ratio = 32\n        out_height = (height) * vae.downscale_ratio\n        #movedim(-1,1) goes from 1,1024,1024,3 to 
1,3,1024,1024\n        s = comfy.utils.common_upscale(image.movedim(-1,1), out_width, out_height, \"lanczos\", \"center\").movedim(1,-1)\n\n        c_latent = vae.encode(s[:,:,:,:3]) #to slice off alpha channel?\n        return ({\n            \"samples\": c_latent,\n        },)\n        \n\n\nclass StableCascade_StageC_VAEEncode_Exact_Tiled:\n    def __init__(self, device=\"cpu\"):\n        self.device = device\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                \"image\": (\"IMAGE\",),\n                \"vae\": (\"VAE\", ),\n                \"tile_size\": (\"INT\", {\"default\": 512, \"min\": 320, \"max\": 4096, \"step\": 64}),\n                \"overlap\": (\"INT\", {\"default\": 16, \"min\": 8, \"max\": 128, \"step\": 8}),\n            }\n        }\n    \n    RETURN_TYPES = (\"LATENT\",)\n    RETURN_NAMES = (\"stage_c\",)\n    FUNCTION     = \"generate\"\n    CATEGORY     = \"RES4LYF/vae\"\n\n    def generate(self, image, vae, tile_size, overlap):\n\n        upscale_amount = vae.downscale_ratio  # downscale_ratio = 32\n\n        image = image.movedim(-1, 1)  # bhwc -> bchw \n\n        encode_fn = lambda img: vae.encode(img.to(vae.device)).to(\"cpu\")\n\n        c_latent = tiled_scale_multidim(image,\n                                        encode_fn,\n                                        tile           = (tile_size // 8, tile_size // 8),\n                                        overlap        = overlap,\n                                        upscale_amount = upscale_amount,\n                                        out_channels   = 16, \n                                        output_device  = self.device\n                                        )\n\n        return ({\"samples\": c_latent,},)\n\n@torch.inference_mode()\ndef tiled_scale_multidim(samples,\n                        function,\n                        tile           = (64, 64),\n                        overlap        = 8,\n                        upscale_amount = 4,\n                        out_channels   = 3,\n                        output_device  = \"cpu\",\n                        pbar           = None\n                        ):\n    \n    dims = len(tile)\n    output_shape = [samples.shape[0], out_channels] + list(map(lambda a: round(a * upscale_amount), samples.shape[2:]))\n    output = torch.zeros(output_shape, device=output_device)\n\n    for b in range(samples.shape[0]):\n        for it in itertools.product(*map(lambda a: range(0, a[0], a[1] - overlap), zip(samples.shape[2:], tile))):\n            s_in = samples[b:b+1]\n            upscaled = []\n\n            for d in range(dims):\n                pos = max(0, min(s_in.shape[d + 2] - overlap, it[d]))\n                l = min(tile[d], s_in.shape[d + 2] - pos)\n                s_in = s_in.narrow(d + 2, pos, l)\n                upscaled.append(round(pos * upscale_amount))\n\n            ps = function(s_in).to(output_device)\n            mask = torch.ones_like(ps)\n            feather = round(overlap * upscale_amount)\n\n            for t in range(feather):\n                for d in range(2, dims + 2):\n                    mask.narrow(d, t, 1).mul_((1.0 / feather) * (t + 1))\n                    mask.narrow(d, mask.shape[d] - 1 - t, 1).mul_((1.0 / feather) * (t + 1))\n\n            o = output[b:b+1]\n            for d in range(dims):\n                o = o.narrow(d + 2, upscaled[d], mask.shape[d + 2])\n\n            o.add_(ps * mask)\n\n            if pbar is not None:\n                pbar.update(1)\n\n   
 return output\n\n\nclass EmptyLatentImageCustom:\n    def __init__(self):\n        self.device = comfy.model_management.intermediate_device()\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": { \n                \"width\":       (\"INT\",                                       {\"default\": 24, \"min\": 1, \"max\": MAX_RESOLUTION, \"step\": 1}),\n                \"height\":      (\"INT\",                                       {\"default\": 24, \"min\": 1, \"max\": MAX_RESOLUTION, \"step\": 1}),\n                \"batch_size\":  (\"INT\",                                       {\"default\": 1,  \"min\": 1, \"max\": 4096}),\n                \"channels\":    (['4', '16'],                                 {\"default\": '4'}),\n                \"mode\":        (['sdxl', 'cascade_b', 'cascade_c', 'exact'], {\"default\": 'sdxl'}),\n                \"compression\": (\"INT\",                                       {\"default\": 42, \"min\": 4, \"max\": 128, \"step\": 1}),\n                \"precision\":   (['fp16', 'fp32', 'fp64'],                    {\"default\": 'fp32'}),\n            }\n        }\n    \n    RETURN_TYPES = (\"LATENT\",)\n    FUNCTION = \"generate\"\n\n    CATEGORY = \"RES4LYF/latents\"\n\n    def generate(self,\n                width,\n                height,\n                batch_size,\n                channels,\n                mode,\n                compression,\n                precision\n                ):\n        \n        c = int(channels)\n\n        ratio = 1\n        match mode:\n            case \"sdxl\":\n                ratio = 8\n            case \"cascade_b\":\n                ratio = 4\n            case \"cascade_c\":\n                ratio = compression\n            case \"exact\":\n                ratio = 1\n\n        dtype = torch.float32\n        match precision:\n            case \"fp16\":\n                dtype = torch.float16\n            case \"fp32\":\n                dtype = torch.float32\n            case \"fp64\":\n                dtype = torch.float64\n\n        latent = torch.zeros([batch_size,\n                            c,\n                            height // ratio, \n                            width // ratio], \n                            dtype=dtype, \n                            device=self.device)\n        \n        return ({\"samples\":latent}, )\n\nclass EmptyLatentImage64:\n    def __init__(self):\n        self.device = comfy.model_management.intermediate_device()\n\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": { \n                \"width\":      (\"INT\", {\"default\": 1024, \"min\": 16, \"max\": MAX_RESOLUTION, \"step\": 8}),\n                \"height\":     (\"INT\", {\"default\": 1024, \"min\": 16, \"max\": MAX_RESOLUTION, \"step\": 8}),\n                \"batch_size\": (\"INT\", {\"default\": 1, \"min\": 1, \"max\": 4096})\n                }\n            }\n        \n    RETURN_TYPES = (\"LATENT\",)\n    RETURN_NAMES = (\"latent\",)\n    FUNCTION     = \"generate\"\n    CATEGORY     = \"RES4LYF/latents\"\n\n    def generate(self, width, height, batch_size=1):\n        latent = torch.zeros([batch_size, 4, height // 8, width // 8], dtype=torch.float64, device=self.device)\n        return ({\"samples\":latent}, )\n\n\n\n
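# Batch of latents filled with Perlin-derived noise that is pushed toward a\n# Gaussian profile via erfinv; detail_level (or a per-step \"details\" schedule)\n# scales the noise amplitude before clamping.\n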
\"max\": 0xffffffffffffffff}),\n            \"width\":        (\"INT\",   {\"default\": 1024, \"min\": 8, \"max\": MAX_RESOLUTION, \"step\": 8}),\n            \"height\":       (\"INT\",   {\"default\": 1024, \"min\": 8, \"max\": MAX_RESOLUTION, \"step\": 8}),\n            \"batch_size\":   (\"INT\",   {\"default\": 1,    \"min\": 1, \"max\": 256}),\n            \"detail_level\": (\"FLOAT\", {\"default\": 0,    \"min\":-1, \"max\": 1.0,            \"step\": 0.1}),\n            },\n            \"optional\": {\n                \"details\":  (\"SIGMAS\", ),\n            }\n        }\n    RETURN_TYPES = (\"LATENT\",)\n    RETURN_NAMES = (\"latent\",)\n    FUNCTION = \"create_noisy_latents_perlin\"\n    CATEGORY = \"RES4LYF/noise\"\n\n    # found at https://gist.github.com/vadimkantorov/ac1b097753f217c5c11bc2ff396e0a57\n    # which was ported from https://github.com/pvigier/perlin-numpy/blob/master/perlin2d.py\n    def rand_perlin_2d(self, shape, res, fade = lambda t: 6*t**5 - 15*t**4 + 10*t**3):\n        delta = (res[0] / shape[0], res[1] / shape[1])\n        d = (shape[0] // res[0], shape[1] // res[1])\n        \n        grid = torch.stack(torch.meshgrid(torch.arange(0, res[0], delta[0]), torch.arange(0, res[1], delta[1])), dim = -1) % 1\n        angles = 2*math.pi*torch.rand(res[0]+1, res[1]+1)\n        gradients = torch.stack((torch.cos(angles), torch.sin(angles)), dim = -1)\n        \n        tile_grads = lambda slice1, slice2: gradients[slice1[0]:slice1[1], slice2[0]:slice2[1]].repeat_interleave(d[0], 0).repeat_interleave(d[1], 1)\n        dot = lambda grad, shift: (torch.stack((grid[:shape[0],:shape[1],0] + shift[0], grid[:shape[0],:shape[1], 1] + shift[1]  ), dim = -1) * grad[:shape[0], :shape[1]]).sum(dim = -1)\n        \n        n00 = dot(tile_grads([0, -1], [0, -1]), [0,  0])\n        n10 = dot(tile_grads([1, None], [0, -1]), [-1, 0])\n        n01 = dot(tile_grads([0, -1],[1, None]), [0, -1])\n        n11 = dot(tile_grads([1, None], [1, None]), [-1,-1])\n        t = fade(grid[:shape[0], :shape[1]])\n        return math.sqrt(2) * torch.lerp(torch.lerp(n00, n10, t[..., 0]), torch.lerp(n01, n11, t[..., 0]), t[..., 1])\n\n    def rand_perlin_2d_octaves(self, shape, res, octaves=1, persistence=0.5):\n        noise = torch.zeros(shape)\n        frequency = 1\n        amplitude = 1\n        for _ in range(octaves):\n            noise += amplitude * self.rand_perlin_2d(shape, (frequency*res[0], frequency*res[1]))\n            frequency *= 2\n            amplitude *= persistence\n        noise = torch.remainder(torch.abs(noise)*1000000,11)/11\n        # noise = (torch.sin(torch.remainder(noise*1000000,83))+1)/2\n        return noise\n    \n    def scale_tensor(self, x):\n        min_value = x.min()\n        max_value = x.max()\n        x = (x - min_value) / (max_value - min_value)\n        return x\n\n    def create_noisy_latents_perlin(self, seed, width, height, batch_size, detail_level, details=None):\n        if details is None:\n            details = torch.full((10000,), detail_level)\n        else:\n            details = detail_level * details\n        torch.manual_seed(seed)\n        noise = torch.zeros((batch_size, 4, height // 8, width // 8), dtype=torch.float32, device=\"cpu\").cpu()\n        for i in range(batch_size):\n            for j in range(4):\n                noise_values = self.rand_perlin_2d_octaves((height // 8, width // 8), (1,1), 1, 1)\n                result = (1+details[i]/10)*torch.erfinv(2 * noise_values - 1) * (2 ** 0.5)\n                result = 
result = torch.clamp(result, -5, 5)\n                noise[i, j, :, :] = result\n        return ({\"samples\": noise},)\n\n\nclass LatentNoiseBatch_gaussian_channels:\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                \"latent\":                  (\"LATENT\",),\n                \"mean\":                    (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"mean_luminosity\":         (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"mean_cyan_red\":           (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"mean_lime_purple\":        (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"mean_pattern_structure\":  (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"std\":                     (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"steps\":                   (\"INT\",   {\"default\": 0,   \"min\": -10000,   \"max\": 10000}),\n                \"seed\":                    (\"INT\",   {\"default\": 0,   \"min\": 0,        \"max\": 0xffffffffffffffff}),\n            },\n            \"optional\": {\n                \"means\":                   (\"SIGMAS\", ),\n                \"mean_luminositys\":        (\"SIGMAS\", ),\n                \"mean_cyan_reds\":          (\"SIGMAS\", ),\n                \"mean_lime_purples\":       (\"SIGMAS\", ),\n                \"mean_pattern_structures\": (\"SIGMAS\", ),\n                \"stds\":                    (\"SIGMAS\", ),\n            }\n        }\n\n    RETURN_TYPES = (\"LATENT\",)\n    RETURN_NAMES = (\"latent\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/noise\"\n\n    @staticmethod\n    def gaussian_noise_channels(x, mean_luminosity=0.0, mean_cyan_red=0.0, mean_lime_purple=0.0, mean_pattern_structure=0.0):\n        x = x.squeeze(0)\n\n        luminosity        = x[0:1] + mean_luminosity\n        cyan_red          = x[1:2] + mean_cyan_red\n        lime_purple       = x[2:3] + mean_lime_purple\n        pattern_structure = x[3:4] + mean_pattern_structure\n\n        x = torch.unsqueeze(torch.cat([luminosity, cyan_red, lime_purple, pattern_structure]), 0)\n\n        return x\n\n    def main(self, latent, steps, seed, \n            mean, mean_luminosity, mean_cyan_red, mean_lime_purple, mean_pattern_structure, std,\n            means=None, mean_luminositys=None, mean_cyan_reds=None, mean_lime_purples=None, mean_pattern_structures=None, stds=None):\n        if steps == 0:\n            if means is None:\n                raise ValueError(\"LatentNoiseBatch_gaussian_channels: steps == 0 requires a means input to infer the step count\")\n            steps = len(means)\n\n        x = latent[\"samples\"]\n        b, c, h, w = x.shape  \n\n        noise_latents = torch.zeros([steps, 4, h, w], dtype=x.dtype, layout=x.layout, device=x.device)\n\n        noise_sampler = NOISE_GENERATOR_CLASSES.get('gaussian')(x=x, seed = seed)\n\n        means                   = initialize_or_scale(means                  , mean                  , steps)\n        mean_luminositys        = initialize_or_scale(mean_luminositys       , mean_luminosity       , steps)\n        mean_cyan_reds          = initialize_or_scale(mean_cyan_reds         , mean_cyan_red         , steps)\n        mean_lime_purples       = initialize_or_scale(mean_lime_purples      , mean_lime_purple      , steps)\n        mean_pattern_structures = 
initialize_or_scale(mean_pattern_structures, mean_pattern_structure, steps)\n\n        stds = initialize_or_scale(stds, std, steps)\n\n        for i in range(steps):\n            noise = noise_sampler(mean=means[i].item(), std=stds[i].item())\n            noise = self.gaussian_noise_channels(noise, mean_luminositys[i].item(), mean_cyan_reds[i].item(), mean_lime_purples[i].item(), mean_pattern_structures[i].item())\n            noise_latents[i] = x + noise\n\n        return ({\"samples\": noise_latents}, )\n\nclass LatentNoiseBatch_gaussian:\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                \"latent\": (\"LATENT\",),\n                \"mean\":   (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"std\":    (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"steps\":  (\"INT\",   {\"default\": 0,   \"min\": -10000,   \"max\": 10000}),\n                \"seed\":   (\"INT\",   {\"default\": 0,   \"min\": 0,        \"max\": 0xffffffffffffffff}),\n            },\n            \"optional\": {\n                \"means\":  (\"SIGMAS\", ),\n                \"stds\":   (\"SIGMAS\", ),\n                \"steps_\": (\"SIGMAS\", ),\n            }\n        }\n\n    RETURN_TYPES = (\"LATENT\",)\n    FUNCTION = \"main\"\n\n    CATEGORY = \"RES4LYF/noise\"\n\n    def main(self, latent, mean, std, steps, seed, means=None, stds=None, steps_=None):\n        if steps_ is not None:\n            steps = len(steps_)\n\n        means = initialize_or_scale(means, mean, steps)\n        stds  = initialize_or_scale(stds,  std,  steps)    \n\n        latent_samples = latent[\"samples\"]\n        b, c, h, w = latent_samples.shape  \n\n        noise_latents = torch.zeros([steps, c, h, w], dtype=latent_samples.dtype, layout=latent_samples.layout, device=latent_samples.device)\n\n        noise_sampler = NOISE_GENERATOR_CLASSES.get('gaussian')(x=latent_samples, seed = seed)\n\n        for i in range(steps):\n            noise_latents[i] = noise_sampler(mean=means[i].item(), std=stds[i].item())\n        return ({\"samples\": noise_latents}, )\n\nclass LatentNoiseBatch_fractal:\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                \"latent\": (\"LATENT\",),\n                \"alpha\":  (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.001}),\n                \"k_flip\": (\"BOOLEAN\", {\"default\": False}),\n                \"steps\":  (\"INT\",     {\"default\": 0,   \"min\": -10000,   \"max\": 10000}),\n                \"seed\":   (\"INT\",     {\"default\": 0,   \"min\": 0,        \"max\": 0xffffffffffffffff}),\n            },\n            \"optional\": {\n                \"alphas\": (\"SIGMAS\", ),\n                \"ks\":     (\"SIGMAS\", ),\n                \"steps_\": (\"SIGMAS\", ),\n            }\n        }\n\n    RETURN_TYPES = (\"LATENT\",)\n    FUNCTION = \"main\"\n\n    CATEGORY = \"RES4LYF/noise\"\n\n    def main(self,\n            latent,\n            alpha,\n            k_flip,\n            steps,\n            seed    = 42,\n            alphas  = None,\n            ks      = None,\n            sigmas_ = None,\n            steps_  = None\n            ):\n        \n        if steps_ is not None:\n            steps = len(steps_)\n\n        alphas = initialize_or_scale(alphas, alpha, steps)\n        k_flip = -1 if k_flip else 1\n        ks     = initialize_or_scale(ks 
 , k_flip, steps)\n\n        latent_samples = latent[\"samples\"]\n        b, c, h, w = latent_samples.shape  \n        noise_latents = torch.zeros([steps, c, h, w], dtype=latent_samples.dtype, layout=latent_samples.layout, device=latent_samples.device)\n\n        noise_sampler = NOISE_GENERATOR_CLASSES.get('fractal')(x=latent_samples, seed = seed)\n\n        for i in range(steps):\n            noise_latents[i] = noise_sampler(alpha=alphas[i].item(), k=ks[i].item(), scale=0.1)\n\n        return ({\"samples\": noise_latents}, )\n\n\nclass LatentBatch_channels:\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                \"latent\":             (\"LATENT\",),\n                \"mode\":               ([\"offset\", \"multiply\", \"power\"],),\n                \"luminosity\":         (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\n                \"cyan_red\":           (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\n                \"lime_purple\":        (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\n                \"pattern_structure\":  (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\n            },\n            \"optional\": {\n                \"luminositys\":        (\"SIGMAS\", ),\n                \"cyan_reds\":          (\"SIGMAS\", ),\n                \"lime_purples\":       (\"SIGMAS\", ),\n                \"pattern_structures\": (\"SIGMAS\", ),\n            }\n        }\n\n    RETURN_TYPES = (\"LATENT\",)\n    FUNCTION = \"main\"\n\n    CATEGORY = \"RES4LYF/latents\"\n    \n    @staticmethod\n    def latent_channels_multiply(x, luminosity = -0.1, cyan_red = 0.0, lime_purple=0.0, pattern_structure=0.0):\n        luminosity        = x[0:1] * luminosity\n        cyan_red          = x[1:2] * cyan_red\n        lime_purple       = x[2:3] * lime_purple\n        pattern_structure = x[3:4] * pattern_structure\n\n        x = torch.unsqueeze(torch.cat([luminosity, cyan_red, lime_purple, pattern_structure]), 0)\n        return x\n\n    @staticmethod\n    def latent_channels_offset(x, luminosity = -0.1, cyan_red = 0.0, lime_purple=0.0, pattern_structure=0.0):\n        luminosity        = x[0:1] + luminosity\n        cyan_red          = x[1:2] + cyan_red\n        lime_purple       = x[2:3] + lime_purple\n        pattern_structure = x[3:4] + pattern_structure\n\n        x = torch.unsqueeze(torch.cat([luminosity, cyan_red, lime_purple, pattern_structure]), 0)\n        return x\n    \n    @staticmethod\n    def latent_channels_power(x, luminosity = -0.1, cyan_red = 0.0, lime_purple=0.0, pattern_structure=0.0):\n        luminosity        = x[0:1] ** luminosity\n        cyan_red          = x[1:2] ** cyan_red\n        lime_purple       = x[2:3] ** lime_purple\n        pattern_structure = x[3:4] ** pattern_structure\n\n        x = torch.unsqueeze(torch.cat([luminosity, cyan_red, lime_purple, pattern_structure]), 0)\n        return x\n\n    def main(self,\n            latent,\n            mode,\n            luminosity,\n            cyan_red,\n            lime_purple,\n            pattern_structure,\n            \n            luminositys        = None,\n            cyan_reds          = None,\n            lime_purples       = None,\n            pattern_structures = None):\n        \n        x = latent[\"samples\"]\n        b, c, h, w = x.shape  \n\n        noise_latents = torch.zeros([b, c, h, w], 
dtype=x.dtype, layout=x.layout, device=x.device)\n\n        luminositys = initialize_or_scale(luminositys, luminosity, b)\n        cyan_reds = initialize_or_scale(cyan_reds, cyan_red, b)\n        lime_purples = initialize_or_scale(lime_purples, lime_purple, b)\n        pattern_structures = initialize_or_scale(pattern_structures, pattern_structure, b)\n\n        for i in range(b):\n            if mode == \"offset\":\n                noise = self.latent_channels_offset(x[i], luminositys[i].item(), cyan_reds[i].item(), lime_purples[i].item(), pattern_structures[i].item())\n            elif mode == \"multiply\":  \n                noise = self.latent_channels_multiply(x[i], luminositys[i].item(), cyan_reds[i].item(), lime_purples[i].item(), pattern_structures[i].item())\n            elif mode == \"power\":  \n                noise = self.latent_channels_power(x[i], luminositys[i].item(), cyan_reds[i].item(), lime_purples[i].item(), pattern_structures[i].item())\n            noise_latents[i] = noise\n\n        return ({\"samples\": noise_latents}, )\n    \n\nclass LatentBatch_channels_16:\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                \"latent\":   (\"LATENT\",),\n                \"mode\":     ([\"offset\", \"multiply\", \"power\"],),\n                \"chan_1\":   (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\n                \"chan_2\":   (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\n                \"chan_3\":   (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\n                \"chan_4\":   (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\n                \"chan_5\":   (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\n                \"chan_6\":   (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\n                \"chan_7\":   (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\n                \"chan_8\":   (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\n                \"chan_9\":   (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\n                \"chan_10\":  (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\n                \"chan_11\":  (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\n                \"chan_12\":  (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\n                \"chan_13\":  (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\n                \"chan_14\":  (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\n                \"chan_15\":  (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\n                \"chan_16\":  (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\n            }, \n            \"optional\": { \n                \"chan_1s\":  (\"SIGMAS\", ),\n                \"chan_2s\":  (\"SIGMAS\", ),\n                \"chan_3s\":  (\"SIGMAS\", ),\n                \"chan_4s\":  (\"SIGMAS\", ),\n                \"chan_5s\":  (\"SIGMAS\", ),\n                \"chan_6s\":  (\"SIGMAS\", ),\n                \"chan_7s\":  
(\"SIGMAS\", ),\n                \"chan_8s\":  (\"SIGMAS\", ),\n                \"chan_9s\":  (\"SIGMAS\", ),\n                \"chan_10s\": (\"SIGMAS\", ),\n                \"chan_11s\": (\"SIGMAS\", ),\n                \"chan_12s\": (\"SIGMAS\", ),\n                \"chan_13s\": (\"SIGMAS\", ),\n                \"chan_14s\": (\"SIGMAS\", ),\n                \"chan_15s\": (\"SIGMAS\", ),\n                \"chan_16s\": (\"SIGMAS\", ),\n\n            }\n        }\n\n    RETURN_TYPES = (\"LATENT\",)\n    FUNCTION = \"main\"\n\n    CATEGORY = \"RES4LYF/latents\"\n    \n    @staticmethod\n    def latent_channels_multiply(x, chan_1 = 0.0, chan_2 = 0.0, chan_3 = 0.0, chan_4 = 0.0, chan_5 = 0.0, chan_6 = 0.0, chan_7 = 0.0, chan_8 = 0.0, chan_9 = 0.0, chan_10 = 0.0, chan_11 = 0.0, chan_12 = 0.0, chan_13 = 0.0, chan_14 = 0.0, chan_15 = 0.0, chan_16 = 0.0):\n        chan_1  = x[0:1]   * chan_1\n        chan_2  = x[1:2]   * chan_2\n        chan_3  = x[2:3]   * chan_3\n        chan_4  = x[3:4]   * chan_4\n        chan_5  = x[4:5]   * chan_5\n        chan_6  = x[5:6]   * chan_6\n        chan_7  = x[6:7]   * chan_7\n        chan_8  = x[7:8]   * chan_8\n        chan_9  = x[8:9]   * chan_9\n        chan_10 = x[9:10]  * chan_10\n        chan_11 = x[10:11] * chan_11\n        chan_12 = x[11:12] * chan_12\n        chan_13 = x[12:13] * chan_13\n        chan_14 = x[13:14] * chan_14\n        chan_15 = x[14:15] * chan_15\n        chan_16 = x[15:16] * chan_16\n\n        x = torch.unsqueeze(torch.cat([chan_1, chan_2, chan_3, chan_4, chan_5, chan_6, chan_7, chan_8, chan_9, chan_10, chan_11, chan_12, chan_13, chan_14, chan_15, chan_16]), 0)\n        return x\n\n    @staticmethod\n    def latent_channels_offset(x, chan_1 = 0.0, chan_2 = 0.0, chan_3 = 0.0, chan_4 = 0.0, chan_5 = 0.0, chan_6 = 0.0, chan_7 = 0.0, chan_8 = 0.0, chan_9 = 0.0, chan_10 = 0.0, chan_11 = 0.0, chan_12 = 0.0, chan_13 = 0.0, chan_14 = 0.0, chan_15 = 0.0, chan_16 = 0.0):\n        chan_1  = x[0:1]   + chan_1\n        chan_2  = x[1:2]   + chan_2\n        chan_3  = x[2:3]   + chan_3\n        chan_4  = x[3:4]   + chan_4\n        chan_5  = x[4:5]   + chan_5\n        chan_6  = x[5:6]   + chan_6\n        chan_7  = x[6:7]   + chan_7\n        chan_8  = x[7:8]   + chan_8\n        chan_9  = x[8:9]   + chan_9\n        chan_10 = x[9:10]  + chan_10\n        chan_11 = x[10:11] + chan_11\n        chan_12 = x[11:12] + chan_12\n        chan_13 = x[12:13] + chan_13\n        chan_14 = x[13:14] + chan_14\n        chan_15 = x[14:15] + chan_15\n        chan_16 = x[15:16] + chan_16\n\n        x = torch.unsqueeze(torch.cat([chan_1, chan_2, chan_3, chan_4, chan_5, chan_6, chan_7, chan_8, chan_9, chan_10, chan_11, chan_12, chan_13, chan_14, chan_15, chan_16]), 0)\n        return x\n\n    @staticmethod\n    def latent_channels_power(x, chan_1 = 0.0, chan_2 = 0.0, chan_3 = 0.0, chan_4 = 0.0, chan_5 = 0.0, chan_6 = 0.0, chan_7 = 0.0, chan_8 = 0.0, chan_9 = 0.0, chan_10 = 0.0, chan_11 = 0.0, chan_12 = 0.0, chan_13 = 0.0, chan_14 = 0.0, chan_15 = 0.0, chan_16 = 0.0):\n        chan_1 = x[0:1] ** chan_1\n        chan_2 = x[1:2] ** chan_2\n        chan_3 = x[2:3] ** chan_3\n        chan_4 = x[3:4] ** chan_4\n        chan_5 = x[4:5] ** chan_5\n        chan_6 = x[5:6] ** chan_6\n        chan_7 = x[6:7] ** chan_7\n        chan_8 = x[7:8] ** chan_8\n        chan_9 = x[8:9] ** chan_9\n        chan_10 = x[9:10] ** chan_10\n        chan_11 = x[10:11] ** chan_11\n        chan_12 = x[11:12] ** chan_12\n        chan_13 = x[12:13] ** chan_13\n        chan_14 = x[13:14] ** chan_14\n        
w = torch.tensor(chans, dtype=x.dtype, device=x.device).view(-1, 1, 1)\n        return (x ** w).unsqueeze(0)\n\n    def main(self, latent, mode,\n            chan_1, chan_2, chan_3, chan_4, chan_5, chan_6, chan_7, chan_8, chan_9, chan_10, chan_11, chan_12, chan_13, chan_14, chan_15, chan_16,\n            chan_1s=None, chan_2s=None, chan_3s=None, chan_4s=None, chan_5s=None, chan_6s=None, chan_7s=None, chan_8s=None, chan_9s=None, chan_10s=None, chan_11s=None, chan_12s=None, chan_13s=None, chan_14s=None, chan_15s=None, chan_16s=None):\n        \n        x = latent[\"samples\"]\n        b, c, h, w = x.shape  \n\n        noise_latents = torch.zeros([b, c, h, w], dtype=x.dtype, layout=x.layout, device=x.device)\n\n        chans  = [chan_1, chan_2, chan_3, chan_4, chan_5, chan_6, chan_7, chan_8, chan_9, chan_10, chan_11, chan_12, chan_13, chan_14, chan_15, chan_16]\n        scheds = [chan_1s, chan_2s, chan_3s, chan_4s, chan_5s, chan_6s, chan_7s, chan_8s, chan_9s, chan_10s, chan_11s, chan_12s, chan_13s, chan_14s, chan_15s, chan_16s]\n        scheds = [initialize_or_scale(sched, chan, b) for sched, chan in zip(scheds, chans)]\n\n        ops = {\"offset\": self.latent_channels_offset, \"multiply\": self.latent_channels_multiply, \"power\": self.latent_channels_power}\n        for i in range(b):\n            noise_latents[i] = ops[mode](x[i], *[sched[i].item() for sched in scheds])\n\n        return ({\"samples\": noise_latents}, )\n\n
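# NOTE: this node class shares its name with the latent_normalize_channels\n# helper called by the PhaseMagnitude nodes above; if both end up in the same\n# module, the later class definition shadows the function.\n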
\"standardize\"],), \n                    },\n                }\n\n    RETURN_TYPES = (\"LATENT\",)\n    RETURN_NAMES = (\"passthrough\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/latents\"\n\n    def main(self, latent, mode, operation):\n        x = latent[\"samples\"]\n        b, c, h, w = x.shape\n\n        if mode == \"full\":\n            if operation == \"normalize\":\n                x = (x - x.mean()) / x.std()\n            elif operation == \"center\":\n                x = x - x.mean()\n            elif operation == \"standardize\":\n                x = x / x.std()\n\n        elif mode == \"channels\":\n            if operation == \"normalize\":\n                for i in range(b):\n                    for j in range(c):\n                        x[i, j] = (x[i, j] - x[i, j].mean()) / x[i, j].std()\n            elif operation == \"center\":\n                for i in range(b):\n                    for j in range(c):\n                        x[i, j] = x[i, j] - x[i, j].mean()\n            elif operation == \"standardize\":\n                for i in range(b):\n                    for j in range(c):\n                        x[i, j] = x[i, j] / x[i, j].std()\n\n        return ({\"samples\": x},)\n\n\n\n\n\nclass latent_channelwise_match:\n    def __init__(self):\n        pass\n    @classmethod\n    def INPUT_TYPES(cls):\n        return {\n            \"required\": {\n                    \"model\":         (\"MODEL\",),\n                    \"latent_target\": (\"LATENT\", ),      \n                    \"latent_source\": (\"LATENT\", ),      \n                     },\n            \"optional\": {\n                    \"mask_target\":   (\"MASK\", ),      \n                    \"mask_source\":   (\"MASK\", ),   \n                    \"extra_options\": (\"STRING\", {\"default\": \"\", \"multiline\": True}),   \n            }\n                }\n\n    RETURN_TYPES = (\"LATENT\",)\n    RETURN_NAMES = (\"latent_matched\",)\n    FUNCTION     = \"main\"\n    CATEGORY     = \"RES4LYF/latents\"\n\n    def main(self,\n            model,\n            latent_target,\n            mask_target,\n            latent_source,\n            mask_source,\n            extra_options\n            ):\n        \n        #EO = ExtraOptions(extra_options)\n        dtype = latent_target['samples'].dtype\n\n        exclude_channels = get_extra_options_list(exclude_channels, -1, extra_options)\n        \n        if extra_options_flag(\"disable_process_latent\", extra_options):\n            x_target = latent_target['samples'].clone()\n            x_source = latent_source['samples'].clone()\n        else:\n            x_target = model.model.process_latent_in(latent_target['samples']).clone().to(torch.float64)\n            x_source = model.model.process_latent_in(latent_source['samples']).clone().to(torch.float64)\n        \n        if mask_target is None:\n            mask_target = torch.ones_like(x_target)\n        else:\n            mask_target = mask_target.unsqueeze(1)\n            mask_target = mask_target.repeat(1, x_target.shape[1], 1, 1) \n            mask_target = F.interpolate(mask_target, size=(x_target.shape[2], x_target.shape[3]), mode='bilinear', align_corners=False)\n            mask_target = mask_target.to(x_target.dtype).to(x_target.device)\n        \n        if mask_source is None:\n            mask_source = torch.ones_like(x_target)\n        else:\n            mask_source = mask_source.unsqueeze(1)\n            mask_source = mask_source.repeat(1, x_target.shape[1], 1, 1) \n            
mask_source = F.interpolate(mask_source, size=(x_target.shape[2], x_target.shape[3]), mode='bilinear', align_corners=False)\n            mask_source = mask_source.to(x_target.dtype).to(x_target.device)\n        \n        x_target_masked     = x_target * ((mask_target==1)*mask_target)\n        x_target_masked_inv = x_target - x_target_masked\n        \n        x_matched = torch.zeros_like(x_target)\n        for n in range(x_matched.shape[1]):\n            if n in exclude_channels: \n                x_matched[0][n] = x_target[0][n] \n                continue\n            \n            x_target_masked_values = x_target[0][n][mask_target[0][n] == 1]\n            x_source_masked_values = x_source[0][n][mask_source[0][n] == 1]\n            \n            x_target_masked_values_mean = x_target_masked_values.mean()\n            x_target_masked_values_std  = x_target_masked_values.std()\n            x_source_masked_values_mean = x_source_masked_values.mean()\n            x_source_masked_values_std  = x_source_masked_values.std()\n            \n            x_target_mean = x_target.mean()\n            x_target_std  = x_target.std()\n            x_source_mean = x_source.mean()\n            x_source_std  = x_source.std()\n            \n            if not extra_options_flag(\"enable_std\", extra_options):\n                x_target_std = x_target_masked_values_std = x_source_masked_values_std = 1\n                \n            if extra_options_flag(\"disable_mean\", extra_options):\n                x_target_mean = x_target_masked_values_mean = x_source_masked_values_mean = 1\n            \n            if extra_options_flag(\"disable_masks\", extra_options):\n                x_matched[0][n] = (x_target[0][n] - x_target_mean) / x_target_std\n                x_matched[0][n] = (x_matched[0][n] * x_source_std) + x_source_mean\n            else:\n                x_matched[0][n] = (x_target_masked[0][n] - x_target_masked_values_mean) / x_target_masked_values_std\n                x_matched[0][n] = (x_matched[0][n] * x_source_masked_values_std) + x_source_masked_values_mean\n                x_matched[0][n] = x_target_masked_inv[0][n] + x_matched[0][n] * ((mask_target[0][n]==1)*mask_target[0][n])\n        \n        if not extra_options_flag(\"disable_process_latent\", extra_options):\n            x_matched = model.model.process_latent_out(x_matched).clone()\n        \n        return ({\"samples\": x_matched.to(dtype)}, )\n    "
  },
  {
    "path": "nodes_misc.py",
    "content": "\r\nimport folder_paths\r\n\r\nimport os\r\nimport random\r\n\r\n\r\nclass SetImageSize:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"width\" : (\"INT\", {\"default\": 1024, \"min\": 1, \"max\": 10000}),\r\n                    \"height\": (\"INT\", {\"default\": 1024, \"min\": 1, \"max\": 10000}),\r\n                    },\r\n                    \"optional\": \r\n                    {\r\n                    }  \r\n                }\r\n    RETURN_TYPES = (\"INT\", \"INT\",)\r\n    RETURN_NAMES = (\"width\",\"height\",)\r\n    FUNCTION = \"main\"\r\n    \r\n    CATEGORY = \"RES4LYF/images\"\r\n    DESCRIPTION = \"Generate a pair of integers for image sizes.\"\r\n\r\n    def main(self, width, height):\r\n        return (width, height,)\r\n\r\n\r\nclass SetImageSizeWithScale:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"width\" : (\"INT\", {\"default\": 1024, \"min\": 1, \"max\": 10000}),\r\n                    \"height\": (\"INT\", {\"default\": 1024, \"min\": 1, \"max\": 10000}),\r\n                    \"scale_by\": (\"FLOAT\", {\"default\": 1.0, \"min\": 0.0, \"max\": 10000, \"step\":0.01}),\r\n                    },\r\n                    \"optional\": \r\n                    {\r\n                    }  \r\n                }\r\n    RETURN_TYPES = (\"INT\", \"INT\", \"INT\", \"INT\",)\r\n    RETURN_NAMES = (\"width\",\"height\",\"width_scaled\",\"height_scaled\",)\r\n    FUNCTION = \"main\"\r\n    \r\n    CATEGORY = \"RES4LYF/images\"\r\n    DESCRIPTION = \"Generate a pair of integers for image sizes.\"\r\n\r\n    def main(self, width, height, scale_by):\r\n        return (width, height, int(width*scale_by), int(height*scale_by))\r\n\r\n\r\nclass TextBox1:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"text1\": (\"STRING\", {\"default\": \"\", \"multiline\": True}),\r\n                    },\r\n                    \"optional\": \r\n                    {\r\n                    }  \r\n                }\r\n    RETURN_TYPES = (\"STRING\",)\r\n    RETURN_NAMES = (\"text1\",)\r\n    FUNCTION = \"main\"\r\n    \r\n    CATEGORY = \"RES4LYF/text\"\r\n    DESCRIPTION = \"Multiline textbox.\"\r\n\r\n    def main(self, text1):\r\n\r\n        return (text1,)\r\n\r\nclass TextBox2:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"text1\": (\"STRING\", {\"default\": \"\", \"multiline\": True}),\r\n                    \"text2\": (\"STRING\", {\"default\": \"\", \"multiline\": True}),\r\n                    },\r\n                    \"optional\": \r\n                    {\r\n                    }  \r\n                }\r\n    RETURN_TYPES = (\"STRING\", \"STRING\",)\r\n    RETURN_NAMES = (\"text1\", \"text2\",)\r\n    FUNCTION = \"main\"\r\n    \r\n    CATEGORY = \"RES4LYF/text\"\r\n    DESCRIPTION = \"Multiline textbox.\"\r\n\r\n    def main(self, text1, text2,):\r\n\r\n        return (text1, text2,)\r\n\r\nclass TextBox3:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"text1\": (\"STRING\", {\"default\": \"\", \"multiline\": True}),\r\n                    \"text2\": (\"STRING\", {\"default\": \"\", \"multiline\": True}),\r\n                    \"text3\": (\"STRING\", 
{\"default\": \"\", \"multiline\": True}),\r\n                    },\r\n                    \"optional\": \r\n                    {\r\n                    }  \r\n                }\r\n    RETURN_TYPES = (\"STRING\", \"STRING\",\"STRING\",)\r\n    RETURN_NAMES = (\"text1\", \"text2\", \"text3\",)\r\n    FUNCTION = \"main\"\r\n    \r\n    CATEGORY = \"RES4LYF/text\"\r\n    DESCRIPTION = \"Multiline textbox.\"\r\n\r\n    def main(self, text1, text2, text3 ):\r\n\r\n        return (text1, text2, text3, )\r\n\r\n\r\n\r\nclass TextLoadFile:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        input_dir = folder_paths.get_input_directory()\r\n        files = [f for f in os.listdir(input_dir)\r\n                if os.path.isfile(os.path.join(input_dir, f)) and f.lower().endswith('.txt')]\r\n        return {\r\n            \"required\": {\r\n                \"text_file\": (sorted(files), {\"text_upload\": True})\r\n            }\r\n        }\r\n        \r\n\r\n    RETURN_TYPES = (\"STRING\",)\r\n    RETURN_NAMES = (\"text\",)\r\n    FUNCTION = \"main\"\r\n    CATEGORY = \"RES4LYF/text\"\r\n\r\n    def main(self, text_file):\r\n        input_dir = folder_paths.get_input_directory()\r\n        text_file_path = os.path.join(input_dir, text_file) \r\n        if not os.path.exists(text_file_path):\r\n            print(f\"Error: The file `{text_file_path}` cannot be found.\")\r\n            return (\"\",)\r\n        with open(text_file_path, \"r\", encoding=\"utf-8\") as f:\r\n            text = f.read()\r\n        return (text,)\r\n\r\n\r\n\r\nclass TextShuffle:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": {\r\n                \"text\":        (\"STRING\", {\"forceInput\": True}),\r\n                \"separator\":   (\"STRING\", {\"default\": \" \", \"multiline\": False}),\r\n                \"seed\":        (\"INT\", {\"default\": 0, \"min\": 0, \"max\": 0xffffffffffffffff}),\r\n            },\r\n            \"optional\": {\r\n            }\r\n        }\r\n\r\n    RETURN_TYPES = (\"STRING\",)\r\n    RETURN_NAMES = (\"shuffled_text\",)\r\n    FUNCTION = \"main\"\r\n    CATEGORY = \"RES4LYF/text\"\r\n\r\n    def main(self, text, separator, seed, ):\r\n        if seed is not None:\r\n            random.seed(seed)\r\n        parts = text.split(separator)\r\n        random.shuffle(parts)\r\n        shuffled_text = separator.join(parts)\r\n\r\n        return (shuffled_text, )\r\n\r\n\r\n\r\ndef truncate_tokens(text, truncate_to, clip, clip_type, stop_token):\r\n    if truncate_to == 0:\r\n        return \"\"\r\n    \r\n    truncate_words_to = truncate_to\r\n    total = truncate_to + 1\r\n    \r\n    tokens = {}\r\n\r\n    while total > truncate_to:\r\n        words = text.split()\r\n        truncated_words = words[:truncate_words_to]\r\n        truncated_text = \" \".join(truncated_words)\r\n\r\n        try:\r\n            tokens[clip_type] = clip.tokenize(truncated_text)[clip_type]\r\n        except:\r\n            return \"\"\r\n\r\n        if clip_type not in tokens:\r\n            return truncated_text\r\n\r\n        clip_end=0\r\n        for b in range(len(tokens[clip_type])):\r\n            for i in range(len(tokens[clip_type][b])):\r\n                clip_end += 1\r\n                if tokens[clip_type][b][i][0] == stop_token:\r\n                    break\r\n        if clip_type == 'l' or clip_type == 'g':\r\n            clip_end -= 2\r\n        elif clip_type == 't5xxl':\r\n            clip_end -= 1\r\n\r\n        total = clip_end\r\n\r\n     
    return truncated_text\r\n\r\n\r\n\r\nclass TextShuffleAndTruncate:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": {\r\n                \"text\":               (\"STRING\", {\"forceInput\": True}),\r\n                \"separator\":          (\"STRING\", {\"default\": \" \", \"multiline\": False}),\r\n                \"truncate_words_to\":  (\"INT\", {\"default\": 77, \"min\": 1, \"max\": 10000}),\r\n                \"truncate_tokens_to\": (\"INT\", {\"default\": 77, \"min\": 1, \"max\": 10000}),\r\n                \"seed\":               (\"INT\", {\"default\": 0, \"min\": 0, \"max\": 0xffffffffffffffff}),\r\n            },\r\n            \"optional\": {\r\n                \"clip\": (\"CLIP\", ),\r\n            }\r\n        }\r\n\r\n    RETURN_TYPES = (\"STRING\",\"STRING\",\"STRING\",\"STRING\",\"STRING\",)\r\n    RETURN_NAMES = (\"shuffled_text\", \"text_words\", \"text_clip_l\", \"text_clip_g\", \"text_t5\",)\r\n    FUNCTION = \"main\"\r\n    CATEGORY = \"RES4LYF/text\"\r\n\r\n    def main(self, text, separator, truncate_words_to, truncate_tokens_to, seed, clip=None):\r\n        if seed is not None:\r\n            random.seed(seed)\r\n        parts = text.split(separator)\r\n        random.shuffle(parts)\r\n        shuffled_text = separator.join(parts)\r\n\r\n        words = shuffled_text.split()\r\n        truncated_words = words[:truncate_words_to]\r\n        truncated_text = \" \".join(truncated_words)\r\n        \r\n        t5_name = \"t5xxl\"\r\n        if clip is not None and hasattr(clip.tokenizer, \"clip_name\"):\r\n            t5_name = \"t5xxl\" if clip.tokenizer.clip_name != \"pile_t5xl\" else \"pile_t5xl\"\r\n\r\n        if clip is not None:\r\n            text_clip_l = truncate_tokens(truncated_text, truncate_tokens_to, clip, \"l\",     49407)\r\n            text_clip_g = truncate_tokens(truncated_text, truncate_tokens_to, clip, \"g\",     49407)\r\n            text_t5     = truncate_tokens(truncated_text, truncate_tokens_to, clip, t5_name, 1)\r\n        else:\r\n            # clip is optional; without it only the word-level truncation is available\r\n            text_clip_l = \"\"\r\n            text_clip_g = \"\"\r\n            text_t5     = \"\"\r\n                    \r\n        return (shuffled_text, truncated_text, text_clip_l, text_clip_g, text_t5,)\r\n\r\n\r\n\r\n
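# Truncate a prompt by overall word count, and per text encoder (CLIP-L, CLIP-G, T5) by token count.\r\n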
\"pile_t5xl\") else \"pile_t5xl\"\r\n        t5_name = \"t5xxl\"\r\n        if hasattr(clip.tokenizer, \"clip_name\"):\r\n            t5_name = \"t5xxl\" if clip.tokenizer.clip_name != \"pile_t5xl\" else \"pile_t5xl\"\r\n\r\n        if clip is not None:\r\n            text_clip_l = truncate_tokens(text, truncate_clip_l_to, clip, \"l\",     49407)\r\n            text_clip_g = truncate_tokens(text, truncate_clip_g_to, clip, \"g\",     49407)\r\n            text_t5     = truncate_tokens(truncated_text, truncate_t5_to, clip, t5_name, 1)\r\n        else:\r\n            text_clip_l = None\r\n            text_clip_g = None\r\n            text_t5     = None\r\n        \r\n        return (truncated_text, text_clip_l, text_clip_g, text_t5,)\r\n\r\n\r\n\r\nclass TextConcatenate:\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": {\r\n                },\r\n                \"optional\": {\r\n                \"text_1\":    (\"STRING\", {\"multiline\": False, \"default\": \"\", \"forceInput\": True}),                \r\n                \"text_2\":    (\"STRING\", {\"multiline\": False, \"default\": \"\", \"forceInput\": True}), \r\n                \"separator\": (\"STRING\", {\"multiline\": False, \"default\": \"\"}),                \r\n            },\r\n        }\r\n\r\n    RETURN_TYPES = (\"STRING\",)\r\n    RETURN_NAMES = (\"text\",)\r\n    FUNCTION = \"main\"\r\n    CATEGORY = \"RES4LYF/text\"\r\n\r\n    def main(self, text_1=\"\", text_2=\"\", separator=\"\"):\r\n    \r\n        return (text_1 + separator + text_2, )\r\n\r\n\r\n\r\nclass TextBoxConcatenate:\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": {\r\n                \"text\": (\"STRING\", {\"default\": \"\", \"multiline\": True}),\r\n            },\r\n            \"optional\": {\r\n                \"text_external\": (\"STRING\", {\"multiline\": False, \"default\": \"\", \"forceInput\": True}),                \r\n                \"separator\":     (\"STRING\", {\"multiline\": False, \"default\": \"\"}),        \r\n                \"mode\":          (['append_external_input', 'prepend_external_input',],),\r\n            },\r\n        }\r\n\r\n    RETURN_TYPES = (\"STRING\",)\r\n    RETURN_NAMES = (\"text\",)\r\n    FUNCTION = \"main\"\r\n    CATEGORY = \"RES4LYF/text\"\r\n    DESCRIPTION = \"Multiline textbox with concatenate functionality.\"\r\n\r\n\r\n    def main(self, text=\"\", text_external=\"\", separator=\"\", mode=\"append_external_input\"):\r\n        if   mode == \"append_external_input\":\r\n            text = text + separator + text_external\r\n        elif mode == \"prepend_external_input\":\r\n            text = text_external + separator + text\r\n    \r\n        return (text, )\r\n\r\n\r\n\r\nclass SeedGenerator:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": {\r\n                \"seed\": (\"INT\", {\"default\": 0, \"min\": 0, \"max\": 0xffffffffffffffff}),\r\n            },\r\n            \"optional\": {\r\n            }\r\n        }\r\n\r\n    RETURN_TYPES = (\"INT\",  \"INT\",)\r\n    RETURN_NAMES = (\"seed\", \"seed+1\",)\r\n    FUNCTION = \"main\"\r\n    CATEGORY = \"RES4LYF/utilities\"\r\n\r\n    def main(self, seed,):\r\n        return (seed, seed+1,)\r\n\r\n"
  },
  {
    "path": "nodes_precision.py",
    "content": "import torch\r\nfrom .helper import precision_tool\r\n\r\n\r\nclass set_precision:\r\n    def __init__(self):\r\n        pass\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": {\r\n                    \"latent_image\": (\"LATENT\", ),      \r\n                    \"precision\":    ([\"16\", \"32\", \"64\"], ),\r\n                    \"set_default\":  (\"BOOLEAN\", {\"default\": False})\r\n                     },\r\n                }\r\n\r\n    RETURN_TYPES = (\"LATENT\",)\r\n    RETURN_NAMES = (\"passthrough\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/precision\"\r\n\r\n\r\n    def main(self,\r\n            precision    = \"32\",\r\n            latent_image = None,\r\n            set_default  = False\r\n            ):\r\n        \r\n        match precision:\r\n            case \"16\":\r\n                if set_default is True:\r\n                    torch.set_default_dtype(torch.float16)\r\n                x = latent_image[\"samples\"].to(torch.float16)\r\n            case \"32\":\r\n                if set_default is True:\r\n                    torch.set_default_dtype(torch.float32)\r\n                x = latent_image[\"samples\"].to(torch.float32)\r\n            case \"64\":\r\n                if set_default is True:\r\n                    torch.set_default_dtype(torch.float64)\r\n                x = latent_image[\"samples\"].to(torch.float64)\r\n        return ({\"samples\": x}, )\r\n\r\n\r\n\r\nclass set_precision_universal:\r\n    def __init__(self):\r\n        pass\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": {\r\n                    \"precision\":    ([\"bf16\", \"fp16\", \"fp32\", \"fp64\", \"passthrough\"], {\"default\": \"fp32\"}),\r\n                    \"set_default\":  (\"BOOLEAN\",                                       {\"default\": False})\r\n                    },\r\n            \"optional\": {\r\n                    \"cond_pos\":     (\"CONDITIONING\",),\r\n                    \"cond_neg\":     (\"CONDITIONING\",),\r\n                    \"sigmas\":       (\"SIGMAS\", ),\r\n                    \"latent_image\": (\"LATENT\", ),\r\n                    },\r\n                }\r\n\r\n    RETURN_TYPES = (\"CONDITIONING\", \r\n                    \"CONDITIONING\", \r\n                    \"SIGMAS\", \r\n                    \"LATENT\",)\r\n    \r\n    RETURN_NAMES = (\"cond_pos\",\r\n                    \"cond_neg\",\r\n                    \"sigmas\",\r\n                    \"latent_image\",)\r\n\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/precision\"\r\n\r\n\r\n    def main(self,\r\n            precision    = \"fp32\",\r\n            cond_pos     = None,\r\n            cond_neg     = None,\r\n            sigmas       = None,\r\n            latent_image = None,\r\n            set_default  = False\r\n            ):\r\n        \r\n        dtype = None\r\n        match precision:\r\n            case \"bf16\":\r\n                dtype = torch.bfloat16\r\n            case \"fp16\":\r\n                dtype = torch.float16\r\n            case \"fp32\":\r\n                dtype = torch.float32\r\n            case \"fp64\":\r\n                dtype = torch.float64\r\n            case \"passthrough\":\r\n                return (cond_pos, cond_neg, sigmas, latent_image, )\r\n        \r\n        if cond_pos is not None:\r\n            cond_pos[0][0] = cond_pos[0][0].clone().to(dtype)\r\n            cond_pos[0][1][\"pooled_output\"] = 
cond_pos[0][1][\"pooled_output\"].clone().to(dtype)\r\n        \r\n        if cond_neg is not None:\r\n            cond_neg[0][0] = cond_neg[0][0].clone().to(dtype)\r\n            cond_neg[0][1][\"pooled_output\"] = cond_neg[0][1][\"pooled_output\"].clone().to(dtype)\r\n            \r\n        if sigmas is not None:\r\n            sigmas = sigmas.clone().to(dtype)\r\n        \r\n        if latent_image is not None:\r\n            x = latent_image[\"samples\"].clone().to(dtype)    \r\n            latent_image = {\"samples\": x}\r\n\r\n        if set_default is True:\r\n            torch.set_default_dtype(dtype)\r\n        \r\n        return (cond_pos, cond_neg, sigmas, latent_image, )\r\n\r\n\r\n\r\nclass set_precision_advanced:\r\n    def __init__(self):\r\n        pass\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": {\r\n                    \"latent_image\":     (\"LATENT\", ),      \r\n                    \"global_precision\": ([\"64\", \"32\", \"16\"], ),\r\n                    \"shark_precision\":  ([\"64\", \"32\", \"16\"], ),\r\n                     },\r\n                }\r\n\r\n    RETURN_TYPES = (\"LATENT\",\"LATENT\",\"LATENT\",\"LATENT\",\"LATENT\",)\r\n    \r\n    RETURN_NAMES = (\"passthrough\",\r\n                    \"latent_cast_to_global\",\r\n                    \"latent_16\",\r\n                    \"latent_32\",\r\n                    \"latent_64\",\r\n                    )\r\n    \r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/precision\"\r\n\r\n\r\n    def main(self,\r\n            global_precision = \"32\",\r\n            shark_precision  = \"64\",\r\n            latent_image     = None\r\n            ):\r\n        \r\n        dtype_map = {\r\n            \"16\": torch.float16,\r\n            \"32\": torch.float32,\r\n            \"64\": torch.float64\r\n        }\r\n        precision_map = {\r\n            \"16\": 'fp16',\r\n            \"32\": 'fp32',\r\n            \"64\": 'fp64'\r\n        }\r\n\r\n        torch.set_default_dtype(dtype_map[global_precision])\r\n        precision_tool.set_cast_type(precision_map[shark_precision])\r\n\r\n        latent_passthrough = latent_image[\"samples\"]\r\n\r\n        latent_out16 = latent_image[\"samples\"].to(torch.float16)\r\n        latent_out32 = latent_image[\"samples\"].to(torch.float32)\r\n        latent_out64 = latent_image[\"samples\"].to(torch.float64)\r\n\r\n        target_dtype = dtype_map[global_precision]\r\n        if latent_image[\"samples\"].dtype != target_dtype:\r\n            latent_image[\"samples\"] = latent_image[\"samples\"].to(target_dtype)\r\n\r\n        latent_cast_to_global = latent_image[\"samples\"]\r\n\r\n        return ({\"samples\": latent_passthrough},\r\n                {\"samples\": latent_cast_to_global},\r\n                {\"samples\": latent_out16},\r\n                {\"samples\": latent_out32},\r\n                {\"samples\": latent_out64}\r\n                )\r\n\r\n\r\n"
  },
  {
    "path": "requirements.txt",
    "content": "opencv-python\r\nmatplotlib\r\npywavelets\r\nnumpy>=1.26.4\r\n"
  },
  {
    "path": "res4lyf.py",
    "content": "# Code adapted from https://github.com/pythongosssss/ComfyUI-Custom-Scripts\n\nimport asyncio\nimport os\nimport json\nimport shutil\nimport inspect\nimport aiohttp\nimport math\nimport comfy.model_sampling\nimport comfy.samplers\nfrom aiohttp import web\nfrom server import PromptServer\nfrom tqdm import tqdm\n\n\nCONFIG_FILE_NAME = \"res4lyf.config.json\"\nDEFAULT_CONFIG_FILE_NAME = \"web/js/res4lyf.default.json\"\nconfig = None\n\nusing_RES4LYF_time_snr_shift = False\noriginal_time_snr_shift = comfy.model_sampling.time_snr_shift\n\ndef time_snr_shift_RES4LYF(alpha, t):\n    if using_RES4LYF_time_snr_shift and get_config_value(\"updatedTimestepScaling\", False):\n        out = math.exp(alpha) / (math.exp(alpha) + (1 / t - 1) ** 1.0)\n    else:\n        out = original_time_snr_shift(alpha, t)\n    return out\n\ndisplay_sampler_category = False\n\ndef get_display_sampler_category():\n    global display_sampler_category\n    return display_sampler_category\n    \n@PromptServer.instance.routes.post(\"/reslyf/settings\")\nasync def update_settings(request):\n    try:\n        json_data = await request.json()\n        setting = json_data.get(\"setting\")\n        value = json_data.get(\"value\")\n\n        if setting:\n            save_config_value(setting, value)\n            \n            if setting == \"updatedTimestepScaling\":\n                global using_RES4LYF_time_snr_shift\n                using_RES4LYF_time_snr_shift = value\n                if ( using_RES4LYF_time_snr_shift is True ):\n                    RESplain(\"Using RES4LYF time SNR shift\")\n                else:\n                    RESplain(\"Disabled RES4LYF time SNR shift\")\n            elif setting == \"displayCategory\":\n                global display_sampler_category\n                display_sampler_category = value\n                if ( display_sampler_category is True ):\n                    RESplain(\"Displaying sampler category\", debug=True)\n                else:\n                    RESplain(\"Not displaying sampler category\", debug=True)\n\n\n        return web.Response(status=200)\n    except Exception as e:\n        return web.Response(status=500, text=str(e))\n\n@PromptServer.instance.routes.post(\"/reslyf/log\")\nasync def log_message(request):\n    try:\n        json_data = await request.json()\n        log_text = json_data.get(\"log\")\n        \n        if log_text:\n            RESplain(log_text, debug=True)\n            return web.Response(status=200)\n        else:\n            return web.Response(status=400, text=\"No log text provided\")\n    except Exception as e:\n        return web.Response(status=500, text=str(e))\n    \noriginal_calculate_sigmas = comfy.samplers.calculate_sigmas\n\ndef calculate_sigmas_RES4LYF(model_sampling, scheduler_name, steps):\n    if scheduler_name == \"beta57\":\n        sigmas = comfy.samplers.beta_scheduler(model_sampling, steps, alpha=0.5, beta=0.7)\n    else:\n        return original_calculate_sigmas(model_sampling, scheduler_name, steps)\n    return sigmas\n\ndef init(check_imports=None):\n    RESplain(\"Init\")\n\n    # initialize display category\n    global display_sampler_category\n    display_sampler_category = get_config_value(\"displayCategory\", False)\n    if ( display_sampler_category is True ):\n        RESplain(\"Displaying sampler category\", debug=True)\n\n    # Initialize using_RES4LYF_time_snr_shift from config (deprecated, disabled by default)\n    global using_RES4LYF_time_snr_shift\n    using_RES4LYF_time_snr_shift = 
get_config_value(\"updatedTimestepScaling\", False)\n    if using_RES4LYF_time_snr_shift:\n        comfy.model_sampling.time_snr_shift = time_snr_shift_RES4LYF\n        RESplain(\"Using RES4LYF time SNR shift but this is deprecated and will be disabled at some completely unpredictable point in the future\")\n\n    # monkey patch comfy.samplers.calculate_sigmas with custom implementation\n    comfy.samplers.calculate_sigmas = calculate_sigmas_RES4LYF\n    if \"beta57\" not in comfy.samplers.SCHEDULER_NAMES:\n        comfy.samplers.SCHEDULER_NAMES = comfy.samplers.SCHEDULER_NAMES + [\"beta57\"]\n    if \"beta57\" not in comfy.samplers.KSampler.SCHEDULERS:\n        comfy.samplers.KSampler.SCHEDULERS = comfy.samplers.KSampler.SCHEDULERS + [\"beta57\"]\n\n    return True\n\n\ndef save_config_value(key, value):\n    config = get_extension_config()\n    keys = key.split(\".\")\n    d = config\n    for k in keys[:-1]:\n        if k not in d:\n            d[k] = {}\n        d = d[k]\n    d[keys[-1]] = value\n\n    config_path = get_ext_dir(CONFIG_FILE_NAME)\n    with open(config_path, \"w\") as f:\n        json.dump(config, f, indent=4)\n\n\ndef get_config_value(key, default=None, throw=False):\n    config = get_extension_config()\n    keys = key.split(\".\")\n    d = config\n    for k in keys[:-1]:\n        if k not in d:\n            if throw:\n                raise KeyError(\"Configuration key missing: \" + key)\n            else:\n                return default\n        d = d[k]\n    return d.get(keys[-1], default)\n\n\ndef is_debug_logging_enabled():\n    logging_enabled = get_config_value(\"enableDebugLogs\", False)\n    return logging_enabled\n\ndef RESplain(*args, debug='info'):\n    if isinstance(debug, bool):\n        type = 'debug' if debug else 'info'\n    else:\n        type = debug\n\n    if type == 'debug' and not is_debug_logging_enabled():\n        return\n    \n    if not args:\n        return\n\n    name = get_extension_config()[\"name\"]\n\n    message = \" \".join(map(str, args))\n\n    if type != 'debug' and type != 'warning':\n        print(f\"({name}) {message}\")\n    else:\n        print(f\"({name} {type}) {message}\")\n\ndef get_ext_dir(subpath=None, mkdir=False):\n    dir = os.path.dirname(__file__)\n    if subpath is not None:\n        dir = os.path.join(dir, subpath)\n\n    dir = os.path.abspath(dir)\n\n    if mkdir and not os.path.exists(dir):\n        os.makedirs(dir)\n    return dir\n\ndef merge_default_config(config, default_config):\n    for key, value in default_config.items():\n        if key not in config:\n            config[key] = value\n        elif isinstance(value, dict):\n            config[key] = merge_default_config(config.get(key, {}), value)\n    return config\n\ndef get_extension_config(reload=False):\n    global config\n    if not reload and config is not None:\n        return config\n\n    config_path = get_ext_dir(CONFIG_FILE_NAME)\n    default_config_path = get_ext_dir(DEFAULT_CONFIG_FILE_NAME)\n    \n    if os.path.exists(default_config_path):\n        with open(default_config_path, \"r\") as f:\n            default_config = json.loads(f.read())\n    else:\n        default_config = {}\n\n    if not os.path.exists(config_path):\n        config = default_config\n        with open(config_path, \"w\") as f:\n            json.dump(config, f, indent=4)\n    else:\n        with open(config_path, \"r\") as f:\n            config = json.loads(f.read())\n        config = merge_default_config(config, default_config)\n        with open(config_path, \"w\") 
as f:\n            json.dump(config, f, indent=4)\n\n    return config\n\n\ndef get_comfy_dir(subpath=None, mkdir=False):\n    dir = os.path.dirname(inspect.getfile(PromptServer))\n    if subpath is not None:\n        dir = os.path.join(dir, subpath)\n\n    dir = os.path.abspath(dir)\n\n    if mkdir and not os.path.exists(dir):\n        os.makedirs(dir)\n    return dir\n\n\ndef get_web_ext_dir():\n    config = get_extension_config()\n    name = config[\"name\"]\n    dir = get_comfy_dir(\"web/extensions/res4lyf\")\n    if not os.path.exists(dir):\n        os.makedirs(dir)\n    dir = os.path.join(dir, name)\n    return dir\n\n\ndef link_js(src, dst):\n    src = os.path.abspath(src)\n    dst = os.path.abspath(dst)\n    if os.name == \"nt\":\n        try:\n            import _winapi\n            _winapi.CreateJunction(src, dst)\n            return True\n        except Exception:\n            pass\n    try:\n        os.symlink(src, dst)\n        return True\n    except Exception:\n        import logging\n        logging.exception('')\n        return False\n\n\ndef is_junction(path):\n    if os.name != \"nt\":\n        return False\n    try:\n        return bool(os.readlink(path))\n    except OSError:\n        return False\n\n\ndef install_js():\n    src_dir = get_ext_dir(\"web/js\")\n    if not os.path.exists(src_dir):\n        RESplain(\"No JS\")\n        return\n\n    should_install = should_install_js()\n    if should_install:\n        RESplain(\"It looks like you're running an old version of ComfyUI that requires manual setup of web files; updating your installation is recommended.\", debug=\"warning\")\n    dst_dir = get_web_ext_dir()\n    linked = os.path.islink(dst_dir) or is_junction(dst_dir)\n    if linked or os.path.exists(dst_dir):\n        if linked:\n            if should_install:\n                RESplain(\"JS already linked\")\n            else:\n                os.unlink(dst_dir)\n                RESplain(\"JS unlinked, PromptServer will serve extension\")\n        elif not should_install:\n            shutil.rmtree(dst_dir)\n            RESplain(\"JS deleted, PromptServer will serve extension\")\n        return\n    \n    if not should_install:\n        RESplain(\"JS skipped, PromptServer will serve extension\")\n        return\n    \n    if link_js(src_dir, dst_dir):\n        RESplain(\"JS linked\")\n        return\n\n    RESplain(\"Copying JS files\")\n    shutil.copytree(src_dir, dst_dir, dirs_exist_ok=True)\n\n\ndef should_install_js():\n    return not hasattr(PromptServer.instance, \"supports\") or \"custom_nodes_from_web\" not in PromptServer.instance.supports\n\ndef get_async_loop():\n    loop = None\n    try:\n        loop = asyncio.get_event_loop()\n    except Exception:\n        loop = asyncio.new_event_loop()\n        asyncio.set_event_loop(loop)\n    return loop\n\n\ndef get_http_session():\n    loop = get_async_loop()\n    return aiohttp.ClientSession(loop=loop)\n\n\nasync def download(url, stream, update_callback=None, session=None):\n    close_session = False\n    if session is None:\n        close_session = True\n        session = get_http_session()\n    try:\n        async with session.get(url) as response:\n            size = int(response.headers.get('content-length', 0)) or None\n\n            with tqdm(\n                unit='B', unit_scale=True, miniters=1, desc=url.split('/')[-1], total=size,\n            ) as progressbar:\n                perc = 0\n                async for chunk in response.content.iter_chunked(2048):\n                    stream.write(chunk)\n
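                    # advance the progress bar, then forward fractional progress to the caller's callback\n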
                    progressbar.update(len(chunk))\n                    if update_callback is not None and progressbar.total is not None and progressbar.total != 0:\n                        last = perc\n                        perc = round(progressbar.n / progressbar.total, 2)\n                        if perc != last:\n                            last = perc\n                            await update_callback(perc)\n    finally:\n        if close_session and session is not None:\n            await session.close()\n\n\nasync def download_to_file(url, destination, update_callback=None, is_ext_subpath=True, session=None):\n    if is_ext_subpath:\n        destination = get_ext_dir(destination)\n    with open(destination, mode='wb') as f:\n        await download(url, f, update_callback, session)\n\n\ndef wait_for_async(async_fn, loop=None):\n    res = []\n\n    async def run_async():\n        r = await async_fn()\n        res.append(r)\n\n    if loop is None:\n        try:\n            loop = asyncio.get_event_loop()\n        except Exception:\n            loop = asyncio.new_event_loop()\n            asyncio.set_event_loop(loop)\n\n    loop.run_until_complete(run_async())\n\n    return res[0]\n\n\ndef update_node_status(client_id, node, text, progress=None):\n    if client_id is None:\n        client_id = PromptServer.instance.client_id\n\n    if client_id is None:\n        return\n\n    PromptServer.instance.send_sync(\"res4lyf/update_status\", {\n        \"node\": node,\n        \"progress\": progress,\n        \"text\": text\n    }, client_id)\n\n\nasync def update_node_status_async(client_id, node, text, progress=None):\n    if client_id is None:\n        client_id = PromptServer.instance.client_id\n\n    if client_id is None:\n        return\n\n    await PromptServer.instance.send(\"res4lyf/update_status\", {\n        \"node\": node,\n        \"progress\": progress,\n        \"text\": text\n    }, client_id)\n\n\ndef is_inside_dir(root_dir, check_path):\n    root_dir = os.path.abspath(root_dir)\n    if not os.path.isabs(check_path):\n        check_path = os.path.abspath(os.path.join(root_dir, check_path))\n    return os.path.commonpath([check_path, root_dir]) == root_dir\n\n\ndef get_child_dir(root_dir, child_path, throw_if_outside=True):\n    child_path = os.path.abspath(os.path.join(root_dir, child_path))\n    if is_inside_dir(root_dir, child_path):\n        return child_path\n    if throw_if_outside:\n        raise NotADirectoryError(\n            \"Saving outside the target folder is not allowed.\")\n    return None\n"
  },
  {
    "path": "rk_method_beta.py",
    "content": "import torch\r\nfrom torch import Tensor\r\nfrom typing import Optional, Callable, Tuple, List, Dict, Any, Union\r\n\r\nimport comfy.model_patcher\r\nimport comfy.supported_models\r\n\r\nimport itertools \r\n\r\nfrom .phi_functions        import Phi\r\nfrom .rk_coefficients_beta import get_implicit_sampler_name_list, get_rk_methods_beta\r\nfrom ..helper              import ExtraOptions\r\nfrom ..latents             import get_orthogonal, get_collinear, get_cosine_similarity, tile_latent, untile_latent\r\n\r\nfrom ..res4lyf             import RESplain\r\n\r\nMAX_STEPS = 10000\r\n\r\n\r\ndef get_data_from_step   (x:Tensor, x_next:Tensor, sigma:Tensor, sigma_next:Tensor) -> Tensor:\r\n    h = sigma_next - sigma\r\n    return (sigma_next * x - sigma * x_next) / h\r\n\r\ndef get_epsilon_from_step(x:Tensor, x_next:Tensor, sigma:Tensor, sigma_next:Tensor) -> Tensor:\r\n    h = sigma_next - sigma\r\n    return (x - x_next) / h\r\n\r\n\r\n\r\nclass RK_Method_Beta:\r\n    def __init__(self,\r\n                model,\r\n                rk_type               : str,\r\n                noise_anchor          : float,\r\n                noise_boost_normalize : bool        = True,\r\n                model_device          : str         = 'cuda',\r\n                work_device           : str         = 'cpu',\r\n                dtype                 : torch.dtype = torch.float64,\r\n                extra_options         : str         = \"\"\r\n                ):\r\n        \r\n        self.work_device                 = work_device\r\n        self.model_device                = model_device\r\n        self.dtype                       : torch.dtype = dtype\r\n\r\n        self.model                       = model\r\n\r\n        if hasattr(model, \"model\"):\r\n            model_sampling = model.model.model_sampling\r\n        elif hasattr(model, \"inner_model\"):\r\n            model_sampling = model.inner_model.inner_model.model_sampling\r\n        \r\n        self.sigma_min                   : Tensor                   = model_sampling.sigma_min.to(dtype=dtype, device=work_device)\r\n        self.sigma_max                   : Tensor                   = model_sampling.sigma_max.to(dtype=dtype, device=work_device)\r\n\r\n        self.rk_type                     : str                      = rk_type\r\n\r\n        self.IMPLICIT                    : str                      = rk_type in get_implicit_sampler_name_list(nameOnly=True)\r\n        self.EXPONENTIAL                 : bool                     = RK_Method_Beta.is_exponential(rk_type)\r\n\r\n        self.SYNC_SUBSTEP_MEAN_CW        : bool                     = noise_boost_normalize\r\n\r\n        self.A                           : Optional[Tensor]         = None\r\n        self.B                           : Optional[Tensor]         = None\r\n        self.U                           : Optional[Tensor]         = None\r\n        self.V                           : Optional[Tensor]         = None\r\n\r\n        self.rows                        : int                      = 0\r\n        self.cols                        : int                      = 0\r\n\r\n        self.denoised                    : Optional[Tensor]         = None\r\n        self.uncond                      : Optional[Tensor]         = None\r\n\r\n        self.y0                          : Optional[Tensor]         = None\r\n        self.y0_inv                      : Optional[Tensor]         = None\r\n\r\n        self.multistep_stages            : int                      = 0\r\n        
        self.row_offset                  : Optional[int]            = None\r\n\r\n        self.cfg_cw                      : float                    = 1.0\r\n        self.extra_args                  : Optional[Dict[str, Any]] = None\r\n\r\n        self.extra_options               : str                      = extra_options\r\n        self.EO                          : ExtraOptions             = ExtraOptions(extra_options)\r\n\r\n        self.reorder_tableau_indices     : list[int]                = self.EO(\"reorder_tableau_indices\", [-1])\r\n\r\n        self.LINEAR_ANCHOR_X_0           : float                    = noise_anchor\r\n        \r\n        self.tile_sizes                  : Optional[List[Tuple[int,int]]] = None\r\n        self.tile_cnt                    : int                      = 0\r\n        self.latent_compression_ratio    : int                      = 8\r\n\r\n    @staticmethod\r\n    def is_exponential(rk_type:str) -> bool:\r\n        if rk_type.startswith(( \"res\", \r\n                                \"dpmpp\", \r\n                                \"ddim\", \r\n                                \"pec\", \r\n                                \"etdrk\", \r\n                                \"lawson\", \r\n                                \"abnorsett\",\r\n                                )): \r\n            return True\r\n        else:\r\n            return False\r\n\r\n    @staticmethod\r\n    def create(model,\r\n            rk_type       : str,\r\n            noise_anchor  : float       = 1.0,\r\n            noise_boost_normalize  : bool = True,\r\n            model_device  : str         = 'cuda',\r\n            work_device   : str         = 'cpu',\r\n            dtype         : torch.dtype = torch.float64,\r\n            extra_options : str         = \"\"\r\n            ) -> \"Union[RK_Method_Exponential, RK_Method_Linear]\":\r\n        \r\n        if RK_Method_Beta.is_exponential(rk_type):\r\n            return RK_Method_Exponential(model, rk_type, noise_anchor, noise_boost_normalize, model_device, work_device, dtype, extra_options)\r\n        else:\r\n            return RK_Method_Linear     (model, rk_type, noise_anchor, noise_boost_normalize, model_device, work_device, dtype, extra_options)\r\n                \r\n    def __call__(self):\r\n        raise NotImplementedError(\"This method got clownsharked!\")\r\n    \r\n    def model_epsilon(self, x:Tensor, sigma:Tensor, **extra_args) -> Tuple[Tensor, Tensor]:\r\n        s_in     = x.new_ones([x.shape[0]])\r\n        denoised = self.model(x, sigma * s_in, **extra_args)\r\n        denoised = self.calc_cfg_channelwise(denoised)\r\n        eps      = (x - denoised) / (sigma * s_in).view(x.shape[0], 1, 1, 1)    # NOTE: returning x0 this way works only with the model sampling patch\r\n        return eps, denoised\r\n    \r\n    def model_denoised(self, x:Tensor, sigma:Tensor, **extra_args) -> Tensor:\r\n        s_in     = x.new_ones([x.shape[0]])\r\n        control_tiles = None\r\n        y0_style_pos = self.extra_args['model_options']['transformer_options'].get(\"y0_style_pos\")\r\n        y0_style_neg = self.extra_args['model_options']['transformer_options'].get(\"y0_style_neg\")\r\n        y0_style_pos_tiles, y0_style_neg_tiles = None, None\r\n        \r\n        if self.EO(\"tile_model_calls\"):\r\n            tile_h = self.EO(\"tile_h\", 128)\r\n            tile_w = self.EO(\"tile_w\", 128)\r\n            \r\n            denoised_tiles = []\r\n            \r\n            tiles, orig_shape, grid, strides = 
tile_latent(x, tile_size=(tile_h,tile_w))\r\n            \r\n            for i in range(tiles.shape[0]):\r\n                tile = tiles[i].unsqueeze(0)\r\n                \r\n                denoised_tile = self.model(tile, sigma * s_in, **extra_args)\r\n                \r\n                denoised_tiles.append(denoised_tile)\r\n                \r\n            denoised_tiles = torch.cat(denoised_tiles, dim=0)\r\n            \r\n            denoised = untile_latent(denoised_tiles, orig_shape, grid, strides)\r\n            \r\n        elif self.tile_sizes is not None:\r\n            tile_h_full = self.tile_sizes[self.tile_cnt % len(self.tile_sizes)][0]\r\n            tile_w_full = self.tile_sizes[self.tile_cnt % len(self.tile_sizes)][1] \r\n            \r\n            if tile_h_full == -1:\r\n                tile_h      = x.shape[-2]\r\n                tile_h_full = tile_h * self.latent_compression_ratio\r\n            else:\r\n                tile_h = tile_h_full // self.latent_compression_ratio\r\n                \r\n            if tile_w_full == -1:\r\n                tile_w      = x.shape[-1]\r\n                tile_w_full = tile_w * self.latent_compression_ratio\r\n            else:\r\n                tile_w = tile_w_full // self.latent_compression_ratio\r\n            \r\n            #tile_h = tile_h_full // self.latent_compression_ratio\r\n            #tile_w = tile_w_full // self.latent_compression_ratio\r\n            \r\n            self.tile_cnt += 1\r\n            \r\n            #if len(self.tile_sizes) == 1 and self.tile_cnt % 2 == 1:\r\n            #    tile_h, tile_w = tile_w, tile_h\r\n            #    tile_h_full, tile_w_full = tile_w_full, tile_h_full\r\n            \r\n            if (self.tile_cnt // len(self.tile_sizes)) % 2 == 1 and self.EO(\"tiles_autorotate\"):\r\n                tile_h, tile_w = tile_w, tile_h\r\n                tile_h_full, tile_w_full = tile_w_full, tile_h_full\r\n            \r\n            xt_negative = self.model.inner_model.conds.get('xt_negative', self.model.inner_model.conds.get('negative'))\r\n            negative_control = xt_negative[0].get('control')\r\n            \r\n            if negative_control is not None and hasattr(negative_control, 'cond_hint_original'):\r\n                negative_cond_hint_init = negative_control.cond_hint.clone() if negative_control.cond_hint is not None else None\r\n            \r\n            xt_positive = self.model.inner_model.conds.get('xt_positive', self.model.inner_model.conds.get('positive'))\r\n            positive_control = xt_positive[0].get('control')\r\n            \r\n            if positive_control is not None and hasattr(positive_control, 'cond_hint_original'):\r\n                positive_cond_hint_init = positive_control.cond_hint.clone() if positive_control.cond_hint is not None else None\r\n                if positive_control.cond_hint_original.shape[-1] != x.shape[-2] * self.latent_compression_ratio or positive_control.cond_hint_original.shape[-2] != x.shape[-1] * self.latent_compression_ratio:\r\n                    positive_control_pretile = comfy.utils.bislerp(positive_control.cond_hint_original.clone().to(torch.float16).to('cuda'), x.shape[-1] * self.latent_compression_ratio, x.shape[-2] * self.latent_compression_ratio)\r\n                    positive_control.cond_hint_original = positive_control_pretile.to(positive_control.cond_hint_original)\r\n                positive_control_pretile = positive_control.cond_hint_original.clone().to(torch.float16).to('cuda')\r\n                
control_tiles, control_orig_shape, control_grid, control_strides = tile_latent(positive_control_pretile, tile_size=(tile_h_full,tile_w_full))\r\n            \r\n            denoised_tiles = []\r\n            \r\n            tiles, orig_shape, grid, strides = tile_latent(x, tile_size=(tile_h,tile_w))\r\n            \r\n            if y0_style_pos is not None:\r\n                y0_style_pos_tiles, _, _, _ = tile_latent(y0_style_pos, tile_size=(tile_h,tile_w))\r\n            if y0_style_neg is not None:\r\n                y0_style_neg_tiles, _, _, _ = tile_latent(y0_style_neg, tile_size=(tile_h,tile_w))\r\n            \r\n            for i in range(tiles.shape[0]):\r\n                tile = tiles[i].unsqueeze(0)\r\n                \r\n                if control_tiles is not None:\r\n                    positive_control.cond_hint = control_tiles[i].unsqueeze(0).to(positive_control.cond_hint)\r\n                    if negative_control is not None:\r\n                        negative_control.cond_hint = control_tiles[i].unsqueeze(0).to(positive_control.cond_hint)\r\n                \r\n                if y0_style_pos is not None:\r\n                    self.extra_args['model_options']['transformer_options']['y0_style_pos'] = y0_style_pos_tiles[i].unsqueeze(0)\r\n                if y0_style_neg is not None:\r\n                    self.extra_args['model_options']['transformer_options']['y0_style_neg'] = y0_style_neg_tiles[i].unsqueeze(0)\r\n                \r\n                denoised_tile = self.model(tile, sigma * s_in, **extra_args)\r\n                \r\n                denoised_tiles.append(denoised_tile)\r\n                \r\n            denoised_tiles = torch.cat(denoised_tiles, dim=0)\r\n            \r\n            denoised = untile_latent(denoised_tiles, orig_shape, grid, strides)\r\n            \r\n        else:\r\n            denoised = self.model(x, sigma * s_in, **extra_args)\r\n        \r\n        if control_tiles is not None:\r\n            positive_control.cond_hint = positive_cond_hint_init\r\n            if negative_control is not None:\r\n                negative_control.cond_hint = negative_cond_hint_init\r\n                \r\n        if y0_style_pos is not None:\r\n            self.extra_args['model_options']['transformer_options']['y0_style_pos'] = y0_style_pos\r\n        if y0_style_neg is not None:\r\n            self.extra_args['model_options']['transformer_options']['y0_style_neg'] = y0_style_neg\r\n        \r\n        denoised = self.calc_cfg_channelwise(denoised)\r\n        return denoised\r\n\r\n    def update_transformer_options(self,\r\n                transformer_options : Optional[dict] = None,\r\n                ):\r\n\r\n        self.extra_args.setdefault(\"model_options\", {}).setdefault(\"transformer_options\", {}).update(transformer_options)\r\n        return\r\n\r\n    def set_coeff(self,\r\n                rk_type    : str,\r\n                h          : Tensor,\r\n                c1         : float  = 0.0,\r\n                c2         : float  = 0.5,\r\n                c3         : float  = 1.0,\r\n                step       : int    = 0,\r\n                sigmas     : Optional[Tensor] = None,\r\n                sigma_down : Optional[Tensor] = None,\r\n                ) -> None:\r\n\r\n        self.rk_type     = rk_type\r\n        self.IMPLICIT    = rk_type in get_implicit_sampler_name_list(nameOnly=True)\r\n        self.EXPONENTIAL = RK_Method_Beta.is_exponential(rk_type) \r\n\r\n
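        # resolve the current and next sigmas, then fetch this rk_type's tableau for the step\r\n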
        sigma            = sigmas[step]\r\n        sigma_next       = sigmas[step+1]\r\n        \r\n        h_prev = []\r\n        a, b, u, v, ci, multistep_stages, hybrid_stages, FSAL = get_rk_methods_beta(rk_type,\r\n                                                                                    h,\r\n                                                                                    c1,\r\n                                                                                    c2,\r\n                                                                                    c3,\r\n                                                                                    h_prev,\r\n                                                                                    step,\r\n                                                                                    sigmas,\r\n                                                                                    sigma,\r\n                                                                                    sigma_next,\r\n                                                                                    sigma_down,\r\n                                                                                    self.extra_options,\r\n                                                                                    )\r\n        \r\n        self.multistep_stages = multistep_stages\r\n        self.hybrid_stages    = hybrid_stages\r\n        \r\n        self.A = torch.tensor(a,  dtype=h.dtype, device=h.device)\r\n        self.B = torch.tensor(b,  dtype=h.dtype, device=h.device)\r\n        self.C = torch.tensor(ci, dtype=h.dtype, device=h.device)\r\n\r\n        self.U = torch.tensor(u,  dtype=h.dtype, device=h.device) if u is not None else None\r\n        self.V = torch.tensor(v,  dtype=h.dtype, device=h.device) if v is not None else None\r\n        \r\n        self.rows = self.A.shape[0]\r\n        self.cols = self.A.shape[1]\r\n        \r\n        self.row_offset = 1 if not self.IMPLICIT and self.A[0].sum() == 0 else 0  \r\n        \r\n        if self.IMPLICIT and self.reorder_tableau_indices[0] != -1:\r\n            self.reorder_tableau(self.reorder_tableau_indices)\r\n\r\n\r\n\r\n    def reorder_tableau(self, indices:list[int]) -> None:\r\n        self.A    = self.A   [indices]\r\n        self.B[0] = self.B[0][indices]\r\n        self.C    = self.C   [indices]\r\n        self.C = torch.cat((self.C, self.C[-1:])) \r\n        return\r\n\r\n\r\n\r\n    def update_substep(self,\r\n                        x_0        : Tensor,\r\n                        x_         : Tensor,\r\n                        eps_       : Tensor,\r\n                        eps_prev_  : Tensor,\r\n                        row        : int,\r\n                        row_offset : int,\r\n                        h_new      : Tensor,\r\n                        h_new_orig : Tensor,\r\n                        lying_eps_row_factor : float = 1.0,\r\n                        ) -> Tensor:\r\n        \r\n        if row < self.rows - row_offset   and   self.multistep_stages == 0:\r\n            row_tmp_offset = row + row_offset\r\n\r\n        else:\r\n            row_tmp_offset = row + 1\r\n                \r\n        zr_base   = self.zum(row+row_offset+self.multistep_stages, eps_, eps_prev_)\r\n        \r\n        if self.SYNC_SUBSTEP_MEAN_CW and lying_eps_row_factor != 1.0:\r\n            x_orig_row = x_0 + h_new * zr_base\r\n
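        # scale this row's eps by lying_eps_row_factor; when syncing, the substep's channelwise mean is restored below\r\n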
   \r\n        #eps_row      = eps_     [row].clone()\r\n        #eps_prev_row = eps_prev_[row].clone()\r\n        \r\n        eps_     [row] *= lying_eps_row_factor\r\n        eps_prev_[row] *= lying_eps_row_factor\r\n        zr = self.zum(row+row_offset+self.multistep_stages, eps_, eps_prev_)\r\n        \r\n        x_[row_tmp_offset] = x_0 + h_new * zr\r\n        \r\n        if self.SYNC_SUBSTEP_MEAN_CW and lying_eps_row_factor != 1.0:\r\n            x_[row_tmp_offset] = x_[row_tmp_offset] - x_[row_tmp_offset].mean(dim=(-2,-1), keepdim=True) + x_orig_row.mean(dim=(-2,-1), keepdim=True)\r\n        \r\n        #eps_     [row] = eps_row\r\n        #eps_prev_[row] = eps_prev_row\r\n        \r\n        if (self.SYNC_SUBSTEP_MEAN_CW and h_new != h_new_orig) or self.EO(\"sync_mean_noise\"):\r\n            if not self.EO(\"disable_sync_mean_noise\"):\r\n                x_row_down = x_0 + h_new_orig * zr\r\n                x_[row_tmp_offset] = x_[row_tmp_offset] - x_[row_tmp_offset].mean(dim=(-2,-1), keepdim=True) + x_row_down.mean(dim=(-2,-1), keepdim=True)\r\n        \r\n        return x_\r\n\r\n\r\n    \r\n    def a_k_einsum(self, row:int, k     :Tensor) -> Tensor:\r\n        return torch.einsum('i, i... -> ...', self.A[row], k[:self.cols])\r\n    \r\n    def b_k_einsum(self, row:int, k     :Tensor) -> Tensor:\r\n        return torch.einsum('i, i... -> ...', self.B[row], k[:self.cols])\r\n    \r\n    def u_k_einsum(self, row:int, k_prev:Tensor) -> Tensor:\r\n        return torch.einsum('i, i... -> ...', self.U[row], k_prev[:self.cols]) if (self.U is not None and k_prev is not None) else 0\r\n    \r\n    def v_k_einsum(self, row:int, k_prev:Tensor) -> Tensor:\r\n        return torch.einsum('i, i... -> ...', self.V[row], k_prev[:self.cols]) if (self.V is not None and k_prev is not None) else 0\r\n    \r\n    \r\n    \r\n    def zum(self, row:int, k:Tensor, k_prev:Tensor=None,) -> Tensor:\r\n        if row < self.rows:\r\n            return self.a_k_einsum(row, k) + self.u_k_einsum(row, k_prev)\r\n        else:\r\n            row = row - self.rows\r\n            return self.b_k_einsum(row, k) + self.v_k_einsum(row, k_prev)\r\n        \r\n    def zum_tableau(self,  k:Tensor, k_prev:Tensor=None,) -> Tensor:\r\n        a_k_sum = torch.einsum('ij, j... -> i...', self.A, k[:self.cols])\r\n        u_k_sum = torch.einsum('ij, j... 
-> i...', self.U, k_prev[:self.cols]) if (self.U is not None and k_prev is not None) else 0\r\n        return a_k_sum + u_k_sum\r\n        \r\n\r\n\r\n    def init_cfg_channelwise(self, x:Tensor, cfg_cw:float=1.0, **extra_args) -> Dict[str, Any]:\r\n        self.uncond = [torch.full_like(x, 0.0)]\r\n        self.cfg_cw = cfg_cw\r\n        if cfg_cw != 1.0:\r\n            def post_cfg_function(args):\r\n                self.uncond[0] = args[\"uncond_denoised\"]\r\n                return args[\"denoised\"]\r\n            model_options = extra_args.get(\"model_options\", {}).copy()\r\n            extra_args[\"model_options\"] = comfy.model_patcher.set_model_options_post_cfg_function(model_options, post_cfg_function, disable_cfg1_optimization=True)\r\n        return extra_args\r\n            \r\n            \r\n    def calc_cfg_channelwise(self, denoised:Tensor) -> Tensor:\r\n        if self.cfg_cw != 1.0:            \r\n            avg = 0\r\n            for b, c in itertools.product(range(denoised.shape[0]), range(denoised.shape[1])):\r\n                avg     += torch.norm(denoised[b][c] - self.uncond[0][b][c])\r\n            avg  /= denoised.shape[1]\r\n            \r\n            denoised_new = torch.zeros_like(denoised)\r\n            for b, c in itertools.product(range(denoised.shape[0]), range(denoised.shape[1])):\r\n                ratio     = torch.nan_to_num(torch.norm(denoised[b][c] - self.uncond[0][b][c])   /   avg,     0)\r\n                # rescale each channel's guidance by its deviation from the average norm\r\n                denoised_new[b][c] = self.uncond[0][b][c] + ratio * self.cfg_cw * (denoised[b][c] - self.uncond[0][b][c])\r\n            return denoised_new\r\n        else:\r\n            return denoised\r\n        \r\n\r\n    @staticmethod\r\n    def calculate_res_2m_step(\r\n                            x_0        : Tensor,\r\n                            denoised_  : Tensor,\r\n                            sigma_down : Tensor,\r\n                            sigmas     : Tensor,\r\n                            step       : int,\r\n                            ) -> Tuple[Tensor, Tensor]:\r\n        \r\n        if denoised_[2].sum() == 0:\r\n            return None, None\r\n        \r\n        sigma      = sigmas[step]\r\n        sigma_prev = sigmas[step-1]\r\n        \r\n        h_prev = -torch.log(sigma/sigma_prev)\r\n        h      = -torch.log(sigma_down/sigma)\r\n\r\n        c1 = 0\r\n        c2 = (-h_prev / h).item()\r\n\r\n        ci = [c1,c2]\r\n        φ = Phi(h, ci, analytic_solution=True)\r\n\r\n        b2 = φ(2)/c2\r\n        b1 = φ(1) - b2\r\n        \r\n        eps_2 = denoised_[1] - x_0\r\n        eps_1 = denoised_[0] - x_0\r\n\r\n        h_a_k_sum = h * (b1 * eps_1 + b2 * eps_2)\r\n        \r\n        x = torch.exp(-h) * x_0 + h_a_k_sum\r\n        \r\n        denoised = x_0 + (sigma / (sigma - sigma_down)) * h_a_k_sum\r\n\r\n        return x, denoised\r\n\r\n\r\n    @staticmethod\r\n    def calculate_res_3m_step(\r\n                            x_0        : Tensor,\r\n                            denoised_  : Tensor,\r\n                            sigma_down : Tensor,\r\n                            sigmas     : Tensor,\r\n                            step       : int,\r\n                            ) -> Tuple[Tensor, Tensor]:\r\n        \r\n        if denoised_[3].sum() == 0:\r\n            return None, None\r\n        \r\n        sigma       = sigmas[step]\r\n        sigma_prev  = sigmas[step-1]\r\n        sigma_prev2 = sigmas[step-2]\r\n\r\n        h       = -torch.log(sigma_down/sigma)\r\n        h_prev  = -torch.log(sigma/sigma_prev)\r\n        h_prev2 = -torch.log(sigma/sigma_prev2)\r\n\r\n        c1 = 0\r\n        c2 =
(-h_prev  / h).item()\r\n        c3 = (-h_prev2 / h).item()\r\n\r\n        ci = [c1,c2,c3]\r\n        φ = Phi(h, ci, analytic_solution=True)\r\n        \r\n        gamma = (3*(c3**3) - 2*c3) / (c2*(2 - 3*c2))\r\n\r\n        b3 = (1 / (gamma * c2 + c3)) * φ(2, -h)      \r\n        b2 = gamma * b3 \r\n        b1 = φ(1, -h) - b2 - b3    \r\n        \r\n        eps_3 = denoised_[2] - x_0\r\n        eps_2 = denoised_[1] - x_0\r\n        eps_1 = denoised_[0] - x_0\r\n\r\n        h_a_k_sum = h * (b1 * eps_1 + b2 * eps_2 + b3 * eps_3)\r\n        \r\n        x = torch.exp(-h) * x_0 + h_a_k_sum\r\n        \r\n        denoised = x_0 + (sigma / (sigma - sigma_down)) * h_a_k_sum\r\n\r\n        return x, denoised\r\n\r\n    def swap_rk_type_at_step_or_threshold(self,\r\n                                            x_0               : Tensor,\r\n                                            data_prev_        : Tensor,\r\n                                            NS,\r\n                                            sigmas            : Tensor,\r\n                                            step              : Tensor,\r\n                                            rk_swap_step      : int,\r\n                                            rk_swap_threshold : float,\r\n                                            rk_swap_type      : str,\r\n                                            rk_swap_print     : bool,\r\n                                            ) -> str:\r\n        if rk_swap_type == \"\":\r\n            if self.EXPONENTIAL:\r\n                rk_swap_type = \"res_3m\" \r\n            else:\r\n                rk_swap_type = \"deis_3m\"\r\n            \r\n        if step > rk_swap_step and self.rk_type != rk_swap_type:\r\n            RESplain(\"Switching rk_type to:\", rk_swap_type)\r\n            self.rk_type = rk_swap_type\r\n            \r\n            if RK_Method_Beta.is_exponential(rk_swap_type):\r\n                self.__class__ = RK_Method_Exponential\r\n            else:\r\n                self.__class__ = RK_Method_Linear\r\n                \r\n            if rk_swap_type in get_implicit_sampler_name_list(nameOnly=True):\r\n                self.IMPLICIT   = True\r\n                self.row_offset = 0\r\n                NS.row_offset   = 0\r\n            else:\r\n                self.IMPLICIT   = False\r\n                self.row_offset = 1\r\n                NS.row_offset   = 1\r\n            NS.h_fn     = self.h_fn\r\n            NS.t_fn     = self.t_fn\r\n            NS.sigma_fn = self.sigma_fn\r\n            \r\n            \r\n            \r\n        if step > 2 and sigmas[step+1] > 0 and self.rk_type != rk_swap_type and rk_swap_threshold > 0:\r\n            x_res_2m, denoised_res_2m = self.calculate_res_2m_step(x_0, data_prev_, NS.sigma_down, sigmas, step)\r\n            x_res_3m, denoised_res_3m = self.calculate_res_3m_step(x_0, data_prev_, NS.sigma_down, sigmas, step)\r\n            if denoised_res_2m is not None:\r\n                if rk_swap_print:\r\n                    RESplain(\"res_3m - res_2m:\", torch.norm(denoised_res_3m - denoised_res_2m).item())\r\n                if rk_swap_threshold > torch.norm(denoised_res_2m - denoised_res_3m):\r\n                    RESplain(\"Switching rk_type to:\", rk_swap_type, \"at step:\", step)\r\n                    self.rk_type = rk_swap_type\r\n            \r\n                    if RK_Method_Beta.is_exponential(rk_swap_type):\r\n                        self.__class__ = RK_Method_Exponential\r\n                    else:\r\n                        
self.__class__ = RK_Method_Linear\r\n                \r\n                    if rk_swap_type in get_implicit_sampler_name_list(nameOnly=True):\r\n                        self.IMPLICIT   = True\r\n                        self.row_offset = 0\r\n                        NS.row_offset   = 0\r\n                    else:\r\n                        self.IMPLICIT   = False\r\n                        self.row_offset = 1\r\n                        NS.row_offset   = 1\r\n                    NS.h_fn     = self.h_fn\r\n                    NS.t_fn     = self.t_fn\r\n                    NS.sigma_fn = self.sigma_fn\r\n            \r\n        return self.rk_type\r\n\r\n\r\n    def bong_iter(self,\r\n                    x_0       : Tensor,\r\n                    x_        : Tensor,\r\n                    eps_      : Tensor,\r\n                    eps_prev_ : Tensor,\r\n                    data_     : Tensor,\r\n                    sigma     : Tensor,\r\n                    s_        : Tensor,\r\n                    row       : int,\r\n                    row_offset: int,\r\n                    h         : Tensor,\r\n                    step      : int,\r\n                    ) -> Tuple[Tensor, Tensor, Tensor]:\r\n        \r\n        if x_0.ndim == 4:\r\n            norm_dim = (-2,-1)\r\n        elif x_0.ndim == 5:\r\n            norm_dim = (-4,-2,-1)\r\n        \r\n        if self.EO(\"bong_start_step\", 0) > step or step > self.EO(\"bong_stop_step\", 10000):\r\n            return x_0, x_, eps_\r\n        \r\n        bong_iter_max_row = self.rows - row_offset\r\n        if self.EO(\"bong_iter_max_row_full\"):\r\n            bong_iter_max_row = self.rows\r\n            \r\n        if self.EO(\"bong_iter_lock_x_0_ch_means\"):\r\n            x_0_ch_means = x_0.mean(dim=norm_dim, keepdim=True)\r\n            \r\n        if self.EO(\"bong_iter_lock_x_row_ch_means\"):\r\n            x_row_means = []\r\n            for rr in range(row+row_offset):\r\n                x_row_mean = x_[rr].mean(dim=norm_dim, keepdim=True)\r\n                x_row_means.append(x_row_mean)\r\n        \r\n        if row < bong_iter_max_row   and   self.multistep_stages == 0:\r\n            bong_strength = self.EO(\"bong_strength\", 1.0)\r\n            \r\n            if bong_strength != 1.0:\r\n                x_0_tmp  = x_0.clone()\r\n                x_tmp_   = x_.clone()\r\n                eps_tmp_ = eps_.clone()\r\n            \r\n            for i in range(100):\r\n                x_0 = x_[row+row_offset] - h * self.zum(row+row_offset, eps_, eps_prev_)\r\n                \r\n                if self.EO(\"bong_iter_lock_x_0_ch_means\"):\r\n                    x_0 = x_0 - x_0.mean(dim=norm_dim, keepdim=True) + x_0_ch_means\r\n                    \r\n                for rr in range(row+row_offset):\r\n                    x_[rr] = x_0 + h * self.zum(rr, eps_, eps_prev_)\r\n                    \r\n                if self.EO(\"bong_iter_lock_x_row_ch_means\"):\r\n                    for rr in range(row+row_offset):\r\n                        x_[rr] = x_[rr] - x_[rr].mean(dim=norm_dim, keepdim=True) + x_row_means[rr]\r\n                    \r\n                for rr in range(row+row_offset):\r\n                    if self.EO(\"zonkytar\"):\r\n                        #eps_[rr] = self.get_unsample_epsilon(x_[rr], x_0, data_[rr], sigma, s_[rr])\r\n                        eps_[rr] = self.get_epsilon(x_[rr], x_0, data_[rr], sigma, s_[rr])\r\n                    else:\r\n                        eps_[rr] = self.get_epsilon(x_0, x_[rr], data_[rr], 
sigma, s_[rr])\r\n                    \r\n            if bong_strength != 1.0:\r\n                x_0  = x_0_tmp  + bong_strength * (x_0  - x_0_tmp)\r\n                x_   = x_tmp_   + bong_strength * (x_   - x_tmp_)\r\n                eps_ = eps_tmp_ + bong_strength * (eps_ - eps_tmp_)\r\n        \r\n        return x_0, x_, eps_\r\n\r\n\r\n    def newton_iter(self,\r\n                    x_0        : Tensor,\r\n                    x_         : Tensor,\r\n                    eps_       : Tensor,\r\n                    eps_prev_  : Tensor,\r\n                    data_      : Tensor,\r\n                    s_         : Tensor,\r\n                    row        : int,\r\n                    h          : Tensor,\r\n                    sigmas     : Tensor,\r\n                    step       : int,\r\n                    newton_name: str,\r\n                    ) -> Tuple[Tensor, Tensor]:\r\n        \r\n        newton_iter_name = \"newton_iter_\" + newton_name\r\n        \r\n        default_anchor_x_all = False\r\n        if newton_name == \"lying\":\r\n            default_anchor_x_all = True\r\n        \r\n        newton_iter                 = self.EO(newton_iter_name,                      100)\r\n        newton_iter_skip_last_steps = self.EO(newton_iter_name + \"_skip_last_steps\",   0)\r\n        newton_iter_mixing_rate     = self.EO(newton_iter_name + \"_mixing_rate\",     1.0)\r\n        \r\n        newton_iter_anchor          = self.EO(newton_iter_name + \"_anchor\",            0)\r\n        newton_iter_anchor_x_all    = self.EO(newton_iter_name + \"_anchor_x_all\",    default_anchor_x_all)\r\n        newton_iter_type            = self.EO(newton_iter_name + \"_type\",           \"from_epsilon\")\r\n        newton_iter_sequence        = self.EO(newton_iter_name + \"_sequence\",       \"double\")\r\n        \r\n        row_b_offset = 0\r\n        if self.EO(newton_iter_name + \"_include_row_b\"):\r\n            row_b_offset = 1\r\n        \r\n        if step >= len(sigmas)-1-newton_iter_skip_last_steps   or   sigmas[step+1] == 0   or   not self.IMPLICIT:\r\n            return x_, eps_\r\n        \r\n        sigma = sigmas[step]\r\n        \r\n        start, stop = 0, self.rows+row_b_offset\r\n        if newton_name   == \"pre\":\r\n            start = row\r\n        elif newton_name == \"post\":\r\n            start = row + 1\r\n            \r\n        if newton_iter_anchor >= 0:\r\n            eps_anchor = eps_[newton_iter_anchor].clone()\r\n            \r\n        if newton_iter_anchor_x_all:\r\n            x_orig_ = x_.clone()\r\n            \r\n        for n_iter in range(newton_iter):\r\n            for r in range(start, stop):\r\n                if newton_iter_anchor >= 0:\r\n                    eps_[newton_iter_anchor] = eps_anchor.clone()\r\n                if newton_iter_anchor_x_all:\r\n                    x_ = x_orig_.clone()\r\n                x_tmp, eps_tmp = x_[r].clone(), eps_[r].clone()\r\n                \r\n                seq_start, seq_stop = r, r+1\r\n                \r\n                if newton_iter_sequence == \"double\":\r\n                    seq_start, seq_stop = start, stop\r\n                    \r\n                for r_ in range(seq_start, seq_stop):\r\n                    x_[r_] = x_0 + h * self.zum(r_, eps_, eps_prev_)\r\n\r\n                for r_ in range(seq_start, seq_stop):\r\n                    if newton_iter_type == \"from_data\":\r\n                        data_[r_] = get_data_from_step(x_0, x_[r_], sigma, s_[r_])  \r\n                        eps_ 
[r_] = self.get_epsilon(x_0, x_[r_], data_[r_], sigma, s_[r_])\r\n                    elif newton_iter_type == \"from_step\":\r\n                        eps_ [r_] = get_epsilon_from_step(x_0, x_[r_], sigma, s_[r_])\r\n                    elif newton_iter_type == \"from_alt\":\r\n                        eps_ [r_] = x_0/sigma - x_[r_]/s_[r_]\r\n                    elif newton_iter_type == \"from_epsilon\":\r\n                        eps_ [r_] = self.get_epsilon(x_0, x_[r_], data_[r_], sigma, s_[r_])\r\n                    \r\n                    if self.EO(newton_iter_name + \"_opt\"):\r\n                        opt_timing, opt_type, opt_subtype = self.EO(newton_iter_name+\"_opt\", [str])\r\n                        \r\n                        opt_start, opt_stop = 0, self.rows+row_b_offset\r\n                        if    opt_timing == \"early\":\r\n                            opt_stop  = row + 1\r\n                        elif  opt_timing == \"late\":\r\n                            opt_start = row + 1\r\n\r\n                        for r2 in range(opt_start, opt_stop): \r\n                            if r_ != r2:\r\n                                if   opt_subtype == \"a\":\r\n                                    eps_a = eps_[r2]\r\n                                    eps_b = eps_[r_]\r\n                                elif opt_subtype == \"b\":\r\n                                    eps_a = eps_[r_]\r\n                                    eps_b = eps_[r2]\r\n                                \r\n                                if   opt_type == \"ortho\":\r\n                                    eps_ [r_] = get_orthogonal(eps_a, eps_b)\r\n                                elif opt_type == \"collin\":\r\n                                    eps_ [r_] = get_collinear (eps_a, eps_b)\r\n                                elif opt_type == \"proj\":\r\n                                    eps_ [r_] = get_collinear (eps_a, eps_b) + get_orthogonal(eps_b, eps_a)\r\n                                    \r\n                    x_  [r_] =   x_tmp + newton_iter_mixing_rate * (x_  [r_] -   x_tmp)\r\n                    eps_[r_] = eps_tmp + newton_iter_mixing_rate * (eps_[r_] - eps_tmp)\r\n                    \r\n                if newton_iter_sequence == \"double\":\r\n                    break\r\n        \r\n        return x_, eps_\r\n\r\n\r\n\r\n\r\nclass RK_Method_Exponential(RK_Method_Beta):\r\n    def __init__(self,\r\n                model,\r\n                rk_type       : str,\r\n                noise_anchor  : float,\r\n                noise_boost_normalize  : bool,\r\n\r\n                model_device  : str         = 'cuda',\r\n                work_device   : str         = 'cpu',\r\n                dtype         : torch.dtype = torch.float64,\r\n                extra_options : str         = \"\",\r\n                ):\r\n        \r\n        super().__init__(model,\r\n                        rk_type,\r\n                        noise_anchor,\r\n                        noise_boost_normalize,\r\n                        model_device  = model_device,\r\n                        work_device   = work_device,\r\n                        dtype         = dtype,\r\n                        extra_options = extra_options,\r\n                        ) \r\n        \r\n    @staticmethod\r\n    def alpha_fn(neg_h:Tensor) -> Tensor:\r\n        return torch.exp(neg_h)\r\n\r\n    @staticmethod\r\n    def sigma_fn(t:Tensor) -> Tensor:\r\n        return t.neg().exp()\r\n\r\n    @staticmethod\r\n    def t_fn(sigma:Tensor) -> 
Tensor:\r\n        return sigma.log().neg()\r\n    \r\n    @staticmethod\r\n    def h_fn(sigma_down:Tensor, sigma:Tensor) -> Tensor:\r\n        return -torch.log(sigma_down/sigma)\r\n\r\n    def __call__(self,\r\n                x         : Tensor,\r\n                sub_sigma : Tensor,\r\n                x_0       : Optional[Tensor] = None,\r\n                sigma     : Optional[Tensor] = None,\r\n                transformer_options : Optional[dict] = None,\r\n                ) -> Tuple[Tensor, Tensor]:\r\n        \r\n        x_0   = x         if x_0   is None else x_0\r\n        sigma = sub_sigma if sigma is None else sigma\r\n        \r\n        if transformer_options is not None:\r\n            self.extra_args.setdefault(\"model_options\", {}).setdefault(\"transformer_options\", {}).update(transformer_options)\r\n\r\n        denoised = self.model_denoised(x.to(self.model_device), sub_sigma.to(self.model_device), **self.extra_args).to(sigma.device)\r\n        \r\n        eps_anchored = (x_0 - denoised) / sigma\r\n        eps_unmoored = (x   - denoised) / sub_sigma\r\n        \r\n        eps      = eps_unmoored + self.LINEAR_ANCHOR_X_0 * (eps_anchored - eps_unmoored)\r\n        \r\n        denoised = x_0 - sigma * eps\r\n        \r\n        epsilon  = denoised - x_0\r\n        \r\n        return epsilon, denoised\r\n    \r\n    \r\n    \r\n    def get_epsilon(self,\r\n                    x_0       : Tensor,\r\n                    x         : Tensor,\r\n                    denoised  : Tensor,\r\n                    sigma     : Tensor,\r\n                    sub_sigma : Tensor,\r\n                    ) -> Tensor:\r\n        \r\n        eps_anchored = (x_0 - denoised) / sigma\r\n        eps_unmoored = (x   - denoised) / sub_sigma\r\n        \r\n        eps      = eps_unmoored + self.LINEAR_ANCHOR_X_0 * (eps_anchored - eps_unmoored)\r\n        \r\n        denoised = x_0 - sigma * eps\r\n        \r\n        return denoised - x_0\r\n    \r\n    \r\n    \r\n    def get_epsilon_anchored(self, x_0:Tensor, denoised:Tensor, sigma:Tensor) -> Tensor:\r\n        return denoised - x_0\r\n    \r\n    \r\n    \r\n    def get_guide_epsilon(self,\r\n                            x_0           : Tensor,\r\n                            x             : Tensor,\r\n                            y             : Tensor,\r\n                            sigma         : Tensor,\r\n                            sigma_cur     : Tensor,\r\n                            sigma_down    : Optional[Tensor] = None,\r\n                            epsilon_scale : Optional[Tensor] = None,\r\n                            ) -> Tensor:\r\n\r\n        sigma_cur = epsilon_scale if epsilon_scale is not None else sigma_cur\r\n\r\n        if sigma_down > sigma:\r\n            eps_unmoored = (sigma_cur/(self.sigma_max - sigma_cur)) * (x   - y)\r\n        else:\r\n            eps_unmoored = y - x \r\n        \r\n        if self.EO(\"manually_anchor_unsampler\"):\r\n            if sigma_down > sigma:\r\n                eps_anchored = (sigma    /(self.sigma_max - sigma)) * (x_0 - y)\r\n            else:\r\n                eps_anchored = y - x_0\r\n            eps_guide = eps_unmoored + self.LINEAR_ANCHOR_X_0 * (eps_anchored - eps_unmoored)\r\n        else:\r\n            eps_guide = eps_unmoored\r\n        \r\n        return eps_guide\r\n\r\n\r\n\r\n\r\nclass RK_Method_Linear(RK_Method_Beta):\r\n    def __init__(self,\r\n                model,\r\n                rk_type       : str,\r\n                noise_anchor  : float,\r\n                
noise_boost_normalize  : bool,\r\n                model_device  : str         = 'cuda',\r\n                work_device   : str         = 'cpu',\r\n                dtype         : torch.dtype = torch.float64,\r\n                extra_options : str         = \"\",\r\n                ):\r\n        \r\n        super().__init__(model,\r\n                        rk_type,\r\n                        noise_anchor,\r\n                        noise_boost_normalize,\r\n                        model_device  = model_device,\r\n                        work_device   = work_device,\r\n                        dtype         = dtype,\r\n                        extra_options = extra_options,\r\n                        ) \r\n        \r\n    @staticmethod\r\n    def alpha_fn(neg_h:Tensor) -> Tensor:\r\n        return torch.ones_like(neg_h)\r\n\r\n    @staticmethod\r\n    def sigma_fn(t:Tensor) -> Tensor:\r\n        return t\r\n\r\n    @staticmethod\r\n    def t_fn(sigma:Tensor) -> Tensor:\r\n        return sigma\r\n    \r\n    @staticmethod\r\n    def h_fn(sigma_down:Tensor, sigma:Tensor) -> Tensor:\r\n        return sigma_down - sigma\r\n    \r\n    def __call__(self,\r\n                x         : Tensor,\r\n                sub_sigma : Tensor,\r\n                x_0       : Optional[Tensor] = None,\r\n                sigma     : Optional[Tensor] = None,\r\n                transformer_options : Optional[dict] = None,\r\n                ) -> Tuple[Tensor, Tensor]:\r\n        \r\n        x_0   = x         if x_0   is None else x_0\r\n        sigma = sub_sigma if sigma is None else sigma\r\n        \r\n        if transformer_options is not None:\r\n            self.extra_args.setdefault(\"model_options\", {}).setdefault(\"transformer_options\", {}).update(transformer_options)     \r\n        \r\n        denoised = self.model_denoised(x.to(self.model_device), sub_sigma.to(self.model_device), **self.extra_args).to(sigma.device)\r\n\r\n        epsilon_anchor   = (x_0 - denoised) / sigma\r\n        epsilon_unmoored =   (x - denoised) / sub_sigma\r\n        \r\n        epsilon = epsilon_unmoored + self.LINEAR_ANCHOR_X_0 * (epsilon_anchor - epsilon_unmoored)\r\n\r\n        return epsilon, denoised\r\n\r\n\r\n\r\n    def get_epsilon(self,\r\n                    x_0       : Tensor,\r\n                    x         : Tensor,\r\n                    denoised  : Tensor,\r\n                    sigma     : Tensor,\r\n                    sub_sigma : Tensor,\r\n                    ) -> Tensor:\r\n        \r\n        eps_anchor   = (x_0 - denoised) / sigma\r\n        eps_unmoored =   (x - denoised) / sub_sigma\r\n        \r\n        return eps_unmoored + self.LINEAR_ANCHOR_X_0 * (eps_anchor - eps_unmoored)\r\n    \r\n    \r\n    \r\n    def get_epsilon_anchored(self, x_0:Tensor, denoised:Tensor, sigma:Tensor) -> Tensor:\r\n        return (x_0 - denoised) / sigma\r\n    \r\n    \r\n    \r\n    def get_guide_epsilon(self, \r\n                            x_0           : Tensor, \r\n                            x             : Tensor, \r\n                            y             : Tensor, \r\n                            sigma         : Tensor, \r\n                            sigma_cur     : Tensor, \r\n                            sigma_down    : Optional[Tensor] = None, \r\n                            epsilon_scale : Optional[Tensor] = None, \r\n                            ) -> Tensor:\r\n\r\n        # sigma_down > sigma indicates the unsampling direction; scale against the remaining range to sigma_max.\r\n        # guard the comparison, since sigma_down may be None (the None case is handled explicitly below)\r\n        if sigma_down is not None and sigma_down > sigma:\r\n            sigma_ratio = self.sigma_max - sigma_cur.clone()\r\n        else:\r\n            sigma_ratio = 
sigma_cur.clone()\r\n        sigma_ratio = epsilon_scale if epsilon_scale is not None else sigma_ratio\r\n\r\n        if sigma_down is None:\r\n            return (x - y) / sigma_ratio\r\n        else:\r\n            if sigma_down > sigma:\r\n                return (y - x) / sigma_ratio\r\n            else:\r\n                return (x - y) / sigma_ratio\r\n\r\n\r\n"
  },
  {
    "path": "samplers_extensions.py",
    "content": "import torch\r\nfrom torch import Tensor\r\nimport torch.nn.functional as F\r\n\r\nfrom dataclasses import dataclass, asdict\r\nfrom typing import Optional, Callable, Tuple, Dict, Any, Union\r\nimport copy\r\n\r\nfrom nodes import MAX_RESOLUTION\r\n\r\nfrom ..helper                import OptionsManager, FrameWeightsManager, initialize_or_scale, get_res4lyf_scheduler_list, parse_range_string, parse_tile_sizes\r\n\r\nfrom .rk_coefficients_beta   import RK_SAMPLER_NAMES_BETA_FOLDERS, get_default_sampler_name, get_sampler_name_list, process_sampler_name\r\n\r\nfrom .noise_classes          import NOISE_GENERATOR_NAMES_SIMPLE\r\nfrom .rk_noise_sampler_beta  import NOISE_MODE_NAMES\r\nfrom .constants              import IMPLICIT_TYPE_NAMES, GUIDE_MODE_NAMES_BETA_SIMPLE, MAX_STEPS, FRAME_WEIGHTS_CONFIG_NAMES, FRAME_WEIGHTS_DYNAMICS_NAMES, FRAME_WEIGHTS_SCHEDULE_NAMES\r\n\r\n\r\nclass ClownSamplerSelector_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"sampler_name\": (get_sampler_name_list(),  {\"default\": get_default_sampler_name()}), \r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (RK_SAMPLER_NAMES_BETA_FOLDERS,)\r\n    RETURN_NAMES = (\"sampler_name\",) \r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            sampler_name = \"res_2m\",\r\n            ):\r\n        \r\n        sampler_name, implicit_sampler_name = process_sampler_name(sampler_name)\r\n        \r\n        sampler_name = sampler_name if implicit_sampler_name == \"use_explicit\" else implicit_sampler_name\r\n        \r\n        return (sampler_name,)\r\n\r\n\r\n\r\nclass ClownOptions_SDE_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"noise_type_sde\":         (NOISE_GENERATOR_NAMES_SIMPLE, {\"default\": \"gaussian\"}),\r\n                    \"noise_type_sde_substep\": (NOISE_GENERATOR_NAMES_SIMPLE, {\"default\": \"gaussian\"}),\r\n                    \"noise_mode_sde\":         (NOISE_MODE_NAMES,             {\"default\": 'hard',                                                        \"tooltip\": \"How noise scales with the sigma schedule. Hard is the most aggressive, the others start strong and drop rapidly.\"}),\r\n                    \"noise_mode_sde_substep\": (NOISE_MODE_NAMES,             {\"default\": 'hard',                                                        \"tooltip\": \"How noise scales with the sigma schedule. 
Hard is the most aggressive, the others start strong and drop rapidly.\"}),\r\n                    \"eta\":                    (\"FLOAT\",                      {\"default\": 0.5, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Calculated noise amount to be added, then removed, after each step.\"}),\r\n                    \"eta_substep\":            (\"FLOAT\",                      {\"default\": 0.5, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Calculated noise amount to be added, then removed, after each step.\"}),\r\n                    \"seed\":                   (\"INT\",                        {\"default\": -1, \"min\": -1, \"max\": 0xffffffffffffffff}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"etas\":                   (\"SIGMAS\", ),\r\n                    \"etas_substep\":           (\"SIGMAS\", ),\r\n                    \"options\":                (\"OPTIONS\", ),   \r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            noise_type_sde         = \"gaussian\",\r\n            noise_type_sde_substep = \"gaussian\",\r\n            noise_mode_sde         = \"hard\",\r\n            noise_mode_sde_substep = \"hard\",\r\n            eta                    = 0.5,\r\n            eta_substep            = 0.5,\r\n            seed             : int = -1,\r\n            etas             : Optional[Tensor] = None,\r\n            etas_substep     : Optional[Tensor] = None,\r\n            options                = None,\r\n            ): \r\n        \r\n        options = options if options is not None else {}\r\n        \r\n        if noise_mode_sde == \"none\":\r\n            noise_mode_sde = \"hard\"\r\n            eta = 0.0\r\n            \r\n        if noise_mode_sde_substep == \"none\":\r\n            noise_mode_sde_substep = \"hard\"\r\n            eta_substep = 0.0\r\n            \r\n        if noise_type_sde == \"none\":\r\n            noise_type_sde = \"gaussian\"\r\n            eta = 0.0\r\n            \r\n        if noise_type_sde_substep == \"none\":\r\n            noise_type_sde_substep = \"gaussian\"\r\n            eta_substep = 0.0\r\n            \r\n        options['noise_type_sde']         = noise_type_sde\r\n        options['noise_type_sde_substep'] = noise_type_sde_substep\r\n        options['noise_mode_sde']         = noise_mode_sde\r\n        options['noise_mode_sde_substep'] = noise_mode_sde_substep\r\n        options['eta']                    = eta\r\n        options['eta_substep']            = eta_substep\r\n        options['noise_seed_sde']         = seed\r\n        \r\n        options['etas']                   = etas\r\n        options['etas_substep']           = etas_substep\r\n\r\n        return (options,)\r\n\r\n\r\n\r\nclass ClownOptions_StepSize_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"overshoot_mode\":         (NOISE_MODE_NAMES, {\"default\": 'hard',                                                        \"tooltip\": \"How step size overshoot scales with the sigma schedule. 
Hard is the most aggressive, the others start strong and drop rapidly.\"}),\r\n                    \"overshoot_mode_substep\": (NOISE_MODE_NAMES, {\"default\": 'hard',                                                        \"tooltip\": \"How substep size overshoot scales with the sigma schedule. Hard is the most aggressive, the others start strong and drop rapidly.\"}),\r\n                    \"overshoot\":              (\"FLOAT\",          {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Boost the size of each denoising step, then rescale to match the original. Has a softening effect.\"}),\r\n                    \"overshoot_substep\":      (\"FLOAT\",          {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Boost the size of each denoising substep, then rescale to match the original. Has a softening effect.\"}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"options\":                (\"OPTIONS\", ),   \r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            overshoot_mode         = \"hard\",\r\n            overshoot_mode_substep = \"hard\",\r\n            overshoot              = 0.0,\r\n            overshoot_substep      = 0.0,\r\n            options                = None,\r\n            ): \r\n        \r\n        options = options if options is not None else {}\r\n            \r\n        options['overshoot_mode']         = overshoot_mode\r\n        options['overshoot_mode_substep'] = overshoot_mode_substep\r\n        options['overshoot']              = overshoot\r\n        options['overshoot_substep']      = overshoot_substep\r\n\r\n        return (options,\r\n            )\r\n\r\n\r\n@dataclass\r\nclass DetailBoostOptions:\r\n    noise_scaling_weight : float = 0.0\r\n    noise_boost_step     : float = 0.0\r\n    noise_boost_substep  : float = 0.0\r\n    noise_anchor         : float = 1.0\r\n    s_noise              : float = 1.0\r\n    s_noise_substep      : float = 1.0\r\n    d_noise              : float = 1.0\r\n\r\nDETAIL_BOOST_METHODS = [\r\n    'sampler',\r\n    'sampler_normal',\r\n    'sampler_substep',\r\n    'sampler_substep_normal',\r\n    'model',\r\n    'model_alpha',\r\n    ]\r\n\r\nclass ClownOptions_DetailBoost_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"weight\":     (\"FLOAT\",              {\"default\": 1.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set to positive values to create a sharper, grittier, more detailed image. Set to negative values to soften and deepen the colors.\"}),\r\n                    \"method\":     (DETAIL_BOOST_METHODS, {\"default\": \"model\",                                                       \"tooltip\": \"Determines whether the sampler or the model underestimates the noise level.\"}),\r\n                    #\"noise_scaling_mode\":    (['linear'] + NOISE_MODE_NAMES,  {\"default\": 'hard',                                          \"tooltip\": \"Changes the steps where the effect is greatest. 
Most affect early steps, sinusoidal affects middle steps.\"}),\r\n                    \"mode\":       (NOISE_MODE_NAMES,     {\"default\": 'hard',                                                        \"tooltip\": \"Changes the steps where the effect is greatest. Most affect early steps, sinusoidal affects middle steps.\"}),\r\n                    \"eta\":        (\"FLOAT\",              {\"default\": 0.5, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"The strength of the effect of the noise_scaling_mode. Linear ignores this parameter.\"}),\r\n                    \"start_step\": (\"INT\",                {\"default\": 3,   \"min\": 0,      \"max\": MAX_STEPS}),\r\n                    \"end_step\":   (\"INT\",                {\"default\": 10,  \"min\": -1,     \"max\": MAX_STEPS}),\r\n\r\n                    #\"noise_scaling_cycles\":  (\"INT\",              {\"default\": 1, \"min\": 1, \"max\": MAX_STEPS}),\r\n\r\n                    #\"noise_boost_step\":      (\"FLOAT\",            {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set to positive values to create a sharper, grittier, more detailed image. Set to negative values to soften and deepen the colors.\"}),\r\n                    #\"noise_boost_substep\":   (\"FLOAT\",            {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set to positive values to create a sharper, grittier, more detailed image. Set to negative values to soften and deepen the colors.\"}),\r\n                    #\"sampler_scaling_normalize\":(\"BOOLEAN\",          {\"default\": False,                                                          \"tooltip\": \"Limit saturation and luminosity drift.\"}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"weights\": (\"SIGMAS\", ),\r\n                    \"etas\":    (\"SIGMAS\", ),\r\n                    \"options\":               (\"OPTIONS\", ),   \r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            weight      : float = 0.0,\r\n            method      : str   = \"sampler\",\r\n            mode        : str   = \"linear\",\r\n            eta         : float = 0.5,\r\n            start_step  : int   = 0,\r\n            end_step    : int   = -1,\r\n\r\n\r\n            noise_scaling_cycles      : int   = 1,\r\n\r\n            noise_boost_step          : float = 0.0,\r\n            noise_boost_substep       : float = 0.0,\r\n            sampler_scaling_normalize : bool  = False,\r\n\r\n            weights     : Optional[Tensor] = None,\r\n            etas        : Optional[Tensor] = None,\r\n            \r\n            options                        = None\r\n            ):\r\n        \r\n        noise_scaling_weight     = weight\r\n        noise_scaling_type       = method\r\n        noise_scaling_mode       = mode\r\n        noise_scaling_eta        = eta\r\n        noise_scaling_start_step = start_step\r\n        noise_scaling_end_step   = end_step\r\n        \r\n        noise_scaling_weights = weights\r\n        noise_scaling_etas    = etas\r\n        \r\n        \r\n        options = options if options is not None else {}\r\n        \r\n        default_dtype = torch.float64\r\n        default_device = torch.device('cuda')\r\n  
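# build per-step schedules: zeros before start_step, the scheduled value through end_step, then zero-padding out to MAX_STEPS (float64, on a hard-coded CUDA device)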
\r\n        if noise_scaling_type.endswith(\"_normal\"):\r\n            sampler_scaling_normalize = True\r\n            noise_scaling_type = noise_scaling_type[:-7]\r\n        \r\n        if noise_scaling_end_step == -1:\r\n            noise_scaling_end_step = MAX_STEPS\r\n        \r\n        if noise_scaling_weights is None: \r\n            noise_scaling_weights = initialize_or_scale(None, noise_scaling_weight, MAX_STEPS).to(default_dtype).to(default_device)\r\n        \r\n        if noise_scaling_etas is None: \r\n            noise_scaling_etas = initialize_or_scale(None, noise_scaling_eta, MAX_STEPS).to(default_dtype).to(default_device)\r\n        \r\n        noise_scaling_prepend = torch.zeros((noise_scaling_start_step,), dtype=default_dtype, device=default_device)\r\n        \r\n        noise_scaling_weights = torch.cat((noise_scaling_prepend, noise_scaling_weights), dim=0)\r\n        noise_scaling_etas    = torch.cat((noise_scaling_prepend, noise_scaling_etas),    dim=0)\r\n\r\n        if noise_scaling_weights.shape[-1] > noise_scaling_end_step:\r\n            noise_scaling_weights = noise_scaling_weights[:noise_scaling_end_step]\r\n            \r\n        if noise_scaling_etas.shape[-1] > noise_scaling_end_step:\r\n            noise_scaling_etas = noise_scaling_etas[:noise_scaling_end_step]\r\n        \r\n        noise_scaling_weights = F.pad(noise_scaling_weights, (0, MAX_STEPS), value=0.0)\r\n        noise_scaling_etas    = F.pad(noise_scaling_etas,    (0, MAX_STEPS), value=0.0)\r\n        \r\n        options['noise_scaling_weight']  = noise_scaling_weight\r\n        options['noise_scaling_type']    = noise_scaling_type\r\n        options['noise_scaling_mode']    = noise_scaling_mode\r\n        options['noise_scaling_eta']     = noise_scaling_eta\r\n        options['noise_scaling_cycles']  = noise_scaling_cycles\r\n        \r\n        options['noise_scaling_weights'] = noise_scaling_weights\r\n        options['noise_scaling_etas']    = noise_scaling_etas\r\n        \r\n        options['noise_boost_step']      = noise_boost_step\r\n        options['noise_boost_substep']   = noise_boost_substep\r\n        options['noise_boost_normalize'] = sampler_scaling_normalize\r\n\r\n        \"\"\"options['DetailBoostOptions'] = DetailBoostOptions(\r\n            noise_scaling_weight  = noise_scaling_weight,\r\n            noise_scaling_type    = noise_scaling_type,\r\n            noise_scaling_mode    = noise_scaling_mode,\r\n            noise_scaling_eta     = noise_scaling_eta,\r\n            \r\n            noise_boost_step      = noise_boost_step,\r\n            noise_boost_substep   = noise_boost_substep,\r\n            noise_boost_normalize = noise_boost_normalize,\r\n            \r\n            noise_anchor          = noise_anchor,\r\n            s_noise               = s_noise,\r\n            s_noise_substep       = s_noise_substep,\r\n            d_noise               = d_noise,\r\n            d_noise_start_step    = d_noise_start_step\r\n        )\"\"\"\r\n\r\n        return (options,)\r\n\r\n\r\n\r\n\r\nclass ClownOptions_SigmaScaling_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"s_noise\":              (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01,                 \"tooltip\": \"Adds extra SDE noise. 
Values around 1.03-1.07 can lead to a moderate boost in detail and paint textures.\"}),\r\n                    \"s_noise_substep\":      (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01,                 \"tooltip\": \"Adds extra SDE noise. Values around 1.03-1.07 can lead to a moderate boost in detail and paint textures.\"}),\r\n                    \"noise_anchor_sde\":     (\"FLOAT\", {\"default\": 1.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Typically set to between 1.0 and 0.0. Lower values create a grittier, more detailed image.\"}),\r\n                    \r\n                    \"lying\":                (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01,                 \"tooltip\": \"Downscales the sigma schedule. Values around 0.98-0.95 can lead to a large boost in detail and paint textures.\"}),\r\n                    \"lying_inv\":            (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01,                 \"tooltip\": \"Upscales the sigma schedule. Will soften the image and deepen colors. Use after d_noise to counteract desaturation.\"}),\r\n                    \"lying_start_step\":     (\"INT\",   {\"default\": 0, \"min\": 0, \"max\": MAX_STEPS}),\r\n                    \"lying_inv_start_step\": (\"INT\",   {\"default\": 1, \"min\": 0, \"max\": MAX_STEPS}),\r\n\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"s_noises\":             (\"SIGMAS\", ),\r\n                    \"s_noises_substep\":     (\"SIGMAS\", ),\r\n                    \"options\":              (\"OPTIONS\", ),\r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            noise_anchor_sde        : float = 1.0,\r\n            \r\n            s_noise                 : float = 1.0,\r\n            s_noise_substep         : float = 1.0,\r\n            lying                   : float = 1.0,\r\n            lying_start_step        : int   = 0,\r\n            \r\n            lying_inv               : float = 1.0,\r\n            lying_inv_start_step    : int   = 1,\r\n            \r\n            s_noises                : Optional[Tensor] = None,\r\n            s_noises_substep        : Optional[Tensor] = None,\r\n            options                         = None\r\n            ):\r\n        \r\n        options = options if options is not None else {}\r\n        \r\n        default_dtype = torch.float64\r\n        default_device = torch.device('cuda')\r\n        \r\n        \r\n        \r\n        options['noise_anchor']           = noise_anchor_sde\r\n        options['s_noise']                = s_noise\r\n        options['s_noise_substep']        = s_noise_substep\r\n        options['d_noise']                = lying\r\n        options['d_noise_start_step']     = lying_start_step\r\n        options['d_noise_inv']            = lying_inv\r\n        options['d_noise_inv_start_step'] = lying_inv_start_step\r\n\r\n        options['s_noises']                = s_noises\r\n        options['s_noises_substep']        = s_noises_substep\r\n\r\n        return (options,)\r\n\r\n\r\n\r\n\r\nclass ClownOptions_Momentum_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"momentum\": 
(\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Accelerate convergence with positive values when sampling, negative values when unsampling.\"}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"options\":               (\"OPTIONS\", ),   \r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            momentum = 0.0,\r\n            options  = None\r\n            ):\r\n        \r\n        options = options if options is not None else {}\r\n            \r\n        options['momentum'] = momentum\r\n\r\n        return (options,)\r\n\r\n\r\n\r\nclass ClownOptions_ImplicitSteps_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"implicit_type\":          (IMPLICIT_TYPE_NAMES, {\"default\": \"bongmath\"}), \r\n                    \"implicit_type_substeps\": (IMPLICIT_TYPE_NAMES, {\"default\": \"bongmath\"}), \r\n                    \"implicit_steps\":         (\"INT\",               {\"default\": 0, \"min\": 0, \"max\": 10000}),\r\n                    \"implicit_substeps\":      (\"INT\",               {\"default\": 0, \"min\": 0, \"max\": 10000}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"options\":                (\"OPTIONS\", ),   \r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            implicit_type          = \"bongmath\",\r\n            implicit_type_substeps = \"bongmath\",\r\n            implicit_steps         = 0,\r\n            implicit_substeps      = 0,\r\n            options                = None\r\n            ): \r\n        \r\n        options = options if options is not None else {}\r\n            \r\n        options['implicit_type']          = implicit_type\r\n        options['implicit_type_substeps'] = implicit_type_substeps\r\n        options['implicit_steps']         = implicit_steps\r\n        options['implicit_substeps']      = implicit_substeps\r\n\r\n        return (options,)\r\n\r\n\r\n\r\nclass ClownOptions_Cycles_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"cycles\"          : (\"FLOAT\", {\"default\": 0.0, \"min\":  0.0,   \"max\": 10000, \"step\":0.5,  \"round\": 0.5}),\r\n                    \"eta_decay_scale\" : (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01, \"tooltip\": \"Multiplies etas by this number after every cycle. 
May help drive convergence.\" }),\r\n                    \"unsample_eta\"    : (\"FLOAT\", {\"default\": 0.5, \"min\": -10000, \"max\": 10000, \"step\":0.01}),\r\n                    \"unsampler_override\"  : (get_sampler_name_list(), {\"default\": \"none\"}), \r\n                    \"unsample_cfg\"    : (\"FLOAT\", {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"options\":    (\"OPTIONS\", ),   \r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            cycles          = 0,\r\n            unsample_eta    = 0.5,\r\n            eta_decay_scale = 1.0,\r\n            unsample_cfg    = 1.0,\r\n            unsampler_override  = \"none\",\r\n            options         = None\r\n            ): \r\n        \r\n        options = options if options is not None else {}\r\n            \r\n        options['rebounds']        = int(cycles * 2)\r\n        options['unsample_eta']    = unsample_eta\r\n        options['unsampler_name']  = unsampler_override\r\n        options['eta_decay_scale'] = eta_decay_scale\r\n        options['unsample_cfg']    = unsample_cfg\r\n\r\n        return (options,)\r\n\r\n\r\n\r\n\r\nclass SharkOptions_StartStep_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"start_at_step\": (\"INT\", {\"default\": 0, \"min\": -1, \"max\": 10000, \"step\":1,}),\r\n\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"options\":    (\"OPTIONS\", ),   \r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            start_at_step = 0,\r\n            options       = None\r\n            ): \r\n        \r\n        options = options if options is not None else {}\r\n            \r\n        options['start_at_step'] = start_at_step\r\n\r\n        return (options,)\r\n\r\n\r\n\r\n\r\nclass ClownOptions_Tile_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"tile_width\" : (\"INT\", {\"default\": 1024, \"min\": -1, \"max\": 10000, \"step\":1,}),\r\n                    \"tile_height\": (\"INT\", {\"default\": 1024, \"min\": -1, \"max\": 10000, \"step\":1,}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"options\":    (\"OPTIONS\", ),   \r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            tile_height = 1024,\r\n            tile_width  = 1024,\r\n            options     = None\r\n            ): \r\n        \r\n        options = options if options is not None else {}\r\n        \r\n        tile_sizes = options.get('tile_sizes', [])\r\n        tile_sizes.append((tile_height, tile_width))\r\n        options['tile_sizes'] = tile_sizes\r\n\r\n        return (options,)\r\n\r\n\r\n\r\nclass ClownOptions_Tile_Advanced_Beta:\r\n 
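  # Advanced variant of the tile option: parse_tile_sizes reads one size pair per line from the multiline string.\r\n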
    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"tile_sizes\": (\"STRING\", {\"default\": \"1024,1024\", \"multiline\": True}),   \r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"options\":    (\"OPTIONS\", ),   \r\n                    }\r\n                }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            tile_sizes = \"1024,1024\",\r\n            options    = None\r\n            ): \r\n        \r\n        options = options if options is not None else {}\r\n            \r\n        tiles_height_width = parse_tile_sizes(tile_sizes)\r\n        options['tile_sizes'] = [(tile[-1], tile[-2]) for tile in tiles_height_width]  # swap height and width to be consistent... width, height\r\n\r\n        return (options,)\r\n\r\n\r\n\r\n\r\nclass ClownOptions_ExtraOptions_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"extra_options\": (\"STRING\", {\"default\": \"\", \"multiline\": True}),   \r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"options\": (\"OPTIONS\", ),   \r\n                    }  \r\n            }\r\n        \r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n\r\n    def main(self,\r\n            extra_options = \"\",\r\n            options       = None\r\n            ):\r\n\r\n        options = options if options is not None else {}\r\n            \r\n        options['extra_options'] = extra_options\r\n\r\n        return (options, )\r\n\r\n\r\n\r\nclass ClownOptions_Automation_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\": {},\r\n                \"optional\": {\r\n                    \"etas\":             (\"SIGMAS\", ),\r\n                    \"etas_substep\":     (\"SIGMAS\", ),\r\n                    \"s_noises\":         (\"SIGMAS\", ),\r\n                    \"s_noises_substep\": (\"SIGMAS\", ),\r\n                    \"epsilon_scales\":   (\"SIGMAS\", ),\r\n                    \"frame_weights\":    (\"SIGMAS\", ),\r\n                    \"options\":          (\"OPTIONS\",),  \r\n                    }  \r\n                }\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n\r\n    def main(self,\r\n            etas             = None,\r\n            etas_substep     = None,\r\n            s_noises         = None,\r\n            s_noises_substep = None,\r\n            epsilon_scales   = None,\r\n            frame_weights    = None,\r\n            options          = None\r\n            ):\r\n                \r\n        options = options if options is not None else {}\r\n            \r\n        frame_weights_mgr = (frame_weights, frame_weights)\r\n\r\n        automation = {\r\n            \"etas\"              : etas,\r\n            \"etas_substep\"      : etas_substep,\r\n            \"s_noises\"          : s_noises,\r\n            \"s_noises_substep\"  : s_noises_substep,\r\n            \"epsilon_scales\"    : epsilon_scales,\r\n            \"frame_weights_mgr\" : frame_weights_mgr,\r\n        }\r\n      
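# bundle the optional per-step automation tensors into a single dict entry; unconnected inputs stay None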
  \r\n        options[\"automation\"] = automation\r\n\r\n        return (options, )\r\n\r\n\r\n\r\n\r\n\r\nclass SharkOptions_GuideCond_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\": {},\r\n                \"optional\": {\r\n                    \"positive\" : (\"CONDITIONING\", ),\r\n                    \"negative\" : (\"CONDITIONING\", ),\r\n                    \"cfg\"      : (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01}),\r\n                    \"options\"  : (\"OPTIONS\",),  \r\n                    }  \r\n                }\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n\r\n    def main(self,\r\n            positive = None,\r\n            negative = None,\r\n            cfg      = 1.0,\r\n            options  = None,\r\n            ):\r\n                \r\n        options = options if options is not None else {}\r\n\r\n        flow_cond = {\r\n            \"yt_positive\" : positive,\r\n            \"yt_negative\" : negative,\r\n            \"yt_cfg\"      : cfg,\r\n        }\r\n        \r\n        options[\"flow_cond\"] = flow_cond\r\n\r\n        return (options, )\r\n\r\n\r\n\r\n\r\nclass SharkOptions_GuideConds_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\": {},\r\n                \"optional\": {\r\n                    \"positive_masked\"   : (\"CONDITIONING\", ),\r\n                    \"positive_unmasked\" : (\"CONDITIONING\", ),\r\n                    \"negative_masked\"   : (\"CONDITIONING\", ),\r\n                    \"negative_unmasked\" : (\"CONDITIONING\", ),\r\n                    \"cfg_masked\"        : (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01}),\r\n                    \"cfg_unmasked\"      : (\"FLOAT\",   {\"default\": 1.0, \"min\": -10000, \"max\": 10000, \"step\":0.01}),\r\n                    \"options\"           : (\"OPTIONS\",),  \r\n                    }  \r\n                }\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n\r\n    def main(self,\r\n            positive_masked   = None,\r\n            negative_masked   = None,\r\n            cfg_masked        = 1.0,\r\n            positive_unmasked = None,\r\n            negative_unmasked = None,\r\n            cfg_unmasked      = 1.0,\r\n            options  = None,\r\n            ):\r\n                \r\n        options = options if options is not None else {}\r\n\r\n        flow_cond = {\r\n            \"yt_positive\"     : positive_masked,\r\n            \"yt_negative\"     : negative_masked,\r\n            \"yt_cfg\"          : cfg_masked,\r\n            \"yt_inv_positive\" : positive_unmasked,\r\n            \"yt_inv_negative\" : negative_unmasked,\r\n            \"yt_inv_cfg\"      : cfg_unmasked,\r\n        }\r\n        \r\n        options[\"flow_cond\"] = flow_cond\r\n\r\n        return (options, )\r\n\r\n\r\n\r\n\r\n\r\nclass SharkOptions_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": {\r\n                \"noise_type_init\": (NOISE_GENERATOR_NAMES_SIMPLE, {\"default\": \"gaussian\"}),\r\n                \"s_noise_init\":    (\"FLOAT\",                      {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\":0.01, \"round\": False, }),\r\n                
\"denoise_alt\":     (\"FLOAT\",                      {\"default\": 1.0, \"min\": -10000,   \"max\": 10000,   \"step\":0.01}),\r\n                \"channelwise_cfg\": (\"BOOLEAN\",                    {\"default\": False}),\r\n                },\r\n            \"optional\": {\r\n                \"options\":         (\"OPTIONS\", ),   \r\n                }\r\n            }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            noise_type_init = \"gaussian\",\r\n            s_noise_init    = 1.0,\r\n            denoise_alt     = 1.0,\r\n            channelwise_cfg = False,\r\n            options         = None\r\n            ): \r\n        \r\n        options = options if options is not None else {}\r\n            \r\n        options['noise_type_init']  = noise_type_init\r\n        options['noise_init_stdev'] = s_noise_init\r\n        options['denoise_alt']      = denoise_alt\r\n        options['channelwise_cfg']  = channelwise_cfg\r\n\r\n        return (options,)\r\n    \r\n\r\n\r\n\r\nclass SharkOptions_UltraCascade_Latent_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": {\r\n                \"width\":   (\"INT\", {\"default\": 60, \"min\": 1, \"max\": MAX_RESOLUTION, \"step\": 1}),\r\n                \"height\":  (\"INT\", {\"default\": 36, \"min\": 1, \"max\": MAX_RESOLUTION, \"step\": 1}),\r\n                },\r\n            \"optional\": {\r\n                \"options\": (\"OPTIONS\",),   \r\n                }\r\n            }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            width  : int = 60,\r\n            height : int = 36,\r\n            options       = None,\r\n            ): \r\n        \r\n        options = options if options is not None else {}\r\n            \r\n        options['ultracascade_latent_width']  = width\r\n        options['ultracascade_latent_height'] = height\r\n\r\n        return (options,)\r\n\r\n\r\n\r\n\r\nclass ClownOptions_SwapSampler_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": {\r\n                \"sampler_name\":       (get_sampler_name_list(), {\"default\": get_default_sampler_name()}), \r\n                \"swap_below_err\":     (\"FLOAT\",                 {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Swap samplers if the error per step falls below this threshold.\"}),\r\n                \"swap_at_step\":       (\"INT\",                   {\"default\": 30,  \"min\": 1,      \"max\": 10000}),\r\n                \"log_err_to_console\": (\"BOOLEAN\",               {\"default\": False}),\r\n                },\r\n            \"optional\": {\r\n                \"options\":            (\"OPTIONS\", ),   \r\n                }\r\n            }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            sampler_name       = \"res_3m\",\r\n            swap_below_err     = 0.0,\r\n            swap_at_step       = 30,\r\n            log_err_to_console = False,\r\n            options            = None,\r\n            ): \r\n        \r\n        sampler_name, 
implicit_sampler_name = process_sampler_name(sampler_name)\r\n        \r\n        sampler_name = sampler_name if implicit_sampler_name == \"use_explicit\" else implicit_sampler_name\r\n                \r\n        options = options if options is not None else {}\r\n            \r\n        options['rk_swap_type']      = sampler_name\r\n        options['rk_swap_threshold'] = swap_below_err\r\n        options['rk_swap_step']      = swap_at_step\r\n        options['rk_swap_print']     = log_err_to_console\r\n\r\n        return (options,)\r\n    \r\n    \r\n\r\n    \r\n    \r\nclass ClownOptions_SDE_Mask_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": {\r\n                \"max\":               (\"FLOAT\",                                     {\"default\": 1.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Clamp the max value for the mask.\"}),\r\n                \"min\":               (\"FLOAT\",                                     {\"default\": 0.0, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Clamp the min value for the mask.\"}),\r\n                \"invert_mask\":       (\"BOOLEAN\",                                   {\"default\": False}),\r\n                },\r\n            \"optional\": {\r\n                \"mask\":              (\"MASK\", ),\r\n                \"options\":           (\"OPTIONS\", ),   \r\n                }\r\n            }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n    \r\n    def main(self,\r\n            max = 1.0,\r\n            min = 0.0,\r\n            invert_mask = False,\r\n            mask     = None,\r\n            options      = None,\r\n            ): \r\n        \r\n        options = copy.deepcopy(options) if options is not None else {}\r\n        \r\n        if mask is not None:\r\n            if invert_mask:\r\n                mask = 1-mask\r\n            # rescale the mask into the [min, max] range\r\n            mask = ((mask - mask.min()) * (max - min)) / (mask.max() - mask.min()) + min\r\n        \r\n        options['sde_mask'] = mask\r\n\r\n        return (options,)\r\n\r\n\r\n\r\n\r\nclass ClownGuide_Mean_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"weight\":               (\"FLOAT\",                                     {\"default\": 0.75, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide.\"}),\r\n                    \"cutoff\":               (\"FLOAT\",                                     {\"default\": 1.0,  \"min\":  0.0,    \"max\": 1.0,   \"step\":0.01, \"round\": False, \"tooltip\": \"Disables the guide for the next step when the denoised image is similar to the guide. 
Higher values will strengthen the effect.\"}),\r\n                    \"weight_scheduler\":     ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"beta57\"},),\r\n                    \"start_step\":           (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"end_step\":             (\"INT\",                                       {\"default\": 15,   \"min\": -1,      \"max\": 10000}),\r\n                    \"invert_mask\":          (\"BOOLEAN\",                                   {\"default\": False}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"guide\":                (\"LATENT\", ),\r\n                    \"mask\":                 (\"MASK\", ),\r\n                    \"weights\":              (\"SIGMAS\", ),\r\n                    \"guides\":               (\"GUIDES\", ),\r\n                    }  \r\n                }\r\n        \r\n    RETURN_TYPES = (\"GUIDES\",)\r\n    RETURN_NAMES = (\"guides\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_extensions\"\r\n\r\n    def main(self,\r\n            weight_scheduler          = \"constant\",\r\n            start_step                = 0,\r\n            end_step                  = 30,\r\n            cutoff                    = 1.0,\r\n            guide                     = None,\r\n            weight                    = 0.0,\r\n\r\n            channelwise_mode          = False,\r\n            projection_mode           = False,\r\n            weights                   = None,\r\n            mask                      = None,\r\n            invert_mask               = False,\r\n            \r\n            guides                    = None,\r\n            ):\r\n        \r\n        default_dtype = torch.float64\r\n        \r\n        mask = 1-mask if mask is not None else None\r\n        \r\n        if end_step == -1:\r\n            end_step = MAX_STEPS\r\n        \r\n        if guide is not None:\r\n            raw_x = guide.get('state_info', {}).get('raw_x', None)\r\n            if raw_x is not None:\r\n                guide          = {'samples': guide['state_info']['raw_x'].clone()}\r\n            else:\r\n                guide          = {'samples': guide['samples'].clone()}\r\n        \r\n        if weight_scheduler == \"constant\": # and weights == None: \r\n            weights = initialize_or_scale(None, weight, end_step).to(default_dtype)\r\n            weights = F.pad(weights, (0, MAX_STEPS), value=0.0)\r\n            \r\n        guides = copy.deepcopy(guides) if guides is not None else {}\r\n        \r\n        guides['weight_mean']           = weight\r\n        guides['weights_mean']          = weights\r\n        guides['guide_mean']            = guide\r\n        guides['mask_mean']             = mask\r\n        \r\n        guides['weight_scheduler_mean'] = weight_scheduler\r\n        guides['start_step_mean']       = start_step\r\n        guides['end_step_mean']         = end_step\r\n        \r\n        guides['cutoff_mean']           = cutoff\r\n        \r\n        return (guides, )\r\n\r\n\r\n\r\n\r\n\r\n\r\nclass ClownGuide_Style_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"apply_to\":         ([\"positive\", \"negative\"],                    {\"default\": \"positive\", \"tooltip\": \"When using CFG, decides whether to apply the guide to the positive or 
negative conditioning.\"}),\r\n                    \"method\":           ([\"AdaIN\", \"WCT\"],                            {\"default\": \"WCT\"}),\r\n                    \"weight\":           (\"FLOAT\",                                     {\"default\": 1.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide by multiplying all other weights by this value.\"}),\r\n                    \"synweight\":        (\"FLOAT\",                                     {\"default\": 1.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the relative strength of the guide on the opposite conditioning to what was selected: i.e., negative if positive in apply_to. Recommended to avoid CFG burn.\"}),\r\n                    \"weight_scheduler\": ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"constant\", \"tooltip\": \"Selecting any scheduler except constant will cause the strength to gradually decay to zero. Try beta57 vs. linear quadratic.\"},),\r\n                    \"start_step\":       (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"end_step\":         (\"INT\",                                       {\"default\": -1,   \"min\": -1,      \"max\": 10000}),\r\n                    \"invert_mask\":      (\"BOOLEAN\",                                   {\"default\": False}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"guide\":            (\"LATENT\", ),\r\n                    \"mask\":             (\"MASK\", ),\r\n                    \"weights\":          (\"SIGMAS\", ),\r\n                    \"guides\":           (\"GUIDES\", ),\r\n                    }  \r\n                }\r\n    \r\n    RETURN_TYPES = (\"GUIDES\",)\r\n    RETURN_NAMES = (\"guides\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_extensions\"\r\n    DESCRIPTION  = \"Transfer some visual aspects of style from a guide (reference) image. 
If nothing about style is specified in the prompt, it may just transfer the lighting and color scheme. \" + \\\r\n                \"If using CFG results in burn, or a very dark/bright image in the preview followed by a bad output, try duplicating and chaining this node, so that the guide may be applied to both positive and negative conditioning. \" + \\\r\n                \"Currently supported models: SD1.5, SDXL, Stable Cascade, SD3.5, AuraFlow, Flux, HiDream, WAN, and LTXV.\"\r\n\r\n    def main(self,\r\n            apply_to         = \"all\",\r\n            method           = \"WCT\",\r\n            weight           = 1.0,\r\n            synweight        = 1.0,\r\n            weight_scheduler = \"constant\",\r\n            start_step       = 0,\r\n            end_step         = 15,\r\n            invert_mask      = False,\r\n            \r\n            guide            = None,\r\n            mask             = None,\r\n            weights          = None,\r\n            guides           = None,\r\n            ):\r\n        \r\n        default_dtype = torch.float64\r\n        \r\n        mask = 1-mask if mask is not None else None\r\n        \r\n        if end_step == -1:\r\n            end_step = MAX_STEPS\r\n        \r\n        if guide is not None:\r\n            raw_x = guide.get('state_info', {}).get('raw_x', None)\r\n            if raw_x is not None:\r\n                guide          = {'samples': guide['state_info']['raw_x'].clone()}\r\n            else:\r\n                guide          = {'samples': guide['samples'].clone()}\r\n        \r\n        if weight_scheduler == \"constant\": # and weights == None: \r\n            weights = initialize_or_scale(None, weight, end_step).to(default_dtype)\r\n            prepend = torch.zeros(start_step).to(weights)\r\n            weights = torch.cat([prepend, weights])\r\n            weights = F.pad(weights, (0, MAX_STEPS), value=0.0)\r\n            \r\n        guides = copy.deepcopy(guides) if guides is not None else {}\r\n        \r\n        guides['style_method'] = method\r\n        \r\n        if apply_to in {\"positive\", \"all\"}:\r\n        \r\n            guides['weight_style_pos']           = weight\r\n            guides['weights_style_pos']          = weights\r\n\r\n            guides['synweight_style_pos']        = synweight\r\n\r\n            guides['guide_style_pos']            = guide\r\n            guides['mask_style_pos']             = mask\r\n\r\n            guides['weight_scheduler_style_pos'] = weight_scheduler\r\n            guides['start_step_style_pos']       = start_step\r\n            guides['end_step_style_pos']         = end_step\r\n            \r\n        if apply_to in {\"negative\", \"all\"}:\r\n            guides['weight_style_neg']           = weight\r\n            guides['weights_style_neg']          = weights\r\n\r\n            guides['synweight_style_neg']        = synweight\r\n\r\n            guides['guide_style_neg']            = guide\r\n            guides['mask_style_neg']             = mask\r\n\r\n            guides['weight_scheduler_style_neg'] = weight_scheduler\r\n            guides['start_step_style_neg']       = start_step\r\n            guides['end_step_style_neg']         = end_step\r\n        \r\n        return (guides, )\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nclass ClownGuide_AdaIN_MMDiT_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"weight\":           (\"FLOAT\",                                     
{\"default\": 1.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide by multiplying all other weights by this value.\"}),\r\n                    \"weight_scheduler\": ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"constant\"},),\r\n                    \"double_blocks\"   : (\"STRING\",                                    {\"default\": \"\", \"multiline\": True}),\r\n                    \"double_weights\"  : (\"STRING\",                                    {\"default\": \"\", \"multiline\": True}),\r\n                    \"single_blocks\"   : (\"STRING\",                                    {\"default\": \"20\", \"multiline\": True}),\r\n                    \"single_weights\"  : (\"STRING\",                                    {\"default\": \"0.5\", \"multiline\": True}),\r\n                    \"start_step\":       (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"end_step\":         (\"INT\",                                       {\"default\": 15,   \"min\": -1,      \"max\": 10000}),\r\n                    \"invert_mask\":      (\"BOOLEAN\",                                   {\"default\": False}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"guide\":            (\"LATENT\", ),\r\n                    \"mask\":             (\"MASK\", ),\r\n                    \"weights\":          (\"SIGMAS\", ),\r\n                    \"guides\":           (\"GUIDES\", ),\r\n                    }  \r\n                }\r\n    \r\n    RETURN_TYPES = (\"GUIDES\",)\r\n    RETURN_NAMES = (\"guides\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_extensions\"\r\n\r\n    def main(self,\r\n            weight           = 1.0,\r\n            weight_scheduler = \"constant\",\r\n            double_weights   = \"0.1\",\r\n            single_weights   = \"0.0\", \r\n            double_blocks    = \"all\",\r\n            single_blocks    = \"all\", \r\n            start_step       = 0,\r\n            end_step         = 15,\r\n            invert_mask      = False,\r\n            \r\n            guide            = None,\r\n            mask             = None,\r\n            weights          = None,\r\n            guides           = None,\r\n            ):\r\n        \r\n        default_dtype = torch.float64\r\n        \r\n        mask = 1-mask if mask is not None else None\r\n        \r\n        double_weights = parse_range_string(double_weights)\r\n        single_weights = parse_range_string(single_weights)\r\n        \r\n        if len(double_weights) == 0:\r\n            double_weights.append(0.0)\r\n        if len(single_weights) == 0:\r\n            single_weights.append(0.0)\r\n            \r\n        if len(double_weights) == 1:\r\n            double_weights = double_weights * 100\r\n        if len(single_weights) == 1:\r\n            single_weights = single_weights * 100\r\n            \r\n        if type(double_weights[0]) == int:\r\n            double_weights = [float(val) for val in double_weights]\r\n        if type(single_weights[0]) == int:\r\n            single_weights = [float(val) for val in single_weights]\r\n        \r\n        if double_blocks == \"all\":\r\n            double_blocks  = [val for val in range(100)]\r\n            if len(double_weights) == 1:\r\n                double_weights = [double_weights[0]] * 100\r\n        else:\r\n            
double_blocks  = parse_range_string(double_blocks)\r\n            \r\n            weights_expanded = [0.0] * 100\r\n            for b, w in zip(double_blocks, double_weights):\r\n                weights_expanded[b] = w\r\n            double_weights = weights_expanded\r\n            \r\n        \r\n        if single_blocks == \"all\":\r\n            single_blocks = [val for val in range(100)]\r\n            if len(single_weights) == 1:\r\n                single_weights = [single_weights[0]] * 100\r\n        else:\r\n            single_blocks  = parse_range_string(single_blocks)\r\n            \r\n            weights_expanded = [0.0] * 100\r\n            for b, w in zip(single_blocks, single_weights):\r\n                weights_expanded[b] = w\r\n            single_weights = weights_expanded\r\n        \r\n        \r\n        \r\n        if end_step == -1:\r\n            end_step = MAX_STEPS\r\n        \r\n        if guide is not None:\r\n            raw_x = guide.get('state_info', {}).get('raw_x', None)\r\n            if raw_x is not None:\r\n                guide          = {'samples': guide['state_info']['raw_x'].clone()}\r\n            else:\r\n                guide          = {'samples': guide['samples'].clone()}\r\n        \r\n        if weight_scheduler == \"constant\": # and weights == None: \r\n            weights = initialize_or_scale(None, weight, end_step).to(default_dtype)\r\n            prepend = torch.zeros(start_step).to(weights)\r\n            weights = torch.cat([prepend, weights])\r\n            weights = F.pad(weights, (0, MAX_STEPS), value=0.0)\r\n        \r\n        guides = copy.deepcopy(guides) if guides is not None else {}\r\n        \r\n        guides['weight_adain']           = weight\r\n        guides['weights_adain']          = weights\r\n        \r\n        guides['blocks_adain_mmdit'] = {\r\n            \"double_weights\": double_weights,\r\n            \"single_weights\": single_weights,\r\n            \"double_blocks\" : double_blocks,\r\n            \"single_blocks\" : single_blocks,\r\n        }\r\n        \r\n        guides['guide_adain']            = guide\r\n        guides['mask_adain']             = mask\r\n\r\n        guides['weight_scheduler_adain'] = weight_scheduler\r\n        guides['start_step_adain']       = start_step\r\n        guides['end_step_adain']         = end_step\r\n        \r\n        return (guides, )\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nclass ClownGuide_AttnInj_MMDiT_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"weight\":           (\"FLOAT\",                                     {\"default\": 1.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide by multiplying all other weights by this value.\"}),\r\n                    \"weight_scheduler\": ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"constant\"},),\r\n                    \"double_blocks\"   : (\"STRING\",                                    {\"default\": \"0,1,3\", \"multiline\": True}),\r\n                    \"double_weights\"  : (\"STRING\",                                    {\"default\": \"1.0\", \"multiline\": True}),\r\n                    \"single_blocks\"   : (\"STRING\",                                    {\"default\": \"20\", \"multiline\": True}),\r\n                    \"single_weights\"  : (\"STRING\",                                    {\"default\": \"0.5\", \"multiline\": True}),\r\n                    
\r\n                    \"img_q\":            (\"FLOAT\",                                     {\"default\": 0.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set relative injection strength.\"}),\r\n                    \"img_k\":            (\"FLOAT\",                                     {\"default\": 0.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set relative injection strength.\"}),\r\n                    \"img_v\":            (\"FLOAT\",                                     {\"default\": 1.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set relative injection strength.\"}),\r\n\r\n                    \"txt_q\":            (\"FLOAT\",                                     {\"default\": 0.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set relative injection strength.\"}),\r\n                    \"txt_k\":            (\"FLOAT\",                                     {\"default\": 0.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set relative injection strength.\"}),\r\n                    \"txt_v\":            (\"FLOAT\",                                     {\"default\": 0.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set relative injection strength.\"}),\r\n\r\n                    \"img_q_norm\":       (\"FLOAT\",                                     {\"default\": 0.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set relative injection strength.\"}),\r\n                    \"img_k_norm\":       (\"FLOAT\",                                     {\"default\": 0.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set relative injection strength.\"}),\r\n                    \"img_v_norm\":       (\"FLOAT\",                                     {\"default\": 0.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set relative injection strength.\"}),\r\n\r\n                    \"txt_q_norm\":       (\"FLOAT\",                                     {\"default\": 0.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set relative injection strength.\"}),\r\n                    \"txt_k_norm\":       (\"FLOAT\",                                     {\"default\": 0.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set relative injection strength.\"}),\r\n                    \"txt_v_norm\":       (\"FLOAT\",                                     {\"default\": 0.0, \"min\":  -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set relative injection strength.\"}),\r\n\r\n                    \"start_step\":       (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"end_step\":         (\"INT\",                                       {\"default\": 15,   \"min\": -1,      \"max\": 10000}),\r\n                    \"invert_mask\":      (\"BOOLEAN\",                                   {\"default\": False}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"guide\":            (\"LATENT\", ),\r\n                    \"mask\":             (\"MASK\", ),\r\n                    \"weights\":          (\"SIGMAS\", ),\r\n                    \"guides\":           (\"GUIDES\", ),\r\n          
          }  \r\n                }\r\n    \r\n    RETURN_TYPES = (\"GUIDES\",)\r\n    RETURN_NAMES = (\"guides\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_extensions\"\r\n\r\n    def main(self,\r\n            weight           = 1.0,\r\n            weight_scheduler = \"constant\",\r\n            double_weights   = \"0.1\",\r\n            single_weights   = \"0.0\", \r\n            double_blocks    = \"all\",\r\n            single_blocks    = \"all\", \r\n            \r\n            img_q            = 0.0,\r\n            img_k            = 0.0,\r\n            img_v            = 0.0,\r\n            \r\n            txt_q            = 0.0,\r\n            txt_k            = 0.0,\r\n            txt_v            = 0.0,\r\n            \r\n            img_q_norm       = 0.0,\r\n            img_k_norm       = 0.0,\r\n            img_v_norm       = 0.0,\r\n            \r\n            txt_q_norm       = 0.0,\r\n            txt_k_norm       = 0.0,\r\n            txt_v_norm       = 0.0,\r\n            \r\n            start_step       = 0,\r\n            end_step         = 15,\r\n            invert_mask      = False,\r\n            \r\n            guide            = None,\r\n            mask             = None,\r\n            weights          = None,\r\n            guides           = None,\r\n            ):\r\n        \r\n        default_dtype = torch.float64\r\n        \r\n        mask = 1-mask if mask is not None else None\r\n        \r\n        double_weights = parse_range_string(double_weights)\r\n        single_weights = parse_range_string(single_weights)\r\n        \r\n        if len(double_weights) == 0:\r\n            double_weights.append(0.0)\r\n        if len(single_weights) == 0:\r\n            single_weights.append(0.0)\r\n        \r\n        if len(double_weights) == 1:\r\n            double_weights = double_weights * 100\r\n        if len(single_weights) == 1:\r\n            single_weights = single_weights * 100\r\n        \r\n        if type(double_weights[0]) == int:\r\n            double_weights = [float(val) for val in double_weights]\r\n        if type(single_weights[0]) == int:\r\n            single_weights = [float(val) for val in single_weights]\r\n        \r\n        if double_blocks == \"all\":\r\n            double_blocks  = [val for val in range(100)]\r\n            if len(double_weights) == 1:\r\n                double_weights = [double_weights[0]] * 100\r\n        else:\r\n            double_blocks  = parse_range_string(double_blocks)\r\n            \r\n            weights_expanded = [0.0] * 100\r\n            for b, w in zip(double_blocks, double_weights):\r\n                weights_expanded[b] = w\r\n            double_weights = weights_expanded\r\n            \r\n        \r\n        if single_blocks == \"all\":\r\n            single_blocks = [val for val in range(100)]\r\n            if len(single_weights) == 1:\r\n                single_weights = [single_weights[0]] * 100\r\n        else:\r\n            single_blocks  = parse_range_string(single_blocks)\r\n            \r\n            weights_expanded = [0.0] * 100\r\n            for b, w in zip(single_blocks, single_weights):\r\n                weights_expanded[b] = w\r\n            single_weights = weights_expanded\r\n        \r\n        \r\n        \r\n        if end_step == -1:\r\n            end_step = MAX_STEPS\r\n        \r\n        if guide is not None:\r\n            raw_x = guide.get('state_info', {}).get('raw_x', None)\r\n            if raw_x is not None:\r\n                guide    
      = {'samples': guide['state_info']['raw_x'].clone()}\r\n            else:\r\n                guide          = {'samples': guide['samples'].clone()}\r\n        \r\n        if weight_scheduler == \"constant\": # and weights == None: \r\n            weights = initialize_or_scale(None, weight, end_step).to(default_dtype)\r\n            prepend = torch.zeros(start_step).to(weights)\r\n            weights = torch.cat([prepend, weights])\r\n            weights = F.pad(weights, (0, MAX_STEPS), value=0.0)\r\n        \r\n        guides = copy.deepcopy(guides) if guides is not None else {}\r\n        \r\n        guides['weight_attninj']           = weight\r\n        guides['weights_attninj']          = weights\r\n        \r\n        guides['blocks_attninj_mmdit'] = {\r\n            \"double_weights\": double_weights,\r\n            \"single_weights\": single_weights,\r\n            \"double_blocks\" : double_blocks,\r\n            \"single_blocks\" : single_blocks,\r\n        }\r\n        \r\n        guides['blocks_attninj_qkv'] = {\r\n            \"img_q\": img_q,\r\n            \"img_k\": img_k,\r\n            \"img_v\": img_v,\r\n            \"txt_q\": txt_q,\r\n            \"txt_k\": txt_k,\r\n            \"txt_v\": txt_v,\r\n            \r\n            \"img_q_norm\": img_q_norm,\r\n            \"img_k_norm\": img_k_norm,\r\n            \"img_v_norm\": img_v_norm,\r\n            \"txt_q_norm\": txt_q_norm,\r\n            \"txt_k_norm\": txt_k_norm,\r\n            \"txt_v_norm\": txt_v_norm,\r\n        }\r\n        \r\n        guides['guide_attninj']            = guide\r\n        guides['mask_attninj']             = mask\r\n\r\n        guides['weight_scheduler_attninj'] = weight_scheduler\r\n        guides['start_step_attninj']       = start_step\r\n        guides['end_step_attninj']         = end_step\r\n        \r\n        return (guides, )\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nclass ClownGuide_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"guide_mode\":           (GUIDE_MODE_NAMES_BETA_SIMPLE,       {\"default\": 'epsilon',                                                      \"tooltip\": \"Recommended: epsilon or mean/mean_std with sampler_mode = standard, and unsample/resample with sampler_mode = unsample/resample. Epsilon_dynamic_mean, etc. are only used with two latent inputs and a mask. Blend/hard_light/mean/mean_std etc. require low strengths, start with 0.01-0.02.\"}),\r\n                    \"channelwise_mode\":     (\"BOOLEAN\",                                   {\"default\": True}),\r\n                    \"projection_mode\":      (\"BOOLEAN\",                                   {\"default\": True}),\r\n                    \"weight\":               (\"FLOAT\",                                     {\"default\": 0.75, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide.\"}),\r\n                    \"cutoff\":               (\"FLOAT\",                                     {\"default\": 1.0,  \"min\": 0.0,    \"max\": 1.0,   \"step\":0.01, \"round\": False, \"tooltip\": \"Disables the guide for the next step when the denoised image is similar to the guide. 
Higher values will strengthen the effect.\"}),\r\n                    \"weight_scheduler\":     ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"beta57\"},),\r\n                    \"start_step\":           (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"end_step\":             (\"INT\",                                       {\"default\": 15,   \"min\": -1,      \"max\": 10000}),\r\n                    \"invert_mask\":          (\"BOOLEAN\",                                   {\"default\": False}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"guide\":                (\"LATENT\", ),\r\n                    \"mask\":                 (\"MASK\", ),\r\n                    \"weights\":              (\"SIGMAS\", ),\r\n                    }  \r\n                }\r\n        \r\n    RETURN_TYPES = (\"GUIDES\",)\r\n    RETURN_NAMES = (\"guides\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_extensions\"\r\n\r\n    def main(self,\r\n            weight_scheduler          = \"constant\",\r\n            weight_scheduler_unmasked = \"constant\",\r\n            start_step                = 0,\r\n            start_step_unmasked       = 0,\r\n            end_step                  = 30,\r\n            end_step_unmasked         = 30,\r\n            cutoff                    = 1.0,\r\n            cutoff_unmasked           = 1.0,\r\n            guide                     = None,\r\n            guide_unmasked            = None,\r\n            weight                    = 0.0,\r\n            weight_unmasked           = 0.0,\r\n\r\n            guide_mode                = \"epsilon\",\r\n            channelwise_mode          = False,\r\n            projection_mode           = False,\r\n            weights                   = None,\r\n            weights_unmasked          = None,\r\n            mask                      = None,\r\n            unmask                    = None,\r\n            invert_mask               = False,\r\n            ):\r\n        \r\n        CG = ClownGuides_Beta()\r\n        \r\n        mask = 1-mask if mask is not None else None\r\n        \r\n        if end_step == -1:\r\n            end_step = MAX_STEPS\r\n        \r\n        if guide is not None:\r\n            raw_x = guide.get('state_info', {}).get('raw_x', None)\r\n            \r\n            if False: # raw_x is not None:\r\n                guide          = {'samples': guide['state_info']['raw_x'].clone()}\r\n            else:\r\n                guide          = {'samples': guide['samples'].clone()}\r\n                \r\n        if guide_unmasked is not None:\r\n            raw_x = guide_unmasked.get('state_info', {}).get('raw_x', None)\r\n            if False: #raw_x is not None:\r\n                guide_unmasked = {'samples': guide_unmasked['state_info']['raw_x'].clone()}\r\n            else:\r\n                guide_unmasked = {'samples': guide_unmasked['samples'].clone()}\r\n        \r\n        guides, = CG.main(\r\n            weight_scheduler_masked   = weight_scheduler,\r\n            weight_scheduler_unmasked = weight_scheduler_unmasked,\r\n            start_step_masked         = start_step,\r\n            start_step_unmasked       = start_step_unmasked,\r\n            end_step_masked           = end_step,\r\n            end_step_unmasked         = end_step_unmasked,\r\n            cutoff_masked             = cutoff,\r\n            
cutoff_unmasked           = cutoff_unmasked,\r\n            guide_masked              = guide,\r\n            guide_unmasked            = guide_unmasked,\r\n            weight_masked             = weight,\r\n            weight_unmasked           = weight_unmasked,\r\n\r\n            guide_mode                = guide_mode,\r\n            channelwise_mode          = channelwise_mode,\r\n            projection_mode           = projection_mode,\r\n            weights_masked            = weights,\r\n            weights_unmasked          = weights_unmasked,\r\n            mask                      = mask,\r\n            unmask                    = unmask,\r\n            invert_mask               = invert_mask\r\n        )\r\n\r\n        return (guides, )\r\n\r\n\r\n        #return (guides[0], )\r\n\r\n\r\n\r\n\r\nclass ClownGuides_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"guide_mode\":                  (GUIDE_MODE_NAMES_BETA_SIMPLE,                {\"default\": 'epsilon',                                                      \"tooltip\": \"Recommended: epsilon or mean/mean_std with sampler_mode = standard, and unsample/resample with sampler_mode = unsample/resample. Epsilon_dynamic_mean, etc. are only used with two latent inputs and a mask. Blend/hard_light/mean/mean_std etc. require low strengths, start with 0.01-0.02.\"}),\r\n                    \"channelwise_mode\":            (\"BOOLEAN\",                                   {\"default\": True}),\r\n                    \"projection_mode\":             (\"BOOLEAN\",                                   {\"default\": True}),\r\n                    \"weight_masked\":               (\"FLOAT\",                                     {\"default\": 0.75, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide.\"}),\r\n                    \"weight_unmasked\":             (\"FLOAT\",                                     {\"default\": 0.75, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide_bkg.\"}),\r\n                    \"cutoff_masked\":               (\"FLOAT\",                                     {\"default\": 1.0,  \"min\": 0.0,    \"max\": 1.0,   \"step\":0.01, \"round\": False, \"tooltip\": \"Disables the guide for the next step when the denoised image is similar to the guide. Higher values will strengthen the effect.\"}),\r\n                    \"cutoff_unmasked\":             (\"FLOAT\",                                     {\"default\": 1.0,  \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Disables the guide for the next step when the denoised image is similar to the guide. 
Higher values will strengthen the effect.\"}),\r\n                    \"weight_scheduler_masked\":     ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"beta57\"},),\r\n                    \"weight_scheduler_unmasked\":   ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"constant\"},),\r\n                    \"start_step_masked\":           (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"start_step_unmasked\":         (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"end_step_masked\":             (\"INT\",                                       {\"default\": 15,   \"min\": -1,      \"max\": 10000}),\r\n                    \"end_step_unmasked\":           (\"INT\",                                       {\"default\": 15,   \"min\": -1,      \"max\": 10000}),\r\n                    \"invert_mask\":                 (\"BOOLEAN\",                                   {\"default\": False}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"guide_masked\":                (\"LATENT\", ),\r\n                    \"guide_unmasked\":              (\"LATENT\", ),\r\n                    \"mask\":                        (\"MASK\", ),\r\n                    \"weights_masked\":              (\"SIGMAS\", ),\r\n                    \"weights_unmasked\":            (\"SIGMAS\", ),\r\n                    }  \r\n                }\r\n        \r\n    RETURN_TYPES = (\"GUIDES\",)\r\n    RETURN_NAMES = (\"guides\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_extensions\"\r\n\r\n    def main(self,\r\n            weight_scheduler_masked   = \"constant\",\r\n            weight_scheduler_unmasked = \"constant\",\r\n            start_step_masked         = 0,\r\n            start_step_unmasked       = 0,\r\n            end_step_masked           = 30,\r\n            end_step_unmasked         = 30,\r\n            cutoff_masked             = 1.0,\r\n            cutoff_unmasked           = 1.0,\r\n            guide_masked              = None,\r\n            guide_unmasked            = None,\r\n            weight_masked             = 0.0,\r\n            weight_unmasked           = 0.0,\r\n\r\n            guide_mode                = \"epsilon\",\r\n            channelwise_mode          = False,\r\n            projection_mode           = False,\r\n            weights_masked            = None,\r\n            weights_unmasked          = None,\r\n            mask                      = None,\r\n            unmask                    = None,\r\n            invert_mask               = False,\r\n            ):\r\n\r\n        default_dtype = torch.float64\r\n        \r\n        if end_step_masked   == -1:\r\n            end_step_masked   = MAX_STEPS\r\n        if end_step_unmasked == -1:\r\n            end_step_unmasked = MAX_STEPS\r\n        \r\n        if guide_masked is None:\r\n            weight_scheduler_masked = \"constant\"\r\n            start_step_masked       = 0\r\n            end_step_masked         = 30\r\n            cutoff_masked           = 1.0\r\n            guide_masked            = None\r\n            weight_masked           = 0.0\r\n            weights_masked          = None\r\n            #mask                    = None\r\n        \r\n        if guide_unmasked is None:\r\n            weight_scheduler_unmasked = \"constant\"\r\n            
start_step_unmasked       = 0\r\n            end_step_unmasked         = 30\r\n            cutoff_unmasked           = 1.0\r\n            guide_unmasked            = None\r\n            weight_unmasked           = 0.0\r\n            weights_unmasked          = None\r\n            #unmask                    = None\r\n        \r\n        if guide_masked is not None:\r\n            raw_x = guide_masked.get('state_info', {}).get('raw_x', None)\r\n            if False: #raw_x is not None:\r\n                guide_masked   = {'samples': guide_masked['state_info']['raw_x'].clone()}\r\n            else:\r\n                guide_masked   = {'samples': guide_masked['samples'].clone()}\r\n        \r\n        if guide_unmasked is not None:\r\n            raw_x = guide_unmasked.get('state_info', {}).get('raw_x', None)\r\n            if False: #raw_x is not None:\r\n                guide_unmasked = {'samples': guide_unmasked['state_info']['raw_x'].clone()}\r\n            else:\r\n                guide_unmasked = {'samples': guide_unmasked['samples'].clone()}\r\n        \r\n        if invert_mask and mask is not None:\r\n            mask = 1-mask\r\n                \r\n        if projection_mode:\r\n            guide_mode = guide_mode + \"_projection\"\r\n        \r\n        if channelwise_mode:\r\n            guide_mode = guide_mode + \"_cw\"\r\n            \r\n        if guide_mode == \"unsample_cw\":\r\n            guide_mode = \"unsample\"\r\n        if guide_mode == \"resample_cw\":\r\n            guide_mode = \"resample\"\r\n        \r\n        if weight_scheduler_masked == \"constant\" and weights_masked == None: \r\n            weights_masked = initialize_or_scale(None, weight_masked, end_step_masked).to(default_dtype)\r\n            weights_masked = F.pad(weights_masked, (0, MAX_STEPS), value=0.0)\r\n        \r\n        if weight_scheduler_unmasked == \"constant\" and weights_unmasked == None: \r\n            weights_unmasked = initialize_or_scale(None, weight_unmasked, end_step_unmasked).to(default_dtype)\r\n            weights_unmasked = F.pad(weights_unmasked, (0, MAX_STEPS), value=0.0)\r\n        \r\n        guides = {\r\n            \"guide_mode\"                : guide_mode,\r\n            \"weight_masked\"             : weight_masked,\r\n            \"weight_unmasked\"           : weight_unmasked,\r\n            \"weights_masked\"            : weights_masked,\r\n            \"weights_unmasked\"          : weights_unmasked,\r\n            \"guide_masked\"              : guide_masked,\r\n            \"guide_unmasked\"            : guide_unmasked,\r\n            \"mask\"                      : mask,\r\n            \"unmask\"                    : unmask,\r\n\r\n            \"weight_scheduler_masked\"   : weight_scheduler_masked,\r\n            \"weight_scheduler_unmasked\" : weight_scheduler_unmasked,\r\n            \"start_step_masked\"         : start_step_masked,\r\n            \"start_step_unmasked\"       : start_step_unmasked,\r\n            \"end_step_masked\"           : end_step_masked,\r\n            \"end_step_unmasked\"         : end_step_unmasked,\r\n            \"cutoff_masked\"             : cutoff_masked,\r\n            \"cutoff_unmasked\"           : cutoff_unmasked\r\n        }\r\n        \r\n        \r\n        return (guides, )\r\n\r\n\r\n\r\n\r\nclass ClownGuidesAB_Beta:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\"required\":\r\n                    {\r\n                    \"guide_mode\":         (GUIDE_MODE_NAMES_BETA_SIMPLE,                
{\"default\": 'epsilon',                                                      \"tooltip\": \"Recommended: epsilon or mean/mean_std with sampler_mode = standard, and unsample/resample with sampler_mode = unsample/resample. Epsilon_dynamic_mean, etc. are only used with two latent inputs and a mask. Blend/hard_light/mean/mean_std etc. require low strengths, start with 0.01-0.02.\"}),\r\n                    \"channelwise_mode\":   (\"BOOLEAN\",                                   {\"default\": False}),\r\n                    \"projection_mode\":    (\"BOOLEAN\",                                   {\"default\": False}),\r\n                    \"weight_A\":           (\"FLOAT\",                                     {\"default\": 0.75, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide.\"}),\r\n                    \"weight_B\":           (\"FLOAT\",                                     {\"default\": 0.75, \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Set the strength of the guide_bkg.\"}),\r\n                    \"cutoff_A\":           (\"FLOAT\",                                     {\"default\": 1.0,  \"min\": 0.0,    \"max\": 1.0,   \"step\":0.01, \"round\": False, \"tooltip\": \"Disables the guide for the next step when the denoised image is similar to the guide. Higher values will strengthen the effect.\"}),\r\n                    \"cutoff_B\":           (\"FLOAT\",                                     {\"default\": 1.0,  \"min\": -100.0, \"max\": 100.0, \"step\":0.01, \"round\": False, \"tooltip\": \"Disables the guide for the next step when the denoised image is similar to the guide. Higher values will strengthen the effect.\"}),\r\n                    \"weight_scheduler_A\": ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"beta57\"},),\r\n                    \"weight_scheduler_B\": ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"constant\"},),\r\n                    \"start_step_A\":       (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"start_step_B\":       (\"INT\",                                       {\"default\": 0,    \"min\":  0,      \"max\": 10000}),\r\n                    \"end_step_A\":         (\"INT\",                                       {\"default\": 15,   \"min\": -1,      \"max\": 10000}),\r\n                    \"end_step_B\":         (\"INT\",                                       {\"default\": 15,   \"min\": -1,      \"max\": 10000}),\r\n                    \"invert_masks\":       (\"BOOLEAN\",                                   {\"default\": False}),\r\n                    },\r\n                \"optional\": \r\n                    {\r\n                    \"guide_A\":            (\"LATENT\", ),\r\n                    \"guide_B\":            (\"LATENT\", ),\r\n                    \"mask_A\":             (\"MASK\", ),\r\n                    \"mask_B\":             (\"MASK\", ),\r\n                    \"weights_A\":          (\"SIGMAS\", ),\r\n                    \"weights_B\":          (\"SIGMAS\", ),\r\n                    }  \r\n                }\r\n        \r\n    RETURN_TYPES = (\"GUIDES\",)\r\n    RETURN_NAMES = (\"guides\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_extensions\"\r\n\r\n    def main(self,\r\n            weight_scheduler_A = \"constant\",\r\n            weight_scheduler_B = \"constant\",\r\n            start_step_A       = 
0,\r\n            start_step_B       = 0,\r\n            end_step_A         = 30,\r\n            end_step_B         = 30,\r\n            cutoff_A           = 1.0,\r\n            cutoff_B           = 1.0,\r\n            guide_A            = None,\r\n            guide_B            = None,\r\n            weight_A           = 0.0,\r\n            weight_B           = 0.0,\r\n\r\n            guide_mode         = \"epsilon\",\r\n            channelwise_mode   = False,\r\n            projection_mode    = False,\r\n            weights_A          = None,\r\n            weights_B          = None,\r\n            mask_A             = None,\r\n            mask_B             = None,\r\n            invert_masks       : bool = False,\r\n            ):\r\n        \r\n        default_dtype = torch.float64\r\n        \r\n        if end_step_A == -1:\r\n            end_step_A = MAX_STEPS\r\n        if end_step_B == -1:\r\n            end_step_B = MAX_STEPS\r\n        \r\n        if guide_A is not None:\r\n            raw_x = guide_A.get('state_info', {}).get('raw_x', None)\r\n            if False: #raw_x is not None:\r\n                guide_A          = {'samples': guide_A['state_info']['raw_x'].clone()}\r\n            else:\r\n                guide_A          = {'samples': guide_A['samples'].clone()}\r\n                \r\n        if guide_B is not None:\r\n            raw_x = guide_B.get('state_info', {}).get('raw_x', None)\r\n            if False: #raw_x is not None:\r\n                guide_B = {'samples': guide_B['state_info']['raw_x'].clone()}\r\n            else:\r\n                guide_B = {'samples': guide_B['samples'].clone()}\r\n        \r\n        if guide_A is None:\r\n            guide_A  = guide_B\r\n            guide_B  = None\r\n            mask_A   = mask_B\r\n            mask_B   = None\r\n            weight_B = 0.0\r\n            \r\n        if guide_B is None:\r\n            weight_B = 0.0\r\n            \r\n        if mask_A is None and mask_B is not None:\r\n            mask_A = 1-mask_B\r\n                        \r\n        if projection_mode:\r\n            guide_mode = guide_mode + \"_projection\"\r\n        \r\n        if channelwise_mode:\r\n            guide_mode = guide_mode + \"_cw\"\r\n            \r\n        if guide_mode == \"unsample_cw\":\r\n            guide_mode = \"unsample\"\r\n        if guide_mode == \"resample_cw\":\r\n            guide_mode = \"resample\"\r\n        \r\n        if weight_scheduler_A == \"constant\" and weights_A == None: \r\n            weights_A = initialize_or_scale(None, weight_A, end_step_A).to(default_dtype)\r\n            weights_A = F.pad(weights_A, (0, MAX_STEPS), value=0.0)\r\n        \r\n        if weight_scheduler_B == \"constant\" and weights_B == None: \r\n            weights_B = initialize_or_scale(None, weight_B, end_step_B).to(default_dtype)\r\n            weights_B = F.pad(weights_B, (0, MAX_STEPS), value=0.0)\r\n            \r\n        if invert_masks:\r\n            mask_A = 1-mask_A if mask_A is not None else None\r\n            mask_B = 1-mask_B if mask_B is not None else None\r\n    \r\n        guides = {\r\n            \"guide_mode\"                : guide_mode,\r\n            \"weight_masked\"             : weight_A,\r\n            \"weight_unmasked\"           : weight_B,\r\n            \"weights_masked\"            : weights_A,\r\n            \"weights_unmasked\"          : weights_B,\r\n            \"guide_masked\"              : guide_A,\r\n            \"guide_unmasked\"            : guide_B,\r\n            \"mask\"     
                 : mask_A,\r\n            \"unmask\"                    : mask_B,\r\n\r\n            \"weight_scheduler_masked\"   : weight_scheduler_A,\r\n            \"weight_scheduler_unmasked\" : weight_scheduler_B,\r\n            \"start_step_masked\"         : start_step_A,\r\n            \"start_step_unmasked\"       : start_step_B,\r\n            \"end_step_masked\"           : end_step_A,\r\n            \"end_step_unmasked\"         : end_step_B,\r\n            \"cutoff_masked\"             : cutoff_A,\r\n            \"cutoff_unmasked\"           : cutoff_B\r\n        }\r\n        \r\n        return (guides, )\r\n    \r\n\r\n\r\nclass ClownOptions_Combine:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"options\": (\"OPTIONS\",),\r\n            },\r\n        }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION = \"main\"\r\n    CATEGORY = \"RES4LYF/sampler_options\"\r\n\r\n    def main(self, options, **kwargs):\r\n        options_mgr = OptionsManager(options, **kwargs)\r\n        return (options_mgr.as_dict(),)\r\n\r\n\r\n\r\nclass ClownOptions_Frameweights:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"config_name\": (FRAME_WEIGHTS_CONFIG_NAMES, {\"default\": \"frame_weights\", \"tooltip\": \"Apply to specific type of per-frame weights.\"}),\r\n                \"dynamics\": (FRAME_WEIGHTS_DYNAMICS_NAMES, {\"default\": \"ease_out\", \"tooltip\": \"The function type used for the dynamic period. constant: no change, linear: steady change, ease_out: starts fast, ease_in: starts slow\"}),\r\n                \"schedule\": (FRAME_WEIGHTS_SCHEDULE_NAMES, {\"default\": \"moderate_early\", \"tooltip\": \"fast_early: fast change starts immediately, slow_late: slow change starts later\"}),\r\n                \"scale\": (\"FLOAT\", {\"default\": 0.5, \"min\": 0.0, \"max\": 1.0, \"step\": 0.01, \"tooltip\": \"The amount of change over the course of the frame weights. 
1.0 means that the guides have no influence by the end.\"}),\r\n                \"reverse\": (\"BOOLEAN\", {\"default\": False, \"tooltip\": \"Reverse the frame weights\"}),\r\n            },\r\n            \"optional\": {\r\n                \"frame_weights\": (\"SIGMAS\", {\"tooltip\": \"Overrides all other settings EXCEPT reverse.\"}),\r\n                \"custom_string\": (\"STRING\", {\"tooltip\": \"Overrides all other settings EXCEPT reverse.\", \"multiline\": True}),\r\n                \"options\": (\"OPTIONS\",),\r\n            },\r\n        }\r\n\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION = \"main\"\r\n    CATEGORY = \"RES4LYF/sampler_options\"\r\n\r\n    def main(self,\r\n            config_name,\r\n            dynamics,\r\n            schedule,\r\n            scale,\r\n            reverse,\r\n            frame_weights = None,\r\n            custom_string = None,\r\n            options       = None,\r\n            ):\r\n        \r\n        options_mgr = OptionsManager(options if options is not None else {})\r\n\r\n        frame_weights_mgr = options_mgr.get(\"frame_weights_mgr\")\r\n        if frame_weights_mgr is None:\r\n            frame_weights_mgr = FrameWeightsManager()\r\n\r\n        if custom_string is not None and custom_string.strip() == \"\":\r\n            custom_string = None\r\n        \r\n        frame_weights_mgr.add_weight_config(\r\n            config_name,\r\n            dynamics=dynamics,\r\n            schedule=schedule,\r\n            scale=scale,\r\n            is_reversed=reverse,\r\n            frame_weights=frame_weights,\r\n            custom_string=custom_string\r\n        )\r\n        \r\n        options_mgr.update(\"frame_weights_mgr\", frame_weights_mgr)\r\n        \r\n        return (options_mgr.as_dict(),)\r\n\r\n\r\nclass SharkOptions_GuiderInput:\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\"required\":\r\n                    {\"guider\": (\"GUIDER\", ),\r\n                    },\r\n                \"optional\":\r\n                    {\"options\": (\"OPTIONS\", ),\r\n                    }\r\n                }\r\n    RETURN_TYPES = (\"OPTIONS\",)\r\n    RETURN_NAMES = (\"options\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/sampler_options\"\r\n\r\n    def main(self, guider, options=None):\r\n        options_mgr = OptionsManager(options if options is not None else {})\r\n        \r\n        if isinstance(guider, dict):\r\n            guider = guider.get('samples', None)\r\n            \r\n        if isinstance(guider, torch.Tensor):\r\n            guider = guider.detach().cpu()\r\n        \r\n        if options_mgr is None:\r\n            options_mgr = OptionsManager()\r\n            \r\n        options_mgr.update(\"guider\", guider)\r\n        \r\n        return (options_mgr.as_dict(), )\r\n"
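Throughout the guide nodes in this file, the "constant" weight scheduler builds its per-step weight tensor from the same three pieces: `initialize_or_scale(None, weight, end_step)` for an `end_step`-long run of `weight`, an optional `start_step`-long zero prefix, and `F.pad` to append `MAX_STEPS` trailing zeros. A minimal standalone sketch of that convention, assuming a stand-in value for the repo's `MAX_STEPS` constant and a stand-in for `initialize_or_scale` (both defined elsewhere in the repo):

```python
import torch
import torch.nn.functional as F

MAX_STEPS = 10000  # assumption: stand-in for the repo's MAX_STEPS constant

def constant_weights(weight: float, start_step: int, end_step: int,
                     dtype=torch.float64) -> torch.Tensor:
    # An end_step-long run of `weight` (the role initialize_or_scale(None, weight,
    # end_step) plays in the nodes), preceded by zeros for the skipped steps and
    # padded with MAX_STEPS trailing zeros so any step index is safe to read.
    run    = torch.full((end_step,), weight, dtype=dtype)
    prefix = torch.zeros(start_step, dtype=dtype)
    return F.pad(torch.cat([prefix, run]), (0, MAX_STEPS), value=0.0)

# Guide active from step 5, strength 0.75, for 15 steps:
w = constant_weights(0.75, start_step=5, end_step=15)
assert w[4] == 0 and w[5] == 0.75 and w[19] == 0.75 and w[20] == 0
```

Note that the run stays `end_step` entries long even after the prefix, so with a nonzero `start_step` the nonzero window reaches past step `end_step`; the nodes also record `start_step`/`end_step` in the returned dict, which is presumably where the hard step gating happens.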
  },
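The MMDiT guide nodes above (`ClownGuide_AdaIN_MMDiT_Beta`, `ClownGuide_AttnInj_MMDiT_Beta`) expand their `double_blocks`/`single_blocks` strings and paired weight strings into fixed 100-entry lists before storing them in the guides dict. A condensed sketch of that expansion; `parse_range_string` is the repo's helper, and the stand-in below only assumes a simple comma-and-range grammar:

```python
def parse_range_string_sketch(s: str) -> list:
    # Assumed grammar: comma-separated entries, "a-b" meaning an inclusive range.
    out = []
    for part in s.replace(" ", "").split(","):
        if not part:
            continue
        if "-" in part[1:]:                      # "2-5" -> 2, 3, 4, 5
            lo, hi = part.split("-", 1)
            out.extend(range(int(lo), int(hi) + 1))
        else:
            out.append(float(part) if "." in part else int(part))
    return out

def expand_blocks(blocks_str: str, weights: list, n_blocks: int = 100):
    # Mirrors the node logic: a lone weight is replicated first, "all" selects
    # every block, and otherwise each listed block gets its paired weight while
    # every unlisted block stays at 0.0.
    if len(weights) == 1:
        weights = weights * n_blocks
    if blocks_str == "all":
        return list(range(n_blocks)), weights
    blocks   = parse_range_string_sketch(blocks_str)
    expanded = [0.0] * n_blocks
    for b, w in zip(blocks, weights):
        expanded[b] = w
    return blocks, expanded

# The AttnInj widget defaults: double blocks 0,1,3 at 1.0, single block 20 at 0.5.
print(expand_blocks("0,1,3", [1.0])[1][:5])   # [1.0, 1.0, 0.0, 1.0, 0.0]
print(expand_blocks("20", [0.5])[1][20])      # 0.5
```

Replicating the lone weight before pairing is what lets `single_blocks="20"` with `single_weights="0.5"` (the widget defaults) target only block 20 at half strength.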
  {
    "path": "sd/attention.py",
    "content": "import math\nimport sys\n\nimport torch\nimport torch.nn.functional as F\nfrom torch import nn, einsum\nfrom einops import rearrange, repeat\nfrom typing import Optional\nimport logging\n\nfrom comfy.ldm.modules.diffusionmodules.util import AlphaBlender, timestep_embedding\nfrom comfy.ldm.modules.sub_quadratic_attention import efficient_dot_product_attention\n\nfrom comfy import model_management\n\nif model_management.xformers_enabled():\n    import xformers\n    import xformers.ops\n\nif model_management.sage_attention_enabled():\n    try:\n        from sageattention import sageattn\n    except ModuleNotFoundError:\n        logging.error(f\"\\n\\nTo use the `--use-sage-attention` feature, the `sageattention` package must be installed first.\\ncommand:\\n\\t{sys.executable} -m pip install sageattention\")\n        exit(-1)\n\nif model_management.flash_attention_enabled():\n    try:\n        from flash_attn import flash_attn_func\n    except ModuleNotFoundError:\n        logging.error(f\"\\n\\nTo use the `--use-flash-attention` feature, the `flash-attn` package must be installed first.\\ncommand:\\n\\t{sys.executable} -m pip install flash-attn\")\n        exit(-1)\n\nfrom comfy.cli_args import args\nimport comfy.ops\nops = comfy.ops.disable_weight_init\n\nfrom ..style_transfer import apply_scattersort, apply_scattersort_spatial\n\nFORCE_UPCAST_ATTENTION_DTYPE = model_management.force_upcast_attention_dtype()\n\ndef get_attn_precision(attn_precision, current_dtype):\n    if args.dont_upcast_attention:\n        return None\n\n    if FORCE_UPCAST_ATTENTION_DTYPE is not None and current_dtype in FORCE_UPCAST_ATTENTION_DTYPE:\n        return FORCE_UPCAST_ATTENTION_DTYPE[current_dtype]\n    return attn_precision\n\ndef exists(val):\n    return val is not None\n\n\ndef default(val, d):\n    if exists(val):\n        return val\n    return d\n\n\n# feedforward\nclass GEGLU(nn.Module):\n    def __init__(self, dim_in, dim_out, dtype=None, device=None, operations=ops):\n        super().__init__()\n        self.proj = operations.Linear(dim_in, dim_out * 2, dtype=dtype, device=device)\n\n    def forward(self, x):\n        x, gate = self.proj(x).chunk(2, dim=-1)\n        return x * F.gelu(gate)\n\n\nclass FeedForward(nn.Module):\n    def __init__(self, dim, dim_out=None, mult=4, glu=False, dropout=0., dtype=None, device=None, operations=ops):\n        super().__init__()\n        inner_dim = int(dim * mult)\n        dim_out = default(dim_out, dim)\n        project_in = nn.Sequential(\n            operations.Linear(dim, inner_dim, dtype=dtype, device=device),\n            nn.GELU()\n        ) if not glu else GEGLU(dim, inner_dim, dtype=dtype, device=device, operations=operations)\n\n        self.net = nn.Sequential(\n            project_in,\n            nn.Dropout(dropout),\n            operations.Linear(inner_dim, dim_out, dtype=dtype, device=device)\n        )\n\n    def forward(self, x):\n        return self.net(x)\n\ndef Normalize(in_channels, dtype=None, device=None):\n    return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True, dtype=dtype, device=device)\n\ndef attention_basic(q, k, v, heads, mask=None, attn_precision=None, skip_reshape=False, skip_output_reshape=False):\n    attn_precision = get_attn_precision(attn_precision, q.dtype)\n\n    if skip_reshape:\n        b, _, _, dim_head = q.shape\n    else:\n        b, _, dim_head = q.shape\n        dim_head //= heads\n\n    scale = dim_head ** -0.5\n\n    h = heads\n    if skip_reshape:\n         q, k, 
v = map(\n            lambda t: t.reshape(b * heads, -1, dim_head),\n            (q, k, v),\n        )\n    else:\n        q, k, v = map(\n            lambda t: t.unsqueeze(3)\n            .reshape(b, -1, heads, dim_head)\n            .permute(0, 2, 1, 3)\n            .reshape(b * heads, -1, dim_head)\n            .contiguous(),\n            (q, k, v),\n        )\n\n    # force cast to fp32 to avoid overflowing\n    if attn_precision == torch.float32:\n        sim = einsum('b i d, b j d -> b i j', q.float(), k.float()) * scale\n    else:\n        sim = einsum('b i d, b j d -> b i j', q, k) * scale\n\n    del q, k\n\n    if exists(mask):\n        if mask.dtype == torch.bool:\n            mask = rearrange(mask, 'b ... -> b (...)') #TODO: check if this bool part matches pytorch attention\n            max_neg_value = -torch.finfo(sim.dtype).max\n            mask = repeat(mask, 'b j -> (b h) () j', h=h)\n            sim.masked_fill_(~mask, max_neg_value)\n        else:\n            if len(mask.shape) == 2:\n                bs = 1\n            else:\n                bs = mask.shape[0]\n            mask = mask.reshape(bs, -1, mask.shape[-2], mask.shape[-1]).expand(b, heads, -1, -1).reshape(-1, mask.shape[-2], mask.shape[-1])\n            sim.add_(mask)\n\n    # attention, what we cannot get enough of\n    sim = sim.softmax(dim=-1)\n\n    out = einsum('b i j, b j d -> b i d', sim.to(v.dtype), v)\n\n    if skip_output_reshape:\n        out = (\n            out.unsqueeze(0)\n            .reshape(b, heads, -1, dim_head)\n        )\n    else:\n        out = (\n            out.unsqueeze(0)\n            .reshape(b, heads, -1, dim_head)\n            .permute(0, 2, 1, 3)\n            .reshape(b, -1, heads * dim_head)\n        )\n    return out\n\n\ndef attention_sub_quad(query, key, value, heads, mask=None, attn_precision=None, skip_reshape=False, skip_output_reshape=False):\n    attn_precision = get_attn_precision(attn_precision, query.dtype)\n\n    if skip_reshape:\n        b, _, _, dim_head = query.shape\n    else:\n        b, _, dim_head = query.shape\n        dim_head //= heads\n\n    if skip_reshape:\n        query = query.reshape(b * heads, -1, dim_head)\n        value = value.reshape(b * heads, -1, dim_head)\n        key = key.reshape(b * heads, -1, dim_head).movedim(1, 2)\n    else:\n        query = query.unsqueeze(3).reshape(b, -1, heads, dim_head).permute(0, 2, 1, 3).reshape(b * heads, -1, dim_head)\n        value = value.unsqueeze(3).reshape(b, -1, heads, dim_head).permute(0, 2, 1, 3).reshape(b * heads, -1, dim_head)\n        key = key.unsqueeze(3).reshape(b, -1, heads, dim_head).permute(0, 2, 3, 1).reshape(b * heads, dim_head, -1)\n\n\n    dtype = query.dtype\n    upcast_attention = attn_precision == torch.float32 and query.dtype != torch.float32\n    if upcast_attention:\n        bytes_per_token = torch.finfo(torch.float32).bits//8\n    else:\n        bytes_per_token = torch.finfo(query.dtype).bits//8\n    batch_x_heads, q_tokens, _ = query.shape\n    _, _, k_tokens = key.shape\n\n    mem_free_total, _ = model_management.get_free_memory(query.device, True)\n\n    kv_chunk_size_min = None\n    kv_chunk_size = None\n    query_chunk_size = None\n\n    for x in [4096, 2048, 1024, 512, 256]:\n        count = mem_free_total / (batch_x_heads * bytes_per_token * x * 4.0)\n        if count >= k_tokens:\n            kv_chunk_size = k_tokens\n            query_chunk_size = x\n            break\n\n    if query_chunk_size is None:\n        query_chunk_size = 512\n\n    if mask is not None:\n        if 
len(mask.shape) == 2:\n            bs = 1\n        else:\n            bs = mask.shape[0]\n        mask = mask.reshape(bs, -1, mask.shape[-2], mask.shape[-1]).expand(b, heads, -1, -1).reshape(-1, mask.shape[-2], mask.shape[-1])\n\n    hidden_states = efficient_dot_product_attention(\n        query,\n        key,\n        value,\n        query_chunk_size=query_chunk_size,\n        kv_chunk_size=kv_chunk_size,\n        kv_chunk_size_min=kv_chunk_size_min,\n        use_checkpoint=False,\n        upcast_attention=upcast_attention,\n        mask=mask,\n    )\n\n    hidden_states = hidden_states.to(dtype)\n    if skip_output_reshape:\n        hidden_states = hidden_states.unflatten(0, (-1, heads))\n    else:\n        hidden_states = hidden_states.unflatten(0, (-1, heads)).transpose(1,2).flatten(start_dim=2)\n    return hidden_states\n\ndef attention_split(q, k, v, heads, mask=None, attn_precision=None, skip_reshape=False, skip_output_reshape=False):\n    attn_precision = get_attn_precision(attn_precision, q.dtype)\n\n    if skip_reshape:\n        b, _, _, dim_head = q.shape\n    else:\n        b, _, dim_head = q.shape\n        dim_head //= heads\n\n    scale = dim_head ** -0.5\n\n    if skip_reshape:\n        q, k, v = map(\n            lambda t: t.reshape(b * heads, -1, dim_head),\n            (q, k, v),\n        )\n    else:\n        q, k, v = map(\n            lambda t: t.unsqueeze(3)\n            .reshape(b, -1, heads, dim_head)\n            .permute(0, 2, 1, 3)\n            .reshape(b * heads, -1, dim_head)\n            .contiguous(),\n            (q, k, v),\n        )\n\n    r1 = torch.zeros(q.shape[0], q.shape[1], v.shape[2], device=q.device, dtype=q.dtype)\n\n    mem_free_total = model_management.get_free_memory(q.device)\n\n    if attn_precision == torch.float32:\n        element_size = 4\n        upcast = True\n    else:\n        element_size = q.element_size()\n        upcast = False\n\n    gb = 1024 ** 3\n    tensor_size = q.shape[0] * q.shape[1] * k.shape[1] * element_size\n    modifier = 3\n    mem_required = tensor_size * modifier\n    steps = 1\n\n\n    if mem_required > mem_free_total:\n        steps = 2**(math.ceil(math.log(mem_required / mem_free_total, 2)))\n        # print(f\"Expected tensor size:{tensor_size/gb:0.1f}GB, cuda free:{mem_free_cuda/gb:0.1f}GB \"\n        #      f\"torch free:{mem_free_torch/gb:0.1f} total:{mem_free_total/gb:0.1f} steps:{steps}\")\n\n    if steps > 64:\n        max_res = math.floor(math.sqrt(math.sqrt(mem_free_total / 2.5)) / 8) * 64\n        raise RuntimeError(f'Not enough memory, use lower resolution (max approx. {max_res}x{max_res}). 
'\n                            f'Need: {mem_required/64/gb:0.1f}GB free, Have:{mem_free_total/gb:0.1f}GB free')\n\n    if mask is not None:\n        if len(mask.shape) == 2:\n            bs = 1\n        else:\n            bs = mask.shape[0]\n        mask = mask.reshape(bs, -1, mask.shape[-2], mask.shape[-1]).expand(b, heads, -1, -1).reshape(-1, mask.shape[-2], mask.shape[-1])\n\n    # print(\"steps\", steps, mem_required, mem_free_total, modifier, q.element_size(), tensor_size)\n    first_op_done = False\n    cleared_cache = False\n    while True:\n        try:\n            slice_size = q.shape[1] // steps if (q.shape[1] % steps) == 0 else q.shape[1]\n            for i in range(0, q.shape[1], slice_size):\n                end = i + slice_size\n                if upcast:\n                    with torch.autocast(enabled=False, device_type = 'cuda'):\n                        s1 = einsum('b i d, b j d -> b i j', q[:, i:end].float(), k.float()) * scale\n                else:\n                    s1 = einsum('b i d, b j d -> b i j', q[:, i:end], k) * scale\n\n                if mask is not None:\n                    if len(mask.shape) == 2:\n                        s1 += mask[i:end]\n                    else:\n                        if mask.shape[1] == 1:\n                            s1 += mask\n                        else:\n                            s1 += mask[:, i:end]\n\n                s2 = s1.softmax(dim=-1).to(v.dtype)\n                del s1\n                first_op_done = True\n\n                r1[:, i:end] = einsum('b i j, b j d -> b i d', s2, v)\n                del s2\n            break\n        except model_management.OOM_EXCEPTION as e:\n            if first_op_done == False:\n                model_management.soft_empty_cache(True)\n                if cleared_cache == False:\n                    cleared_cache = True\n                    logging.warning(\"out of memory error, emptying cache and trying again\")\n                    continue\n                steps *= 2\n                if steps > 64:\n                    raise e\n                logging.warning(\"out of memory error, increasing steps and trying again {}\".format(steps))\n            else:\n                raise e\n\n    del q, k, v\n\n    if skip_output_reshape:\n        r1 = (\n            r1.unsqueeze(0)\n            .reshape(b, heads, -1, dim_head)\n        )\n    else:\n        r1 = (\n            r1.unsqueeze(0)\n            .reshape(b, heads, -1, dim_head)\n            .permute(0, 2, 1, 3)\n            .reshape(b, -1, heads * dim_head)\n        )\n    return r1\n\nBROKEN_XFORMERS = False\ntry:\n    x_vers = xformers.__version__\n    # XFormers bug confirmed on all versions from 0.0.21 to 0.0.26 (q with bs bigger than 65535 gives CUDA error)\n    BROKEN_XFORMERS = x_vers.startswith(\"0.0.2\") and not x_vers.startswith(\"0.0.20\")\nexcept:\n    pass\n\ndef attention_xformers(q, k, v, heads, mask=None, attn_precision=None, skip_reshape=False, skip_output_reshape=False):\n    b = q.shape[0]\n    dim_head = q.shape[-1]\n    # check to make sure xformers isn't broken\n    disabled_xformers = False\n\n    if BROKEN_XFORMERS:\n        if b * heads > 65535:\n            disabled_xformers = True\n\n    if not disabled_xformers:\n        if torch.jit.is_tracing() or torch.jit.is_scripting():\n            disabled_xformers = True\n\n    if disabled_xformers:\n        return attention_pytorch(q, k, v, heads, mask, skip_reshape=skip_reshape)\n\n    if skip_reshape:\n        # b h k d -> b k h d\n        q, k, v = map(\n  
          lambda t: t.permute(0, 2, 1, 3),\n            (q, k, v),\n        )\n    # actually do the reshaping\n    else:\n        dim_head //= heads\n        q, k, v = map(\n            lambda t: t.reshape(b, -1, heads, dim_head),\n            (q, k, v),\n        )\n\n    if mask is not None:\n        # add a singleton batch dimension\n        if mask.ndim == 2:\n            mask = mask.unsqueeze(0)\n        # add a singleton heads dimension\n        if mask.ndim == 3:\n            mask = mask.unsqueeze(1)\n        # pad to a multiple of 8\n        pad = 8 - mask.shape[-1] % 8\n        # the xformers docs say that it's allowed to have a mask of shape (1, Nq, Nk)\n        # but when using separated heads, the shape has to be (B, H, Nq, Nk)\n        # in flux, this matrix ends up being over 1GB\n        # here, we create a mask with the same batch/head size as the input mask (potentially singleton or full)\n        mask_out = torch.empty([mask.shape[0], mask.shape[1], q.shape[1], mask.shape[-1] + pad], dtype=q.dtype, device=q.device)\n\n        mask_out[..., :mask.shape[-1]] = mask\n        # slicing back to the original length keeps a view into the padded buffer: each\n        # row stays aligned to a multiple of 8 elements in memory (which xformers\n        # requires), while the padded columns themselves are never read\n        mask = mask_out[..., :mask.shape[-1]]\n        mask = mask.expand(b, heads, -1, -1)\n\n    out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=mask)\n\n    if skip_output_reshape:\n        out = out.permute(0, 2, 1, 3)\n    else:\n        out = (\n            out.reshape(b, -1, heads * dim_head)\n        )\n\n    return out\n\nif model_management.is_nvidia(): # pytorch 2.3 and up seem to have issues with very large SDPA batch sizes here, so split them up.\n    SDP_BATCH_LIMIT = 2**15\nelse:\n    #TODO: other GPUs ?\n    SDP_BATCH_LIMIT = 2**31\n\n\ndef attention_pytorch(q, k, v, heads, mask=None, attn_precision=None, skip_reshape=False, skip_output_reshape=False):\n    if skip_reshape:\n        b, _, _, dim_head = q.shape\n    else:\n        b, _, dim_head = q.shape\n        dim_head //= heads\n        q, k, v = map(\n            lambda t: t.view(b, -1, heads, dim_head).transpose(1, 2),\n            (q, k, v),\n        )\n\n    if mask is not None:\n        # add a batch dimension if there isn't already one\n        if mask.ndim == 2:\n            mask = mask.unsqueeze(0)\n        # add a heads dimension if there isn't already one\n        if mask.ndim == 3:\n            mask = mask.unsqueeze(1)\n\n    if SDP_BATCH_LIMIT >= b:\n        out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)\n        if not skip_output_reshape:\n            out = (\n                out.transpose(1, 2).reshape(b, -1, heads * dim_head)\n            )\n    else:\n        out = torch.empty((b, q.shape[2], heads * dim_head), dtype=q.dtype, layout=q.layout, device=q.device)\n        for i in range(0, b, SDP_BATCH_LIMIT):\n            m = mask\n            if mask is not None:\n                if mask.shape[0] > 1:\n                    m = mask[i : i + SDP_BATCH_LIMIT]\n\n            out[i : i + SDP_BATCH_LIMIT] = torch.nn.functional.scaled_dot_product_attention(\n                q[i : i + SDP_BATCH_LIMIT],\n                k[i : i + SDP_BATCH_LIMIT],\n                v[i : i + SDP_BATCH_LIMIT],\n                attn_mask=m,\n                dropout_p=0.0, is_causal=False\n            ).transpose(1, 2).reshape(-1, q.shape[2], heads * dim_head)\n    return out\n\n\ndef attention_sage(q, k, v, heads, mask=None, attn_precision=None, skip_reshape=False, skip_output_reshape=False):\n    if skip_reshape:\n        b, _, _, dim_head = q.shape\n        tensor_layout = 
\"HND\"\n    else:\n        b, _, dim_head = q.shape\n        dim_head //= heads\n        q, k, v = map(\n            lambda t: t.view(b, -1, heads, dim_head),\n            (q, k, v),\n        )\n        tensor_layout = \"NHD\"\n\n    if mask is not None:\n        # add a batch dimension if there isn't already one\n        if mask.ndim == 2:\n            mask = mask.unsqueeze(0)\n        # add a heads dimension if there isn't already one\n        if mask.ndim == 3:\n            mask = mask.unsqueeze(1)\n\n    try:\n        out = sageattn(q, k, v, attn_mask=mask, is_causal=False, tensor_layout=tensor_layout)\n    except Exception as e:\n        logging.error(\"Error running sage attention: {}, using pytorch attention instead.\".format(e))\n        if tensor_layout == \"NHD\":\n            q, k, v = map(\n                lambda t: t.transpose(1, 2),\n                (q, k, v),\n            )\n        return attention_pytorch(q, k, v, heads, mask=mask, skip_reshape=True, skip_output_reshape=skip_output_reshape)\n\n    if tensor_layout == \"HND\":\n        if not skip_output_reshape:\n            out = (\n                out.transpose(1, 2).reshape(b, -1, heads * dim_head)\n            )\n    else:\n        if skip_output_reshape:\n            out = out.transpose(1, 2)\n        else:\n            out = out.reshape(b, -1, heads * dim_head)\n    return out\n\n\ntry:\n    @torch.library.custom_op(\"flash_attention::flash_attn\", mutates_args=())\n    def flash_attn_wrapper(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,\n                    dropout_p: float = 0.0, causal: bool = False) -> torch.Tensor:\n        return flash_attn_func(q, k, v, dropout_p=dropout_p, causal=causal)\n\n\n    @flash_attn_wrapper.register_fake\n    def flash_attn_fake(q, k, v, dropout_p=0.0, causal=False):\n        # Output shape is the same as q\n        return q.new_empty(q.shape)\nexcept AttributeError as error:\n    FLASH_ATTN_ERROR = error\n\n    def flash_attn_wrapper(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,\n                    dropout_p: float = 0.0, causal: bool = False) -> torch.Tensor:\n        assert False, f\"Could not define flash_attn_wrapper: {FLASH_ATTN_ERROR}\"\n\n\ndef attention_flash(q, k, v, heads, mask=None, attn_precision=None, skip_reshape=False, skip_output_reshape=False):\n    if skip_reshape:\n        b, _, _, dim_head = q.shape\n    else:\n        b, _, dim_head = q.shape\n        dim_head //= heads\n        q, k, v = map(\n            lambda t: t.view(b, -1, heads, dim_head).transpose(1, 2),\n            (q, k, v),\n        )\n\n    if mask is not None:\n        # add a batch dimension if there isn't already one\n        if mask.ndim == 2:\n            mask = mask.unsqueeze(0)\n        # add a heads dimension if there isn't already one\n        if mask.ndim == 3:\n            mask = mask.unsqueeze(1)\n\n    try:\n        assert mask is None\n        out = flash_attn_wrapper(\n            q.transpose(1, 2),\n            k.transpose(1, 2),\n            v.transpose(1, 2),\n            dropout_p=0.0,\n            causal=False,\n        ).transpose(1, 2)\n    except Exception as e:\n        logging.warning(f\"Flash Attention failed, using default SDPA: {e}\")\n        out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)\n    if not skip_output_reshape:\n        out = (\n            out.transpose(1, 2).reshape(b, -1, heads * dim_head)\n        )\n    return out\n\n\noptimized_attention = attention_basic\n\nif 
model_management.sage_attention_enabled():\n    logging.info(\"Using sage attention\")\n    optimized_attention = attention_sage\nelif model_management.xformers_enabled():\n    logging.info(\"Using xformers attention\")\n    optimized_attention = attention_xformers\nelif model_management.flash_attention_enabled():\n    logging.info(\"Using Flash Attention\")\n    optimized_attention = attention_flash\nelif model_management.pytorch_attention_enabled():\n    logging.info(\"Using pytorch attention\")\n    optimized_attention = attention_pytorch\nelse:\n    if args.use_split_cross_attention:\n        logging.info(\"Using split optimization for attention\")\n        optimized_attention = attention_split\n    else:\n        logging.info(\"Using sub quadratic optimization for attention, if you have memory or speed issues try using: --use-split-cross-attention\")\n        optimized_attention = attention_sub_quad\n\noptimized_attention_masked = optimized_attention\n\ndef optimized_attention_for_device(device, mask=False, small_input=False):\n    if small_input:\n        if model_management.pytorch_attention_enabled():\n            return attention_pytorch #TODO: need to confirm but this is probably slightly faster for small inputs in all cases\n        else:\n            return attention_basic\n\n    if device == torch.device(\"cpu\"):\n        return attention_sub_quad\n\n    if mask:\n        return optimized_attention_masked\n\n    return optimized_attention\n\n\nclass ReCrossAttention(nn.Module):\n    def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0., attn_precision=None, dtype=None, device=None, operations=ops):\n        super().__init__()\n        inner_dim = dim_head * heads\n        context_dim = default(context_dim, query_dim)\n        self.attn_precision = attn_precision\n\n        self.heads = heads\n        self.dim_head = dim_head\n\n        self.to_q = operations.Linear(query_dim, inner_dim, bias=False, dtype=dtype, device=device)\n        self.to_k = operations.Linear(context_dim, inner_dim, bias=False, dtype=dtype, device=device)\n        self.to_v = operations.Linear(context_dim, inner_dim, bias=False, dtype=dtype, device=device)\n\n        self.to_out = nn.Sequential(operations.Linear(inner_dim, query_dim, dtype=dtype, device=device), nn.Dropout(dropout))\n\n    def forward(self, x, context=None, value=None, mask=None, style_block=None):\n        q = self.to_q(x)\n        q = style_block(q, \"q_proj\")\n        #SELF_ATTN = True if context is None else False\n        context = default(context, x)    # if context is None, return x\n        k = self.to_k(context)\n        k = style_block(k, \"k_proj\")\n        if value is not None:\n            v = self.to_v(value)\n            del value\n        else:\n            v = self.to_v(context)\n        v = style_block(v, \"v_proj\")\n\n        if mask is None:\n            out = optimized_attention(q, k, v, self.heads, attn_precision=self.attn_precision)\n        else:\n            #if SELF_ATTN and mask.shape[-2] != q.shape[-2]:\n            #    mask = F.interpolate(mask[None, None].float(), size=(q.shape[-2], q.shape[-2]),    mode='nearest')[0,0].to(mask)\n            #elif mask.shape[-2] != q.shape[-2]: # cross attn\n            #    mask = F.interpolate(mask[None, None].float(), size=(q.shape[-2], mask.shape[-1]), mode='nearest')[0,0].to(mask)\n            out = attention_pytorch(q, k, v, self.heads, mask=mask)\n            #out = optimized_attention_masked(q, k, v, self.heads, mask, 
attn_precision=self.attn_precision)\n        out = style_block(out, \"out\")\n        return self.to_out(out)\n\n\nclass ReBasicTransformerBlock(nn.Module):\n    def __init__(self, dim, n_heads, d_head, dropout=0., context_dim=None, gated_ff=True, checkpoint=True, ff_in=False, inner_dim=None,\n                 disable_self_attn=False, disable_temporal_crossattention=False, switch_temporal_ca_to_sa=False, attn_precision=None, dtype=None, device=None, operations=ops):\n        super().__init__()\n\n        self.ff_in = ff_in or inner_dim is not None\n        if inner_dim is None:\n            inner_dim = dim\n\n        self.is_res = inner_dim == dim\n        self.attn_precision = attn_precision\n\n        if self.ff_in:\n            self.norm_in = operations.LayerNorm(dim, dtype=dtype, device=device)\n            self.ff_in = FeedForward(dim, dim_out=inner_dim, dropout=dropout, glu=gated_ff, dtype=dtype, device=device, operations=operations)\n\n        self.disable_self_attn = disable_self_attn\n        self.attn1 = ReCrossAttention(query_dim=inner_dim, heads=n_heads, dim_head=d_head, dropout=dropout,\n                              context_dim=context_dim if self.disable_self_attn else None, attn_precision=self.attn_precision, dtype=dtype, device=device, operations=operations)  # is a self-attention if not self.disable_self_attn\n        self.ff = FeedForward(inner_dim, dim_out=dim, dropout=dropout, glu=gated_ff, dtype=dtype, device=device, operations=operations)\n\n        if disable_temporal_crossattention:\n            if switch_temporal_ca_to_sa:\n                raise ValueError\n            else:\n                self.attn2 = None\n        else:\n            context_dim_attn2 = None\n            if not switch_temporal_ca_to_sa:\n                context_dim_attn2 = context_dim\n\n            self.attn2 = ReCrossAttention(query_dim=inner_dim, context_dim=context_dim_attn2,\n                                heads=n_heads, dim_head=d_head, dropout=dropout, attn_precision=self.attn_precision, dtype=dtype, device=device, operations=operations)  # is self-attn if context is none\n            self.norm2 = operations.LayerNorm(inner_dim, dtype=dtype, device=device)\n\n        self.norm1 = operations.LayerNorm(inner_dim, dtype=dtype, device=device)\n        self.norm3 = operations.LayerNorm(inner_dim, dtype=dtype, device=device)\n        self.n_heads = n_heads\n        self.d_head = d_head\n        self.switch_temporal_ca_to_sa = switch_temporal_ca_to_sa\n\n    def forward(self, x, context=None, transformer_options={}, style_block=None):\n        extra_options = {}\n        block = transformer_options.get(\"block\", None)\n        block_index = transformer_options.get(\"block_index\", 0)\n        transformer_patches = {}\n        transformer_patches_replace = {}\n        \n        self_mask  = transformer_options.get('self_mask')\n        cross_mask = transformer_options.get('cross_mask')\n        \n        if self_mask is not None and cross_mask is not None:\n            if self_mask.shape[-2] == x.shape[-2]:\n                pass\n            elif self_mask.shape[-2] < x.shape[-2]:\n                self_mask  = transformer_options.get('self_mask_up')\n                cross_mask = transformer_options.get('cross_mask_up')\n            else:\n                self_mask  = transformer_options.get('self_mask_down')\n                cross_mask = transformer_options.get('cross_mask_down')\n                \n                if self_mask.shape[-2] > x.shape[-2]:\n                    self_mask  = 
transformer_options.get('self_mask_down2')\n                    cross_mask = transformer_options.get('cross_mask_down2')\n\n        for k in transformer_options:\n            if k == \"patches\":\n                transformer_patches = transformer_options[k]\n            elif k == \"patches_replace\":\n                transformer_patches_replace = transformer_options[k]\n            else:\n                extra_options[k] = transformer_options[k]\n\n        extra_options[\"n_heads\"] = self.n_heads\n        extra_options[\"dim_head\"] = self.d_head\n        extra_options[\"attn_precision\"] = self.attn_precision\n\n        if self.ff_in: # never true for sdxl?\n            x_skip = x\n            x = self.ff_in(self.norm_in(x))\n            if self.is_res:\n                x += x_skip\n                \n        n = self.norm1(x)\n        n = style_block(n, \"norm1\")\n        if self.disable_self_attn:\n            context_attn1 = context\n        else:\n            context_attn1 = None\n        value_attn1 = None\n\n        if \"attn1_patch\" in transformer_patches:\n            patch = transformer_patches[\"attn1_patch\"]\n            if context_attn1 is None:\n                context_attn1 = n\n            value_attn1 = context_attn1\n            for p in patch:\n                n, context_attn1, value_attn1 = p(n, context_attn1, value_attn1, extra_options)\n\n        if block is not None:\n            transformer_block = (block[0], block[1], block_index)\n        else:\n            transformer_block = None\n        attn1_replace_patch = transformer_patches_replace.get(\"attn1\", {})\n        block_attn1 = transformer_block\n        if block_attn1 not in attn1_replace_patch:\n            block_attn1 = block\n\n        if block_attn1 in attn1_replace_patch:\n            if context_attn1 is None:\n                context_attn1 = n\n                value_attn1 = n\n            n = self.attn1.to_q(n)\n            context_attn1 = self.attn1.to_k(context_attn1)\n            value_attn1 = self.attn1.to_v(value_attn1)\n            n = attn1_replace_patch[block_attn1](n, context_attn1, value_attn1, extra_options)\n            n = self.attn1.to_out(n)\n        else:\n            n = self.attn1(n, context=context_attn1, value=value_attn1, mask=self_mask, style_block=style_block.ATTN1)         # self attention                                             #####\n            n = style_block(n, \"self_attn\")\n\n        if \"attn1_output_patch\" in transformer_patches:\n            patch = transformer_patches[\"attn1_output_patch\"]\n            for p in patch:\n                n = p(n, extra_options)\n\n        x += n ###########\n        x = style_block(x, \"self_attn_res\")\n        if \"middle_patch\" in transformer_patches:\n            patch = transformer_patches[\"middle_patch\"]\n            for p in patch:\n                x = p(x, extra_options)\n\n        if self.attn2 is not None:\n            n = self.norm2(x)\n            n = style_block(n, \"norm2\")\n            \n            if self.switch_temporal_ca_to_sa:\n                context_attn2 = n\n            else:\n                context_attn2 = context\n            value_attn2 = None\n            if \"attn2_patch\" in transformer_patches:\n                patch = transformer_patches[\"attn2_patch\"]\n                value_attn2 = context_attn2\n                for p in patch:\n                    n, context_attn2, value_attn2 = p(n, context_attn2, value_attn2, extra_options)\n\n            attn2_replace_patch = 
transformer_patches_replace.get(\"attn2\", {})\n            block_attn2 = transformer_block\n            if block_attn2 not in attn2_replace_patch:\n                block_attn2 = block\n\n            if block_attn2 in attn2_replace_patch:\n                if value_attn2 is None:\n                    value_attn2 = context_attn2\n                n = self.attn2.to_q(n)\n                context_attn2 = self.attn2.to_k(context_attn2)\n                value_attn2 = self.attn2.to_v(value_attn2)\n                n = attn2_replace_patch[block_attn2](n, context_attn2, value_attn2, extra_options)\n                n = self.attn2.to_out(n)\n            else:\n                n = self.attn2(n, context=context_attn2, value=value_attn2, mask=cross_mask, style_block=style_block.ATTN2)       # real cross attention                                ##### b (h w) c\n                n = style_block(n, \"cross_attn\")\n\n\n        if \"attn2_output_patch\" in transformer_patches:\n            patch = transformer_patches[\"attn2_output_patch\"]\n            for p in patch:\n                n = p(n, extra_options)\n\n        x += n ###########\n        x = style_block(x, \"cross_attn_res\")\n\n        if self.is_res: # always true with sdxl?\n            x_skip = x\n            \n        if not self.is_res:\n            pass\n        \n        x = self.norm3(x)\n        x = style_block(x, \"norm3\")\n        x = self.ff(x)\n        x = style_block(x, \"ff\")\n\n        if self.is_res:\n            x += x_skip\n            \n        x = style_block(x, \"ff_res\")\n\n        return x\n\n\nclass ReSpatialTransformer(nn.Module):\n    \"\"\"\n    Transformer block for image-like data.\n    First, project the input (aka embedding)\n    and reshape to b, t, d.\n    Then apply standard transformer action.\n    Finally, reshape to image\n    NEW: use_linear for more efficiency instead of the 1x1 convs\n    \"\"\"\n    def __init__(self, in_channels, n_heads, d_head,\n                 depth=1, dropout=0., context_dim=None,\n                 disable_self_attn=False, use_linear=False,\n                 use_checkpoint=True, attn_precision=None, dtype=None, device=None, operations=ops):\n        super().__init__()\n        if exists(context_dim) and not isinstance(context_dim, list):\n            context_dim = [context_dim] * depth\n        self.in_channels = in_channels\n        inner_dim = n_heads * d_head\n        self.norm = operations.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True, dtype=dtype, device=device)\n        if not use_linear:\n            self.proj_in = operations.Conv2d(in_channels,\n                                     inner_dim,\n                                     kernel_size=1,\n                                     stride=1,\n                                     padding=0, dtype=dtype, device=device)\n        else:\n            self.proj_in = operations.Linear(in_channels, inner_dim, dtype=dtype, device=device)\n\n        self.transformer_blocks = nn.ModuleList(\n            [ReBasicTransformerBlock(inner_dim, n_heads, d_head, dropout=dropout, context_dim=context_dim[d],\n                                   disable_self_attn=disable_self_attn, checkpoint=use_checkpoint, attn_precision=attn_precision, dtype=dtype, device=device, operations=operations)\n                for d in range(depth)]\n        )\n        if not use_linear:\n            self.proj_out = operations.Conv2d(inner_dim,in_channels,\n                                                  kernel_size=1,\n                      
                            stride=1,\n                                                  padding=0, dtype=dtype, device=device)\n        else:\n            # project inner_dim back to in_channels (the two match for the SD/SDXL checkpoints this runs on)\n            self.proj_out = operations.Linear(inner_dim, in_channels, dtype=dtype, device=device)\n        self.use_linear = use_linear\n\n    def forward(self, x, context=None, style_block=None, transformer_options={}):\n        # note: if no context is given, cross-attention defaults to self-attention\n        if not isinstance(context, list):\n            context = [context] * len(self.transformer_blocks)\n        b, c, h, w = x.shape\n        transformer_options[\"activations_shape\"] = list(x.shape)\n        x_in = x\n        x = self.norm(x)\n        x = style_block(x, \"spatial_norm_in\")\n        if not self.use_linear:\n            x = self.proj_in(x)\n            x = style_block(x, \"spatial_proj_in\")\n        x = x.movedim(1, 3).flatten(1, 2).contiguous()\n        if self.use_linear:\n            x = self.proj_in(x)\n            x = style_block(x, \"spatial_proj_in\")\n        for i, block in enumerate(self.transformer_blocks):\n            transformer_options[\"block_index\"] = i\n            x = block(x, context=context[i], style_block=style_block.TFMR, transformer_options=transformer_options)\n            x = style_block(x, \"spatial_transformer_block\")\n        x = style_block(x, \"spatial_transformer\")\n        if self.use_linear:\n            x = self.proj_out(x)\n        x = x.reshape(x.shape[0], h, w, x.shape[-1]).movedim(3, 1).contiguous()\n        if not self.use_linear:\n            x = self.proj_out(x)\n        x = style_block(x, \"spatial_proj_out\")\n        x = x + x_in\n        x = style_block(x, \"spatial_res\")\n        return x\n\n\n# the temporal (time-mix) stack below uses the stock BasicTransformerBlock rather\n# than the Re* variant, since these blocks are not style-patched\nfrom comfy.ldm.modules.attention import BasicTransformerBlock\n\n\nclass SpatialVideoTransformer(ReSpatialTransformer):\n    def __init__(\n        self,\n        in_channels,\n        n_heads,\n        d_head,\n        depth=1,\n        dropout=0.0,\n        use_linear=False,\n        context_dim=None,\n        use_spatial_context=False,\n        timesteps=None,\n        merge_strategy: str = \"fixed\",\n        merge_factor: float = 0.5,\n        time_context_dim=None,\n        ff_in=False,\n        checkpoint=False,\n        time_depth=1,\n        disable_self_attn=False,\n        disable_temporal_crossattention=False,\n        max_time_embed_period: int = 10000,\n        attn_precision=None,\n        dtype=None, device=None, operations=ops\n    ):\n        super().__init__(\n            in_channels,\n            n_heads,\n            d_head,\n            depth=depth,\n            dropout=dropout,\n            use_checkpoint=checkpoint,\n            context_dim=context_dim,\n            use_linear=use_linear,\n            disable_self_attn=disable_self_attn,\n            attn_precision=attn_precision,\n            dtype=dtype, device=device, operations=operations\n        )\n        self.time_depth = time_depth\n        self.depth = depth\n        self.max_time_embed_period = max_time_embed_period\n\n        time_mix_d_head = d_head\n        n_time_mix_heads = n_heads\n\n        time_mix_inner_dim = int(time_mix_d_head * n_time_mix_heads)\n\n        inner_dim = n_heads * d_head\n        if use_spatial_context:\n            time_context_dim = context_dim\n\n        self.time_stack = nn.ModuleList(\n            [\n                BasicTransformerBlock(\n                    inner_dim,\n                    n_time_mix_heads,\n                    time_mix_d_head,\n                    dropout=dropout,\n                    context_dim=time_context_dim,\n     
               # timesteps=timesteps,\n                    checkpoint=checkpoint,\n                    ff_in=ff_in,\n                    inner_dim=time_mix_inner_dim,\n                    disable_self_attn=disable_self_attn,\n                    disable_temporal_crossattention=disable_temporal_crossattention,\n                    attn_precision=attn_precision,\n                    dtype=dtype, device=device, operations=operations\n                )\n                for _ in range(self.depth)\n            ]\n        )\n\n        assert len(self.time_stack) == len(self.transformer_blocks)\n\n        self.use_spatial_context = use_spatial_context\n        self.in_channels = in_channels\n\n        time_embed_dim = self.in_channels * 4\n        self.time_pos_embed = nn.Sequential(\n            operations.Linear(self.in_channels, time_embed_dim, dtype=dtype, device=device),\n            nn.SiLU(),\n            operations.Linear(time_embed_dim, self.in_channels, dtype=dtype, device=device),\n        )\n\n        self.time_mixer = AlphaBlender(\n            alpha=merge_factor, merge_strategy=merge_strategy\n        )\n\n    def forward(\n        self,\n        x: torch.Tensor,\n        context: Optional[torch.Tensor] = None,\n        time_context: Optional[torch.Tensor] = None,\n        timesteps: Optional[int] = None,\n        image_only_indicator: Optional[torch.Tensor] = None,\n        transformer_options={}\n    ) -> torch.Tensor:\n        _, _, h, w = x.shape\n        transformer_options[\"activations_shape\"] = list(x.shape)\n        x_in = x\n        spatial_context = None\n        if exists(context):\n            spatial_context = context\n\n        if self.use_spatial_context:\n            assert (\n                context.ndim == 3\n            ), f\"n dims of spatial context should be 3 but are {context.ndim}\"\n\n            if time_context is None:\n                time_context = context\n            time_context_first_timestep = time_context[::timesteps]\n            time_context = repeat(\n                time_context_first_timestep, \"b ... -> (b n) ...\", n=h * w\n            )\n        elif time_context is not None and not self.use_spatial_context:\n            time_context = repeat(time_context, \"b ... 
-> (b n) ...\", n=h * w)\n            if time_context.ndim == 2:\n                time_context = rearrange(time_context, \"b c -> b 1 c\")\n\n        x = self.norm(x)\n        if not self.use_linear:\n            x = self.proj_in(x)\n        x = rearrange(x, \"b c h w -> b (h w) c\")\n        if self.use_linear:\n            x = self.proj_in(x)\n\n        num_frames = torch.arange(timesteps, device=x.device)\n        num_frames = repeat(num_frames, \"t -> b t\", b=x.shape[0] // timesteps)\n        num_frames = rearrange(num_frames, \"b t -> (b t)\")\n        t_emb = timestep_embedding(num_frames, self.in_channels, repeat_only=False, max_period=self.max_time_embed_period).to(x.dtype)\n        emb = self.time_pos_embed(t_emb)\n        emb = emb[:, None, :]\n\n        for it_, (block, mix_block) in enumerate(\n            zip(self.transformer_blocks, self.time_stack)\n        ):\n            transformer_options[\"block_index\"] = it_\n            x = block(\n                x,\n                context=spatial_context,\n                transformer_options=transformer_options,\n            )\n\n            x_mix = x\n            x_mix = x_mix + emb\n\n            B, S, C = x_mix.shape\n            x_mix = rearrange(x_mix, \"(b t) s c -> (b s) t c\", t=timesteps)\n            x_mix = mix_block(x_mix, context=time_context) #TODO: transformer_options\n            x_mix = rearrange(\n                x_mix, \"(b s) t c -> (b t) s c\", s=S, b=B // timesteps, c=C, t=timesteps\n            )\n\n            x = self.time_mixer(x_spatial=x, x_temporal=x_mix, image_only_indicator=image_only_indicator)\n\n        if self.use_linear:\n            x = self.proj_out(x)\n        x = rearrange(x, \"b (h w) c -> b c h w\", h=h, w=w)\n        if not self.use_linear:\n            x = self.proj_out(x)\n        out = x + x_in\n        return out\n\n\n"
  },
  {
    "path": "sd/openaimodel.py",
    "content": "from abc import abstractmethod\nimport torch\nimport torch as th\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom einops import rearrange\nimport logging\nimport copy\n\nfrom ..helper import ExtraOptions\n\nfrom comfy.ldm.modules.diffusionmodules.util import (\n    checkpoint,\n    avg_pool_nd,\n    timestep_embedding,\n    AlphaBlender,\n)\nfrom comfy.ldm.modules.attention import SpatialTransformer, SpatialVideoTransformer, default\nfrom .attention import ReSpatialTransformer, ReBasicTransformerBlock\nfrom comfy.ldm.util import exists\nimport comfy.patcher_extension\nimport comfy.ops\nops = comfy.ops.disable_weight_init\n\nfrom comfy.ldm.modules.diffusionmodules.openaimodel import TimestepBlock, TimestepEmbedSequential, Upsample, Downsample, ResBlock, VideoResBlock\nfrom ..latents import slerp_tensor, interpolate_spd, tile_latent, untile_latent, gaussian_blur_2d, median_blur_2d\n\nfrom ..style_transfer import apply_scattersort_masked, apply_scattersort_tiled, adain_seq_inplace, adain_patchwise_row_batch_med, adain_patchwise_row_batch, apply_scattersort, apply_scattersort_spatial, StyleMMDiT_Model, StyleUNet_Model\n\n#This is needed because accelerate makes a copy of transformer_options which breaks \"transformer_index\"\ndef forward_timestep_embed(ts, x, emb, context=None, transformer_options={}, output_shape=None, time_context=None, num_video_frames=None, image_only_indicator=None, style_block=None):\n    for layer in ts:\n        if isinstance(layer, VideoResBlock): # UNUSED\n            x = layer(x, emb, num_video_frames, image_only_indicator)\n        elif isinstance(layer, TimestepBlock):  # ResBlock(TimestepBlock)\n            x = layer(x, emb, style_block.res_block)\n            x = style_block(x, \"res\")\n        elif isinstance(layer, SpatialVideoTransformer):   # UNUSED\n            x = layer(x, context, time_context, num_video_frames, image_only_indicator, transformer_options)\n            if \"transformer_index\" in transformer_options:\n                transformer_options[\"transformer_index\"] += 1\n        elif isinstance(layer, ReSpatialTransformer):          # USED\n            x = layer(x, context, style_block.spatial_block, transformer_options,)\n            x = style_block(x, \"spatial\")\n            if \"transformer_index\" in transformer_options:\n                transformer_options[\"transformer_index\"] += 1\n        elif isinstance(layer, Upsample):\n            x = layer(x, output_shape=output_shape)\n            x = style_block(x, \"resample\")\n        elif isinstance(layer, Downsample):\n            x = layer(x)\n            x = style_block(x, \"resample\")\n        else:\n            if \"patches\" in transformer_options and \"forward_timestep_embed_patch\" in transformer_options[\"patches\"]:\n                found_patched = False\n                for class_type, handler in transformer_options[\"patches\"][\"forward_timestep_embed_patch\"]:\n                    if isinstance(layer, class_type):\n                        x = handler(layer, x, emb, context, transformer_options, output_shape, time_context, num_video_frames, image_only_indicator)\n                        found_patched = True\n                        break\n                if found_patched:\n                    continue\n            x = layer(x)\n    return x\n\n\n\n\nclass ReResBlock(TimestepBlock):\n    \"\"\"\n    A residual block that can optionally change the number of channels.\n    :param channels: the number of input channels.\n    :param emb_channels: the 
number of timestep embedding channels.\n    :param dropout: the rate of dropout.\n    :param out_channels: if specified, the number of out channels.\n    :param use_conv: if True and out_channels is specified, use a spatial\n        convolution instead of a smaller 1x1 convolution to change the\n        channels in the skip connection.\n    :param dims: determines if the signal is 1D, 2D, or 3D.\n    :param use_checkpoint: if True, use gradient checkpointing on this module.\n    :param up: if True, use this block for upsampling.\n    :param down: if True, use this block for downsampling.\n    \"\"\"\n\n    def __init__(\n        self,\n        channels,\n        emb_channels,\n        dropout,\n        out_channels=None,\n        use_conv=False,\n        use_scale_shift_norm=False,\n        dims=2,\n        use_checkpoint=False,\n        up=False,\n        down=False,\n        kernel_size=3,\n        exchange_temb_dims=False,\n        skip_t_emb=False,\n        dtype=None,\n        device=None,\n        operations=ops\n    ):\n        super().__init__()\n        self.channels = channels\n        self.emb_channels = emb_channels\n        self.dropout = dropout\n        self.out_channels = out_channels or channels\n        self.use_conv = use_conv\n        self.use_checkpoint = use_checkpoint\n        self.use_scale_shift_norm = use_scale_shift_norm\n        self.exchange_temb_dims = exchange_temb_dims\n\n        if isinstance(kernel_size, list):\n            padding = [k // 2 for k in kernel_size]\n        else:\n            padding = kernel_size // 2\n\n        self.in_layers = nn.Sequential(\n            operations.GroupNorm(32, channels, dtype=dtype, device=device),\n            nn.SiLU(),\n            operations.conv_nd(dims, channels, self.out_channels, kernel_size, padding=padding, dtype=dtype, device=device),\n        )\n\n        self.updown = up or down\n\n        if up:\n            self.h_upd = Upsample(channels, False, dims, dtype=dtype, device=device)\n            self.x_upd = Upsample(channels, False, dims, dtype=dtype, device=device)\n        elif down:\n            self.h_upd = Downsample(channels, False, dims, dtype=dtype, device=device)\n            self.x_upd = Downsample(channels, False, dims, dtype=dtype, device=device)\n        else:\n            self.h_upd = self.x_upd = nn.Identity()\n\n        self.skip_t_emb = skip_t_emb\n        if self.skip_t_emb:\n            self.emb_layers = None\n            self.exchange_temb_dims = False\n        else:\n            self.emb_layers = nn.Sequential(\n                nn.SiLU(),\n                operations.Linear(\n                    emb_channels,\n                    2 * self.out_channels if use_scale_shift_norm else self.out_channels, dtype=dtype, device=device\n                ),\n            )\n        self.out_layers = nn.Sequential(\n            operations.GroupNorm(32, self.out_channels, dtype=dtype, device=device),\n            nn.SiLU(),\n            nn.Dropout(p=dropout),\n            operations.conv_nd(dims, self.out_channels, self.out_channels, kernel_size, padding=padding, dtype=dtype, device=device)\n            ,\n        )\n\n        if self.out_channels == channels:\n            self.skip_connection = nn.Identity()\n        elif use_conv:\n            self.skip_connection = operations.conv_nd(\n                dims, channels, self.out_channels, kernel_size, padding=padding, dtype=dtype, device=device\n            )\n        else:\n            self.skip_connection = operations.conv_nd(dims, channels, 
self.out_channels, 1, dtype=dtype, device=device)\n\n    def forward(self, x, emb, style_block=None):\n        \"\"\"\n        Apply the block to a Tensor, conditioned on a timestep embedding.\n        :param x: an [N x C x ...] Tensor of features.\n        :param emb: an [N x emb_channels] Tensor of timestep embeddings.\n        :return: an [N x C x ...] Tensor of outputs.\n        \"\"\"\n        return checkpoint(\n            self._forward, (x, emb, style_block), self.parameters(), self.use_checkpoint\n        )\n\n\n    def _forward(self, x, emb, style_block=None):\n        #if self.updown: # not used with sdxl?\n        #    in_rest, in_conv = self.in_layers[:-1], self.in_layers[-1]\n        #    h = in_rest(x)\n        #    h = self.h_upd(h)\n        #    x = self.x_upd(x)\n        #    h = in_conv(h)\n        #else:\n        #    h = self.in_layers(x)\n        \n        h = self.in_layers[0](x)\n        h = style_block(h, \"in_norm\")\n        \n        h = self.in_layers[1](h)\n        h = style_block(h, \"in_silu\")\n        \n        h = self.in_layers[2](h)\n        h = style_block(h, \"in_conv\")\n        \n\n        emb_out = None\n        if not self.skip_t_emb:\n            #emb_out = self.emb_layers(emb).type(h.dtype)\n            emb_out = self.emb_layers[0](emb).type(h.dtype)\n            emb_out = style_block(emb_out, \"emb_silu\")\n            \n            emb_out = self.emb_layers[1](emb_out)\n            emb_out = style_block(emb_out, \"emb_linear\")\n            \n            while len(emb_out.shape) < len(h.shape):\n                emb_out = emb_out[..., None]\n                \n        if self.use_scale_shift_norm: # not used with sdxl?\n            out_norm, out_rest = self.out_layers[0], self.out_layers[1:]\n            h = out_norm(h)\n            if emb_out is not None:\n                scale, shift = th.chunk(emb_out, 2, dim=1)\n                h *= (1 + scale)\n                h += shift\n            h = out_rest(h)\n        else:\n            if emb_out is not None:\n                if self.exchange_temb_dims:\n                    emb_out = emb_out.movedim(1, 2)\n                h = h + emb_out\n                h = style_block(h, \"emb_res\")\n            #h = self.out_layers(h)\n            h = self.out_layers[0](h)\n            h = style_block(h, \"out_norm\")\n            \n            h = self.out_layers[1](h)\n            h = style_block(h, \"out_silu\")\n            \n            h = self.out_layers[3](h) # [2] is dropout\n            h = style_block(h, \"out_conv\")\n            \n        res_out = self.skip_connection(x) + h\n        res_out = style_block(res_out, \"residual\")\n        return res_out   \n        #return self.skip_connection(x) + h\n\n\n\n\nclass Timestep(nn.Module):\n    def __init__(self, dim):\n        super().__init__()\n        self.dim = dim\n\n    def forward(self, t):\n        return timestep_embedding(t, self.dim)\n\ndef apply_control(h, control, name):\n    if control is not None and name in control and len(control[name]) > 0:\n        ctrl = control[name].pop()\n        if ctrl is not None:\n            try:\n                h += ctrl\n            except:\n                logging.warning(\"warning control could not be applied {} {}\".format(h.shape, ctrl.shape))\n    return h\n\n\nclass ReUNetModel(nn.Module):\n    \"\"\"\n    The full UNet model with attention and timestep embedding.\n    :param in_channels: channels in the input Tensor.\n    :param model_channels: base channel count for the model.\n    :param 
out_channels: channels in the output Tensor.\n    :param num_res_blocks: number of residual blocks per downsample.\n    :param dropout: the dropout probability.\n    :param channel_mult: channel multiplier for each level of the UNet.\n    :param conv_resample: if True, use learned convolutions for upsampling and\n        downsampling.\n    :param dims: determines if the signal is 1D, 2D, or 3D.\n    :param num_classes: if specified (as an int), then this model will be\n        class-conditional with `num_classes` classes.\n    :param use_checkpoint: use gradient checkpointing to reduce memory usage.\n    :param num_heads: the number of attention heads in each attention layer.\n    :param num_heads_channels: if specified, ignore num_heads and instead use\n                               a fixed channel width per attention head.\n    :param num_heads_upsample: works with num_heads to set a different number\n                               of heads for upsampling. Deprecated.\n    :param use_scale_shift_norm: use a FiLM-like conditioning mechanism.\n    :param resblock_updown: use residual blocks for up/downsampling.\n    :param use_new_attention_order: use a different attention pattern for potentially\n                                    increased efficiency.\n    \"\"\"\n\n    def __init__(\n        self,\n        image_size,\n        in_channels,\n        model_channels,\n        out_channels,\n        num_res_blocks,\n        dropout                         = 0,\n        channel_mult                    = (1, 2, 4, 8),\n        conv_resample                   = True,\n        dims                            = 2,\n        num_classes                     = None,\n        use_checkpoint                  = False,\n        dtype                           = th.float32,\n        num_heads                       = -1,\n        num_head_channels               = -1,\n        num_heads_upsample              = -1,\n        use_scale_shift_norm            = False,\n        resblock_updown                 = False,\n        use_new_attention_order         = False,\n        use_spatial_transformer         = False,    # custom transformer support\n        transformer_depth               = 1,              # custom transformer support\n        context_dim                     = None,                 # custom transformer support\n        n_embed                         = None,                     # custom support for prediction of discrete ids into codebook of first stage vq model\n        legacy                          = True,\n        disable_self_attentions         = None,\n        num_attention_blocks            = None,\n        disable_middle_self_attn        = False,\n        use_linear_in_transformer       = False,\n        adm_in_channels                 = None,\n        transformer_depth_middle        = None,\n        transformer_depth_output        = None,\n        use_temporal_resblock           = False,\n        use_temporal_attention          = False,\n        time_context_dim                = None,\n        extra_ff_mix_layer              = False,\n        use_spatial_context             = False,\n        merge_strategy                  = None,\n        merge_factor                    = 0.0,\n        video_kernel_size               = None,\n        disable_temporal_crossattention = False,\n        max_ddpm_temb_period            = 10000,\n        attn_precision                  = None,\n        device                          = None,\n        operations                      = ops,\n    ):\n        
super().__init__()\n\n        if context_dim is not None:\n            assert use_spatial_transformer, 'Fool!! You forgot to use the spatial transformer for your cross-attention conditioning...'\n            # from omegaconf.listconfig import ListConfig\n            # if type(context_dim) == ListConfig:\n            #     context_dim = list(context_dim)\n\n        if num_heads_upsample == -1:\n            num_heads_upsample = num_heads\n\n        if num_heads == -1:\n            assert num_head_channels != -1, 'Either num_heads or num_head_channels has to be set'\n\n        if num_head_channels == -1:\n            assert num_heads != -1, 'Either num_heads or num_head_channels has to be set'\n\n        self.in_channels = in_channels\n        self.model_channels = model_channels\n        self.out_channels = out_channels\n\n        if isinstance(num_res_blocks, int):\n            self.num_res_blocks = len(channel_mult) * [num_res_blocks]\n        else:\n            if len(num_res_blocks) != len(channel_mult):\n                raise ValueError(\"provide num_res_blocks either as an int (globally constant) or \"\n                                 \"as a list/tuple (per-level) with the same length as channel_mult\")\n            self.num_res_blocks = num_res_blocks\n\n        if disable_self_attentions is not None:\n            # should be a list of booleans, indicating whether to disable self-attention in TransformerBlocks or not\n            assert len(disable_self_attentions) == len(channel_mult)\n        if num_attention_blocks is not None:\n            assert len(num_attention_blocks) == len(self.num_res_blocks)\n\n        transformer_depth = transformer_depth[:]\n        transformer_depth_output = transformer_depth_output[:]\n\n        self.dropout                = dropout\n        self.channel_mult           = channel_mult\n        self.conv_resample          = conv_resample\n        self.num_classes            = num_classes\n        self.use_checkpoint         = use_checkpoint\n        self.dtype                  = dtype\n        self.num_heads              = num_heads\n        self.num_head_channels      = num_head_channels\n        self.num_heads_upsample     = num_heads_upsample\n        self.use_temporal_resblocks = use_temporal_resblock\n        self.predict_codebook_ids   = n_embed is not None\n\n        self.default_num_video_frames = None\n\n        time_embed_dim = model_channels * 4\n        self.time_embed = nn.Sequential(\n            operations.Linear(model_channels, time_embed_dim, dtype=self.dtype, device=device),\n            nn.SiLU(),\n            operations.Linear(time_embed_dim, time_embed_dim, dtype=self.dtype, device=device),\n        )\n\n        if self.num_classes is not None:\n            if isinstance(self.num_classes, int):\n                self.label_emb = nn.Embedding(num_classes, time_embed_dim, dtype=self.dtype, device=device)\n            elif self.num_classes == \"continuous\":\n                logging.debug(\"setting up linear c_adm embedding layer\")\n                self.label_emb = nn.Linear(1, time_embed_dim)\n            elif self.num_classes == \"sequential\":\n                assert adm_in_channels is not None\n                self.label_emb = nn.Sequential(\n                    nn.Sequential(\n                        operations.Linear(adm_in_channels, time_embed_dim, dtype=self.dtype, device=device),\n                        nn.SiLU(),\n                        operations.Linear(time_embed_dim, time_embed_dim, dtype=self.dtype, device=device),\n    
                )\n                )\n            else:\n                raise ValueError()\n\n        self.input_blocks = nn.ModuleList(\n            [\n                TimestepEmbedSequential(\n                    operations.conv_nd(dims, in_channels, model_channels, 3, padding=1, dtype=self.dtype, device=device)\n                )\n            ]\n        )\n        self._feature_size = model_channels\n        input_block_chans = [model_channels]\n        ch = model_channels\n        ds = 1\n\n        def get_attention_layer(\n            ch,\n            num_heads,\n            dim_head,\n            depth=1,\n            context_dim=None,\n            use_checkpoint=False,\n            disable_self_attn=False,\n        ):\n            if use_temporal_attention:\n                return SpatialVideoTransformer(\n                    ch,\n                    num_heads,\n                    dim_head,\n                    depth                           = depth,\n                    context_dim                     = context_dim,\n                    time_context_dim                = time_context_dim,\n                    dropout                         = dropout,\n                    ff_in                           = extra_ff_mix_layer,\n                    use_spatial_context             = use_spatial_context,\n                    merge_strategy                  = merge_strategy,\n                    merge_factor                    = merge_factor,\n                    checkpoint                      = use_checkpoint,\n                    use_linear                      = use_linear_in_transformer,\n                    disable_self_attn               = disable_self_attn,\n                    disable_temporal_crossattention = disable_temporal_crossattention,\n                    max_time_embed_period           = max_ddpm_temb_period,\n                    attn_precision                  = attn_precision,\n                    dtype=self.dtype, device=device, operations=operations,\n                )\n            else:\n                # ReSpatialTransformer carries the style_block hooks that forward_timestep_embed dispatches on\n                return ReSpatialTransformer(\n                                ch, num_heads, dim_head, depth=depth, context_dim=context_dim,\n                                disable_self_attn=disable_self_attn, use_linear=use_linear_in_transformer,\n                                use_checkpoint=use_checkpoint, attn_precision=attn_precision, dtype=self.dtype, device=device, operations=operations\n                            )\n\n        def get_resblock(\n            merge_factor,\n            merge_strategy,\n            video_kernel_size,\n            ch,\n            time_embed_dim,\n            dropout,\n            out_channels,\n            dims,\n            use_checkpoint,\n            use_scale_shift_norm,\n            down       = False,\n            up         = False,\n            dtype      = None,\n            device     = None,\n            operations = ops\n        ):\n            if self.use_temporal_resblocks:\n                return VideoResBlock(\n                    merge_factor         = merge_factor,\n                    merge_strategy       = merge_strategy,\n                    video_kernel_size    = video_kernel_size,\n                    channels             = ch,\n                    emb_channels         = time_embed_dim,\n                    dropout              = dropout,\n                    out_channels         = out_channels,\n                    dims                 = dims,\n                    use_checkpoint       = use_checkpoint,\n               
     use_scale_shift_norm = use_scale_shift_norm,\n                    down                 = down,\n                    up                   = up,\n                    dtype=dtype, device=device, operations=operations,\n                )\n            else:\n                # ReResBlock accepts the style_block argument that forward_timestep_embed passes to TimestepBlock layers\n                return ReResBlock(\n                    channels             = ch,\n                    emb_channels         = time_embed_dim,\n                    dropout              = dropout,\n                    out_channels         = out_channels,\n                    use_checkpoint       = use_checkpoint,\n                    dims                 = dims,\n                    use_scale_shift_norm = use_scale_shift_norm,\n                    down                 = down,\n                    up                   = up,\n                    dtype=dtype, device=device, operations=operations,\n                )\n\n        for level, mult in enumerate(channel_mult):\n            for nr in range(self.num_res_blocks[level]):\n                layers = [\n                    get_resblock(\n                        merge_factor         = merge_factor,\n                        merge_strategy       = merge_strategy,\n                        video_kernel_size    = video_kernel_size,\n                        ch                   = ch,\n                        time_embed_dim       = time_embed_dim,\n                        dropout              = dropout,\n                        out_channels         = mult * model_channels,\n                        dims                 = dims,\n                        use_checkpoint       = use_checkpoint,\n                        use_scale_shift_norm = use_scale_shift_norm,\n                        dtype=self.dtype, device=device, operations=operations,\n                    )\n                ]\n                ch = mult * model_channels\n                num_transformers = transformer_depth.pop(0)\n                if num_transformers > 0:\n                    if num_head_channels == -1:\n                        dim_head = ch // num_heads\n                    else:\n                        num_heads = ch // num_head_channels\n                        dim_head = num_head_channels\n                    if legacy:\n                        #num_heads = 1\n                        dim_head = ch // num_heads if use_spatial_transformer else num_head_channels\n                    if exists(disable_self_attentions):\n                        disabled_sa = disable_self_attentions[level]\n                    else:\n                        disabled_sa = False\n\n                    if not exists(num_attention_blocks) or nr < num_attention_blocks[level]:\n                        layers.append(get_attention_layer(\n                                ch, num_heads, dim_head, depth=num_transformers, context_dim=context_dim,\n                                disable_self_attn=disabled_sa, use_checkpoint=use_checkpoint)\n                        )\n                self.input_blocks.append(TimestepEmbedSequential(*layers))\n                self._feature_size += ch\n                input_block_chans.append(ch)\n            if level != len(channel_mult) - 1:\n                out_ch = ch\n                self.input_blocks.append(\n                    TimestepEmbedSequential(\n                        get_resblock(\n                            merge_factor         = merge_factor,\n                            merge_strategy       = merge_strategy,\n                            video_kernel_size    = video_kernel_size,\n                            ch  
                 = ch,\n                            time_embed_dim       = time_embed_dim,\n                            dropout              = dropout,\n                            out_channels         = out_ch,\n                            dims                 = dims,\n                            use_checkpoint       = use_checkpoint,\n                            use_scale_shift_norm = use_scale_shift_norm,\n                            down                 = True,\n                            dtype=self.dtype, device=device, operations=operations,\n                        )\n                        if resblock_updown\n                        else Downsample(ch, conv_resample, dims=dims, out_channels=out_ch, dtype=self.dtype, device=device, operations=operations)\n                    )\n                )\n                ch = out_ch\n                input_block_chans.append(ch)\n                ds *= 2\n                self._feature_size += ch\n\n        if num_head_channels == -1:\n            dim_head = ch // num_heads\n        else:\n            num_heads = ch // num_head_channels\n            dim_head = num_head_channels\n        if legacy:\n            #num_heads = 1\n            dim_head = ch // num_heads if use_spatial_transformer else num_head_channels\n        mid_block = [\n            get_resblock(\n                merge_factor         = merge_factor,\n                merge_strategy       = merge_strategy,\n                video_kernel_size    = video_kernel_size,\n                ch                   = ch,\n                time_embed_dim       = time_embed_dim,\n                dropout              = dropout,\n                out_channels         = None,\n                dims                 = dims,\n                use_checkpoint       = use_checkpoint,\n                use_scale_shift_norm = use_scale_shift_norm,\n                dtype=self.dtype, device=device, operations=operations,\n            )]\n\n        self.middle_block = None\n        if transformer_depth_middle >= -1:\n            if transformer_depth_middle >= 0:\n                mid_block += [get_attention_layer(  # always uses a self-attn\n                                ch, num_heads, dim_head, depth=transformer_depth_middle, context_dim=context_dim,\n                                disable_self_attn=disable_middle_self_attn, use_checkpoint=use_checkpoint\n                            ),\n                get_resblock(\n                    merge_factor         = merge_factor,\n                    merge_strategy       = merge_strategy,\n                    video_kernel_size    = video_kernel_size,\n                    ch                   = ch,\n                    time_embed_dim       = time_embed_dim,\n                    dropout              = dropout,\n                    out_channels         = None,\n                    dims                 = dims,\n                    use_checkpoint       = use_checkpoint,\n                    use_scale_shift_norm = use_scale_shift_norm,\n                    dtype=self.dtype, device=device, operations=operations,\n                )]\n            self.middle_block = TimestepEmbedSequential(*mid_block)\n        self._feature_size += ch\n\n        self.output_blocks = nn.ModuleList([])\n        for level, mult in list(enumerate(channel_mult))[::-1]:\n            for i in range(self.num_res_blocks[level] + 1):\n                ich = input_block_chans.pop()\n                layers = [\n                    get_resblock(\n                        merge_factor         = 
merge_factor,\n                        merge_strategy       = merge_strategy,\n                        video_kernel_size    = video_kernel_size,\n                        ch                   = ch + ich,\n                        time_embed_dim       = time_embed_dim,\n                        dropout              = dropout,\n                        out_channels         = model_channels * mult,\n                        dims                 = dims,\n                        use_checkpoint       = use_checkpoint,\n                        use_scale_shift_norm = use_scale_shift_norm,\n                        dtype=self.dtype, device=device, operations=operations,\n                    )\n                ]\n                ch = model_channels * mult\n                num_transformers = transformer_depth_output.pop()\n                if num_transformers > 0:\n                    if num_head_channels == -1:\n                        dim_head = ch // num_heads\n                    else:\n                        num_heads = ch // num_head_channels\n                        dim_head = num_head_channels\n                    if legacy:\n                        #num_heads = 1\n                        dim_head = ch // num_heads if use_spatial_transformer else num_head_channels\n                    if exists(disable_self_attentions):\n                        disabled_sa = disable_self_attentions[level]\n                    else:\n                        disabled_sa = False\n\n                    if not exists(num_attention_blocks) or i < num_attention_blocks[level]:\n                        layers.append(\n                            get_attention_layer(\n                                ch, num_heads, dim_head, depth=num_transformers, context_dim=context_dim,\n                                disable_self_attn=disabled_sa, use_checkpoint=use_checkpoint\n                            )\n                        )\n                if level and i == self.num_res_blocks[level]:\n                    out_ch = ch\n                    layers.append(\n                        get_resblock(\n                            merge_factor         = merge_factor,\n                            merge_strategy       = merge_strategy,\n                            video_kernel_size    = video_kernel_size,\n                            ch                   = ch,\n                            time_embed_dim       = time_embed_dim,\n                            dropout              = dropout,\n                            out_channels         = out_ch,\n                            dims                 = dims,\n                            use_checkpoint       = use_checkpoint,\n                            use_scale_shift_norm = use_scale_shift_norm,\n                            up                   = True,\n                            dtype=self.dtype, device=device, operations=operations,\n                        )\n                        if resblock_updown\n                        else Upsample(ch, conv_resample, dims=dims, out_channels=out_ch, dtype=self.dtype, device=device, operations=operations)\n                    )\n                    ds //= 2\n                self.output_blocks.append(TimestepEmbedSequential(*layers))\n                self._feature_size += ch\n\n        self.out = nn.Sequential(\n            operations.GroupNorm(32, ch, dtype=self.dtype, device=device),\n            nn.SiLU(),\n            operations.conv_nd(dims, model_channels, out_channels, 3, padding=1, dtype=self.dtype, device=device),\n        )\n        if 
self.predict_codebook_ids:\n            self.id_predictor = nn.Sequential(\n                operations.GroupNorm(32, ch, dtype=self.dtype, device=device),\n                operations.conv_nd(dims, model_channels, n_embed, 1, dtype=self.dtype, device=device),\n                #nn.LogSoftmax(dim=1)  # change to cross_entropy and produce non-normalized logits\n            )\n\n\n\n    def forward(self, x, timesteps=None, context=None, y=None, control=None, transformer_options={}, **kwargs):\n        return comfy.patcher_extension.WrapperExecutor.new_class_executor(\n            self._forward,\n            self,\n            comfy.patcher_extension.get_all_wrappers(comfy.patcher_extension.WrappersMP.DIFFUSION_MODEL, transformer_options)\n        ).execute(x, timesteps, context, y, control, transformer_options, **kwargs)\n\n    def _forward(self, x, timesteps=None, context=None, y=None, control=None, transformer_options={}, **kwargs):\n        \"\"\"\n        Apply the model to an input batch.\n        :param x: an [N x C x ...] Tensor of inputs.\n        :param timesteps: a 1-D batch of timesteps.\n        :param context: conditioning plugged in via crossattn\n        :param y: an [N] Tensor of labels, if class-conditional.\n        :return: an [N x C x ...] Tensor of outputs.\n        \"\"\"\n        h_len, w_len = x.shape[-2:]\n        img_len = h_len * w_len\n        transformer_options[\"original_shape\"] = list(x.shape)\n        transformer_options[\"transformer_index\"] = 0\n        transformer_patches = transformer_options.get(\"patches\", {})\n        SIGMA = transformer_options['sigmas'].to(x) # timestep[0].unsqueeze(0) #/ 1000\n\n        img_slice = slice(None, -1) #slice(None, img_len)   # for the sake of cross attn... :-1\n        txt_slice = slice(None, -1)\n        \n        EO = transformer_options.get(\"ExtraOptions\", ExtraOptions(\"\"))\n        if EO is not None:\n            EO.mute = True\n\n        if EO(\"zero_heads\"):\n            HEADS = 0\n        else:\n            HEADS = 10 # self.input_blocks[4][1].transformer_blocks[0].attn2.heads # HEADS = 10\n\n        StyleMMDiT = transformer_options.get('StyleMMDiT', StyleUNet_Model())\n        StyleMMDiT.set_len(h_len, w_len, img_slice, txt_slice, HEADS=HEADS)\n        StyleMMDiT.Retrojector = self.Retrojector if hasattr(self, \"Retrojector\") else None\n        transformer_options['StyleMMDiT'] = None\n        \n        x_tmp = transformer_options.get(\"x_tmp\")\n        if x_tmp is not None:\n            x_tmp = x_tmp.clone() / ((SIGMA ** 2 + 1) ** 0.5)\n            x_tmp = x_tmp.expand_as(x) # (x.shape[0], -1, -1, -1) # .clone().to(x)\n        \n        y0_style, img_y0_style = None, None\n\n        \n        x_orig, timesteps_orig, y_orig, context_orig = clone_inputs(x, timesteps, y, context)\n        h_orig = x_orig.clone()\n\n        weight    = -1 * transformer_options.get(\"regional_conditioning_weight\", 0.0)\n        floor     = -1 * transformer_options.get(\"regional_conditioning_floor\",  0.0)\n        \n        #floor     = min(floor, weight)\n        mask_zero, mask_up_zero, mask_down_zero, mask_down2_zero = None, None, None, None\n        txt_len = context.shape[1] # mask_obj[0].text_len\n        \n\n        z_ = transformer_options.get(\"z_\")   # initial noise and/or image+noise from start of rk_sampler_beta() \n        rk_row = transformer_options.get(\"row\") # for \"smart noise\"\n        if z_ is not None:\n            x_init = z_[rk_row].to(x)\n        elif 'x_init' in transformer_options:\n            x_init 
= transformer_options.get('x_init').to(x)\n\n        # recon loop to extract exact noise pred for scattersort guide assembly\n        RECON_MODE = StyleMMDiT.noise_mode == \"recon\"\n        recon_iterations = 2 if RECON_MODE else 1\n        for recon_iter in range(recon_iterations):\n            y0_style = StyleMMDiT.guides\n            y0_style_active = isinstance(y0_style, torch.Tensor)\n            \n            RECON_MODE = StyleMMDiT.noise_mode == \"recon\" and recon_iter == 0\n            \n            ISIGMA = SIGMA\n            if StyleMMDiT.noise_mode == \"recon\" and recon_iter == 1: \n                ISIGMA = SIGMA * EO(\"ISIGMA_FACTOR\", 1.0)\n                \n                model_sampling = transformer_options.get('model_sampling')\n                timesteps_orig = model_sampling.timestep(ISIGMA).expand_as(timesteps_orig)\n                \n                x_recon = x_tmp if x_tmp is not None else x_orig\n                #noise_prediction = x_recon + (1-SIGMA.to(x_recon)) * eps.to(x_recon)\n                noise_prediction = eps.to(x_recon)\n                denoised = x_recon * ((SIGMA.to(x_recon) ** 2 + 1) ** 0.5)   -   SIGMA.to(x_recon) * eps.to(x_recon)\n                \n                denoised = StyleMMDiT.apply_recon_lure(denoised, y0_style.to(x_recon))   # .to(denoised)\n\n                new_x = (denoised + ISIGMA.to(x_recon) * noise_prediction) / ((ISIGMA.to(x_recon) ** 2 + 1) ** 0.5)\n                h_orig = new_x.clone().to(x)\n                x_init = noise_prediction\n            elif StyleMMDiT.noise_mode == \"bonanza\":\n                x_init = torch.randn_like(x_init)\n\n            if y0_style_active:\n                if y0_style.sum() == 0.0 and y0_style.std() == 0.0:\n                    y0_style_noised = x.clone()\n                else:\n                    y0_style_noised = (y0_style + ISIGMA.to(y0_style) * x_init.expand_as(x).to(y0_style)) / ((ISIGMA.to(y0_style) ** 2 + 1) ** 0.5)    #x_init.expand(x.shape[0],-1,-1,-1).to(y0_style))\n\n            out_list = []\n            for cond_iter in range(len(transformer_options['cond_or_uncond'])):\n                UNCOND = transformer_options['cond_or_uncond'][cond_iter] == 1\n                \n                bsz_style = y0_style.shape[0] if y0_style_active else 0\n                bsz       = 1 if RECON_MODE else bsz_style + 1\n                \n                h, timesteps, context = clone_inputs(h_orig[cond_iter].unsqueeze(0), timesteps_orig[cond_iter].unsqueeze(0), context_orig[cond_iter].unsqueeze(0))\n                y = y_orig[cond_iter].unsqueeze(0).clone() if y_orig is not None else None\n                \n\n                mask, mask_up, mask_down, mask_down2 = None, None, None, None\n                if not UNCOND and 'AttnMask' in transformer_options: # and weight != 0:\n                    AttnMask  = transformer_options['AttnMask']\n                    mask      = AttnMask.attn_mask.mask.to(x.device)   # keep masks on the input's device rather than assuming CUDA\n                    mask_up   = AttnMask.mask_up.to(x.device)\n                    mask_down = AttnMask.mask_down.to(x.device)\n                    if hasattr(AttnMask, \"mask_down2\"):\n                        mask_down2 = AttnMask.mask_down2.to(x.device)\n                    context = transformer_options['RegContext'].context.to(context.dtype).to(context.device)\n                    if weight == 0:\n                        mask, mask_up, mask_down, mask_down2 = None, None, None, None\n                    \n                    txt_len = context.shape[1]\n                    if mask_zero is None:\n                        mask_zero = torch.ones_like(mask)\n                        mask_zero[:, :txt_len] = mask[:, :txt_len]\n                    if mask_up_zero is None:\n                        mask_up_zero = torch.ones_like(mask_up)\n                        mask_up_zero[:, :txt_len] = mask_up[:, :txt_len]\n                    if mask_down_zero is None:\n                        mask_down_zero = torch.ones_like(mask_down)\n                        mask_down_zero[:, :txt_len] = mask_down[:, :txt_len]\n                    if mask_down2_zero is None and mask_down2 is not None:\n                        mask_down2_zero = torch.ones_like(mask_down2)\n                        mask_down2_zero[:, :txt_len] = mask_down2[:, :txt_len]\n\n\n                if UNCOND and 'AttnMask_neg' in transformer_options: # and weight != 0:\n                    AttnMask  = transformer_options['AttnMask_neg']\n                    mask      = AttnMask.attn_mask.mask.to(x.device)\n                    mask_up   = AttnMask.mask_up.to(x.device)\n                    mask_down = AttnMask.mask_down.to(x.device)\n                    if hasattr(AttnMask, \"mask_down2\"):\n                        mask_down2 = AttnMask.mask_down2.to(x.device)\n                    context = transformer_options['RegContext_neg'].context.to(context.dtype).to(context.device)\n                    if weight == 0:\n                        mask, mask_up, mask_down, mask_down2 = None, None, None, None\n                        \n                    txt_len = context.shape[1]\n                    if mask_zero is None:\n                        mask_zero = torch.ones_like(mask)\n                        mask_zero[:, :txt_len] = mask[:, :txt_len]\n                    if mask_up_zero is None:\n                        mask_up_zero = torch.ones_like(mask_up)\n                        mask_up_zero[:, :txt_len] = mask_up[:, :txt_len]\n                    if mask_down_zero is None:\n                        mask_down_zero = torch.ones_like(mask_down)\n                        mask_down_zero[:, :txt_len] = mask_down[:, :txt_len]\n                    if mask_down2_zero is None and mask_down2 is not None:\n                        mask_down2_zero = torch.ones_like(mask_down2)\n                        mask_down2_zero[:, :txt_len] = mask_down2[:, :txt_len]\n\n                elif UNCOND and 'AttnMask' in transformer_options:\n                    AttnMask  = transformer_options['AttnMask']\n                    mask      = AttnMask.attn_mask.mask.to(x.device)\n                    mask_up   = AttnMask.mask_up.to(x.device)\n                    mask_down = AttnMask.mask_down.to(x.device)\n                    if hasattr(AttnMask, \"mask_down2\"):\n                        mask_down2 = AttnMask.mask_down2.to(x.device)\n                    A       = context\n                    B       = transformer_options['RegContext'].context\n                    context = A.repeat(1,    (B.shape[1] // A.shape[1]) + 1, 1)[:,   :B.shape[1], :]   # tile the uncond context out to the regional context's token length\n                    \n                    txt_len = context.shape[1]\n                    if mask_zero is None:\n                        mask_zero = torch.ones_like(mask)\n                        mask_zero[:, :txt_len] = mask[:, :txt_len]\n                    if mask_up_zero is None:\n                        mask_up_zero = torch.ones_like(mask_up)\n                        mask_up_zero[:, :txt_len] = mask_up[:, :txt_len]\n                    if mask_down_zero is None:\n                        mask_down_zero = torch.ones_like(mask_down)\n                        mask_down_zero[:, :txt_len] = mask_down[:, :txt_len]\n                    if mask_down2_zero is None and mask_down2 is not None:\n                        mask_down2_zero = torch.ones_like(mask_down2)\n                        mask_down2_zero[:, :txt_len] = mask_down2[:, :txt_len]\n                    if weight == 0:                                                                             # ADDED 5/23/2025\n                        mask, mask_up, mask_down, mask_down2 = None, None, None, None\n\n\n                if mask is not None:\n                    if mask            is not None and not isinstance(mask[0][0]           .item(), bool):\n                        mask            = mask           .to(x.dtype)\n                    if mask_up         is not None and not isinstance(mask_up[0][0]        .item(), bool):\n                        mask_up         = mask_up        .to(x.dtype)\n                    if mask_down       is not None and not isinstance(mask_down[0][0]      .item(), bool):\n                        mask_down       = mask_down      .to(x.dtype)\n                    if mask_down2      is not None and not isinstance(mask_down2[0][0]     .item(), bool):\n                        mask_down2      = mask_down2     .to(x.dtype)\n                        \n                    if mask_zero       is not None and not isinstance(mask_zero[0][0]      .item(), bool):\n                        mask_zero       = mask_zero      .to(x.dtype)\n                    if mask_up_zero    is not None and not isinstance(mask_up_zero[0][0]   .item(), bool):\n                        mask_up_zero    = mask_up_zero   .to(x.dtype)\n                    if mask_down_zero  is not None and not isinstance(mask_down_zero[0][0] .item(), bool):\n                        mask_down_zero  = mask_down_zero .to(x.dtype)\n                    if mask_down2_zero is not None and not isinstance(mask_down2_zero[0][0].item(), bool):\n                        mask_down2_zero = mask_down2_zero.to(x.dtype)\n                        \n                    transformer_options['cross_mask']       = mask      [:,:txt_len]\n                    transformer_options['self_mask']        = mask      [:,txt_len:]\n                    transformer_options['cross_mask_up']    = mask_up   [:,:txt_len]\n                    transformer_options['self_mask_up']     = mask_up   [:,txt_len:]\n                    transformer_options['cross_mask_down']  = mask_down [:,:txt_len]\n                    transformer_options['self_mask_down']   = mask_down [:,txt_len:]\n                    transformer_options['cross_mask_down2'] = mask_down2[:,:txt_len] if mask_down2 is not None else None\n                    transformer_options['self_mask_down2']  = mask_down2[:,txt_len:] if mask_down2 is not None else None\n                \n                #h = x\n                if y0_style_active and not 
RECON_MODE:\n                    if mask is None:\n                        context, y, _ = StyleMMDiT.apply_style_conditioning(\n                            UNCOND       = UNCOND,\n                            base_context = context,\n                            base_y       = y,\n                            base_llama3  = None,\n                        )\n                    else:\n                        context = context.repeat(bsz_style + 1, 1, 1)\n                        y = y.repeat(bsz_style + 1, 1)                   if y      is not None else None\n                    h = torch.cat([h, y0_style_noised[cond_iter:cond_iter+1]], dim=0).to(h)\n\n\n\n                # middle_block can be None (transformer_depth_middle < -1), so guard its length here\n                total_layers = len(self.input_blocks) + (len(self.middle_block) if self.middle_block is not None else 0) + len(self.output_blocks)\n\n                num_video_frames = kwargs.get(\"num_video_frames\", self.default_num_video_frames)\n                image_only_indicator = kwargs.get(\"image_only_indicator\", None)\n                time_context = kwargs.get(\"time_context\", None)\n\n                assert (y is not None) == (\n                    self.num_classes is not None\n                ), \"must specify y if and only if the model is class-conditional\"\n                hs, hs_adain = [], []\n                t_emb = timestep_embedding(timesteps, self.model_channels, repeat_only=False).to(x.dtype)\n                emb = self.time_embed(t_emb)\n\n                if \"emb_patch\" in transformer_patches:\n                    patch = transformer_patches[\"emb_patch\"]\n                    for p in patch:\n                        emb = p(emb, self.model_channels, transformer_options)\n\n                if self.num_classes is not None:\n                    assert y.shape[0] == h.shape[0]\n                    emb = emb + self.label_emb(y)\n\n                #for id, module in enumerate(self.input_blocks):\n                for id, (module, style_block) in enumerate(zip(self.input_blocks, StyleMMDiT.input_blocks)):\n                    transformer_options[\"block\"] = (\"input\", id)\n\n                    if mask is not None:\n                        transformer_options['cross_mask']       = mask      [:,:txt_len]\n                        transformer_options['self_mask']        = mask      [:,txt_len:]\n                        transformer_options['cross_mask_up']    = mask_up   [:,:txt_len]\n                        transformer_options['self_mask_up']     = mask_up   [:,txt_len:]\n                        transformer_options['cross_mask_down']  = mask_down [:,:txt_len]\n                        transformer_options['self_mask_down']   = mask_down [:,txt_len:]\n                        transformer_options['cross_mask_down2'] = mask_down2[:,:txt_len] if mask_down2 is not None else None\n                        transformer_options['self_mask_down2']  = mask_down2[:,txt_len:] if mask_down2 is not None else None\n                        \n                    if   weight > 0 and mask is not None and     weight  <      id/total_layers:\n                        transformer_options['cross_mask'] = None\n                        transformer_options['self_mask']  = None\n                    \n                    elif weight < 0 and mask is not None and abs(weight) < (1 - id/total_layers):\n                        transformer_options['cross_mask'] = None\n                        transformer_options['self_mask']  = None\n                        \n                    elif floor > 0 and mask is not None and       floor  >      id/total_layers:\n                        
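# floor > 0: early input blocks fall back to the \"zero\" masks, which keep the regional text (cross-attn) mask but leave image self-attn unrestricted\n                        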
transformer_options['cross_mask']       = mask_zero      [:,:txt_len]\n                        transformer_options['self_mask']        = mask_zero      [:,txt_len:]\n                        transformer_options['cross_mask_up']    = mask_up_zero   [:,:txt_len]\n                        transformer_options['self_mask_up']     = mask_up_zero   [:,txt_len:]\n                        transformer_options['cross_mask_down']  = mask_down_zero [:,:txt_len]\n                        transformer_options['self_mask_down']   = mask_down_zero [:,txt_len:]\n                        transformer_options['cross_mask_down2'] = mask_down2_zero[:,:txt_len] if mask_down2_zero is not None else None\n                        transformer_options['self_mask_down2']  = mask_down2_zero[:,txt_len:] if mask_down2_zero is not None else None\n                    \n                    elif floor < 0 and mask is not None and   abs(floor) > (1 - id/total_layers):\n                        transformer_options['cross_mask']       = mask_zero      [:,:txt_len]\n                        transformer_options['self_mask']        = mask_zero      [:,txt_len:]\n                        transformer_options['cross_mask_up']    = mask_up_zero   [:,:txt_len]\n                        transformer_options['self_mask_up']     = mask_up_zero   [:,txt_len:]\n                        transformer_options['cross_mask_down']  = mask_down_zero [:,:txt_len]\n                        transformer_options['self_mask_down']   = mask_down_zero [:,txt_len:]\n                        transformer_options['cross_mask_down2'] = mask_down2_zero[:,:txt_len] if mask_down2_zero is not None else None\n                        transformer_options['self_mask_down2']  = mask_down2_zero[:,txt_len:] if mask_down2_zero is not None else None\n\n                    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator, style_block=style_block)\n                    if id == 0:\n                        h = StyleMMDiT(h, \"proj_in\")\n                    h = apply_control(h, control, 'input')\n                    if \"input_block_patch\" in transformer_patches:\n                        patch = transformer_patches[\"input_block_patch\"]\n                        for p in patch:\n                            h = p(h, transformer_options)\n                    \n                    hs.append(h)\n                    \n                    if \"input_block_patch_after_skip\" in transformer_patches:\n                        patch = transformer_patches[\"input_block_patch_after_skip\"]\n                        for p in patch:\n                            h = p(h, transformer_options)\n\n                transformer_options[\"block\"] = (\"middle\", 0)\n                if self.middle_block is not None:\n                    style_block = StyleMMDiT.middle_blocks[0]\n\n                    if mask is not None:\n                        transformer_options['cross_mask']       = mask      [:,:txt_len]\n                        transformer_options['self_mask']        = mask      [:,txt_len:]\n                        transformer_options['cross_mask_up']    = mask_up   [:,:txt_len]\n                        transformer_options['self_mask_up']     = mask_up   [:,txt_len:]\n                        transformer_options['cross_mask_down']  = mask_down [:,:txt_len]\n                        transformer_options['self_mask_down']   = mask_down [:,txt_len:]\n                        
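# the *_down2 mask variants exist only when the attention mask provides a second downsample level\n                        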
transformer_options['cross_mask_down2'] = mask_down2[:,:txt_len] if mask_down2 is not None else None\n                        transformer_options['self_mask_down2']  = mask_down2[:,txt_len:] if mask_down2 is not None else None\n                        \n                    if   weight > 0 and mask is not None and     weight  <      (len(self.input_blocks) + 1)/total_layers:\n                        transformer_options['cross_mask'] = None\n                        transformer_options['self_mask']  = None\n                    \n                    elif weight < 0 and mask is not None and abs(weight) < (1 - (len(self.input_blocks) + 1)/total_layers):\n                        transformer_options['cross_mask'] = None\n                        transformer_options['self_mask']  = None\n                        \n                    elif floor > 0 and mask is not None and       floor  >      (len(self.input_blocks) + 1)/total_layers:\n                        transformer_options['cross_mask']       = mask_zero      [:,:txt_len]\n                        transformer_options['self_mask']        = mask_zero      [:,txt_len:]\n                        transformer_options['cross_mask_up']    = mask_up_zero   [:,:txt_len]\n                        transformer_options['self_mask_up']     = mask_up_zero   [:,txt_len:]\n                        transformer_options['cross_mask_down']  = mask_down_zero [:,:txt_len]\n                        transformer_options['self_mask_down']   = mask_down_zero [:,txt_len:]\n                        transformer_options['cross_mask_down2'] = mask_down2_zero[:,:txt_len] if mask_down2_zero is not None else None\n                        transformer_options['self_mask_down2']  = mask_down2_zero[:,txt_len:] if mask_down2_zero is not None else None\n                    \n                    elif floor < 0 and mask is not None and   abs(floor) > (1 - (len(self.input_blocks) + 1)/total_layers):\n                        transformer_options['cross_mask']       = mask_zero      [:,:txt_len]\n                        transformer_options['self_mask']        = mask_zero      [:,txt_len:]\n                        transformer_options['cross_mask_up']    = mask_up_zero   [:,:txt_len]\n                        transformer_options['self_mask_up']     = mask_up_zero   [:,txt_len:]\n                        transformer_options['cross_mask_down']  = mask_down_zero [:,:txt_len]\n                        transformer_options['self_mask_down']   = mask_down_zero [:,txt_len:]\n                        transformer_options['cross_mask_down2'] = mask_down2_zero[:,:txt_len] if mask_down2_zero is not None else None\n                        transformer_options['self_mask_down2']  = mask_down2_zero[:,txt_len:] if mask_down2_zero is not None else None\n\n                    h = forward_timestep_embed(self.middle_block, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator, style_block=style_block)\n                \n                h = apply_control(h, control, 'middle')\n\n                #for id, module in enumerate(self.output_blocks):\n                for id, (module, style_block) in enumerate(zip(self.output_blocks, StyleMMDiT.output_blocks)):\n                    transformer_options[\"block\"] = (\"output\", id)\n                    \n                    hsp = hs.pop()\n                    hsp = apply_control(hsp, control, 'output')\n\n                    if \"output_block_patch\" in transformer_patches:\n                        
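# output_block_patch hooks may modify both the hidden state and the skip connection before they are concatenated\n                        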
patch = transformer_patches[\"output_block_patch\"]\n                        for p in patch:\n                            h, hsp = p(h, hsp, transformer_options)\n\n                    h = th.cat([h, hsp], dim=1)\n                    del hsp\n                    if len(hs) > 0:\n                        output_shape = hs[-1].shape\n                    else:\n                        output_shape = None\n                        \n\n                    \n                    if mask is not None:\n                        transformer_options['cross_mask']       = mask      [:,:txt_len]\n                        transformer_options['self_mask']        = mask      [:,txt_len:]\n                        transformer_options['cross_mask_up']    = mask_up   [:,:txt_len]\n                        transformer_options['self_mask_up']     = mask_up   [:,txt_len:]\n                        transformer_options['cross_mask_down']  = mask_down [:,:txt_len]\n                        transformer_options['self_mask_down']   = mask_down [:,txt_len:]\n                        transformer_options['cross_mask_down2'] = mask_down2[:,:txt_len] if mask_down2 is not None else None\n                        transformer_options['self_mask_down2']  = mask_down2[:,txt_len:] if mask_down2 is not None else None\n                        \n                    if   weight > 0 and mask is not None and     weight  <      (len(self.input_blocks) + 1 + id)/total_layers:\n                        transformer_options['cross_mask'] = None\n                        transformer_options['self_mask']  = None\n                    \n                    elif weight < 0 and mask is not None and abs(weight) < (1 - (len(self.input_blocks) + 1 + id)/total_layers):\n                        transformer_options['cross_mask'] = None\n                        transformer_options['self_mask']  = None\n                        \n                    elif floor > 0 and mask is not None and       floor  >      (len(self.input_blocks) + 1 + id)/total_layers:\n                        transformer_options['cross_mask']       = mask_zero      [:,:txt_len]\n                        transformer_options['self_mask']        = mask_zero      [:,txt_len:]\n                        transformer_options['cross_mask_up']    = mask_up_zero   [:,:txt_len]\n                        transformer_options['self_mask_up']     = mask_up_zero   [:,txt_len:]\n                        transformer_options['cross_mask_down']  = mask_down_zero [:,:txt_len]\n                        transformer_options['self_mask_down']   = mask_down_zero [:,txt_len:]\n                        transformer_options['cross_mask_down2'] = mask_down2_zero[:,:txt_len] if mask_down2_zero is not None else None\n                        transformer_options['self_mask_down2']  = mask_down2_zero[:,txt_len:] if mask_down2_zero is not None else None\n                    \n                    elif floor < 0 and mask is not None and   abs(floor) > (1 - (len(self.input_blocks) + 1 + id)/total_layers):\n                        transformer_options['cross_mask']       = mask_zero      [:,:txt_len]\n                        transformer_options['self_mask']        = mask_zero      [:,txt_len:]\n                        transformer_options['cross_mask_up']    = mask_up_zero   [:,:txt_len]\n                        transformer_options['self_mask_up']     = mask_up_zero   [:,txt_len:]\n                        transformer_options['cross_mask_down']  = mask_down_zero [:,:txt_len]\n                        transformer_options['self_mask_down']   = 
mask_down_zero [:,txt_len:]\n                        transformer_options['cross_mask_down2'] = mask_down2_zero[:,:txt_len] if mask_down2_zero is not None else None\n                        transformer_options['self_mask_down2']  = mask_down2_zero[:,txt_len:] if mask_down2_zero is not None else None\n\n                    h = forward_timestep_embed(module, h, emb, context, transformer_options, output_shape, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator, style_block=style_block)\n                \n                h = h.type(x.dtype)\n                \n                if self.predict_codebook_ids:\n                    eps = self.id_predictor(h)\n                else:\n                    eps = self.out(h)\n                    eps = StyleMMDiT(eps, \"proj_out\")\n                    \n                out_list.append(eps[0:1])\n                \n            eps = torch.stack(out_list, dim=0).squeeze(dim=1)\n\n            if recon_iter == 1:\n                denoised = new_x * ((ISIGMA ** 2 + 1) ** 0.5)  - ISIGMA.to(new_x) * eps.to(new_x)\n                if x_tmp is not None:\n                    eps = (x_tmp * ((SIGMA ** 2 + 1) ** 0.5) - denoised.to(x_tmp)) / SIGMA.to(x_tmp)\n                else:\n                    eps = (x_orig * ((SIGMA ** 2 + 1) ** 0.5) - denoised.to(x_orig)) / SIGMA.to(x_orig)\n\n        y0_style_pos        = transformer_options.get(\"y0_style_pos\")\n        y0_style_neg        = transformer_options.get(\"y0_style_neg\")\n\n        y0_style_pos_weight    = transformer_options.get(\"y0_style_pos_weight\", 0.0)\n        y0_style_pos_synweight = transformer_options.get(\"y0_style_pos_synweight\", 0.0)\n        y0_style_pos_synweight *= y0_style_pos_weight\n\n        y0_style_neg_weight    = transformer_options.get(\"y0_style_neg_weight\", 0.0)\n        y0_style_neg_synweight = transformer_options.get(\"y0_style_neg_synweight\", 0.0)\n        y0_style_neg_synweight *= y0_style_neg_weight\n        \n\n        freqsep_lowpass_method    = transformer_options.get(\"freqsep_lowpass_method\")\n        freqsep_sigma             = transformer_options.get(\"freqsep_sigma\")\n        freqsep_kernel_size       = transformer_options.get(\"freqsep_kernel_size\")\n        freqsep_inner_kernel_size = transformer_options.get(\"freqsep_inner_kernel_size\")\n        freqsep_stride            = transformer_options.get(\"freqsep_stride\")\n        \n        freqsep_lowpass_weight  = transformer_options.get(\"freqsep_lowpass_weight\")\n        freqsep_highpass_weight = transformer_options.get(\"freqsep_highpass_weight\")\n        freqsep_mask            = transformer_options.get(\"freqsep_mask\")\n        \n\n\n        dtype = eps.dtype if self.style_dtype is None else self.style_dtype\n        h_len //= self.Retrojector.patch_size\n        w_len //= self.Retrojector.patch_size\n        \n        if y0_style_pos is not None:\n            y0_style_pos_weight    = transformer_options.get(\"y0_style_pos_weight\")\n            y0_style_pos_synweight = transformer_options.get(\"y0_style_pos_synweight\")\n            y0_style_pos_synweight *= y0_style_pos_weight\n            y0_style_pos_mask = transformer_options.get(\"y0_style_pos_mask\")\n            y0_style_pos_mask_edge = transformer_options.get(\"y0_style_pos_mask_edge\")\n\n            y0_style_pos = y0_style_pos.to(dtype)\n            #x   = x.to(dtype)\n            x   = x_orig.clone().to(torch.float64) * ((SIGMA ** 2 + 1) ** 
0.5)\n            eps = eps.to(dtype)\n            eps_orig = eps.clone()\n            \n            sigma = SIGMA #t_orig[0].to(torch.float32) / 1000\n            denoised = x - sigma * eps\n            \n            denoised_embed = self.Retrojector.embed(denoised)     # 2,4,96,168 -> 2,16128,320\n            y0_adain_embed = self.Retrojector.embed(y0_style_pos)\n            \n            if transformer_options['y0_style_method'] == \"scattersort\":\n                tile_h, tile_w = transformer_options.get('y0_style_tile_height'), transformer_options.get('y0_style_tile_width')\n                pad = transformer_options.get('y0_style_tile_padding')\n                if pad is not None and tile_h is not None and tile_w is not None:\n                    \n                    denoised_spatial = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_spatial = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    \n                    if EO(\"scattersort_median_LP\"):\n                        denoised_spatial_LP = median_blur_2d(denoised_spatial, kernel_size=EO(\"scattersort_median_LP\",7))\n                        y0_adain_spatial_LP = median_blur_2d(y0_adain_spatial, kernel_size=EO(\"scattersort_median_LP\",7))\n                        \n                        denoised_spatial_HP = denoised_spatial - denoised_spatial_LP\n                        y0_adain_spatial_HP = y0_adain_spatial - y0_adain_spatial_LP\n                        \n                        denoised_spatial_LP = apply_scattersort_tiled(denoised_spatial_LP, y0_adain_spatial_LP, tile_h, tile_w, pad)\n                        \n                        denoised_spatial = denoised_spatial_LP + denoised_spatial_HP\n                    else:\n                        denoised_spatial = apply_scattersort_tiled(denoised_spatial, y0_adain_spatial, tile_h, tile_w, pad)\n                    \n                    denoised_embed = rearrange(denoised_spatial, \"b c h w -> b (h w) c\")\n                    \n                else:\n                    denoised_embed = apply_scattersort_masked(denoised_embed, y0_adain_embed, y0_style_pos_mask, y0_style_pos_mask_edge, h_len, w_len)\n\n\n\n            elif transformer_options['y0_style_method'] == \"AdaIN\":\n                if freqsep_mask is not None:\n                    freqsep_mask = freqsep_mask.view(1, 1, *freqsep_mask.shape[-2:]).float()\n                    freqsep_mask = F.interpolate(freqsep_mask, size=(h_len, w_len), mode='nearest-exact')\n                \n                if hasattr(self, \"adain_tile\"):\n                    tile_h, tile_w = self.adain_tile\n                    \n                    denoised_pretile = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_pretile = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    \n                    if self.adain_flag:\n                        h_off = tile_h // 2\n                        w_off = tile_w // 2\n                        denoised_pretile = denoised_pretile[:,:,h_off:-h_off, w_off:-w_off]\n                        self.adain_flag = False\n                    else:\n                        h_off = 0\n                        w_off = 0\n                        self.adain_flag = True\n                    \n                    tiles,    orig_shape, grid, strides 
= tile_latent(denoised_pretile, tile_size=(tile_h,tile_w))\n                    y0_tiles, orig_shape, grid, strides = tile_latent(y0_adain_pretile, tile_size=(tile_h,tile_w))\n                    \n                    tiles_out = []\n                    for i in range(tiles.shape[0]):\n                        tile = tiles[i].unsqueeze(0)\n                        y0_tile = y0_tiles[i].unsqueeze(0)\n                        \n                        tile    = rearrange(tile,    \"b c h w -> b (h w) c\", h=tile_h, w=tile_w)\n                        y0_tile = rearrange(y0_tile, \"b c h w -> b (h w) c\", h=tile_h, w=tile_w)\n                        \n                        tile = adain_seq_inplace(tile, y0_tile)\n                        tiles_out.append(rearrange(tile, \"b (h w) c -> b c h w\", h=tile_h, w=tile_w))\n                    \n                    tiles_out_tensor = torch.cat(tiles_out, dim=0)\n                    tiles_out_tensor = untile_latent(tiles_out_tensor, orig_shape, grid, strides)\n\n                    if h_off == 0:\n                        denoised_pretile = tiles_out_tensor\n                    else:\n                        denoised_pretile[:,:,h_off:-h_off, w_off:-w_off] = tiles_out_tensor\n                    denoised_embed = rearrange(denoised_pretile, \"b c h w -> b (h w) c\", h=h_len, w=w_len)\n\n                elif freqsep_lowpass_method is not None and freqsep_lowpass_method.endswith(\"pw\"): #EO(\"adain_pw\"):\n\n                    denoised_spatial = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_spatial = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n\n                    if   freqsep_lowpass_method == \"median_pw\":\n                        denoised_spatial_new = adain_patchwise_row_batch_med(denoised_spatial.clone(), y0_adain_spatial.clone().repeat(denoised_spatial.shape[0],1,1,1), sigma=freqsep_sigma, kernel_size=freqsep_kernel_size, use_median_blur=True, lowpass_weight=freqsep_lowpass_weight, highpass_weight=freqsep_highpass_weight)\n                    elif freqsep_lowpass_method == \"gaussian_pw\": \n                        denoised_spatial_new = adain_patchwise_row_batch(denoised_spatial.clone(), y0_adain_spatial.clone().repeat(denoised_spatial.shape[0],1,1,1), sigma=freqsep_sigma, kernel_size=freqsep_kernel_size)\n                    \n                    denoised_embed = rearrange(denoised_spatial_new, \"b c h w -> b (h w) c\", h=h_len, w=w_len)\n\n                elif freqsep_lowpass_method is not None: \n                    denoised_spatial = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_spatial = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    \n                    if   freqsep_lowpass_method == \"median\":\n                        denoised_spatial_LP = median_blur_2d(denoised_spatial, kernel_size=freqsep_kernel_size)\n                        y0_adain_spatial_LP = median_blur_2d(y0_adain_spatial, kernel_size=freqsep_kernel_size)\n                    elif freqsep_lowpass_method == \"gaussian\":\n                        denoised_spatial_LP = gaussian_blur_2d(denoised_spatial, sigma=freqsep_sigma, kernel_size=freqsep_kernel_size)\n                        y0_adain_spatial_LP = gaussian_blur_2d(y0_adain_spatial, sigma=freqsep_sigma, kernel_size=freqsep_kernel_size)\n                    \n                    denoised_spatial_HP = denoised_spatial - denoised_spatial_LP\n             
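       # frequency-separated AdaIN: the guide supplies the low-pass band while the denoised latent keeps its high-pass detail; the bands are reblended below with the lowpass/highpass weights\n             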
       \n                    if EO(\"adain_fs_uhp\"):\n                        y0_adain_spatial_HP = y0_adain_spatial - y0_adain_spatial_LP\n                        \n                        denoised_spatial_ULP = gaussian_blur_2d(denoised_spatial, sigma=EO(\"adain_fs_uhp_sigma\", 1.0), kernel_size=EO(\"adain_fs_uhp_kernel_size\", 3))\n                        y0_adain_spatial_ULP = gaussian_blur_2d(y0_adain_spatial, sigma=EO(\"adain_fs_uhp_sigma\", 1.0), kernel_size=EO(\"adain_fs_uhp_kernel_size\", 3))\n                        \n                        denoised_spatial_UHP = denoised_spatial_HP  - denoised_spatial_ULP\n                        y0_adain_spatial_UHP = y0_adain_spatial_HP  - y0_adain_spatial_ULP\n                        \n                        #denoised_spatial_HP  = y0_adain_spatial_ULP + denoised_spatial_UHP\n                        denoised_spatial_HP  = denoised_spatial_ULP + y0_adain_spatial_UHP\n                    \n                    denoised_spatial_new = freqsep_lowpass_weight * y0_adain_spatial_LP + freqsep_highpass_weight * denoised_spatial_HP\n                    denoised_embed = rearrange(denoised_spatial_new, \"b c h w -> b (h w) c\", h=h_len, w=w_len)\n\n                else:\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    \n                for adain_iter in range(EO(\"style_iter\", 0)):\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    denoised_embed = self.Retrojector.embed(self.Retrojector.unembed(denoised_embed))\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    \n            elif transformer_options['y0_style_method'] == \"WCT\":\n                self.StyleWCT.set(y0_adain_embed)\n                denoised_embed = self.StyleWCT.get(denoised_embed)\n                \n                if transformer_options.get('y0_standard_guide') is not None:\n                    y0_standard_guide = transformer_options.get('y0_standard_guide')\n                    \n                    y0_standard_guide_embed = self.Retrojector.embed(y0_standard_guide)\n                    f_cs = self.StyleWCT.get(y0_standard_guide_embed)\n                    self.y0_standard_guide = self.Retrojector.unembed(f_cs)\n\n                if transformer_options.get('y0_inv_standard_guide') is not None:\n                    y0_inv_standard_guide = transformer_options.get('y0_inv_standard_guide')\n\n                    y0_inv_standard_guide_embed = self.Retrojector.embed(y0_inv_standard_guide)\n                    f_cs = self.StyleWCT.get(y0_inv_standard_guide_embed)\n                    self.y0_inv_standard_guide = self.Retrojector.unembed(f_cs)\n\n            denoised_approx = self.Retrojector.unembed(denoised_embed)\n            \n            eps = (x - denoised_approx) / sigma\n\n            if not UNCOND:\n                if eps.shape[0] == 2:\n                    eps[1] = eps_orig[1] + y0_style_pos_weight * (eps[1] - eps_orig[1])\n                    eps[0] = eps_orig[0] + y0_style_pos_synweight * (eps[0] - eps_orig[0])\n                else:\n                    eps[0] = eps_orig[0] + y0_style_pos_weight * (eps[0] - eps_orig[0])\n            elif eps.shape[0] == 1 and UNCOND:\n                eps[0] = eps_orig[0] + y0_style_pos_synweight * (eps[0] - eps_orig[0])\n            \n            eps = eps.float()\n        \n        if y0_style_neg is not None:\n            y0_style_neg_weight    = 
transformer_options.get(\"y0_style_neg_weight\")\n            y0_style_neg_synweight = transformer_options.get(\"y0_style_neg_synweight\")\n            y0_style_neg_synweight *= y0_style_neg_weight\n            y0_style_neg_mask = transformer_options.get(\"y0_style_neg_mask\")\n            y0_style_neg_mask_edge = transformer_options.get(\"y0_style_neg_mask_edge\")\n            \n            y0_style_neg = y0_style_neg.to(dtype)\n            #x   = x.to(dtype)\n            x   = x_orig.clone().to(torch.float64) * ((SIGMA ** 2 + 1) ** 0.5)\n            eps = eps.to(dtype)\n            eps_orig = eps.clone()\n            \n            sigma = SIGMA #t_orig[0].to(torch.float32) / 1000\n            denoised = x - sigma * eps\n\n            denoised_embed = self.Retrojector.embed(denoised)\n            y0_adain_embed = self.Retrojector.embed(y0_style_neg)\n            \n            if transformer_options['y0_style_method'] == \"scattersort\":\n                tile_h, tile_w = transformer_options.get('y0_style_tile_height'), transformer_options.get('y0_style_tile_width')\n                pad = transformer_options.get('y0_style_tile_padding')\n                if pad is not None and tile_h is not None and tile_w is not None:\n                    \n                    denoised_spatial = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_spatial = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    \n                    denoised_spatial = apply_scattersort_tiled(denoised_spatial, y0_adain_spatial, tile_h, tile_w, pad)\n                    \n                    denoised_embed = rearrange(denoised_spatial, \"b c h w -> b (h w) c\")\n\n                else:\n                    denoised_embed = apply_scattersort_masked(denoised_embed, y0_adain_embed, y0_style_neg_mask, y0_style_neg_mask_edge, h_len, w_len)\n            \n            \n            elif transformer_options['y0_style_method'] == \"AdaIN\":\n                denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                for adain_iter in range(EO(\"style_iter\", 0)):\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    denoised_embed = self.Retrojector.embed(self.Retrojector.unembed(denoised_embed))\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    \n            elif transformer_options['y0_style_method'] == \"WCT\":\n                self.StyleWCT.set(y0_adain_embed)\n                denoised_embed = self.StyleWCT.get(denoised_embed)\n\n            denoised_approx = self.Retrojector.unembed(denoised_embed)\n\n            if UNCOND:\n                eps = (x - denoised_approx) / sigma\n                eps[0] = eps_orig[0] + y0_style_neg_weight * (eps[0] - eps_orig[0])\n                if eps.shape[0] == 2:\n                    eps[1] = eps_orig[1] + y0_style_neg_synweight * (eps[1] - eps_orig[1])\n            elif eps.shape[0] == 1 and not UNCOND:\n                eps[0] = eps_orig[0] + y0_style_neg_synweight * (eps[0] - eps_orig[0])\n            \n            eps = eps.float()\n            \n        return eps\n\n\n\n\n\n\n\n\n\ndef clone_inputs_unsafe(*args, index: int=None):\n\n    if index is None:\n        return tuple(x.clone() for x in args)\n    else:\n        return tuple(x[index].unsqueeze(0).clone() for x in args)\n    \n    \ndef clone_inputs(*args, index: int = None):\n    if index is None:\n        return 
tuple(x.clone() if x is not None else None for x in args)\n    else:\n        return tuple(x[index].unsqueeze(0).clone() if x is not None else None for x in args)\n    \n    \n\n"
  },
  {
    "path": "sd35/mmdit.py",
    "content": "from functools import partial\nfrom typing import Dict, Optional, List\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nimport copy\n\nfrom comfy.ldm.modules.attention import optimized_attention\nfrom comfy.ldm.modules.attention import attention_pytorch #as optimized_attention\nfrom einops import rearrange, repeat\nfrom comfy.ldm.modules.diffusionmodules.util import timestep_embedding\nimport comfy.ops\nimport comfy.ldm.common_dit\n\nfrom ..helper import ExtraOptions\n\nfrom ..latents import tile_latent, untile_latent, gaussian_blur_2d, median_blur_2d\nfrom ..style_transfer import apply_scattersort_masked, apply_scattersort_tiled, adain_seq_inplace, adain_patchwise_row_batch_med, adain_patchwise_row_batch\n\n#from .attention import optimized_attention\n#from .util import timestep_embedding\n#import ops\n#import common_dit\n\n\ndef default(x, y):\n    if x is not None:\n        return x\n    return y\n\nclass Mlp(nn.Module):\n    \"\"\" MLP as used in Vision Transformer, MLP-Mixer and related networks\n    \"\"\"\n    def __init__(\n            self,\n            in_features,\n            hidden_features = None,\n            out_features    = None,\n            act_layer       = nn.GELU,\n            norm_layer      = None,\n            bias            = True,\n            drop            = 0.,\n            use_conv        = False,\n            dtype           = None,\n            device          = None,\n            operations      = None,\n    ):\n        super().__init__()\n        out_features    = out_features or in_features\n        hidden_features = hidden_features or in_features\n        drop_probs      = drop\n        linear_layer    = partial(operations.Conv2d, kernel_size=1) if use_conv else operations.Linear\n\n        self.fc1        = linear_layer(in_features, hidden_features, bias =bias, dtype=dtype, device=device)\n        self.act        = act_layer()\n        self.drop1      = nn.Dropout(drop_probs)\n        self.norm       = norm_layer(hidden_features) if norm_layer is not None else nn.Identity()\n        self.fc2        = linear_layer(hidden_features, out_features, bias=bias, dtype=dtype, device=device)\n        self.drop2      = nn.Dropout(drop_probs)\n\n    def forward(self, x):\n        x = self.fc1  (x)\n        x = self.act  (x)\n        x = self.drop1(x)\n        x = self.norm (x)\n        x = self.fc2  (x)\n        x = self.drop2(x)\n        return x\n\nclass PatchEmbed(nn.Module):\n    \"\"\" 2D Image to Patch Embedding\n    \"\"\"\n    dynamic_img_pad: torch.jit.Final[bool]\n\n    def __init__(\n            self,\n            img_size        : Optional[int] = 224,\n            patch_size      : int           = 16,\n            in_chans        : int           = 3,\n            embed_dim       : int           = 768,\n            norm_layer                      = None,\n            flatten         : bool          = True,\n            bias            : bool          = True,\n            strict_img_size : bool          = True,\n            dynamic_img_pad : bool          = True,\n            padding_mode                    ='circular',\n            conv3d                          = False,\n            dtype                           = None,\n            device                          = None,\n            operations                      = None,\n    ):\n        super().__init__()\n        try:\n            len(patch_size)\n            self.patch_size = patch_size\n        except:\n            if conv3d:\n  
class PatchEmbed(nn.Module):\n    \"\"\" 2D Image to Patch Embedding\n    \"\"\"\n    dynamic_img_pad: torch.jit.Final[bool]\n\n    def __init__(\n            self,\n            img_size        : Optional[int] = 224,\n            patch_size      : int           = 16,\n            in_chans        : int           = 3,\n            embed_dim       : int           = 768,\n            norm_layer                      = None,\n            flatten         : bool          = True,\n            bias            : bool          = True,\n            strict_img_size : bool          = True,\n            dynamic_img_pad : bool          = True,\n            padding_mode                    = 'circular',\n            conv3d                          = False,\n            dtype                           = None,\n            device                          = None,\n            operations                      = None,\n    ):\n        super().__init__()\n        try:\n            len(patch_size)\n            self.patch_size = patch_size\n        except TypeError:  # patch_size is a scalar, not a tuple/list\n            if conv3d:\n                self.patch_size = (patch_size, patch_size, patch_size)\n            else:\n                self.patch_size = (patch_size, patch_size)\n        self.padding_mode = padding_mode\n\n        # flatten spatial dim and transpose to channels last, kept for bwd compat\n        self.flatten         = flatten\n        self.strict_img_size = strict_img_size\n        self.dynamic_img_pad = dynamic_img_pad\n        if conv3d:\n            self.proj = operations.Conv3d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size, bias=bias, dtype=dtype, device=device)\n        else:\n            self.proj = operations.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size, bias=bias, dtype=dtype, device=device)\n        self.norm = norm_layer(embed_dim) if norm_layer else nn.Identity()\n\n    def forward(self, x):\n        if self.dynamic_img_pad:\n            x = comfy.ldm.common_dit.pad_to_patch_size(x, self.patch_size, padding_mode=self.padding_mode)\n        x = self.proj(x)\n        if self.flatten:\n            x = x.flatten(2).transpose(1, 2)  # NCHW -> NLC\n        x = self.norm(x)\n        return x\n\ndef modulate(x, shift, scale):\n    if shift is None:\n        shift = torch.zeros_like(scale)\n    return x * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)\n\n
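# Shape sketch for modulate(): x is (B, L, C); shift/scale are per-sample adaLN\n# vectors of shape (B, C), broadcast over the token dim. Quick sanity check\n# (illustrative only): with scale = torch.zeros(B, C) and shift = None, the\n# output equals x, since x * (1 + 0) + 0 == x.\n\n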
#################################################################################\n#                   Sine/Cosine Positional Embedding Functions                  #\n#################################################################################\n\n\ndef get_2d_sincos_pos_embed(\n    embed_dim,\n    grid_size,\n    cls_token      = False,\n    extra_tokens   = 0,\n    scaling_factor = None,\n    offset         = None,\n):\n    \"\"\"\n    grid_size: int of the grid height and width\n    return:\n    pos_embed: [grid_size*grid_size, embed_dim] or [1+grid_size*grid_size, embed_dim] (w/ or w/o cls_token)\n    \"\"\"\n    grid_h = np.arange(grid_size, dtype=np.float32)\n    grid_w = np.arange(grid_size, dtype=np.float32)\n    grid   = np.meshgrid(grid_w, grid_h)  # here w goes first\n    grid   = np.stack(grid, axis=0)\n    if scaling_factor is not None:\n        grid = grid / scaling_factor\n    if offset is not None:\n        grid = grid - offset\n\n    grid = grid.reshape([2, 1, grid_size, grid_size])\n    pos_embed = get_2d_sincos_pos_embed_from_grid(embed_dim, grid)\n    if cls_token and extra_tokens > 0:\n        pos_embed = np.concatenate(\n            [np.zeros([extra_tokens, embed_dim]), pos_embed], axis=0\n        )\n    return pos_embed\n\n\ndef get_2d_sincos_pos_embed_from_grid(embed_dim, grid):\n    assert embed_dim % 2 == 0\n\n    # use half of dimensions to encode grid_h\n    emb_h = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[0])  # (H*W, D/2)\n    emb_w = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[1])  # (H*W, D/2)\n\n    emb = np.concatenate([emb_h, emb_w], axis=1)  # (H*W, D)\n    return emb\n\n\ndef get_1d_sincos_pos_embed_from_grid(embed_dim, pos):\n    \"\"\"\n    embed_dim: output dimension for each position\n    pos: a list of positions to be encoded: size (M,)\n    out: (M, D)\n    \"\"\"\n    assert embed_dim % 2 == 0\n    omega  = np.arange(embed_dim // 2, dtype=np.float64)\n    omega /= embed_dim / 2.0\n    omega  = 1.0 / 10000**omega  # (D/2,)\n\n    pos = pos.reshape(-1)  # (M,)\n    out = np.einsum(\"m,d->md\", pos, omega)  # (M, D/2), outer product\n\n    emb_sin = np.sin(out)  # (M, D/2)\n    emb_cos = np.cos(out)  # (M, D/2)\n\n    emb = np.concatenate([emb_sin, emb_cos], axis=1)  # (M, D)\n    return emb\n\ndef get_1d_sincos_pos_embed_from_grid_torch(embed_dim, pos, device=None, dtype=torch.float32):\n    omega   = torch.arange(embed_dim // 2, device=device, dtype=dtype)\n    omega  /= embed_dim / 2.0\n    omega   = 1.0 / 10000**omega  # (D/2,)\n    pos     = pos.reshape(-1)  # (M,)\n    out     = torch.einsum(\"m,d->md\", pos, omega)  # (M, D/2), outer product\n    emb_sin = torch.sin(out)  # (M, D/2)\n    emb_cos = torch.cos(out)  # (M, D/2)\n    emb     = torch.cat([emb_sin, emb_cos], dim=1)  # (M, D)\n    return emb\n\ndef get_2d_sincos_pos_embed_torch(embed_dim, w, h, val_center=7.5, val_magnitude=7.5, device=None, dtype=torch.float32):\n    small          = min(h, w)\n    val_h          = (h / small) * val_magnitude\n    val_w          = (w / small) * val_magnitude\n    grid_h, grid_w = torch.meshgrid(\n        torch.linspace(-val_h + val_center, val_h + val_center, h, device=device, dtype=dtype),\n        torch.linspace(-val_w + val_center, val_w + val_center, w, device=device, dtype=dtype),\n        indexing='ij',\n    )\n    emb_h          = get_1d_sincos_pos_embed_from_grid_torch(embed_dim // 2, grid_h, device=device, dtype=dtype)\n    emb_w          = get_1d_sincos_pos_embed_from_grid_torch(embed_dim // 2, grid_w, device=device, dtype=dtype)\n    emb            = torch.cat([emb_w, emb_h], dim=1)  # (H*W, D)\n    return emb\n\n\n#################################################################################\n#               Embedding Layers for Timesteps and Class Labels                 #\n#################################################################################\n\n\nclass TimestepEmbedder(nn.Module):\n    \"\"\"\n    Embeds scalar timesteps into vector representations.\n    \"\"\"\n\n    def __init__(self, hidden_size, frequency_embedding_size=256, dtype=None, device=None, operations=None):\n        super().__init__()\n        self.mlp = nn.Sequential(\n            operations.Linear(frequency_embedding_size, hidden_size, bias=True, dtype=dtype, device=device),\n            nn.SiLU(),\n            operations.Linear(hidden_size,              hidden_size, bias=True, dtype=dtype, device=device),\n        )\n        self.frequency_embedding_size = frequency_embedding_size\n\n    def forward(self, t, dtype, **kwargs):\n        t_freq = timestep_embedding(t, self.frequency_embedding_size).to(dtype)\n        t_emb  = self.mlp(t_freq)\n        return t_emb\n\n\nclass VectorEmbedder(nn.Module):\n    \"\"\"\n    Embeds a flat vector of dimension input_dim\n    \"\"\"\n\n    def __init__(self, input_dim: int, hidden_size: int, dtype=None, device=None, operations=None):\n        super().__init__()\n        self.mlp = nn.Sequential(\n            operations.Linear(input_dim,   hidden_size, bias=True, dtype=dtype, device=device),\n            nn.SiLU(),\n            operations.Linear(hidden_size, hidden_size, bias=True, dtype=dtype, device=device),\n        )\n\n    def forward(self, x: torch.Tensor) -> torch.Tensor:\n        emb = self.mlp(x)\n        return emb\n\n\n#################################################################################\n#                                 Core DiT Model                                #\n#################################################################################\n\n\ndef split_qkv(qkv, head_dim):\n    qkv = qkv.reshape(qkv.shape[0], qkv.shape[1], 3, -1, head_dim).movedim(2, 0)\n    return qkv[0], qkv[1], qkv[2]\n\n
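# Shape sketch for split_qkv(): a fused qkv projection of shape (B, L, 3*H*D)\n# is reshaped to (B, L, 3, H, D) and the qkv axis moved to the front, so q, k, v\n# each come out as (B, L, H, D). Illustrative check with hypothetical sizes:\n#   q, k, v = split_qkv(torch.randn(2, 4096, 3 * 24 * 64), head_dim=64)\n#   assert q.shape == (2, 4096, 24, 64)\n\n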
\"torch-hb\", \"math\", \"debug\")\n\n    def __init__(\n        self,\n        dim       : int,\n        num_heads : int             = 8,\n        qkv_bias  : bool            = False,\n        qk_scale  : Optional[float] = None,\n        proj_drop : float           = 0.0,\n        attn_mode : str             = \"xformers\",\n        pre_only  : bool            = False,\n        qk_norm   : Optional[str]   = None,\n        rmsnorm   : bool            = False,\n        dtype                       = None,\n        device                      = None,\n        operations                  = None,\n    ):\n        super().__init__()\n        self.num_heads = num_heads\n        self.head_dim  = dim // num_heads\n\n        self.qkv = operations.Linear(dim, dim * 3, bias=qkv_bias, dtype=dtype, device=device)\n        if not pre_only:\n            self.proj      = operations.Linear(dim, dim, dtype=dtype, device=device)\n            self.proj_drop = nn.Dropout(proj_drop)\n        assert attn_mode in self.ATTENTION_MODES\n        self.attn_mode = attn_mode\n        self.pre_only  = pre_only\n\n        if qk_norm == \"rms\":\n            self.ln_q =              RMSNorm(self.head_dim, elementwise_affine=True, eps=1.0e-6, dtype=dtype, device=device)\n            self.ln_k =              RMSNorm(self.head_dim, elementwise_affine=True, eps=1.0e-6, dtype=dtype, device=device)\n        elif qk_norm == \"ln\":\n            self.ln_q = operations.LayerNorm(self.head_dim, elementwise_affine=True, eps=1.0e-6, dtype=dtype, device=device)\n            self.ln_k = operations.LayerNorm(self.head_dim, elementwise_affine=True, eps=1.0e-6, dtype=dtype, device=device)\n        elif qk_norm is None:\n            self.ln_q = nn.Identity()\n            self.ln_k = nn.Identity()\n        else:\n            raise ValueError(qk_norm)\n\n    def pre_attention(self, x: torch.Tensor) -> torch.Tensor:\n        B, L, C = x.shape\n        qkv     = self.qkv(x)\n        q, k, v = split_qkv(qkv, self.head_dim)\n        q       = self.ln_q(q).reshape(q.shape[0], q.shape[1], -1)\n        k       = self.ln_k(k).reshape(q.shape[0], q.shape[1], -1)\n        return (q, k, v)\n\n    def post_attention(self, x: torch.Tensor) -> torch.Tensor:\n        assert not self.pre_only\n        x = self.proj     (x)\n        x = self.proj_drop(x)\n        return x\n\n    def forward(self, x: torch.Tensor) -> torch.Tensor:\n        q, k, v = self.pre_attention(x)\n        x       = optimized_attention(\n            q, k, v, heads=self.num_heads\n        )\n        x       = self.post_attention(x)\n        return x\n\n\nclass RMSNorm(torch.nn.Module):\n    def __init__(\n        self, dim: int, elementwise_affine: bool = False, eps: float = 1e-6, device=None, dtype=None\n    ):\n        \"\"\"\n        Initialize the RMSNorm normalization layer.\n        Args:\n            dim (int): The dimension of the input tensor.\n            eps (float, optional): A small value added to the denominator for numerical stability. 
class RMSNorm(torch.nn.Module):\n    def __init__(\n        self, dim: int, elementwise_affine: bool = False, eps: float = 1e-6, device=None, dtype=None\n    ):\n        \"\"\"\n        Initialize the RMSNorm normalization layer.\n        Args:\n            dim (int): The dimension of the input tensor.\n            eps (float, optional): A small value added to the denominator for numerical stability. Default is 1e-6.\n        Attributes:\n            eps (float): A small value added to the denominator for numerical stability.\n            weight (nn.Parameter): Learnable scaling parameter.\n        \"\"\"\n        super().__init__()\n        self.eps             = eps\n        self.learnable_scale = elementwise_affine\n        if self.learnable_scale:\n            self.weight = nn.Parameter(torch.empty(dim, device=device, dtype=dtype))\n        else:\n            self.register_parameter(\"weight\", None)\n\n    def forward(self, x):\n        return comfy.ldm.common_dit.rms_norm(x, self.weight, self.eps)\n\n\nclass SwiGLUFeedForward(nn.Module):\n    def __init__(\n        self,\n        dim                : int,\n        hidden_dim         : int,\n        multiple_of        : int,\n        ffn_dim_multiplier : Optional[float] = None,\n    ):\n        \"\"\"\n        Initialize the FeedForward module.\n\n        Args:\n            dim (int): Input dimension.\n            hidden_dim (int): Hidden dimension of the feedforward layer.\n            multiple_of (int): Value to ensure hidden dimension is a multiple of this value.\n            ffn_dim_multiplier (float, optional): Custom multiplier for hidden dimension. Defaults to None.\n\n        Attributes:\n            w1 (nn.Linear): Linear transformation for the first layer.\n            w2 (nn.Linear): Linear transformation for the second layer.\n            w3 (nn.Linear): Linear transformation for the third layer.\n\n        \"\"\"\n        super().__init__()\n        hidden_dim     = int(2 * hidden_dim / 3)\n        # custom dim factor multiplier\n        if ffn_dim_multiplier is not None:\n            hidden_dim = int(ffn_dim_multiplier * hidden_dim)\n        hidden_dim     = multiple_of * ((hidden_dim + multiple_of - 1) // multiple_of)\n\n        self.w1 = nn.Linear(dim, hidden_dim, bias=False)\n        self.w2 = nn.Linear(hidden_dim, dim, bias=False)\n        self.w3 = nn.Linear(dim, hidden_dim, bias=False)\n\n    def forward(self, x):\n        return self.w2(nn.functional.silu(self.w1(x)) * self.w3(x))\n\n\nclass DismantledBlock(nn.Module):\n    \"\"\"\n    A DiT block with gated adaptive layer norm (adaLN) conditioning.\n    \"\"\"\n\n    ATTENTION_MODES = (\"xformers\", \"torch\", \"torch-hb\", \"math\", \"debug\")\n\n    def __init__(\n        self,\n        hidden_size       : int,\n        num_heads         : int,\n        mlp_ratio         : float         = 4.0,\n        attn_mode         : str           = \"xformers\",\n        qkv_bias          : bool          = False,\n        pre_only          : bool          = False,\n        rmsnorm           : bool          = False,\n        scale_mod_only    : bool          = False,\n        swiglu            : bool          = False,\n        qk_norm           : Optional[str] = None,\n        x_block_self_attn : bool          = False,\n        dtype                             = None,\n        device                            = None,\n        operations                        = None,\n        **block_kwargs,\n    ):\n        super().__init__()\n        assert attn_mode in self.ATTENTION_MODES\n        if not rmsnorm:\n            self.norm1 = operations.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6, dtype=dtype, device=device)\n        else:\n            self.norm1 = RMSNorm(hidden_size, elementwise_affine=False, eps=1e-6)\n        self.attn = SelfAttention(\n            dim        = hidden_size,\n            num_heads  = num_heads,\n       
     qkv_bias   = qkv_bias,\n            attn_mode  = attn_mode,\n            pre_only   = pre_only,\n            qk_norm    = qk_norm,\n            rmsnorm    = rmsnorm,\n            dtype      = dtype,\n            device     = device,\n            operations = operations\n        )\n        if x_block_self_attn:\n            assert not pre_only\n            assert not scale_mod_only\n            self.x_block_self_attn = True\n            self.attn2 = SelfAttention(\n                dim        = hidden_size,\n                num_heads  = num_heads,\n                qkv_bias   = qkv_bias,\n                attn_mode  = attn_mode,\n                pre_only   = False,\n                qk_norm    = qk_norm,\n                rmsnorm    = rmsnorm,\n                dtype      = dtype,\n                device     = device,\n                operations = operations\n            )\n        else:\n            self.x_block_self_attn = False\n        if not pre_only:\n            if not rmsnorm:\n                self.norm2 = operations.LayerNorm(\n                    hidden_size, elementwise_affine=False, eps=1e-6, dtype=dtype, device=device\n                )\n            else:\n                self.norm2 = RMSNorm(hidden_size, elementwise_affine=False, eps=1e-6)\n        mlp_hidden_dim = int(hidden_size * mlp_ratio)\n        if not pre_only:\n            if not swiglu:\n                self.mlp = Mlp(\n                    in_features     = hidden_size,\n                    hidden_features = mlp_hidden_dim,\n                    act_layer       = lambda: nn.GELU(approximate = \"tanh\"),\n                    drop            = 0,\n                    dtype           = dtype,\n                    device          = device,\n                    operations      = operations\n                )\n            else:\n                self.mlp = SwiGLUFeedForward(\n                    dim         = hidden_size,\n                    hidden_dim  = mlp_hidden_dim,\n                    multiple_of = 256,\n                )\n        self.scale_mod_only = scale_mod_only\n        if x_block_self_attn:\n            assert not pre_only\n            assert not scale_mod_only\n            n_mods = 9\n        elif not scale_mod_only:\n            n_mods = 6 if not pre_only else 2\n        else:\n            n_mods = 4 if not pre_only else 1\n        self.adaLN_modulation = nn.Sequential(\n            nn.SiLU(), operations.Linear(hidden_size, n_mods * hidden_size, bias=True, dtype=dtype, device=device)\n        )\n        self.pre_only = pre_only\n\n    def pre_attention(self, x: torch.Tensor, c: torch.Tensor) -> torch.Tensor:\n        if not self.pre_only:\n            if not self.scale_mod_only:\n                (\n                    shift_msa,\n                    scale_msa,\n                    gate_msa,\n                    shift_mlp,\n                    scale_mlp,\n                    gate_mlp,\n                ) = self.adaLN_modulation(c).chunk(6, dim=1)\n            else:\n                shift_msa = None\n                shift_mlp = None\n                (\n                    scale_msa,\n                    gate_msa,\n                    scale_mlp,\n                    gate_mlp,\n                ) = self.adaLN_modulation(\n                    c\n                ).chunk(4, dim=1)\n            qkv = self.attn.pre_attention(modulate(self.norm1(x), shift_msa, scale_msa))\n            return qkv, (\n                x,\n                gate_msa,\n                shift_mlp,\n                scale_mlp,\n                
gate_mlp,\n            )\n        else:\n            if not self.scale_mod_only:\n                (\n                    shift_msa,\n                    scale_msa,\n                ) = self.adaLN_modulation(\n                    c\n                ).chunk(2, dim=1)\n            else:\n                shift_msa = None\n                scale_msa = self.adaLN_modulation(c)\n            qkv = self.attn.pre_attention(modulate(self.norm1(x), shift_msa, scale_msa))\n            return qkv, None\n\n    def post_attention(self, attn, x, gate_msa, shift_mlp, scale_mlp, gate_mlp):\n        assert not self.pre_only\n        x = x + gate_msa.unsqueeze(1) * self.attn.post_attention(attn)\n        x = x + gate_mlp.unsqueeze(1) * self.mlp(\n            modulate(self.norm2(x), shift_mlp, scale_mlp)\n        )\n        return x\n\n    def pre_attention_x(self, x: torch.Tensor, c: torch.Tensor):\n        assert self.x_block_self_attn\n        (\n            shift_msa,\n            scale_msa,\n            gate_msa,\n            shift_mlp,\n            scale_mlp,\n            gate_mlp,\n            shift_msa2,\n            scale_msa2,\n            gate_msa2,\n        ) = self.adaLN_modulation(c).chunk(9, dim=1)\n        x_norm = self.norm1(x)\n        qkv  = self.attn .pre_attention(modulate(x_norm, shift_msa,  scale_msa ))\n        qkv2 = self.attn2.pre_attention(modulate(x_norm, shift_msa2, scale_msa2))\n        return qkv, qkv2, (\n            x,\n            gate_msa,\n            shift_mlp,\n            scale_mlp,\n            gate_mlp,\n            gate_msa2,\n        )\n\n    def post_attention_x(self, attn, attn2, x, gate_msa, shift_mlp, scale_mlp, gate_mlp, gate_msa2):\n        assert not self.pre_only\n        attn1 = self.attn .post_attention(attn)\n        attn2 = self.attn2.post_attention(attn2)\n        out1  = gate_msa .unsqueeze(1)  * attn1\n        out2  = gate_msa2.unsqueeze(1)  * attn2\n        x     = x + out1\n        x     = x + out2\n        x     = x + gate_mlp.unsqueeze(1) * self.mlp(\n            modulate(self.norm2(x), shift_mlp, scale_mlp)\n        )\n        return x\n\n    def forward(self, x: torch.Tensor, c: torch.Tensor) -> torch.Tensor:\n        assert not self.pre_only\n        if self.x_block_self_attn:\n            qkv, qkv2, intermediates = self.pre_attention_x(x, c)\n            attn = optimized_attention(\n                qkv[0], qkv[1], qkv[2],\n                heads=self.attn.num_heads,\n            )\n            attn2 = optimized_attention(\n                qkv2[0], qkv2[1], qkv2[2],\n                heads=self.attn2.num_heads,\n            )\n            return self.post_attention_x(attn, attn2, *intermediates)\n        else:\n            qkv,       intermediates = self.pre_attention  (x, c)\n            attn = optimized_attention(\n                qkv[0], qkv[1], qkv[2],\n                heads=self.attn.num_heads,\n            )\n            return self.post_attention(attn, *intermediates)\n\n
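# Joint-attention sketch: block_mixing below concatenates the context (text)\n# and x (image) token streams along the sequence dim, runs one attention call\n# over the combined sequence, then splits the result back at the context\n# length. Illustrative shapes (hypothetical): ctx_q (B, 154, C) cat img_q\n# (B, 4096, C) -> (B, 4250, C) -> attention -> split into (B, 154, C) and (B, 4096, C).\n\n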
def block_mixing(*args, use_checkpoint=True, **kwargs):\n    if use_checkpoint:\n        return torch.utils.checkpoint.checkpoint(\n            _block_mixing, *args, use_reentrant=False, **kwargs\n        )\n    else:\n        return _block_mixing(*args, **kwargs)\n\n# context_qkv is a Tuple[Tensor,Tensor,Tensor], e.g. (2,154,1536), (2,154,1536), (2,154,24,64); x_qkv e.g. (2,4096,1536), ..., (2,4096,24,64)\ndef _block_mixing(context, x, context_block, x_block, c, mask=None):\n    context_qkv, context_intermediates = context_block.pre_attention(context, c)\n\n    if x_block.x_block_self_attn:  # x_qkv2 feeds the extra image-only self-attention\n        x_qkv, x_qkv2, x_intermediates = x_block.pre_attention_x(x, c)\n    else:\n        x_qkv,         x_intermediates = x_block.pre_attention  (x, c)\n\n    o = []\n    for t in range(3):\n        o.append(torch.cat((context_qkv[t], x_qkv[t]), dim=1))\n    qkv = tuple(o)\n\n    if mask is not None:\n        attn = attention_pytorch(      # attention_pytorch is used here because it accepts an explicit mask\n            qkv[0], qkv[1], qkv[2],\n            heads = x_block.attn.num_heads,\n            mask  = mask,\n        )\n    else:\n        attn = optimized_attention(\n            qkv[0], qkv[1], qkv[2],\n            heads = x_block.attn.num_heads,\n            mask  = None,\n        )\n\n    context_attn, x_attn = (\n        attn[:, : context_qkv[0].shape[1]   ],\n        attn[:,   context_qkv[0].shape[1] : ],\n    )\n\n    if not context_block.pre_only:\n        context = context_block.post_attention(context_attn, *context_intermediates)\n    else:\n        context = None\n    if x_block.x_block_self_attn:\n        attn2 = optimized_attention(      # image-only self-attention over x_qkv2, e.g. (2,4096,1536)\n                x_qkv2[0], x_qkv2[1], x_qkv2[2],\n                heads = x_block.attn2.num_heads,\n            )\n        x = x_block.post_attention_x(x_attn, attn2, *x_intermediates)\n    else:\n        x = x_block.post_attention  (x_attn, *x_intermediates)\n    return context, x\n\n\nclass ReJointBlock(nn.Module):\n    \"\"\"Just a small wrapper to serve as an FSDP unit.\"\"\"\n\n    def __init__(\n        self,\n        *args,\n        **kwargs,\n    ):\n        super().__init__()\n        pre_only           = kwargs.pop(\"pre_only\")\n        qk_norm            = kwargs.pop(\"qk_norm\",           None )\n        x_block_self_attn  = kwargs.pop(\"x_block_self_attn\", False)\n        self.context_block = DismantledBlock(*args, pre_only=pre_only, qk_norm=qk_norm,                                      **kwargs)\n        self.x_block       = DismantledBlock(*args, pre_only=False,    qk_norm=qk_norm, x_block_self_attn=x_block_self_attn, **kwargs)\n\n    def forward(self, *args, **kwargs):  # args are (context, x), e.g. (2,154,1536) and (2,4096,1536)\n        return block_mixing(\n            *args, context_block=self.context_block, x_block=self.x_block, **kwargs\n        )\n\n\nclass FinalLayer(nn.Module):\n    \"\"\"\n    The final layer of DiT.\n    \"\"\"\n\n    def __init__(\n        self,\n        hidden_size        : int,\n        patch_size         : int,\n        out_channels       : int,\n        total_out_channels : Optional[int] = None,\n        dtype                              = None,\n        device                             = None,\n        operations                         = None,\n    ):\n        super().__init__()\n        self.norm_final = operations.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6, dtype=dtype, device=device)\n        self.linear = (\n            operations.Linear(hidden_size, patch_size * patch_size * out_channels,   bias=True, dtype=dtype, device=device)\n            if (total_out_channels is None)\n            else operations.Linear(hidden_size, total_out_channels,                  bias=True, dtype=dtype, device=device)\n        )\n        self.adaLN_modulation = nn.Sequential(\n            nn.SiLU(), operations.Linear(hidden_size, 2 * hidden_size,               bias=True, dtype=dtype, device=device)\n     
   )\n\n    def forward(self, x: torch.Tensor, c: torch.Tensor) -> torch.Tensor:\n        shift, scale = self.adaLN_modulation(c).chunk(2, dim=1)\n        x            = modulate(self.norm_final(x), shift, scale)\n        x            = self.linear(x)\n        return x\n\nclass SelfAttentionContext(nn.Module):\n    def __init__(self, dim, heads=8, dim_head=64, dtype=None, device=None, operations=None):\n        super().__init__()\n        dim_head      = dim // heads\n        inner_dim     = dim\n\n        self.heads    = heads\n        self.dim_head = dim_head\n\n        self.qkv      = operations.Linear(dim, dim * 3, bias=True, dtype=dtype, device=device)\n\n        self.proj     = operations.Linear(inner_dim, dim,          dtype=dtype, device=device)\n\n    def forward(self, x):\n        qkv     = self.qkv(x)\n        q, k, v = split_qkv(qkv, self.dim_head)\n        x       = optimized_attention(q.reshape(q.shape[0], q.shape[1], -1), k, v, heads=self.heads)\n        return self.proj(x)\n\nclass ContextProcessorBlock(nn.Module):\n    def __init__(self, context_size, dtype=None, device=None, operations=None):\n        super().__init__()\n        self.norm1 = operations.LayerNorm(context_size, elementwise_affine=False, eps=1e-6,                                                   dtype=dtype, device=device)\n        self.attn  = SelfAttentionContext(context_size,                                                                                       dtype=dtype, device=device, operations=operations)\n        self.norm2 = operations.LayerNorm(context_size, elementwise_affine=False, eps=1e-6,                                                   dtype=dtype, device=device)\n        self.mlp   = Mlp(in_features=context_size, hidden_features=(context_size * 4), act_layer=lambda: nn.GELU(approximate=\"tanh\"), drop=0, dtype=dtype, device=device, operations=operations)\n\n    def forward(self, x):\n        x += self.attn(self.norm1(x))\n        x += self.mlp (self.norm2(x))\n        return x\n\nclass ContextProcessor(nn.Module):\n    def __init__(self, context_size, num_layers, dtype=None, device=None, operations=None):\n        super().__init__()\n        self.layers = torch.nn.ModuleList([ContextProcessorBlock(context_size,                                     dtype=dtype, device=device, operations=operations) for i in range(num_layers)])\n        self.norm   =                       operations.LayerNorm(context_size, elementwise_affine=False, eps=1e-6, dtype=dtype, device=device)\n\n    def forward(self, x):\n        for i, l in enumerate(self.layers):\n            x = l(x)\n        return self.norm(x)\n\nclass MMDiT(nn.Module):\n    \"\"\"\n    Diffusion model with a Transformer backbone.\n    \"\"\"\n\n    def __init__(\n        self,\n        input_size               :  int                 = 32,\n        patch_size               :  int                 = 2,\n        in_channels              :  int                 = 4,\n        depth                    :  int                 = 28,\n        # hidden_size            :  Optional[int]       = None,\n        # num_heads              :  Optional[int]       = None,\n        mlp_ratio                :  float               = 4.0,\n        learn_sigma              :  bool                = False,\n        adm_in_channels          :  Optional[int]       = None,\n        context_embedder_config  :  Optional[Dict]      = None,\n        compile_core             :  bool                = False,\n        use_checkpoint           :  bool                = False,\n        
register_length          :  int                 = 0,\n        attn_mode                :  str                 = \"torch\",\n        rmsnorm                  :  bool                = False,\n        scale_mod_only           :  bool                = False,\n        swiglu                   :  bool                = False,\n        out_channels             :  Optional[int]       = None,\n        pos_embed_scaling_factor :  Optional[float]     = None,\n        pos_embed_offset         :  Optional[float]     = None,\n        pos_embed_max_size       :  Optional[int]       = None,\n        num_patches                                     = None,\n        qk_norm                  :  Optional[str]       = None,\n        qkv_bias                 :  bool                = True,\n        context_processor_layers                        = None,\n        x_block_self_attn        :  bool                = False,\n        x_block_self_attn_layers :  Optional[List[int]] = None,\n        context_size                                    = 4096,\n        num_blocks                                      = None,\n        final_layer                                     = True,\n        skip_blocks                                     = False,\n        dtype                                           = None, #TODO\n        device                                          = None,\n        operations                                      = None,\n    ):\n        super().__init__()\n        self.dtype                    = dtype\n        self.learn_sigma              = learn_sigma\n        self.in_channels              = in_channels\n        default_out_channels          = in_channels * 2 if learn_sigma else in_channels\n        self.out_channels             = default(out_channels, default_out_channels)\n        self.patch_size               = patch_size\n        self.pos_embed_scaling_factor = pos_embed_scaling_factor\n        self.pos_embed_offset         = pos_embed_offset\n        self.pos_embed_max_size       = pos_embed_max_size\n        self.x_block_self_attn_layers = x_block_self_attn_layers if x_block_self_attn_layers is not None else []  # avoid a shared mutable default\n\n        # hidden_size = default(hidden_size, 64 * depth)\n        # num_heads = default(num_heads, hidden_size // 64)\n\n        # width is derived from depth: head_dim is fixed at 64, so hidden_size = 64 * depth and num_heads = depth\n        self.hidden_size = 64 * depth\n        num_heads        = depth\n        if num_blocks is None:\n            num_blocks   = depth\n\n        self.depth       = depth\n        self.num_heads   = num_heads\n\n        self.x_embedder  = PatchEmbed(\n            input_size,\n            patch_size,\n            in_channels,\n            self.hidden_size,\n            bias            = True,\n            strict_img_size = self.pos_embed_max_size is None,\n            dtype           = dtype,\n            device          = device,\n            operations      = operations\n        )\n        self.t_embedder = TimestepEmbedder(self.hidden_size,                                                    dtype=dtype, device=device, operations=operations)\n\n        self.y_embedder = None\n        if adm_in_channels is not None:\n            assert isinstance(adm_in_channels, int)\n            self.y_embedder = VectorEmbedder(adm_in_channels,                                 self.hidden_size, dtype=dtype, device=device, operations=operations)\n\n        if context_processor_layers is not None:\n            self.context_processor = ContextProcessor(context_size, context_processor_layers,                   dtype=dtype, device=device, operations=operations)\n        
else:\n            self.context_processor = None\n\n        self.context_embedder = nn.Identity()\n        if context_embedder_config is not None:\n            if context_embedder_config[\"target\"] == \"torch.nn.Linear\":\n                self.context_embedder = operations.Linear(**context_embedder_config[\"params\"],                  dtype=dtype, device=device)\n\n        self.register_length = register_length\n        if self.register_length > 0:\n            self.register = nn.Parameter(torch.randn(1, register_length,                      self.hidden_size, dtype=dtype, device=device))\n\n        # num_patches = self.x_embedder.num_patches\n        # Will use fixed sin-cos embedding:\n        # just use a buffer already\n        if num_patches is not None:\n            self.register_buffer(\n                \"pos_embed\",\n                torch.empty(1, num_patches,                                                   self.hidden_size, dtype=dtype, device=device),\n            )\n        else:\n            self.pos_embed = None\n\n        self.use_checkpoint = use_checkpoint\n        if not skip_blocks:\n            self.joint_blocks = nn.ModuleList(\n                [\n                    ReJointBlock(\n                        self.hidden_size,\n                        num_heads,\n                        mlp_ratio         = mlp_ratio,\n                        qkv_bias          = qkv_bias,\n                        attn_mode         = attn_mode,\n                        pre_only          = (i == num_blocks - 1) and final_layer,\n                        rmsnorm           = rmsnorm,\n                        scale_mod_only    = scale_mod_only,\n                        swiglu            = swiglu,\n                        qk_norm           = qk_norm,\n                        x_block_self_attn = (i in self.x_block_self_attn_layers) or x_block_self_attn,\n                        dtype             = dtype,\n                        device            = device,\n                        operations        = operations,\n                    )\n                    for i in range(num_blocks)\n                ]\n            )\n\n        if final_layer:\n            self.final_layer = FinalLayer(self.hidden_size, patch_size, self.out_channels,                      dtype=dtype, device=device, operations=operations)\n\n        if compile_core:\n            assert False\n            self.forward_core_with_concat = torch.compile(self.forward_core_with_concat)\n\n    def cropped_pos_embed(self, hw, device=None):\n        p    = self.x_embedder.patch_size[0]\n        h, w = hw\n        # patched size\n        h    = (h + 1) // p\n        w    = (w + 1) // p\n        if self.pos_embed is None:\n            return get_2d_sincos_pos_embed_torch(self.hidden_size, w, h, device=device)\n        assert self.pos_embed_max_size is not None\n        assert h <= self.pos_embed_max_size, (h, self.pos_embed_max_size)\n        assert w <= self.pos_embed_max_size, (w, self.pos_embed_max_size)\n        top  = (self.pos_embed_max_size - h) // 2\n        left = (self.pos_embed_max_size - w) // 2\n        spatial_pos_embed = rearrange(\n            self.pos_embed,\n            \"1 (h w) c -> 1 h w c\",\n            h = self.pos_embed_max_size,\n            w = self.pos_embed_max_size,\n        )\n        spatial_pos_embed = spatial_pos_embed[:, top : top + h, left : left + w, :]\n        spatial_pos_embed = rearrange(spatial_pos_embed, \"1 h w c -> 1 (h w) c\")\n        # print(spatial_pos_embed, top, left, h, w)\n        # # t = 
get_2d_sincos_pos_embed_torch(self.hidden_size, w, h, 7.875, 7.875, device=device) #matches exactly for 1024 res\n        # t = get_2d_sincos_pos_embed_torch(self.hidden_size, w, h, 7.5, 7.5, device=device) #scales better\n        # # print(t)\n        # return t\n        return spatial_pos_embed\n\n    def unpatchify(self, x, hw=None):\n        \"\"\"\n        x: (N, T, patch_size**2 * C)\n        imgs: (N, H, W, C)\n        \"\"\"\n        c = self.out_channels\n        p = self.x_embedder.patch_size[0]\n        if hw is None:\n            h = w = int(x.shape[1] ** 0.5)\n        else:\n            h, w = hw\n            h    = (h + 1) // p\n            w    = (w + 1) // p\n        assert h * w == x.shape[1]\n\n        x    = x.reshape(shape=(x.shape[0], h, w, p, p, c))\n        x    = torch.einsum(\"nhwpqc->nchpwq\", x)\n        imgs = x.reshape(shape=(x.shape[0], c, h * p, w * p))\n        return imgs\n\n    def forward_core_with_concat(\n        self,\n        x            : torch.Tensor,\n        c_mod        : torch.Tensor,\n        c_mod_base   : torch.Tensor,\n        context      : Optional[torch.Tensor] = None,\n        context_base : Optional[torch.Tensor] = None,\n        control                               = None,\n        transformer_options                   = {},\n    ) -> torch.Tensor:\n        patches_replace = transformer_options.get(\"patches_replace\", {})\n        if self.register_length > 0:\n            context = torch.cat(\n                (\n                    repeat(self.register, \"1 ... -> b ...\", b=x.shape[0]),\n                    default(context, torch.Tensor([]).type_as(x)),\n                ),\n                1,\n            )\n\n        weight = transformer_options.get('reg_cond_weight', 0.0)\n        floor  = transformer_options.get('reg_cond_floor',  0.0)\n        floor  = min(floor, weight)\n\n        if torch.is_tensor(weight):\n            weight = weight.item()\n\n        AttnMask = transformer_options.get('AttnMask')\n        mask     = None\n        if AttnMask is not None and weight > 0:\n            mask = AttnMask.get(weight=weight)\n\n            mask_type_bool = (mask is not None) and isinstance(mask[0][0].item(), bool)\n            if not mask_type_bool:\n                mask = mask.to(x.dtype)\n\n            text_len = context.shape[1]\n\n            floor_val = floor.to(mask.device) if torch.is_tensor(floor) else floor\n            mask[text_len:,text_len:] = torch.clamp(mask[text_len:,text_len:], min=floor_val)   # clamp bleed into the original self-attn region\n        mask_type_bool = (mask is not None) and isinstance(mask[0][0].item(), bool)\n        if weight <= 0.0:\n            mask = None\n            context = context_base\n            c_mod = c_mod_base\n\n        # context is B, L', D\n        # x is B, L, D\n        blocks_replace = patches_replace.get(\"dit\", {})\n        blocks         = len(self.joint_blocks)\n        for i in range(blocks):\n            if mask_type_bool and weight < (i / (blocks-1)) and mask is not None:\n                mask = mask.to(x.dtype)  # relax the bool mask for deeper blocks by casting it to an additive float mask\n\n           
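 # Patch hook sketch: an entry in patches_replace[\"dit\"] keyed by\n            # (\"double_block\", i) is called with {\"img\", \"txt\", \"vec\"} and a dict\n            # holding the original block, matching the replace convention used below.\n           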
 if (\"double_block\", i) in blocks_replace:\n                def block_wrap(args):\n                    out                    = {}\n                    out[\"txt\"], out[\"img\"] = self.joint_blocks[i](args[\"txt\"], args[\"img\"], c=args[\"vec\"])\n                    return out\n\n                out     = blocks_replace[(\"double_block\", i)]({\"img\": x, \"txt\": context, \"vec\": c_mod}, {\"original_block\": block_wrap})\n                context = out[\"txt\"]\n                x       = out[\"img\"]\n            else:\n                context, x = self.joint_blocks[i](\n                    context,\n                    x,\n                    c              = c_mod,\n                    use_checkpoint = self.use_checkpoint,\n                    mask           = mask,\n                )\n            if control is not None:\n                control_o = control.get(\"output\")\n                if i < len(control_o):\n                    add = control_o[i]\n                    if add is not None:\n                        x += add\n\n        x = self.final_layer(x, c_mod)  # (N, T, patch_size ** 2 * out_channels)\n        return x\n\n    def forward(\n        self,\n        x      : torch.Tensor,\n        t      : torch.Tensor,\n        y      : Optional[torch.Tensor] = None,\n        context: Optional[torch.Tensor] = None,\n        control                         =   None,\n        transformer_options             = {},\n    ) -> torch.Tensor:\n        \"\"\"\n        Forward pass of DiT.\n        x: (N, C, H, W) tensor of spatial inputs (images or latent representations of images)\n        t: (N,) tensor of diffusion timesteps\n        y: (N,) tensor of class labels\n        \"\"\"\n        SIGMA = t[0].clone() / 1000\n        EO = transformer_options.get(\"ExtraOptions\", ExtraOptions(\"\"))\n        if EO is not None:\n            EO.mute = True\n            \n        y0_style_pos        = transformer_options.get(\"y0_style_pos\")\n        y0_style_neg        = transformer_options.get(\"y0_style_neg\")\n\n        y0_style_pos_weight    = transformer_options.get(\"y0_style_pos_weight\", 0.0)\n        y0_style_pos_synweight = transformer_options.get(\"y0_style_pos_synweight\", 0.0)\n        y0_style_pos_synweight *= y0_style_pos_weight\n\n        y0_style_neg_weight    = transformer_options.get(\"y0_style_neg_weight\", 0.0)\n        y0_style_neg_synweight = transformer_options.get(\"y0_style_neg_synweight\", 0.0)\n        y0_style_neg_synweight *= y0_style_neg_weight\n        \n        weight    = -1 * transformer_options.get(\"regional_conditioning_weight\", 0.0)\n        floor     = -1 * transformer_options.get(\"regional_conditioning_floor\",  0.0)\n        \n        freqsep_lowpass_method = transformer_options.get(\"freqsep_lowpass_method\")\n        freqsep_sigma          = transformer_options.get(\"freqsep_sigma\")\n        freqsep_kernel_size    = transformer_options.get(\"freqsep_kernel_size\")\n        freqsep_inner_kernel_size    = transformer_options.get(\"freqsep_inner_kernel_size\")\n        freqsep_stride    = transformer_options.get(\"freqsep_stride\")\n        \n        freqsep_lowpass_weight = transformer_options.get(\"freqsep_lowpass_weight\")\n        freqsep_highpass_weight= transformer_options.get(\"freqsep_highpass_weight\")\n        freqsep_mask           = transformer_options.get(\"freqsep_mask\")\n        \n\n        x_orig = x.clone()\n        y_orig = y.clone()\n        \n        h,w = x.shape[-2:]\n        h_len = ((h + (self.patch_size // 2)) // 
self.patch_size) # h_len 96\n        w_len = ((w + (self.patch_size // 2)) // self.patch_size) # w_len 96\n\n        out_list = []\n        for i in range(len(transformer_options['cond_or_uncond'])):\n            UNCOND = transformer_options['cond_or_uncond'][i] == 1\n\n            x = x_orig.clone()\n            y = y_orig.clone() if y_orig is not None else None\n\n            context_base = context[i][None,...].clone()\n\n            transformer_options['reg_cond_weight']    = transformer_options.get(\"regional_conditioning_weight\", 0.0)\n            transformer_options['reg_cond_floor']     = transformer_options.get(\"regional_conditioning_floor\",  0.0)\n            transformer_options['reg_cond_mask_orig'] = transformer_options.get('regional_conditioning_mask_orig')\n\n            AttnMask   = transformer_options.get('AttnMask',   None)\n            RegContext = transformer_options.get('RegContext', None)\n\n            if AttnMask is not None and transformer_options['reg_cond_weight'] > 0.0:\n                AttnMask.attn_mask_recast(x.dtype)\n                context_tmp = RegContext.get().to(context.dtype)\n                if UNCOND:\n                    # uncond pass: tile the base prompt out to the regional context length\n                    A = context[i][None,...].clone()\n                    B = context_tmp\n                    context_tmp = A.repeat(1, (B.shape[1] // A.shape[1]) + 1, 1)[:, :B.shape[1], :]\n            else:\n                context_tmp = context[i][None,...].clone()\n\n            if context_tmp is None:\n                context_tmp = context[i][None,...].clone()\n\n            if self.context_processor is not None:\n                context_tmp = self.context_processor(context_tmp)\n\n            hw = x.shape[-2:]\n            x  = self.x_embedder(x) + comfy.ops.cast_to_input(self.cropped_pos_embed(hw, device=x.device), x)\n            c  = self.t_embedder(t, dtype=x.dtype)  # (N, D)      # c plays the role of vec\n            if y is not None and self.y_embedder 
is not None:\n                y = self.y_embedder(y_orig.clone())  # (N, D)\n                c = c + y  # (N, D)                              # vec = vec + y     (y = pooled_output    1,2048)\n\n            if context_tmp is not None:\n                context_tmp = self.context_embedder(context_tmp)\n\n            if self.context_processor is not None:\n                context_base = self.context_processor(context_base)\n\n            c_base = c.clone()  # same construction as c: t_embedder output plus pooled y\n\n            if context_base is not None:\n                context_base = self.context_embedder(context_base)\n\n            x = self.forward_core_with_concat(\n                                                x[i][None,...],\n                                                c[i][None,...],\n                                                c_base[i][None,...],\n                                                context_tmp,\n                                                context_base,\n                                                control,\n                                                transformer_options,\n                                                )\n\n            x = self.unpatchify(x, hw=hw)  # (N, out_channels, H, W)\n\n            out_list.append(x)\n\n        x = torch.stack(out_list, dim=0).squeeze(dim=1)\n        eps = x[:,:,:hw[-2],:hw[-1]]\n\n        dtype = eps.dtype if self.style_dtype is None else self.style_dtype\n\n        if y0_style_pos is not None:\n            y0_style_pos_weight    = transformer_options.get(\"y0_style_pos_weight\")\n            y0_style_pos_synweight = transformer_options.get(\"y0_style_pos_synweight\")\n            y0_style_pos_synweight *= y0_style_pos_weight\n            y0_style_pos_mask      = transformer_options.get(\"y0_style_pos_mask\")\n            y0_style_pos_mask_edge = transformer_options.get(\"y0_style_pos_mask_edge\")\n\n            y0_style_pos = y0_style_pos.to(dtype)\n            x   = x_orig.clone().to(dtype)\n            eps = eps.to(dtype)\n            eps_orig = eps.clone()\n\n            sigma = SIGMA\n            denoised = x - sigma * eps\n\n            denoised_embed = self.Retrojector.embed(denoised)\n            y0_adain_embed = self.Retrojector.embed(y0_style_pos)\n\n            if transformer_options['y0_style_method'] == \"scattersort\":\n                tile_h, tile_w = transformer_options.get('y0_style_tile_height'), transformer_options.get('y0_style_tile_width')\n                pad = transformer_options.get('y0_style_tile_padding')\n                if pad is not None and tile_h is not None and tile_w is not None:\n\n                    denoised_spatial = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_spatial = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n\n                    if 
EO(\"scattersort_median_LP\"):\n                        denoised_spatial_LP = median_blur_2d(denoised_spatial, kernel_size=EO(\"scattersort_median_LP\",7))\n                        y0_adain_spatial_LP = median_blur_2d(y0_adain_spatial, kernel_size=EO(\"scattersort_median_LP\",7))\n                        \n                        denoised_spatial_HP = denoised_spatial - denoised_spatial_LP\n                        y0_adain_spatial_HP = y0_adain_spatial - y0_adain_spatial_LP\n                        \n                        denoised_spatial_LP = apply_scattersort_tiled(denoised_spatial_LP, y0_adain_spatial_LP, tile_h, tile_w, pad)\n                        \n                        denoised_spatial = denoised_spatial_LP + denoised_spatial_HP\n                        denoised_embed = rearrange(denoised_spatial, \"b c h w -> b (h w) c\")\n                    else:\n                        denoised_spatial = apply_scattersort_tiled(denoised_spatial, y0_adain_spatial, tile_h, tile_w, pad)\n                    \n                    denoised_embed = rearrange(denoised_spatial, \"b c h w -> b (h w) c\")\n                    \n                else:\n                    denoised_embed = apply_scattersort_masked(denoised_embed, y0_adain_embed, y0_style_pos_mask, y0_style_pos_mask_edge, h_len, w_len)\n\n\n\n            elif transformer_options['y0_style_method'] == \"AdaIN\":\n                if freqsep_mask is not None:\n                    freqsep_mask = freqsep_mask.view(1, 1, *freqsep_mask.shape[-2:]).float()\n                    freqsep_mask = F.interpolate(freqsep_mask.float(), size=(h_len, w_len), mode='nearest-exact')\n                \n                if hasattr(self, \"adain_tile\"):\n                    tile_h, tile_w = self.adain_tile\n                    \n                    denoised_pretile = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_pretile = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    \n                    if self.adain_flag:\n                        h_off = tile_h // 2\n                        w_off = tile_w // 2\n                        denoised_pretile = denoised_pretile[:,:,h_off:-h_off, w_off:-w_off]\n                        self.adain_flag = False\n                    else:\n                        h_off = 0\n                        w_off = 0\n                        self.adain_flag = True\n                    \n                    tiles,    orig_shape, grid, strides = tile_latent(denoised_pretile, tile_size=(tile_h,tile_w))\n                    y0_tiles, orig_shape, grid, strides = tile_latent(y0_adain_pretile, tile_size=(tile_h,tile_w))\n                    \n                    tiles_out = []\n                    for i in range(tiles.shape[0]):\n                        tile = tiles[i].unsqueeze(0)\n                        y0_tile = y0_tiles[i].unsqueeze(0)\n                        \n                        tile    = rearrange(tile,    \"b c h w -> b (h w) c\", h=tile_h, w=tile_w)\n                        y0_tile = rearrange(y0_tile, \"b c h w -> b (h w) c\", h=tile_h, w=tile_w)\n                        \n                        tile = adain_seq_inplace(tile, y0_tile)\n                        tiles_out.append(rearrange(tile, \"b (h w) c -> b c h w\", h=tile_h, w=tile_w))\n                    \n                    tiles_out_tensor = torch.cat(tiles_out, dim=0)\n                    tiles_out_tensor = untile_latent(tiles_out_tensor, orig_shape, grid, strides)\n\n               
     if h_off == 0:\n                        denoised_pretile = tiles_out_tensor\n                    else:\n                        denoised_pretile[:,:,h_off:-h_off, w_off:-w_off] = tiles_out_tensor\n                    denoised_embed = rearrange(denoised_pretile, \"b c h w -> b (h w) c\", h=h_len, w=w_len)\n\n                elif freqsep_lowpass_method is not None and freqsep_lowpass_method.endswith(\"pw\"): #EO(\"adain_pw\"):\n\n                    denoised_spatial = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_spatial = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n\n                    if   freqsep_lowpass_method == \"median_pw\":\n                        denoised_spatial_new = adain_patchwise_row_batch_med(denoised_spatial.clone(), y0_adain_spatial.clone().repeat(denoised_spatial.shape[0],1,1,1), sigma=freqsep_sigma, kernel_size=freqsep_kernel_size, use_median_blur=True, lowpass_weight=freqsep_lowpass_weight, highpass_weight=freqsep_highpass_weight)\n                    elif freqsep_lowpass_method == \"gaussian_pw\": \n                        denoised_spatial_new = adain_patchwise_row_batch(denoised_spatial.clone(), y0_adain_spatial.clone().repeat(denoised_spatial.shape[0],1,1,1), sigma=freqsep_sigma, kernel_size=freqsep_kernel_size)\n                    \n                    denoised_embed = rearrange(denoised_spatial_new, \"b c h w -> b (h w) c\", h=h_len, w=w_len)\n\n                elif freqsep_lowpass_method is not None: \n                    denoised_spatial = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_spatial = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    \n                    if   freqsep_lowpass_method == \"median\":\n                        denoised_spatial_LP = median_blur_2d(denoised_spatial, kernel_size=freqsep_kernel_size)\n                        y0_adain_spatial_LP = median_blur_2d(y0_adain_spatial, kernel_size=freqsep_kernel_size)\n                    elif freqsep_lowpass_method == \"gaussian\":\n                        denoised_spatial_LP = gaussian_blur_2d(denoised_spatial, sigma=freqsep_sigma, kernel_size=freqsep_kernel_size)\n                        y0_adain_spatial_LP = gaussian_blur_2d(y0_adain_spatial, sigma=freqsep_sigma, kernel_size=freqsep_kernel_size)\n                    \n                    denoised_spatial_HP = denoised_spatial - denoised_spatial_LP\n                    \n                    if EO(\"adain_fs_uhp\"):\n                        y0_adain_spatial_HP = y0_adain_spatial - y0_adain_spatial_LP\n                        \n                        denoised_spatial_ULP = gaussian_blur_2d(denoised_spatial, sigma=EO(\"adain_fs_uhp_sigma\", 1.0), kernel_size=EO(\"adain_fs_uhp_kernel_size\", 3))\n                        y0_adain_spatial_ULP = gaussian_blur_2d(y0_adain_spatial, sigma=EO(\"adain_fs_uhp_sigma\", 1.0), kernel_size=EO(\"adain_fs_uhp_kernel_size\", 3))\n                        \n                        denoised_spatial_UHP = denoised_spatial_HP  - denoised_spatial_ULP\n                        y0_adain_spatial_UHP = y0_adain_spatial_HP  - y0_adain_spatial_ULP\n                        \n                        #denoised_spatial_HP  = y0_adain_spatial_ULP + denoised_spatial_UHP\n                        denoised_spatial_HP  = denoised_spatial_ULP + y0_adain_spatial_UHP\n                    \n                    denoised_spatial_new = freqsep_lowpass_weight * 
y0_adain_spatial_LP + freqsep_highpass_weight * denoised_spatial_HP\n                    denoised_embed = rearrange(denoised_spatial_new, \"b c h w -> b (h w) c\", h=h_len, w=w_len)\n\n                else:\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    \n                for adain_iter in range(EO(\"style_iter\", 0)):\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    denoised_embed = self.Retrojector.embed(self.Retrojector.unembed(denoised_embed))\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    \n            elif transformer_options['y0_style_method'] == \"WCT\":\n                self.StyleWCT.set(y0_adain_embed)\n                denoised_embed = self.StyleWCT.get(denoised_embed)\n                \n                if transformer_options.get('y0_standard_guide') is not None:\n                    y0_standard_guide = transformer_options.get('y0_standard_guide')\n                    \n                    y0_standard_guide_embed = self.Retrojector.embed(y0_standard_guide)\n                    f_cs = self.StyleWCT.get(y0_standard_guide_embed)\n                    self.y0_standard_guide = self.Retrojector.unembed(f_cs)\n\n                if transformer_options.get('y0_inv_standard_guide') is not None:\n                    y0_inv_standard_guide = transformer_options.get('y0_inv_standard_guide')\n\n                    y0_inv_standard_guide_embed = self.Retrojector.embed(y0_inv_standard_guide)\n                    f_cs = self.StyleWCT.get(y0_inv_standard_guide_embed)\n                    self.y0_inv_standard_guide = self.Retrojector.unembed(f_cs)\n\n            denoised_approx = self.Retrojector.unembed(denoised_embed)\n            \n            eps = (x - denoised_approx) / sigma\n\n            if not UNCOND:\n                if eps.shape[0] == 2:\n                    eps[1] = eps_orig[1] + y0_style_pos_weight * (eps[1] - eps_orig[1])\n                    eps[0] = eps_orig[0] + y0_style_pos_synweight * (eps[0] - eps_orig[0])\n                else:\n                    eps[0] = eps_orig[0] + y0_style_pos_weight * (eps[0] - eps_orig[0])\n            elif eps.shape[0] == 1 and UNCOND:\n                eps[0] = eps_orig[0] + y0_style_pos_synweight * (eps[0] - eps_orig[0])\n            \n            eps = eps.float()\n        \n        if y0_style_neg is not None:\n            y0_style_neg_weight    = transformer_options.get(\"y0_style_neg_weight\")\n            y0_style_neg_synweight = transformer_options.get(\"y0_style_neg_synweight\")\n            y0_style_neg_synweight *= y0_style_neg_weight\n            y0_style_neg_mask = transformer_options.get(\"y0_style_neg_mask\")\n            y0_style_neg_mask_edge = transformer_options.get(\"y0_style_neg_mask_edge\")\n            \n            y0_style_neg = y0_style_neg.to(dtype)\n            x   = x_orig.clone().to(dtype)\n            eps = eps.to(dtype)\n            eps_orig = eps.clone()\n            \n            sigma = SIGMA #t_orig[0].to(torch.float32) / 1000\n            denoised = x - sigma * eps\n\n            denoised_embed = self.Retrojector.embed(denoised)\n            y0_adain_embed = self.Retrojector.embed(y0_style_neg)\n            \n            if transformer_options['y0_style_method'] == \"scattersort\":\n                tile_h, tile_w = transformer_options.get('y0_style_tile_height'), transformer_options.get('y0_style_tile_width')\n                pad = 
transformer_options.get('y0_style_tile_padding')\n                if pad is not None and tile_h is not None and tile_w is not None:\n                    \n                    denoised_spatial = rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    y0_adain_spatial = rearrange(y0_adain_embed, \"b (h w) c -> b c h w\", h=h_len, w=w_len)\n                    \n                    denoised_spatial = apply_scattersort_tiled(denoised_spatial, y0_adain_spatial, tile_h, tile_w, pad)\n                    \n                    denoised_embed = rearrange(denoised_spatial, \"b c h w -> b (h w) c\")\n\n                else:\n                    denoised_embed = apply_scattersort_masked(denoised_embed, y0_adain_embed, y0_style_neg_mask, y0_style_neg_mask_edge, h_len, w_len)\n            \n            \n            elif transformer_options['y0_style_method'] == \"AdaIN\":\n                denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                for adain_iter in range(EO(\"style_iter\", 0)):\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    denoised_embed = self.Retrojector.embed(self.Retrojector.unembed(denoised_embed))\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    \n            elif transformer_options['y0_style_method'] == \"WCT\":\n                self.StyleWCT.set(y0_adain_embed)\n                denoised_embed = self.StyleWCT.get(denoised_embed)\n\n            denoised_approx = self.Retrojector.unembed(denoised_embed)\n\n            if UNCOND:\n                eps = (x - denoised_approx) / sigma\n                eps[0] = eps_orig[0] + y0_style_neg_weight * (eps[0] - eps_orig[0])\n                if eps.shape[0] == 2:\n                    eps[1] = eps_orig[1] + y0_style_neg_synweight * (eps[1] - eps_orig[1])\n            elif eps.shape[0] == 1 and not UNCOND:\n                eps[0] = eps_orig[0] + y0_style_neg_synweight * (eps[0] - eps_orig[0])\n            \n            eps = eps.float()\n            \n        return eps\n
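\n\n# Hedged sketch (illustrative, not part of the node API): the whitening/coloring\n# transform (WCT) that self.StyleWCT applies above. Content tokens are whitened\n# to unit covariance, then recolored with the style covariance. Assumes [N, C]\n# feature matrices; the covariance work is done in float64, as elsewhere in this file.\ndef wct_single_sketch(f_c: torch.Tensor, f_s: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:\n    mu_c, mu_s = f_c.mean(0, keepdim=True), f_s.mean(0, keepdim=True)\n    c = (f_c - mu_c).double()\n    s = (f_s - mu_s).double()\n    I = torch.eye(c.size(1), dtype=c.dtype, device=c.device)\n    cov_c = c.T @ c / (c.size(0) - 1) + eps * I\n    cov_s = s.T @ s / (s.size(0) - 1) + eps * I\n    Ec, Uc = torch.linalg.eigh(cov_c)  # content eigenbasis -> whitening\n    Es, Us = torch.linalg.eigh(cov_s)  # style eigenbasis   -> coloring\n    whiten = Uc @ torch.diag(Ec.clamp(min=0).rsqrt()) @ Uc.T\n    color  = Us @ torch.diag(Es.clamp(min=0).sqrt())  @ Us.T\n    return ((c @ whiten.T) @ color.T + mu_s.double()).to(f_c.dtype)\n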
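\n\n# Hedged sketch of the pinv-based round trip that self.Retrojector performs above\n# (names here are illustrative): embed the latent into patch tokens with the\n# transformer's patch-embedding conv, then invert via the pseudoinverse of the\n# flattened kernel and F.fold. Assumes a 2x2, stride-2 patch conv (e.g. a\n# [1536, 16, 2, 2] kernel); pinv is taken in float64 for stability.\ndef retroject_roundtrip_sketch(latent: torch.Tensor, proj: torch.nn.Conv2d) -> torch.Tensor:\n    C_emb  = proj.weight.shape[0]\n    W_pinv = torch.linalg.pinv(proj.weight.view(C_emb, -1).double())  # [C_lat*4, C_emb]\n    y = proj(latent)                                                  # [B, C_emb, H/2, W/2]\n    tokens  = y.flatten(2).transpose(1, 2) - proj.bias                # [B, N, C_emb], bias removed\n    patches = (tokens.double() @ W_pinv.T).transpose(1, 2)            # [B, C_lat*4, N]\n    out = torch.nn.functional.fold(patches, output_size=(y.shape[-2] * 2, y.shape[-1] * 2), kernel_size=2, stride=2)\n    return out.to(latent.dtype)\n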
\n\nclass ReOpenAISignatureMMDITWrapper(MMDiT):\n    def forward(\n        self,\n        x         : torch.Tensor,\n        timesteps : torch.Tensor,\n        context   : Optional[torch.Tensor] = None,\n        y         : Optional[torch.Tensor] = None,\n        control                            = None,\n        transformer_options                = {},\n        **kwargs,\n    ) -> torch.Tensor:\n        return super().forward(x, timesteps, context=context, y=y, control=control, transformer_options=transformer_options)\n\n\n\ndef adain_seq_inplace(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:\n    mean_c = content.mean(1, keepdim=True)\n    std_c  = content.std (1, keepdim=True).add_(eps)  # in-place add\n    mean_s = style.mean  (1, keepdim=True)\n    std_s  = style.std   (1, keepdim=True).add_(eps)\n\n    content.sub_(mean_c).div_(std_c).mul_(std_s).add_(mean_s)  # in-place chain\n    return content\n\n\n\ndef adain_seq(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:\n    return ((content - content.mean(1, keepdim=True)) / (content.std(1, keepdim=True) + eps)) * (style.std(1, keepdim=True) + eps) + style.mean(1, keepdim=True)\n"
  },
  {
    "path": "sigmas.py",
    "content": "import torch\r\nimport numpy as np\r\nfrom math import *\r\nimport builtins\r\nfrom scipy.interpolate import CubicSpline\r\nfrom scipy import special, stats\r\nimport torch.nn.functional as F\r\nimport torch.nn as nn\r\nimport torch.optim as optim\r\nimport math\r\n\r\n\r\nfrom comfy.k_diffusion.sampling import get_sigmas_polyexponential, get_sigmas_karras\r\nimport comfy.samplers\r\n\r\nfrom torch import Tensor, nn\r\nfrom typing import Optional, Callable, Tuple, Dict, Any, Union, TYPE_CHECKING, TypeVar\r\n\r\nfrom .res4lyf import RESplain\r\nfrom .helper  import get_res4lyf_scheduler_list\r\n\r\n\r\ndef rescale_linear(input, input_min, input_max, output_min, output_max):\r\n    output = ((input - input_min) / (input_max - input_min)) * (output_max - output_min) + output_min;\r\n    return output\r\n\r\nclass set_precision_sigmas:\r\n    def __init__(self):\r\n        pass\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                    \"sigmas\": (\"SIGMAS\", ),   \r\n                    \"precision\": ([\"16\", \"32\", \"64\"], ),\r\n                    \"set_default\": (\"BOOLEAN\", {\"default\": False})\r\n                     },\r\n                }\r\n\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    RETURN_NAMES = (\"passthrough\",)\r\n    CATEGORY = \"RES4LYF/precision\"\r\n\r\n    FUNCTION = \"main\"\r\n\r\n    def main(self, precision=\"32\", sigmas=None, set_default=False):\r\n        match precision:\r\n            case \"16\":\r\n                if set_default is True:\r\n                    torch.set_default_dtype(torch.float16)\r\n                sigmas = sigmas.to(torch.float16)\r\n            case \"32\":\r\n                if set_default is True:\r\n                    torch.set_default_dtype(torch.float32)\r\n                sigmas = sigmas.to(torch.float32)\r\n            case \"64\":\r\n                if set_default is True:\r\n                    torch.set_default_dtype(torch.float64)\r\n                sigmas = sigmas.to(torch.float64)\r\n        return (sigmas, )\r\n\r\n\r\nclass SimpleInterpolator(nn.Module):\r\n    def __init__(self):\r\n        super(SimpleInterpolator, self).__init__()\r\n        self.net = nn.Sequential(\r\n            nn.Linear(1, 16),\r\n            nn.ReLU(),\r\n            nn.Linear(16, 32),\r\n            nn.ReLU(),\r\n            nn.Linear(32, 1)\r\n        )\r\n\r\n    def forward(self, x):\r\n        return self.net(x)\r\n\r\ndef train_interpolator(model, sigma_schedule, steps, epochs=5000, lr=0.01):\r\n    with torch.inference_mode(False):\r\n        model = SimpleInterpolator()\r\n        sigma_schedule = sigma_schedule.clone()\r\n\r\n        criterion = nn.MSELoss()\r\n        optimizer = optim.Adam(model.parameters(), lr=lr)\r\n        \r\n        x_train = torch.linspace(0, 1, steps=steps).unsqueeze(1)\r\n        y_train = sigma_schedule.unsqueeze(1)\r\n\r\n        # disable inference mode for training\r\n        model.train()\r\n        for epoch in range(epochs):\r\n            optimizer.zero_grad()\r\n\r\n            # fwd pass\r\n            outputs = model(x_train)\r\n            loss = criterion(outputs, y_train)\r\n            loss.backward()\r\n            optimizer.step()\r\n\r\n    return model\r\n\r\ndef interpolate_sigma_schedule_model(sigma_schedule, target_steps):\r\n    model = SimpleInterpolator()\r\n    sigma_schedule = sigma_schedule.float().detach()\r\n\r\n    # train on original sigma schedule\r\n    trained_model = train_interpolator(model, 
sigma_schedule, len(sigma_schedule))\r\n\r\n    # generate target steps for interpolation\r\n    x_interpolated = torch.linspace(0, 1, target_steps).unsqueeze(1)\r\n\r\n    # inference w/o gradients\r\n    trained_model.eval()\r\n    with torch.no_grad():\r\n        interpolated_sigma = trained_model(x_interpolated).squeeze()\r\n\r\n    return interpolated_sigma\r\n\r\n\r\n\r\n\r\nclass sigmas_interpolate:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas_in\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"output_length\": (\"INT\", {\"default\": 0, \"min\": 0,\"max\": 10000,\"step\": 1}),\r\n                \"mode\": ([\"linear\", \"nearest\", \"polynomial\", \"exponential\", \"power\", \"model\"],),\r\n                \"order\": (\"INT\", {\"default\": 8, \"min\": 1,\"max\": 64,\"step\": 1}),\r\n                \"rescale_after\": (\"BOOLEAN\", {\"default\": True, \"tooltip\": \"Rescale the output to the original min/max range after interpolation.\"}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    RETURN_NAMES = (\"sigmas\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    DESCRIPTION = \"Interpolate the sigmas schedule to a new length clamping the start and end values.\"\r\n\r\n    def interpolate_sigma_schedule_poly(self, sigma_schedule, target_steps):\r\n        order = self.order\r\n        sigma_schedule_np = sigma_schedule.cpu().numpy()\r\n\r\n        # orig steps (assuming even spacing)\r\n        original_steps = np.linspace(0, 1, len(sigma_schedule_np))\r\n\r\n        # fit polynomial of the given order\r\n        coefficients = np.polyfit(original_steps, sigma_schedule_np, deg=order)\r\n\r\n        # generate new steps where we want to interpolate the data\r\n        target_steps_np = np.linspace(0, 1, target_steps)\r\n\r\n        # eval polynomial at new steps\r\n        interpolated_sigma_np = np.polyval(coefficients, target_steps_np)\r\n\r\n        interpolated_sigma = torch.tensor(interpolated_sigma_np, device=sigma_schedule.device, dtype=sigma_schedule.dtype)\r\n        return interpolated_sigma\r\n\r\n    def interpolate_sigma_schedule_constrained(self, sigma_schedule, target_steps):\r\n        sigma_schedule_np = sigma_schedule.cpu().numpy()\r\n\r\n        # orig steps\r\n        original_steps = np.linspace(0, 1, len(sigma_schedule_np))\r\n\r\n        # target steps for interpolation\r\n        target_steps_np = np.linspace(0, 1, target_steps)\r\n\r\n        # fit cubic spline with fixed start and end values\r\n        cs = CubicSpline(original_steps, sigma_schedule_np, bc_type=((1, 0.0), (1, 0.0)))\r\n\r\n        # eval spline at the target steps\r\n        interpolated_sigma_np = cs(target_steps_np)\r\n\r\n        interpolated_sigma = torch.tensor(interpolated_sigma_np, device=sigma_schedule.device, dtype=sigma_schedule.dtype)\r\n\r\n        return interpolated_sigma\r\n    \r\n    def interpolate_sigma_schedule_exp(self, sigma_schedule, target_steps):\r\n        # transform to log space\r\n        log_sigma_schedule = torch.log(sigma_schedule)\r\n\r\n        # define the original and target step ranges\r\n        original_steps = torch.linspace(0, 1, steps=len(sigma_schedule))\r\n        target_steps = torch.linspace(0, 1, steps=target_steps)\r\n\r\n        # interpolate in log space\r\n        interpolated_log_sigma = F.interpolate(\r\n            
log_sigma_schedule.unsqueeze(0).unsqueeze(0),  # Add fake batch and channel dimensions\r\n            size=target_steps.shape[0],\r\n            mode='linear',\r\n            align_corners=True\r\n        ).squeeze()\r\n\r\n        # transform back to exponential space\r\n        interpolated_sigma_schedule = torch.exp(interpolated_log_sigma)\r\n\r\n        return interpolated_sigma_schedule\r\n    \r\n    def interpolate_sigma_schedule_power(self, sigma_schedule, target_steps):\r\n        sigma_schedule_np = sigma_schedule.cpu().numpy()\r\n        original_steps = np.linspace(1, len(sigma_schedule_np), len(sigma_schedule_np))\r\n\r\n        # power regression using a log-log transformation\r\n        log_x = np.log(original_steps)\r\n        log_y = np.log(sigma_schedule_np)\r\n\r\n        # linear regression on log-log data\r\n        coefficients = np.polyfit(log_x, log_y, deg=1)  # degree 1 for linear fit in log-log space\r\n        a = np.exp(coefficients[1])  # intercept of the log-log fit; exp() undoes the log transform\r\n        b = coefficients[0]  # slope of the log-log fit becomes the power-law exponent\r\n\r\n        target_steps_np = np.linspace(1, len(sigma_schedule_np), target_steps)\r\n\r\n        # power law prediction: y = a * x^b\r\n        interpolated_sigma_np = a * (target_steps_np ** b)\r\n\r\n        interpolated_sigma = torch.tensor(interpolated_sigma_np, device=sigma_schedule.device, dtype=sigma_schedule.dtype)\r\n\r\n        return interpolated_sigma\r\n            \r\n    def interpolate_sigma_schedule_linear(self, sigma_schedule, target_steps):\r\n        return F.interpolate(sigma_schedule.unsqueeze(0).unsqueeze(0), target_steps, mode='linear').squeeze(0).squeeze(0)\r\n\r\n    def interpolate_sigma_schedule_nearest(self, sigma_schedule, target_steps):\r\n        return F.interpolate(sigma_schedule.unsqueeze(0).unsqueeze(0), target_steps, mode='nearest').squeeze(0).squeeze(0)\r\n    \r\n    def interpolate_nearest_neighbor(self, sigma_schedule, target_steps):\r\n        original_steps = torch.linspace(0, 1, steps=len(sigma_schedule))\r\n        target_steps = torch.linspace(0, 1, steps=target_steps)\r\n\r\n        # interpolate original -> target steps using nearest neighbor\r\n        indices = torch.searchsorted(original_steps, target_steps)\r\n        indices = torch.clamp(indices, 0, len(sigma_schedule) - 1)  # clamp indices to valid range\r\n\r\n        # set nearest neighbor via indices\r\n        interpolated_sigma = sigma_schedule[indices]\r\n\r\n        return interpolated_sigma\r\n\r\n\r\n    def main(self, sigmas_in, output_length, mode, order, rescale_after=True):\r\n\r\n        self.order = order\r\n\r\n        sigmas_in = sigmas_in.clone()\r\n        start = sigmas_in[0]\r\n        end = sigmas_in[-1]\r\n\r\n        if   mode == \"linear\": \r\n            interpolate = self.interpolate_sigma_schedule_linear\r\n        elif mode == \"nearest\": \r\n            interpolate = self.interpolate_nearest_neighbor\r\n        elif mode == \"polynomial\":\r\n            interpolate = self.interpolate_sigma_schedule_poly\r\n        elif mode == \"exponential\":\r\n            interpolate = self.interpolate_sigma_schedule_exp\r\n        elif mode == \"power\":\r\n            interpolate = self.interpolate_sigma_schedule_power\r\n        elif mode == \"model\":\r\n            with torch.inference_mode(False):\r\n                interpolate = interpolate_sigma_schedule_model\r\n\r\n        sigmas_interp = interpolate(sigmas_in, output_length)\r\n        if rescale_after:\r\n
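            # min-max rescale back into the original [start, end] range of the input schedule\r\n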
            sigmas_interp = ((sigmas_interp - sigmas_interp.min()) * (start - end)) / (sigmas_interp.max() - sigmas_interp.min()) + end\r\n        return (sigmas_interp,)\r\n\r\nclass sigmas_noise_inversion:\r\n    # flip sigmas for unsampling, and pad both fwd/rev directions with zero-valued sigmas to disable noise scaling, etc. from the model.\r\n    # this causes the model to return an epsilon prediction instead of the calculated denoised latent image.\r\n    def __init__(self):\r\n        pass\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",\"SIGMAS\",)\r\n    RETURN_NAMES = (\"sigmas_fwd\",\"sigmas_rev\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    DESCRIPTION = \"For use with unsampling. Connect sigmas_fwd to the unsampling (first) node, and sigmas_rev to the sampling (second) node.\"\r\n    \r\n    def main(self, sigmas):\r\n        sigmas = sigmas.clone()\r\n\r\n        null = torch.tensor([0.0], device=sigmas.device, dtype=sigmas.dtype)\r\n        sigmas_fwd = torch.flip(sigmas, dims=[0])\r\n        sigmas_fwd = torch.cat([sigmas_fwd, null])\r\n        \r\n        sigmas_rev = torch.cat([null, sigmas])\r\n        sigmas_rev = torch.cat([sigmas_rev, null])\r\n        \r\n        return (sigmas_fwd, sigmas_rev,)\r\n\r\n\r\ndef compute_sigma_next_variance_floor(sigma):\r\n    # positive root of sigma_next**2 + sigma_next = sigma: the floor on the next\r\n    # sigma (i.e. the largest permissible step) for variance-locked SDE sampling\r\n    return (-1 + torch.sqrt(1 + 4 * sigma)) / 2\r\n\r\nclass sigmas_variance_floor:\r\n    def __init__(self):\r\n        pass\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    DESCRIPTION = (\"Process a sigma schedule so that any steps that are too large for variance-locked SDE sampling are replaced with the maximum permissible value. \"\r\n        \"It will be very difficult to approach sigma = 0 due to the nature of the math, as steps become very small much below approximately sigma = 0.15 to 0.2.\")\r\n    \r\n    def main(self, sigmas):\r\n        dtype = sigmas.dtype\r\n        sigmas = sigmas.clone()\r\n        for i in range(len(sigmas) - 1):\r\n            sigma_next = compute_sigma_next_variance_floor(sigmas[i])\r\n            \r\n            if sigmas[i+1] < sigma_next and sigmas[i+1] > 0.0:\r\n                print(\"raising sigma at i+1 to the variance floor + 0.001:\", sigmas[i+1], sigma_next + 0.001)\r\n                sigmas[i+1] = sigma_next + 0.001\r\n        return (sigmas.to(dtype),)\r\n\r\n\r\nclass sigmas_from_text:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"text\": (\"STRING\", {\"default\": \"\", \"multiline\": True}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    RETURN_NAMES = (\"sigmas\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, text):\r\n        text_list = [float(val) for val in text.replace(\",\", \" \").split()]\r\n        #text_list = [float(val.strip()) for val in text.split(\",\")]\r\n\r\n        sigmas = torch.tensor(text_list) #.to('cuda').to(torch.float64)\r\n        \r\n        return (sigmas,)\r\n
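\r\n\r\n# Illustrative self-check (not a node) for the variance floor used above:\r\n# compute_sigma_next_variance_floor(sigma) returns the positive root of\r\n# sigma_next**2 + sigma_next = sigma, so substituting it back into the\r\n# quadratic should recover sigma to within floating-point error.\r\ndef _variance_floor_self_check(sigma: float = 1.0) -> bool:\r\n    sigma_next = compute_sigma_next_variance_floor(torch.tensor(sigma))\r\n    return bool(torch.isclose(sigma_next ** 2 + sigma_next, torch.tensor(sigma)))\r\n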
\r\n\r\nclass sigmas_concatenate:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas_1\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"sigmas_2\": (\"SIGMAS\", {\"forceInput\": True}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas_1, sigmas_2):\r\n        return (torch.cat((sigmas_1, sigmas_2.to(sigmas_1))),)\r\n\r\nclass sigmas_truncate:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"sigmas_until\": (\"INT\", {\"default\": 10, \"min\": 0,\"max\": 1000,\"step\": 1}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, sigmas_until):\r\n        sigmas = sigmas.clone()\r\n        return (sigmas[:sigmas_until],)\r\n\r\nclass sigmas_start:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"sigmas_until\": (\"INT\", {\"default\": 10, \"min\": 0,\"max\": 1000,\"step\": 1}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, sigmas_until):\r\n        sigmas = sigmas.clone()\r\n        return (sigmas[sigmas_until:],)\r\n        \r\nclass sigmas_split:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"sigmas_start\": (\"INT\", {\"default\": 0, \"min\": 0,\"max\": 1000,\"step\": 1}),\r\n                \"sigmas_end\": (\"INT\", {\"default\": 1000, \"min\": 0,\"max\": 1000,\"step\": 1}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, sigmas_start, sigmas_end):\r\n        sigmas = sigmas.clone()\r\n        return (sigmas[sigmas_start:sigmas_end],)\r\n    \r\nclass sigmas_pad:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"value\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000,\"max\": 10000,\"step\": 0.01})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, value):\r\n        sigmas = sigmas.clone()\r\n        return (torch.cat((sigmas, torch.tensor([value], dtype=sigmas.dtype))),)\r\n    \r\nclass sigmas_unpad:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                
\"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas):\r\n        sigmas = sigmas.clone()\r\n        return (sigmas[:-1],)\r\n\r\nclass sigmas_set_floor:\r\n    def __init__(self):\r\n        pass\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"floor\": (\"FLOAT\", {\"default\": 0.0291675, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"new_floor\": (\"FLOAT\", {\"default\": 0.0291675, \"min\": -10000,\"max\": 10000,\"step\": 0.01})\r\n            }\r\n        }\r\n\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    FUNCTION = \"set_floor\"\r\n\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n\r\n    def set_floor(self, sigmas, floor, new_floor):\r\n        sigmas = sigmas.clone()\r\n        sigmas[sigmas <= floor] = new_floor\r\n        return (sigmas,)    \r\n    \r\nclass sigmas_delete_below_floor:\r\n    def __init__(self):\r\n        pass\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"floor\": (\"FLOAT\", {\"default\": 0.0291675, \"min\": -10000,\"max\": 10000,\"step\": 0.01})\r\n            }\r\n        }\r\n\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    FUNCTION = \"delete_below_floor\"\r\n\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n\r\n    def delete_below_floor(self, sigmas, floor):\r\n        sigmas = sigmas.clone()\r\n        return (sigmas[sigmas >= floor],)    \r\n\r\nclass sigmas_delete_value:\r\n    def __init__(self):\r\n        pass\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"value\": (\"FLOAT\", {\"default\": 0.0, \"min\": -1000,\"max\": 1000,\"step\": 0.01})\r\n            }\r\n        }\r\n\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    FUNCTION = \"delete_value\"\r\n\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n\r\n    def delete_value(self, sigmas, value):\r\n        return (sigmas[sigmas != value],) \r\n\r\nclass sigmas_delete_consecutive_duplicates:\r\n    def __init__(self):\r\n        pass\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas_1\": (\"SIGMAS\", {\"forceInput\": True})\r\n            }\r\n        }\r\n\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    FUNCTION = \"delete_consecutive_duplicates\"\r\n\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n\r\n    def delete_consecutive_duplicates(self, sigmas_1):\r\n        mask = sigmas_1[:-1] != sigmas_1[1:]\r\n        mask = torch.cat((mask, torch.tensor([True])))\r\n        return (sigmas_1[mask],) \r\n\r\nclass sigmas_cleanup:\r\n    def __init__(self):\r\n        pass\r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"sigmin\": (\"FLOAT\", {\"default\": 0.0291675, \"min\": 0,\"max\": 1000,\"step\": 0.01})\r\n            }\r\n        }\r\n\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    FUNCTION = \"cleanup\"\r\n\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n\r\n    def cleanup(self, sigmas, sigmin):\r\n        sigmas_culled = 
sigmas[sigmas >= sigmin]\r\n    \r\n        mask = sigmas_culled[:-1] != sigmas_culled[1:]\r\n        mask = torch.cat((mask, torch.tensor([True])))\r\n        filtered_sigmas = sigmas_culled[mask]\r\n        return (torch.cat((filtered_sigmas,torch.tensor([0]))),)\r\n\r\nclass sigmas_mult:\r\n    def __init__(self):\r\n        pass   \r\n\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"multiplier\": (\"FLOAT\", {\"default\": 1, \"min\": -10000,\"max\": 10000,\"step\": 0.01})\r\n            },\r\n            \"optional\": {\r\n                \"sigmas2\": (\"SIGMAS\", {\"forceInput\": False})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, multiplier, sigmas2=None):\r\n        if sigmas2 is not None:\r\n            return (sigmas * sigmas2 * multiplier,)\r\n        else:\r\n            return (sigmas * multiplier,)    \r\n\r\nclass sigmas_modulus:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"divisor\": (\"FLOAT\", {\"default\": 1, \"min\": -1000,\"max\": 1000,\"step\": 0.01})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, divisor):\r\n        return (sigmas % divisor,)\r\n        \r\nclass sigmas_quotient:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"divisor\": (\"FLOAT\", {\"default\": 1, \"min\": -1000,\"max\": 1000,\"step\": 0.01})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, divisor):\r\n        return (sigmas // divisor,)\r\n\r\nclass sigmas_add:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"addend\": (\"FLOAT\", {\"default\": 1, \"min\": -1000,\"max\": 1000,\"step\": 0.01})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, addend):\r\n        return (sigmas + addend,)\r\n\r\nclass sigmas_power:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"power\": (\"FLOAT\", {\"default\": 1, \"min\": -100,\"max\": 100,\"step\": 0.01})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, power):\r\n        return (sigmas ** power,)\r\n\r\nclass sigmas_abs:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            
\"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas):\r\n        return (abs(sigmas),)\r\n\r\nclass sigmas2_mult:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas_1\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"sigmas_2\": (\"SIGMAS\", {\"forceInput\": True}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas_1, sigmas_2):\r\n        return (sigmas_1 * sigmas_2,)\r\n\r\nclass sigmas2_add:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas_1\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"sigmas_2\": (\"SIGMAS\", {\"forceInput\": True}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas_1, sigmas_2):\r\n        return (sigmas_1 + sigmas_2,)\r\n\r\nclass sigmas_rescale:\r\n    def __init__(self):\r\n        pass\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"start\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"end\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"sigmas\": (\"SIGMAS\", ),\r\n            },\r\n            \"optional\": {\r\n            }\r\n        }\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    RETURN_NAMES = (\"sigmas_rescaled\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    DESCRIPTION = (\"Can be used to set denoise. Results are generally better than with the approach used by KSampler and most nodes with denoise values \"\r\n                   \"(which slice the sigmas schedule according to step count, not the noise level). 
Will also flip the sigma schedule if the start and end values are reversed.\" \r\n                   )\r\n      \r\n    def main(self, start=0, end=-1, sigmas=None):\r\n\r\n        s_out_1 = ((sigmas - sigmas.min()) * (start - end)) / (sigmas.max() - sigmas.min()) + end     \r\n        \r\n        return (s_out_1,)\r\n\r\n\r\nclass sigmas_count:\r\n    def __init__(self):\r\n        pass\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", ),\r\n            }\r\n        }\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"INT\",)\r\n    RETURN_NAMES = (\"count\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas=None):\r\n        return (len(sigmas),)\r\n\r\n\r\nclass sigmas_math1:\r\n    def __init__(self):\r\n        pass\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"start\": (\"INT\", {\"default\": 0, \"min\": 0,\"max\": 10000,\"step\": 1}),\r\n                \"stop\": (\"INT\", {\"default\": 0, \"min\": 0,\"max\": 10000,\"step\": 1}),\r\n                \"trim\": (\"INT\", {\"default\": 0, \"min\": -10000,\"max\": 0,\"step\": 1}),\r\n                \"x\": (\"FLOAT\", {\"default\": 1, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"y\": (\"FLOAT\", {\"default\": 1, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"z\": (\"FLOAT\", {\"default\": 1, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"f1\": (\"STRING\", {\"default\": \"s\", \"multiline\": True}),\r\n                \"rescale\" : (\"BOOLEAN\", {\"default\": False}),\r\n                \"max1\": (\"FLOAT\", {\"default\": 14.614642, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"min1\": (\"FLOAT\", {\"default\": 0.0291675, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n            },\r\n            \"optional\": {\r\n                \"a\": (\"SIGMAS\", {\"forceInput\": False}),\r\n                \"b\": (\"SIGMAS\", {\"forceInput\": False}),               \r\n                \"c\": (\"SIGMAS\", {\"forceInput\": False}),\r\n            }\r\n        }\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    def main(self, start=0, stop=0, trim=0, a=None, b=None, c=None, x=1.0, y=1.0, z=1.0, f1=\"s\", rescale=False, min1=1.0, max1=1.0):\r\n        if stop == 0:\r\n            t_lens = [len(tensor) for tensor in [a, b, c] if tensor is not None]\r\n            t_len = stop = min(t_lens) if t_lens else 0\r\n        else:\r\n            stop = stop + 1\r\n            t_len = stop - start \r\n            \r\n        stop = stop + trim\r\n        t_len = t_len + trim\r\n        \r\n        t_a = t_b = t_c = None\r\n        if a is not None:\r\n            t_a = a[start:stop]\r\n        if b is not None:\r\n            t_b = b[start:stop]\r\n        if c is not None:\r\n            t_c = c[start:stop]               \r\n            \r\n        t_s = torch.arange(0.0, t_len)\r\n    \r\n        t_x = torch.full((t_len,), x)\r\n        t_y = torch.full((t_len,), y)\r\n        t_z = torch.full((t_len,), z)\r\n        eval_namespace = {\"__builtins__\": None, \"round\": builtins.round, \"np\": np, \"a\": t_a, \"b\": t_b, \"c\": t_c, \"x\": t_x, \"y\": t_y, \"z\": t_z, \"s\": t_s, \"torch\": torch}\r\n        eval_namespace.update(np.__dict__)\r\n        \r\n        s_out_1 = eval(f1, eval_namespace)\r\n        \r\n      
  if rescale == True:\r\n            s_out_1 = ((s_out_1 - min(s_out_1)) * (max1 - min1)) / (max(s_out_1) - min(s_out_1)) + min1     \r\n        \r\n        return (s_out_1,)\r\n\r\nclass sigmas_math3:\r\n    def __init__(self):\r\n        pass\r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"start\": (\"INT\", {\"default\": 0, \"min\": 0,\"max\": 10000,\"step\": 1}),\r\n                \"stop\": (\"INT\", {\"default\": 0, \"min\": 0,\"max\": 10000,\"step\": 1}),\r\n                \"trim\": (\"INT\", {\"default\": 0, \"min\": -10000,\"max\": 0,\"step\": 1}),\r\n            },\r\n            \"optional\": {\r\n                \"a\": (\"SIGMAS\", {\"forceInput\": False}),\r\n                \"b\": (\"SIGMAS\", {\"forceInput\": False}),               \r\n                \"c\": (\"SIGMAS\", {\"forceInput\": False}),\r\n                \"x\": (\"FLOAT\", {\"default\": 1, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"y\": (\"FLOAT\", {\"default\": 1, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"z\": (\"FLOAT\", {\"default\": 1, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"f1\": (\"STRING\", {\"default\": \"s\", \"multiline\": True}),\r\n                \"rescale1\" : (\"BOOLEAN\", {\"default\": False}),\r\n                \"max1\": (\"FLOAT\", {\"default\": 14.614642, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"min1\": (\"FLOAT\", {\"default\": 0.0291675, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"f2\": (\"STRING\", {\"default\": \"s\", \"multiline\": True}),\r\n                \"rescale2\" : (\"BOOLEAN\", {\"default\": False}),\r\n                \"max2\": (\"FLOAT\", {\"default\": 14.614642, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"min2\": (\"FLOAT\", {\"default\": 0.0291675, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"f3\": (\"STRING\", {\"default\": \"s\", \"multiline\": True}),\r\n                \"rescale3\" : (\"BOOLEAN\", {\"default\": False}),\r\n                \"max3\": (\"FLOAT\", {\"default\": 14.614642, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"min3\": (\"FLOAT\", {\"default\": 0.0291675, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n            }\r\n        }\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",\"SIGMAS\",\"SIGMAS\")\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    def main(self, start=0, stop=0, trim=0, a=None, b=None, c=None, x=1.0, y=1.0, z=1.0, f1=\"s\", f2=\"s\", f3=\"s\", rescale1=False, rescale2=False, rescale3=False, min1=1.0, max1=1.0, min2=1.0, max2=1.0, min3=1.0, max3=1.0):\r\n        if stop == 0:\r\n            t_lens = [len(tensor) for tensor in [a, b, c] if tensor is not None]\r\n            t_len = stop = min(t_lens) if t_lens else 0\r\n        else:\r\n            stop = stop + 1\r\n            t_len = stop - start \r\n            \r\n        stop = stop + trim\r\n        t_len = t_len + trim\r\n        \r\n        t_a = t_b = t_c = None\r\n        if a is not None:\r\n            t_a = a[start:stop]\r\n        if b is not None:\r\n            t_b = b[start:stop]\r\n        if c is not None:\r\n            t_c = c[start:stop]               \r\n            \r\n        t_s = torch.arange(0.0, t_len)\r\n    \r\n        t_x = torch.full((t_len,), x)\r\n        t_y = torch.full((t_len,), y)\r\n        t_z = torch.full((t_len,), z)\r\n        eval_namespace = 
{\"__builtins__\": None, \"np\": np, \"a\": t_a, \"b\": t_b, \"c\": t_c, \"x\": t_x, \"y\": t_y, \"z\": t_z, \"s\": t_s, \"torch\": torch}\r\n        eval_namespace.update(np.__dict__)\r\n        \r\n        s_out_1 = eval(f1, eval_namespace)\r\n        s_out_2 = eval(f2, eval_namespace)\r\n        s_out_3 = eval(f3, eval_namespace)\r\n        \r\n        if rescale1 == True:\r\n            s_out_1 = ((s_out_1 - min(s_out_1)) * (max1 - min1)) / (max(s_out_1) - min(s_out_1)) + min1\r\n        if rescale2 == True:\r\n            s_out_2 = ((s_out_2 - min(s_out_2)) * (max2 - min2)) / (max(s_out_2) - min(s_out_2)) + min2\r\n        if rescale3 == True:\r\n            s_out_3 = ((s_out_3 - min(s_out_3)) * (max3 - min3)) / (max(s_out_3) - min(s_out_3)) + min3        \r\n        \r\n        return s_out_1, s_out_2, s_out_3\r\n\r\nclass sigmas_iteration_karras:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"steps_up\": (\"INT\", {\"default\": 30, \"min\": 0,\"max\": 10000,\"step\": 1}),\r\n                \"steps_down\": (\"INT\", {\"default\": 30, \"min\": 0,\"max\": 10000,\"step\": 1}),\r\n                \"rho_up\": (\"FLOAT\", {\"default\": 3, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"rho_down\": (\"FLOAT\", {\"default\": 4, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"s_min_start\": (\"FLOAT\", {\"default\":0.0291675, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"s_max\": (\"FLOAT\", {\"default\": 2, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"s_min_end\": (\"FLOAT\", {\"default\": 0.0291675, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n            },\r\n            \"optional\": {\r\n                \"momentums\": (\"SIGMAS\", {\"forceInput\": False}),\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": False}),             \r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",\"SIGMAS\")\r\n    RETURN_NAMES = (\"momentums\",\"sigmas\")\r\n    CATEGORY = \"RES4LYF/schedulers\"\r\n    \r\n    def main(self, steps_up, steps_down, rho_up, rho_down, s_min_start, s_max, s_min_end, sigmas=None, momentums=None):\r\n        s_up = get_sigmas_karras(steps_up, s_min_start, s_max, rho_up)\r\n        s_down = get_sigmas_karras(steps_down, s_min_end, s_max, rho_down) \r\n        s_up = s_up[:-1]\r\n        s_down = s_down[:-1]  \r\n        s_up = torch.flip(s_up, dims=[0])\r\n        sigmas_new = torch.cat((s_up, s_down), dim=0)\r\n        momentums_new = torch.cat((s_up, -1*s_down), dim=0)\r\n        \r\n        if sigmas is not None:\r\n            sigmas = torch.cat([sigmas, sigmas_new])\r\n        else:\r\n            sigmas = sigmas_new\r\n            \r\n        if momentums is not None:\r\n            momentums = torch.cat([momentums, momentums_new])\r\n        else:\r\n            momentums = momentums_new\r\n        \r\n        return (momentums,sigmas) \r\n \r\nclass sigmas_iteration_polyexp:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"steps_up\": (\"INT\", {\"default\": 30, \"min\": 0,\"max\": 10000,\"step\": 1}),\r\n                \"steps_down\": (\"INT\", {\"default\": 30, \"min\": 0,\"max\": 10000,\"step\": 1}),\r\n                \"rho_up\": (\"FLOAT\", {\"default\": 0.6, \"min\": 
-10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"rho_down\": (\"FLOAT\", {\"default\": 0.8, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"s_min_start\": (\"FLOAT\", {\"default\":0.0291675, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"s_max\": (\"FLOAT\", {\"default\": 2, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n                \"s_min_end\": (\"FLOAT\", {\"default\": 0.0291675, \"min\": -10000,\"max\": 10000,\"step\": 0.01}),\r\n            },\r\n            \"optional\": {\r\n                \"momentums\": (\"SIGMAS\", {\"forceInput\": False}),\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": False}),             \r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",\"SIGMAS\")\r\n    RETURN_NAMES = (\"momentums\",\"sigmas\")\r\n    CATEGORY = \"RES4LYF/schedulers\"\r\n    \r\n    def main(self, steps_up, steps_down, rho_up, rho_down, s_min_start, s_max, s_min_end, sigmas=None, momentums=None):\r\n        s_up = get_sigmas_polyexponential(steps_up, s_min_start, s_max, rho_up)\r\n        s_down = get_sigmas_polyexponential(steps_down, s_min_end, s_max, rho_down) \r\n        s_up = s_up[:-1]\r\n        s_down = s_down[:-1]\r\n        s_up = torch.flip(s_up, dims=[0])\r\n        sigmas_new = torch.cat((s_up, s_down), dim=0)\r\n        momentums_new = torch.cat((s_up, -1*s_down), dim=0)\r\n\r\n        if sigmas is not None:\r\n            sigmas = torch.cat([sigmas, sigmas_new])\r\n        else:\r\n            sigmas = sigmas_new\r\n\r\n        if momentums is not None:\r\n            momentums = torch.cat([momentums, momentums_new])\r\n        else:\r\n            momentums = momentums_new\r\n\r\n        return (momentums,sigmas) \r\n\r\nclass tan_scheduler:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"steps\": (\"INT\", {\"default\": 20, \"min\": 0,\"max\": 100000,\"step\": 1}),\r\n                \"offset\": (\"FLOAT\", {\"default\": 20, \"min\": 0,\"max\": 100000,\"step\": 0.1}),\r\n                \"slope\": (\"FLOAT\", {\"default\": 20, \"min\": -100000,\"max\": 100000,\"step\": 0.1}),\r\n                \"start\": (\"FLOAT\", {\"default\": 20, \"min\": -100000,\"max\": 100000,\"step\": 0.1}),\r\n                \"end\": (\"FLOAT\", {\"default\": 20, \"min\": -100000,\"max\": 100000,\"step\": 0.1}),\r\n                \"sgm\" : (\"BOOLEAN\", {\"default\": False}),\r\n                \"pad\" : (\"BOOLEAN\", {\"default\": False}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/schedulers\"\r\n    \r\n    def main(self, steps, slope, offset, start, end, sgm, pad):\r\n        smax = ((2/pi)*atan(-slope*(0-offset))+1)/2\r\n        smin = ((2/pi)*atan(-slope*((steps-1)-offset))+1)/2\r\n\r\n        srange = smax-smin\r\n        sscale = start - end\r\n        \r\n        if sgm:\r\n            steps+=1\r\n\r\n        sigmas = [  ( (((2/pi)*atan(-slope*(x-offset))+1)/2) - smin) * (1/srange) * sscale + end    for x in range(steps)]\r\n        \r\n        if sgm:\r\n            sigmas = sigmas[:-1]\r\n        if pad:\r\n            sigmas = torch.tensor(sigmas+[0])\r\n        else:\r\n            sigmas = torch.tensor(sigmas)\r\n        return (sigmas,)\r\n\r\nclass tan_scheduler_2stage:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def 
INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"steps\": (\"INT\", {\"default\": 40, \"min\": 0,\"max\": 100000,\"step\": 1}),\r\n                \"midpoint\": (\"INT\", {\"default\": 20, \"min\": 0,\"max\": 100000,\"step\": 1}),\r\n                \"pivot_1\": (\"INT\", {\"default\": 10, \"min\": 0,\"max\": 100000,\"step\": 1}),\r\n                \"pivot_2\": (\"INT\", {\"default\": 30, \"min\": 0,\"max\": 100000,\"step\": 1}),\r\n                \"slope_1\": (\"FLOAT\", {\"default\": 1, \"min\": -100000,\"max\": 100000,\"step\": 0.1}),\r\n                \"slope_2\": (\"FLOAT\", {\"default\": 1, \"min\": -100000,\"max\": 100000,\"step\": 0.1}),\r\n                \"start\": (\"FLOAT\", {\"default\": 1.0, \"min\": -100000,\"max\": 100000,\"step\": 0.1}),\r\n                \"middle\": (\"FLOAT\", {\"default\": 0.5, \"min\": -100000,\"max\": 100000,\"step\": 0.1}),\r\n                \"end\": (\"FLOAT\", {\"default\": 0.0, \"min\": -100000,\"max\": 100000,\"step\": 0.1}),\r\n                \"pad\" : (\"BOOLEAN\", {\"default\": False}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    RETURN_NAMES = (\"sigmas\",)\r\n    CATEGORY = \"RES4LYF/schedulers\"\r\n\r\n    def get_tan_sigmas(self, steps, slope, pivot, start, end):\r\n        smax = ((2/pi)*atan(-slope*(0-pivot))+1)/2\r\n        smin = ((2/pi)*atan(-slope*((steps-1)-pivot))+1)/2\r\n\r\n        srange = smax-smin\r\n        sscale = start - end\r\n\r\n        sigmas = [  ( (((2/pi)*atan(-slope*(x-pivot))+1)/2) - smin) * (1/srange) * sscale + end    for x in range(steps)]\r\n        \r\n        return sigmas\r\n\r\n    def main(self, steps, midpoint, start, middle, end, pivot_1, pivot_2, slope_1, slope_2, pad):\r\n        steps += 2\r\n        stage_2_len = steps - midpoint\r\n        stage_1_len = steps - stage_2_len\r\n\r\n        tan_sigmas_1 = self.get_tan_sigmas(stage_1_len, slope_1, pivot_1, start, middle)\r\n        tan_sigmas_2 = self.get_tan_sigmas(stage_2_len, slope_2, pivot_2 - stage_1_len, middle, end)\r\n        \r\n        tan_sigmas_1 = tan_sigmas_1[:-1]\r\n        if pad:\r\n            tan_sigmas_2 = tan_sigmas_2+[0]\r\n\r\n        tan_sigmas = torch.tensor(tan_sigmas_1 + tan_sigmas_2)\r\n\r\n        return (tan_sigmas,)\r\n\r\nclass tan_scheduler_2stage_simple:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"steps\": (\"INT\", {\"default\": 40, \"min\": 0,\"max\": 100000,\"step\": 1}),\r\n                \"pivot_1\": (\"FLOAT\", {\"default\": 1, \"min\": -100000,\"max\": 100000,\"step\": 0.01}),\r\n                \"pivot_2\": (\"FLOAT\", {\"default\": 1, \"min\": -100000,\"max\": 100000,\"step\": 0.01}),\r\n                \"slope_1\": (\"FLOAT\", {\"default\": 1, \"min\": -100000,\"max\": 100000,\"step\": 0.01}),\r\n                \"slope_2\": (\"FLOAT\", {\"default\": 1, \"min\": -100000,\"max\": 100000,\"step\": 0.01}),\r\n                \"start\": (\"FLOAT\", {\"default\": 1.0, \"min\": -100000,\"max\": 100000,\"step\": 0.01}),\r\n                \"middle\": (\"FLOAT\", {\"default\": 0.5, \"min\": -100000,\"max\": 100000,\"step\": 0.01}),\r\n                \"end\": (\"FLOAT\", {\"default\": 0.0, \"min\": -100000,\"max\": 100000,\"step\": 0.01}),\r\n                \"pad\" : (\"BOOLEAN\", {\"default\": False}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = 
(\"SIGMAS\",)\r\n    RETURN_NAMES = (\"sigmas\",)\r\n    CATEGORY = \"RES4LYF/schedulers\"\r\n\r\n    def get_tan_sigmas(self, steps, slope, pivot, start, end):\r\n        smax = ((2/pi)*atan(-slope*(0-pivot))+1)/2\r\n        smin = ((2/pi)*atan(-slope*((steps-1)-pivot))+1)/2\r\n\r\n        srange = smax-smin\r\n        sscale = start - end\r\n\r\n        sigmas = [  ( (((2/pi)*atan(-slope*(x-pivot))+1)/2) - smin) * (1/srange) * sscale + end    for x in range(steps)]\r\n        \r\n        return sigmas\r\n\r\n    def main(self, steps, start=1.0, middle=0.5, end=0.0, pivot_1=0.6, pivot_2=0.6, slope_1=0.2, slope_2=0.2, pad=False, model_sampling=None):\r\n        steps += 2\r\n\r\n        midpoint = int( (steps*pivot_1 + steps*pivot_2) / 2 )\r\n        pivot_1 = int(steps * pivot_1)\r\n        pivot_2 = int(steps * pivot_2)\r\n\r\n        slope_1 = slope_1 / (steps/40)\r\n        slope_2 = slope_2 / (steps/40)\r\n\r\n        stage_2_len = steps - midpoint\r\n        stage_1_len = steps - stage_2_len\r\n\r\n        tan_sigmas_1 = self.get_tan_sigmas(stage_1_len, slope_1, pivot_1, start, middle)\r\n        tan_sigmas_2 = self.get_tan_sigmas(stage_2_len, slope_2, pivot_2 - stage_1_len, middle, end)\r\n        \r\n        tan_sigmas_1 = tan_sigmas_1[:-1]\r\n        if pad:\r\n            tan_sigmas_2 = tan_sigmas_2+[0]\r\n\r\n        tan_sigmas = torch.tensor(tan_sigmas_1 + tan_sigmas_2)\r\n\r\n        return (tan_sigmas,)\r\n    \r\nclass linear_quadratic_advanced:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"model\": (\"MODEL\",),\r\n                \"steps\": (\"INT\", {\"default\": 40, \"min\": 0,\"max\": 100000,\"step\": 1}),\r\n                \"denoise\": (\"FLOAT\", {\"default\": 1.0, \"min\": -100000,\"max\": 100000,\"step\": 0.01}),\r\n                \"inflection_percent\": (\"FLOAT\", {\"default\": 0.5, \"min\": 0,\"max\": 1,\"step\": 0.01}),\r\n                \"threshold_noise\": (\"FLOAT\", {\"default\": 0.025, \"min\": 0.001,\"max\": 1.000,\"step\": 0.001}),\r\n            },\r\n            # \"optional\": {\r\n            # }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    RETURN_NAMES = (\"sigmas\",)\r\n    CATEGORY = \"RES4LYF/schedulers\"\r\n\r\n    def main(self, steps, denoise, inflection_percent, threshold_noise, model=None):\r\n        sigmas = get_sigmas(model, \"linear_quadratic\", steps, denoise, 0.0, inflection_percent, threshold_noise)\r\n\r\n        return (sigmas, )\r\n\r\n\r\nclass constant_scheduler:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"steps\": (\"INT\", {\"default\": 40, \"min\": 0,\"max\": 100000,\"step\": 1}),\r\n                \"value_start\": (\"FLOAT\", {\"default\": 1.0, \"min\": -100000,\"max\": 100000,\"step\": 0.01}),\r\n                \"value_end\": (\"FLOAT\", {\"default\": 0.0, \"min\": -100000,\"max\": 100000,\"step\": 0.01}),\r\n                \"cutoff_percent\": (\"FLOAT\", {\"default\": 1.0, \"min\": 0,\"max\": 1,\"step\": 0.01}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    RETURN_NAMES = (\"sigmas\",)\r\n    CATEGORY = \"RES4LYF/schedulers\"\r\n\r\n    def main(self, steps, value_start, value_end, cutoff_percent):\r\n        sigmas = torch.ones(steps + 1) * value_start\r\n        cutoff_step 
= int(round(steps * cutoff_percent)) + 1\r\n        sigmas = torch.concat((sigmas[:cutoff_step], torch.ones(steps + 1 - cutoff_step) * value_end), dim=0)\r\n\r\n        return (sigmas,)\r\n    \r\n    \r\n    \r\n\r\n\r\n\r\nclass ClownScheduler:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r\n        return {\r\n            \"required\": { \r\n                \"pad_start_value\":      (\"FLOAT\",                                     {\"default\": 0.0, \"min\":  -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"start_value\":          (\"FLOAT\",                                     {\"default\": 1.0, \"min\":  -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"end_value\":            (\"FLOAT\",                                     {\"default\": 1.0, \"min\":  -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"pad_end_value\":        (\"FLOAT\",                                     {\"default\": 0.0, \"min\":  -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"scheduler\":            ([\"constant\"] + get_res4lyf_scheduler_list(), {\"default\": \"beta57\"},),\r\n                \"scheduler_start_step\": (\"INT\",                                       {\"default\": 0,   \"min\":  0,        \"max\": 10000}),\r\n                \"scheduler_end_step\":   (\"INT\",                                       {\"default\": 30,  \"min\": -1,        \"max\": 10000}),\r\n                \"total_steps\":          (\"INT\",                                       {\"default\": 100, \"min\": -1,        \"max\": 10000}),\r\n                \"flip_schedule\":        (\"BOOLEAN\",                                   {\"default\": False}),\r\n            }, \r\n            \"optional\": {\r\n                \"model\":                (\"MODEL\", ),\r\n            }\r\n        }\r\n\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    RETURN_NAMES = (\"sigmas\",)\r\n    FUNCTION     = \"main\"\r\n    CATEGORY     = \"RES4LYF/schedulers\"\r\n\r\n    def create_callback(self, **kwargs):\r\n        def callback(model):\r\n            kwargs[\"model\"] = model  \r\n            schedule, = self.prepare_schedule(**kwargs)\r\n            return schedule\r\n        return callback\r\n\r\n    def main(self,\r\n            model                        = None,\r\n            pad_start_value      : float = 1.0,\r\n            start_value          : float = 0.0,\r\n            end_value            : float = 1.0,\r\n            pad_end_value                = None,\r\n            denoise              : float = 1.0,\r\n            scheduler                    = None,\r\n            scheduler_start_step : int   = 0,\r\n            scheduler_end_step   : int   = 30,\r\n            total_steps          : int   = 60,\r\n            flip_schedule                = False,\r\n            ) -> Tuple[Tensor]:\r\n        \r\n        if model is None:\r\n            callback = self.create_callback(pad_start_value  = pad_start_value,\r\n                                            start_value      = start_value,\r\n                                            end_value        = end_value,\r\n                                            pad_end_value    = pad_end_value,\r\n                                            \r\n                                            weight_scheduler = scheduler,  # keyword must match prepare_schedule's parameter name\r\n                                            start_step       = scheduler_start_step,\r\n                                            end_step         = scheduler_end_step,\r\n                                            flip_schedule    = flip_schedule,\r\n                                            )\r\n            values = None  # the deferred/callback path is a stub (see prepare_schedule); set explicitly to avoid a NameError at return\r\n        else:\r\n            default_dtype  = torch.float64\r\n            default_device = torch.device(\"cuda\")  # assumes a CUDA device is available\r\n            \r\n            if scheduler_end_step == -1:\r\n                scheduler_total_steps = total_steps - scheduler_start_step\r\n            else:\r\n                scheduler_total_steps = scheduler_end_step - scheduler_start_step\r\n            \r\n            if total_steps == -1:\r\n                total_steps = scheduler_start_step + scheduler_end_step\r\n            \r\n            end_pad_steps = total_steps - scheduler_end_step\r\n            \r\n            if scheduler != \"constant\":\r\n                values     = get_sigmas(model, scheduler, scheduler_total_steps, denoise).to(dtype=default_dtype, device=default_device) \r\n                values     = ((values - values.min()) * (start_value - end_value))   /   (values.max() - values.min())   +   end_value\r\n            else:\r\n                values = torch.linspace(start_value, end_value, scheduler_total_steps, dtype=default_dtype, device=default_device)\r\n            \r\n            if flip_schedule:\r\n                values = torch.flip(values, dims=[0])\r\n            \r\n            prepend    = torch.full((scheduler_start_step,),  pad_start_value, dtype=default_dtype, device=default_device)\r\n            postpend   = torch.full((end_pad_steps,),         pad_end_value,   dtype=default_dtype, device=default_device)\r\n            \r\n            values     = torch.cat((prepend, values, postpend), dim=0)\r\n\r\n        # positive[0][1]['callback_regional'] = callback\r\n        \r\n        return (values,)\r\n\r\n\r\n\r\n    def prepare_schedule(self,\r\n                                model                    = None,\r\n                                pad_start_value  : float = 1.0,\r\n                                start_value      : float = 0.0,\r\n                                end_value        : float = 1.0,\r\n                                pad_end_value            = None,\r\n                                weight_scheduler         = None,\r\n                                start_step       : int   = 0,\r\n                                end_step         : int   = 30,\r\n                                flip_schedule            = False,\r\n                                ) -> Tuple[Tensor]:\r\n\r\n        # Stub: the deferred scheduling path used by create_callback is not implemented yet.\r\n        default_dtype  = torch.float64\r\n        default_device = torch.device(\"cuda\") \r\n        \r\n        return (None,)\r\n\r\n\r\n\r\n\r\ndef get_sigmas_simple_exponential(model, steps):\r\n    s = model.model_sampling\r\n    sigs = []\r\n    ss = len(s.sigmas) / steps\r\n    for x in range(steps):\r\n        sigs += [float(s.sigmas[-(1 + int(x * ss))])]\r\n    sigs += [0.0]\r\n    sigs = torch.FloatTensor(sigs)\r\n    exp = torch.linspace(1, 0, steps + 1)  # exp(log(x)) == x, so this was always just a linear 1 -> 0 ramp\r\n    return sigs * exp\r\n\r\n
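# A hypothetical sketch (not used anywhere in this file): any callable taking\r\n# (model, steps) and returning a (steps + 1)-length descending sigma tensor that\r\n# ends at 0 can be registered in the extra_schedulers dict below. The helper\r\n# name and its use of model_sampling.sigma_max are assumptions for illustration.\r\ndef get_sigmas_linear_demo(model, steps):\r\n    s = model.model_sampling\r\n    return torch.linspace(float(s.sigma_max), 0.0, steps + 1)\r\n\r\n# e.g.: extra_schedulers[\"linear_demo\"] = get_sigmas_linear_demo\r\n\r\nextra_schedulers = {\r\n    \"simple_exponential\": get_sigmas_simple_exponential\r\n}\r\n\r\n\r\n\r\ndef get_sigmas(model, scheduler, steps, denoise, shift=0.0, lq_inflection_percent=0.5, lq_threshold_noise=0.025): #adapted from comfyui\r\n    total_steps = steps\r\n    if denoise < 1.0:\r\n        if denoise <= 0.0:\r\n            return torch.FloatTensor([])  # bare tensor: callers use the return value directly, not as a tuple\r\n        total_steps = int(steps/denoise)\r\n\r\n    try:\r\n        model_sampling = model.get_model_object(\"model_sampling\")\r\n    except Exception:\r\n        if hasattr(model, \"model\"):\r\n            model_sampling = 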
model.model.model_sampling\r\n        elif hasattr(model, \"inner_model\"):\r\n            model_sampling = model.inner_model.inner_model.model_sampling\r\n        else:\r\n            raise Exception(\"get_sigmas: Could not get model_sampling\")\r\n\r\n    if shift > 1e-6:\r\n        import copy\r\n        model_sampling = copy.deepcopy(model_sampling)\r\n        model_sampling.set_parameters(shift=shift)\r\n        RESplain(\"model_sampling shift manually set to \" + str(shift), debug=True)\r\n    \r\n    if scheduler == \"beta57\":\r\n        sigmas = comfy.samplers.beta_scheduler(model_sampling, total_steps, alpha=0.5, beta=0.7).cpu()\r\n    elif scheduler == \"linear_quadratic\":\r\n        linear_steps = int(total_steps * lq_inflection_percent)\r\n        sigmas = comfy.samplers.linear_quadratic_schedule(model_sampling, total_steps, threshold_noise=lq_threshold_noise, linear_steps=linear_steps).cpu()\r\n    else:\r\n        sigmas = comfy.samplers.calculate_sigmas(model_sampling, scheduler, total_steps).cpu()\r\n    \r\n    sigmas = sigmas[-(steps + 1):]\r\n    return sigmas\r\n\r\n#/// Adam Kormendi /// Inspired from Unreal Engine Maths ///\r\n\r\n\r\n# Sigmoid Function\r\nclass sigmas_sigmoid:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"variant\": ([\"logistic\", \"tanh\", \"softsign\", \"hardswish\", \"mish\", \"swish\"], {\"default\": \"logistic\"}),\r\n                \"gain\": (\"FLOAT\", {\"default\": 1.0, \"min\": 0.01, \"max\": 10.0, \"step\": 0.01}),\r\n                \"offset\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10.0, \"max\": 10.0, \"step\": 0.01}),\r\n                \"normalize_output\": (\"BOOLEAN\", {\"default\": True})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, variant, gain, offset, normalize_output):\r\n        # Apply gain and offset\r\n        x = gain * (sigmas + offset)\r\n        \r\n        if variant == \"logistic\":\r\n            result = 1.0 / (1.0 + torch.exp(-x))\r\n        elif variant == \"tanh\":\r\n            result = torch.tanh(x)\r\n        elif variant == \"softsign\":\r\n            result = x / (1.0 + torch.abs(x))\r\n        elif variant == \"hardswish\":\r\n            result = x * torch.minimum(torch.maximum(x + 3, torch.zeros_like(x)), torch.tensor(6.0)) / 6.0\r\n        elif variant == \"mish\":\r\n            result = x * torch.tanh(torch.log(1.0 + torch.exp(x)))\r\n        elif variant == \"swish\":\r\n            result = x * torch.sigmoid(x)\r\n        \r\n        if normalize_output:\r\n            # Normalize to [min(sigmas), max(sigmas)]\r\n            result = ((result - result.min()) / (result.max() - result.min())) * (sigmas.max() - sigmas.min()) + sigmas.min()\r\n            \r\n        return (result,)\r\n\r\n# ----- Easing Function -----\r\nclass sigmas_easing:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"easing_type\": ([\"sine\", \"quad\", \"cubic\", \"quart\", \"quint\", \"expo\", \"circ\", \r\n                                 \"back\", \"elastic\", \"bounce\"], {\"default\": \"cubic\"}),\r\n                
\"easing_mode\": ([\"in\", \"out\", \"in_out\"], {\"default\": \"in_out\"}),\r\n                \"normalize_input\": (\"BOOLEAN\", {\"default\": True}),\r\n                \"normalize_output\": (\"BOOLEAN\", {\"default\": True}),\r\n                \"strength\": (\"FLOAT\", {\"default\": 1.0, \"min\": 0.1, \"max\": 10.0, \"step\": 0.1})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, easing_type, easing_mode, normalize_input, normalize_output, strength):\r\n        # Normalize input to [0, 1] if requested\r\n        if normalize_input:\r\n            t = (sigmas - sigmas.min()) / (sigmas.max() - sigmas.min())\r\n        else:\r\n            t = torch.clamp(sigmas, 0.0, 1.0)\r\n        \r\n        # Apply strength\r\n        t_orig = t.clone()\r\n        t = t ** strength\r\n            \r\n        # Apply easing function based on type and mode\r\n        if easing_mode == \"in\":\r\n            result = self._ease_in(t, easing_type)\r\n        elif easing_mode == \"out\":\r\n            result = self._ease_out(t, easing_type)\r\n        else:  # in_out\r\n            result = self._ease_in_out(t, easing_type)\r\n            \r\n        # Normalize output if requested\r\n        if normalize_output:\r\n            if normalize_input:\r\n                result = ((result - result.min()) / (result.max() - result.min())) * (sigmas.max() - sigmas.min()) + sigmas.min()\r\n            else:\r\n                result = ((result - result.min()) / (result.max() - result.min()))\r\n                \r\n        return (result,)\r\n    \r\n    def _ease_in(self, t, easing_type):\r\n        if easing_type == \"sine\":\r\n            return 1 - torch.cos((t * math.pi) / 2)\r\n        elif easing_type == \"quad\":\r\n            return t * t\r\n        elif easing_type == \"cubic\":\r\n            return t * t * t\r\n        elif easing_type == \"quart\":\r\n            return t * t * t * t\r\n        elif easing_type == \"quint\":\r\n            return t * t * t * t * t\r\n        elif easing_type == \"expo\":\r\n            return torch.where(t == 0, torch.zeros_like(t), torch.pow(2, 10 * t - 10))\r\n        elif easing_type == \"circ\":\r\n            return 1 - torch.sqrt(1 - torch.pow(t, 2))\r\n        elif easing_type == \"back\":\r\n            c1 = 1.70158\r\n            c3 = c1 + 1\r\n            return c3 * t * t * t - c1 * t * t\r\n        elif easing_type == \"elastic\":\r\n            c4 = (2 * math.pi) / 3\r\n            return torch.where(\r\n                t == 0, \r\n                torch.zeros_like(t),\r\n                torch.where(\r\n                    t == 1,\r\n                    torch.ones_like(t),\r\n                    -torch.pow(2, 10 * t - 10) * torch.sin((t * 10 - 10.75) * c4)\r\n                )\r\n            )\r\n        elif easing_type == \"bounce\":\r\n            return 1 - self._ease_out_bounce(1 - t)\r\n    \r\n    def _ease_out(self, t, easing_type):\r\n        if easing_type == \"sine\":\r\n            return torch.sin((t * math.pi) / 2)\r\n        elif easing_type == \"quad\":\r\n            return 1 - (1 - t) * (1 - t)\r\n        elif easing_type == \"cubic\":\r\n            return 1 - torch.pow(1 - t, 3)\r\n        elif easing_type == \"quart\":\r\n            return 1 - torch.pow(1 - t, 4)\r\n        elif easing_type == \"quint\":\r\n            return 1 - torch.pow(1 - t, 5)\r\n        elif easing_type == \"expo\":\r\n            return 
torch.where(t == 1, torch.ones_like(t), 1 - torch.pow(2, -10 * t))\r\n        elif easing_type == \"circ\":\r\n            return torch.sqrt(1 - torch.pow(t - 1, 2))\r\n        elif easing_type == \"back\":\r\n            c1 = 1.70158\r\n            c3 = c1 + 1\r\n            return 1 + c3 * torch.pow(t - 1, 3) + c1 * torch.pow(t - 1, 2)\r\n        elif easing_type == \"elastic\":\r\n            c4 = (2 * math.pi) / 3\r\n            return torch.where(\r\n                t == 0, \r\n                torch.zeros_like(t),\r\n                torch.where(\r\n                    t == 1,\r\n                    torch.ones_like(t),\r\n                    torch.pow(2, -10 * t) * torch.sin((t * 10 - 0.75) * c4) + 1\r\n                )\r\n            )\r\n        elif easing_type == \"bounce\":\r\n            return self._ease_out_bounce(t)\r\n    \r\n    def _ease_in_out(self, t, easing_type):\r\n        if easing_type == \"sine\":\r\n            return -(torch.cos(math.pi * t) - 1) / 2\r\n        elif easing_type == \"quad\":\r\n            return torch.where(t < 0.5, 2 * t * t, 1 - torch.pow(-2 * t + 2, 2) / 2)\r\n        elif easing_type == \"cubic\":\r\n            return torch.where(t < 0.5, 4 * t * t * t, 1 - torch.pow(-2 * t + 2, 3) / 2)\r\n        elif easing_type == \"quart\":\r\n            return torch.where(t < 0.5, 8 * t * t * t * t, 1 - torch.pow(-2 * t + 2, 4) / 2)\r\n        elif easing_type == \"quint\":\r\n            return torch.where(t < 0.5, 16 * t * t * t * t * t, 1 - torch.pow(-2 * t + 2, 5) / 2)\r\n        elif easing_type == \"expo\":\r\n            return torch.where(\r\n                t < 0.5, \r\n                torch.pow(2, 20 * t - 10) / 2,\r\n                (2 - torch.pow(2, -20 * t + 10)) / 2\r\n            )\r\n        elif easing_type == \"circ\":\r\n            return torch.where(\r\n                t < 0.5,\r\n                (1 - torch.sqrt(1 - torch.pow(2 * t, 2))) / 2,\r\n                (torch.sqrt(1 - torch.pow(-2 * t + 2, 2)) + 1) / 2\r\n            )\r\n        elif easing_type == \"back\":\r\n            c1 = 1.70158\r\n            c2 = c1 * 1.525\r\n            return torch.where(\r\n                t < 0.5,\r\n                (torch.pow(2 * t, 2) * ((c2 + 1) * 2 * t - c2)) / 2,\r\n                (torch.pow(2 * t - 2, 2) * ((c2 + 1) * (t * 2 - 2) + c2) + 2) / 2\r\n            )\r\n        elif easing_type == \"elastic\":\r\n            c5 = (2 * math.pi) / 4.5\r\n            return torch.where(\r\n                t < 0.5,\r\n                -(torch.pow(2, 20 * t - 10) * torch.sin((20 * t - 11.125) * c5)) / 2,\r\n                (torch.pow(2, -20 * t + 10) * torch.sin((20 * t - 11.125) * c5)) / 2 + 1\r\n            )\r\n        elif easing_type == \"bounce\":\r\n            return torch.where(\r\n                t < 0.5,\r\n                (1 - self._ease_out_bounce(1 - 2 * t)) / 2,\r\n                (1 + self._ease_out_bounce(2 * t - 1)) / 2\r\n            )\r\n    \r\n    def _ease_out_bounce(self, t):\r\n        n1 = 7.5625\r\n        d1 = 2.75\r\n        \r\n        mask1 = t < 1 / d1\r\n        mask2 = t < 2 / d1\r\n        mask3 = t < 2.5 / d1\r\n        \r\n        result = torch.zeros_like(t)\r\n        result = torch.where(mask1, n1 * t * t, result)\r\n        result = torch.where(mask2 & ~mask1, n1 * (t - 1.5 / d1) * (t - 1.5 / d1) + 0.75, result)\r\n        result = torch.where(mask3 & ~mask2, n1 * (t - 2.25 / d1) * (t - 2.25 / d1) + 0.9375, result)\r\n        result = torch.where(~mask3, n1 * (t - 2.625 / d1) * (t - 2.625 / d1) + 0.984375, 
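result)\r\n        \r\n        return result\r\n\r\n# A minimal usage sketch, assuming it is run outside ComfyUI with SIGMAS as a\r\n# plain 1-D tensor; the helper name and values below are hypothetical. \"out\"\r\n# easing front-loads the drop in value, holding more of the schedule near the end.\r\ndef _example_eased_ramp():\r\n    node = sigmas_easing()\r\n    ramp = torch.linspace(14.6, 0.03, 25)  # descending sigma ramp\r\n    eased, = node.main(ramp, \"cubic\", \"out\", True, True, 1.0)\r\n    return eased\r\n\r\n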
# -----  Hyperbolic Function -----\r\nclass sigmas_hyperbolic:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"function\": ([\"sinh\", \"cosh\", \"tanh\", \"asinh\", \"acosh\", \"atanh\"], {\"default\": \"tanh\"}),\r\n                \"scale\": (\"FLOAT\", {\"default\": 1.0, \"min\": 0.01, \"max\": 10.0, \"step\": 0.01}),\r\n                \"normalize_output\": (\"BOOLEAN\", {\"default\": True})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, function, scale, normalize_output):\r\n        # Apply scaling\r\n        x = sigmas * scale\r\n        \r\n        if function == \"sinh\":\r\n            result = torch.sinh(x)\r\n        elif function == \"cosh\":\r\n            result = torch.cosh(x)\r\n        elif function == \"tanh\":\r\n            result = torch.tanh(x)\r\n        elif function == \"asinh\":\r\n            result = torch.asinh(x)\r\n        elif function == \"acosh\":\r\n            # Domain of acosh is [1, inf)\r\n            result = torch.acosh(torch.clamp(x, min=1.0))\r\n        elif function == \"atanh\":\r\n            # Domain of atanh is (-1, 1)\r\n            result = torch.atanh(torch.clamp(x, min=-0.99, max=0.99))\r\n        \r\n        if normalize_output:\r\n            # Normalize to [min(sigmas), max(sigmas)]\r\n            result = ((result - result.min()) / (result.max() - result.min())) * (sigmas.max() - sigmas.min()) + sigmas.min()\r\n            \r\n        return (result,)\r\n\r\n# ----- Gaussian Distribution Function -----\r\nclass sigmas_gaussian:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"mean\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10.0, \"max\": 10.0, \"step\": 0.01}),\r\n                \"std\": (\"FLOAT\", {\"default\": 1.0, \"min\": 0.01, \"max\": 10.0, \"step\": 0.01}),\r\n                \"operation\": ([\"pdf\", \"cdf\", \"inverse_cdf\", \"transform\", \"modulate\"], {\"default\": \"transform\"}),\r\n                \"normalize_output\": (\"BOOLEAN\", {\"default\": True})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, mean, std, operation, normalize_output):\r\n        # Standardize values (z-score)\r\n        z = (sigmas - sigmas.mean()) / sigmas.std()\r\n        \r\n        if operation == \"pdf\":\r\n            # Probability density function\r\n            result = (1 / (std * math.sqrt(2 * math.pi))) * torch.exp(-0.5 * ((sigmas - mean) / std) ** 2)\r\n        elif operation == \"cdf\":\r\n            # Cumulative distribution function\r\n            result = 0.5 * (1 + torch.erf((sigmas - mean) / (std * math.sqrt(2))))\r\n        elif operation == \"inverse_cdf\":\r\n            # Inverse CDF (quantile function)\r\n            # First normalize to [0.01, 0.99] to avoid numerical issues\r\n            normalized = ((sigmas - sigmas.min()) / (sigmas.max() - sigmas.min())) * 0.98 + 0.01\r\n            # math.sqrt, not torch.sqrt: torch.sqrt requires a tensor argument\r\n            result = mean + std * math.sqrt(2) * torch.erfinv(2 * 
normalized - 1)\r\n        elif operation == \"transform\":\r\n            # Transform to Gaussian distribution with specified mean and std\r\n            result = z * std + mean\r\n        elif operation == \"modulate\":\r\n            # Modulate with a Gaussian curve centered at mean\r\n            result = sigmas * torch.exp(-0.5 * ((sigmas - mean) / std) ** 2)\r\n        \r\n        if normalize_output:\r\n            # Normalize to [min(sigmas), max(sigmas)]\r\n            result = ((result - result.min()) / (result.max() - result.min())) * (sigmas.max() - sigmas.min()) + sigmas.min()\r\n            \r\n        return (result,)\r\n\r\n# ----- Percentile Function -----\r\nclass sigmas_percentile:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"percentile_min\": (\"FLOAT\", {\"default\": 5.0, \"min\": 0.0, \"max\": 49.0, \"step\": 0.1}),\r\n                \"percentile_max\": (\"FLOAT\", {\"default\": 95.0, \"min\": 51.0, \"max\": 100.0, \"step\": 0.1}),\r\n                \"target_min\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"target_max\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"clip_outliers\": (\"BOOLEAN\", {\"default\": True})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, percentile_min, percentile_max, target_min, target_max, clip_outliers):\r\n        # Convert to numpy for percentile computation\r\n        sigmas_np = sigmas.cpu().numpy()\r\n        \r\n        # Compute percentiles\r\n        p_min = np.percentile(sigmas_np, percentile_min)\r\n        p_max = np.percentile(sigmas_np, percentile_max)\r\n        \r\n        # Convert back to tensor\r\n        p_min = torch.tensor(p_min, device=sigmas.device, dtype=sigmas.dtype)\r\n        p_max = torch.tensor(p_max, device=sigmas.device, dtype=sigmas.dtype)\r\n        \r\n        # Map values from [p_min, p_max] to [target_min, target_max]\r\n        if clip_outliers:\r\n            sigmas_clipped = torch.clamp(sigmas, p_min, p_max)\r\n            result = ((sigmas_clipped - p_min) / (p_max - p_min)) * (target_max - target_min) + target_min\r\n        else:\r\n            result = ((sigmas - p_min) / (p_max - p_min)) * (target_max - target_min) + target_min\r\n            \r\n        return (result,)\r\n\r\n# ----- Kernel Smooth Function -----\r\nclass sigmas_kernel_smooth:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"kernel\": ([\"gaussian\", \"box\", \"triangle\", \"epanechnikov\", \"cosine\"], {\"default\": \"gaussian\"}),\r\n                \"kernel_size\": (\"INT\", {\"default\": 5, \"min\": 3, \"max\": 51, \"step\": 2}),  # Must be odd\r\n                \"sigma\": (\"FLOAT\", {\"default\": 1.0, \"min\": 0.1, \"max\": 10.0, \"step\": 0.1}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, kernel, kernel_size, sigma):\r\n        # Ensure kernel_size is odd\r\n        if 
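kernel_size % 2 == 0:\r\n            kernel_size += 1\r\n            \r\n        # All kernel weights below are normalized to sum to 1, so smoothing\r\n        # preserves the overall level of the schedule and only averages out\r\n        # local variation; a larger kernel_size means stronger smoothing.\r\n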
        # Define kernel weights\r\n        if kernel == \"gaussian\":\r\n            # Gaussian kernel (moved to the input's device/dtype so conv1d below cannot mismatch)\r\n            kernel_1d = self._gaussian_kernel(kernel_size, sigma).to(device=sigmas.device, dtype=sigmas.dtype)\r\n        elif kernel == \"box\":\r\n            # Box (uniform) kernel\r\n            kernel_1d = torch.ones(kernel_size, device=sigmas.device, dtype=sigmas.dtype) / kernel_size\r\n        elif kernel == \"triangle\":\r\n            # Triangle kernel\r\n            x = torch.linspace(-(kernel_size//2), kernel_size//2, kernel_size, device=sigmas.device, dtype=sigmas.dtype)\r\n            kernel_1d = (1.0 - torch.abs(x) / (kernel_size//2))\r\n            kernel_1d = kernel_1d / kernel_1d.sum()\r\n        elif kernel == \"epanechnikov\":\r\n            # Epanechnikov kernel\r\n            x = torch.linspace(-(kernel_size//2), kernel_size//2, kernel_size, device=sigmas.device, dtype=sigmas.dtype)\r\n            x = x / (kernel_size//2)  # Scale to [-1, 1]\r\n            kernel_1d = 0.75 * (1 - x**2)\r\n            kernel_1d = kernel_1d / kernel_1d.sum()\r\n        elif kernel == \"cosine\":\r\n            # Cosine kernel\r\n            x = torch.linspace(-(kernel_size//2), kernel_size//2, kernel_size, device=sigmas.device, dtype=sigmas.dtype)\r\n            x = x / (kernel_size//2) * (math.pi/2)  # Scale to [-π/2, π/2]\r\n            kernel_1d = torch.cos(x)\r\n            kernel_1d = kernel_1d / kernel_1d.sum()\r\n            \r\n        # Pad input to handle boundary conditions\r\n        pad_size = kernel_size // 2\r\n        padded = F.pad(sigmas.unsqueeze(0).unsqueeze(0), (pad_size, pad_size), mode='reflect')\r\n        \r\n        # Apply convolution\r\n        smoothed = F.conv1d(padded, kernel_1d.unsqueeze(0).unsqueeze(0))\r\n        \r\n        return (smoothed.squeeze(),)\r\n    \r\n    def _gaussian_kernel(self, kernel_size, sigma):\r\n        # Generate 1D Gaussian kernel\r\n        x = torch.linspace(-(kernel_size//2), kernel_size//2, kernel_size)\r\n        kernel = torch.exp(-x**2 / (2*sigma**2))\r\n        return kernel / kernel.sum()\r\n\r\n# ----- Quantile Normalization -----\r\nclass sigmas_quantile_norm:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"target_distribution\": ([\"uniform\", \"normal\", \"exponential\", \"logistic\", \"custom\"], {\"default\": \"uniform\"}),\r\n                \"num_quantiles\": (\"INT\", {\"default\": 100, \"min\": 10, \"max\": 1000, \"step\": 10}),\r\n            },\r\n            \"optional\": {\r\n                \"reference_sigmas\": (\"SIGMAS\", {\"forceInput\": False}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, target_distribution, num_quantiles, reference_sigmas=None):\r\n        # Convert to numpy for processing\r\n        sigmas_np = sigmas.cpu().numpy()\r\n        \r\n        # Sort values\r\n        sorted_values = np.sort(sigmas_np)\r\n        \r\n        # Create rank for each value (fractional rank)\r\n        ranks = np.zeros_like(sigmas_np)\r\n        for i, val in enumerate(sigmas_np):\r\n            ranks[i] = np.searchsorted(sorted_values, val, side='right') / len(sorted_values)\r\n        \r\n        # Generate target distribution\r\n        if target_distribution 
== \"uniform\":\r\n            # Uniform distribution between min and max of sigmas\r\n            target_values = np.linspace(sigmas_np.min(), sigmas_np.max(), num_quantiles)\r\n        elif target_distribution == \"normal\":\r\n            # Normal distribution with same mean and std as sigmas\r\n            target_values = np.random.normal(sigmas_np.mean(), sigmas_np.std(), num_quantiles)\r\n            target_values.sort()\r\n        elif target_distribution == \"exponential\":\r\n            # Exponential distribution with lambda=1/mean\r\n            target_values = np.random.exponential(1/max(1e-6, sigmas_np.mean()), num_quantiles)\r\n            target_values.sort()\r\n        elif target_distribution == \"logistic\":\r\n            # Logistic distribution\r\n            target_values = np.random.logistic(0, 1, num_quantiles)\r\n            target_values.sort()\r\n            # Rescale to match sigmas range\r\n            target_values = (target_values - target_values.min()) / (target_values.max() - target_values.min())\r\n            target_values = target_values * (sigmas_np.max() - sigmas_np.min()) + sigmas_np.min()\r\n        elif target_distribution == \"custom\" and reference_sigmas is not None:\r\n            # Use provided reference distribution\r\n            reference_np = reference_sigmas.cpu().numpy()\r\n            target_values = np.sort(reference_np)\r\n            if len(target_values) < num_quantiles:\r\n                # Interpolate if reference is smaller\r\n                old_indices = np.linspace(0, len(target_values)-1, len(target_values))\r\n                new_indices = np.linspace(0, len(target_values)-1, num_quantiles)\r\n                target_values = np.interp(new_indices, old_indices, target_values)\r\n            else:\r\n                # Subsample if reference is larger\r\n                indices = np.linspace(0, len(target_values)-1, num_quantiles, dtype=int)\r\n                target_values = target_values[indices]\r\n        else:\r\n            # Default to uniform\r\n            target_values = np.linspace(sigmas_np.min(), sigmas_np.max(), num_quantiles)\r\n        \r\n        # Map each value to its corresponding quantile in the target distribution\r\n        result_np = np.interp(ranks, np.linspace(0, 1, len(target_values)), target_values)\r\n        \r\n        # Convert back to tensor\r\n        result = torch.tensor(result_np, device=sigmas.device, dtype=sigmas.dtype)\r\n        \r\n        return (result,)\r\n\r\n# ----- Adaptive Step Function -----\r\nclass sigmas_adaptive_step:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"adaptation_type\": ([\"gradient\", \"curvature\", \"importance\", \"density\"], {\"default\": \"gradient\"}),\r\n                \"sensitivity\": (\"FLOAT\", {\"default\": 1.0, \"min\": 0.1, \"max\": 10.0, \"step\": 0.1}),\r\n                \"min_step\": (\"FLOAT\", {\"default\": 0.01, \"min\": 0.0001, \"max\": 1.0, \"step\": 0.01}),\r\n                \"max_step\": (\"FLOAT\", {\"default\": 1.0, \"min\": 0.01, \"max\": 10.0, \"step\": 0.01}),\r\n                \"target_steps\": (\"INT\", {\"default\": 0, \"min\": 0, \"max\": 1000, \"step\": 1}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, 
adaptation_type, sensitivity, min_step, max_step, target_steps):\r\n        if len(sigmas) <= 1:\r\n            return (sigmas,)\r\n            \r\n        # Compute step sizes based on chosen adaptation type\r\n        if adaptation_type == \"gradient\":\r\n            # Compute gradient (first difference)\r\n            grads = torch.abs(sigmas[1:] - sigmas[:-1])\r\n            # Normalize gradients\r\n            if grads.max() > grads.min():\r\n                norm_grads = (grads - grads.min()) / (grads.max() - grads.min())\r\n            else:\r\n                norm_grads = torch.ones_like(grads)\r\n            \r\n            # Convert to step sizes: smaller steps where gradient is large\r\n            step_sizes = 1.0 / (1.0 + norm_grads * sensitivity)\r\n            \r\n        elif adaptation_type == \"curvature\":\r\n            # Compute second derivative approximation\r\n            if len(sigmas) >= 3:\r\n                # Second difference\r\n                second_diff = sigmas[2:] - 2*sigmas[1:-1] + sigmas[:-2]\r\n                # Pad to match length\r\n                second_diff = F.pad(second_diff, (0, 1), mode='replicate')\r\n            else:\r\n                second_diff = torch.zeros_like(sigmas[:-1])\r\n                \r\n            # Normalize curvature\r\n            abs_curve = torch.abs(second_diff)\r\n            if abs_curve.max() > abs_curve.min():\r\n                norm_curve = (abs_curve - abs_curve.min()) / (abs_curve.max() - abs_curve.min())\r\n            else:\r\n                norm_curve = torch.ones_like(abs_curve)\r\n                \r\n            # Convert to step sizes: smaller steps where curvature is high\r\n            step_sizes = 1.0 / (1.0 + norm_curve * sensitivity)\r\n            \r\n        elif adaptation_type == \"importance\":\r\n            # Importance based on values: focus more on extremes\r\n            centered = torch.abs(sigmas - sigmas.mean())\r\n            if centered.max() > centered.min():\r\n                importance = (centered - centered.min()) / (centered.max() - centered.min())\r\n            else:\r\n                importance = torch.ones_like(centered)\r\n                \r\n            # Steps are smaller for important regions\r\n            step_sizes = 1.0 / (1.0 + importance[:-1] * sensitivity)\r\n            \r\n        elif adaptation_type == \"density\":\r\n            # Density-based adaptation using kernel density estimation\r\n            # Use a simple histogram approximation\r\n            sigma_min, sigma_max = sigmas.min(), sigmas.max()\r\n            bins = 20\r\n            hist = torch.histc(sigmas, bins=bins, min=sigma_min, max=sigma_max)\r\n            hist = hist / hist.sum()  # Normalize\r\n            \r\n            # Map each sigma to its bin density\r\n            bin_indices = torch.floor((sigmas - sigma_min) / (sigma_max - sigma_min) * (bins-1)).long()\r\n            bin_indices = torch.clamp(bin_indices, 0, bins-1)\r\n            densities = hist[bin_indices]\r\n            \r\n            # Compute step sizes: smaller steps in high density regions\r\n            step_sizes = 1.0 / (1.0 + densities[:-1] * sensitivity)\r\n        \r\n        # Scale step sizes to [min_step, max_step]\r\n        if step_sizes.max() > step_sizes.min():\r\n            step_sizes = (step_sizes - step_sizes.min()) / (step_sizes.max() - step_sizes.min())\r\n            step_sizes = step_sizes * (max_step - min_step) + min_step\r\n        else:\r\n            step_sizes = torch.ones_like(step_sizes) * 
min_step\r\n            \r\n        # Cumulative sum to get positions\r\n        positions = torch.cat([torch.tensor([0.0], device=step_sizes.device), torch.cumsum(step_sizes, dim=0)])\r\n        \r\n        # Normalize positions to match original range\r\n        positions = positions / positions[-1] * (sigmas[-1] - sigmas[0]) + sigmas[0]\r\n        \r\n        # Resample if target_steps is specified\r\n        if target_steps > 0:\r\n            new_positions = torch.linspace(sigmas[0], sigmas[-1], target_steps, device=sigmas.device)\r\n            # Interpolate to get new sigma values\r\n            new_sigmas = torch.zeros_like(new_positions)\r\n            \r\n            # Simple linear interpolation\r\n            # NOTE: torch.searchsorted expects ascending values; with a descending\r\n            # sigma schedule the positions above are descending and this lookup is unreliable.\r\n            for i, pos in enumerate(new_positions):\r\n                # Find enclosing original positions\r\n                idx = torch.searchsorted(positions, pos)\r\n                idx = torch.clamp(idx, 1, len(positions)-1)\r\n                \r\n                # Linear interpolation between the two enclosing sigmas\r\n                t = (pos - positions[idx-1]) / (positions[idx] - positions[idx-1])\r\n                new_sigmas[i] = sigmas[idx-1] * (1-t) + sigmas[idx] * t\r\n                \r\n            result = new_sigmas\r\n        else:\r\n            result = positions\r\n            \r\n        return (result,)\r\n\r\n# ----- Chaos Function -----\r\nclass sigmas_chaos:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"system\": ([\"logistic\", \"henon\", \"tent\", \"sine\", \"cubic\"], {\"default\": \"logistic\"}),\r\n                \"parameter\": (\"FLOAT\", {\"default\": 3.9, \"min\": 0.1, \"max\": 5.0, \"step\": 0.01}),\r\n                \"iterations\": (\"INT\", {\"default\": 10, \"min\": 1, \"max\": 100, \"step\": 1}),\r\n                \"normalize_output\": (\"BOOLEAN\", {\"default\": True}),\r\n                \"use_as_seed\": (\"BOOLEAN\", {\"default\": False})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, system, parameter, iterations, normalize_output, use_as_seed):\r\n        # Normalize input to [0,1] for chaotic maps\r\n        if use_as_seed:\r\n            # Use input as initial seed\r\n            x = (sigmas - sigmas.min()) / (sigmas.max() - sigmas.min())\r\n        else:\r\n            # Use single initial value and apply iterations\r\n            x = torch.zeros_like(sigmas)\r\n            for i in range(len(sigmas)):\r\n                # Use i/len as initial value for variety\r\n                x[i] = i / len(sigmas)\r\n        \r\n        # Apply chaos map iterations\r\n        for _ in range(iterations):\r\n            if system == \"logistic\":\r\n                # Logistic map: x_{n+1} = r * x_n * (1 - x_n)\r\n                x = parameter * x * (1 - x)\r\n                \r\n            elif system == \"henon\":\r\n                # Simplified 1D version of Henon map\r\n                x = 1 - parameter * x**2\r\n                \r\n            elif system == \"tent\":\r\n                # Tent map\r\n                x = torch.where(x < 0.5, parameter * x, parameter * (1 - x))\r\n                \r\n            elif system == \"sine\":\r\n                # Sine map: x_{n+1} = r * sin(pi * x_n)\r\n                x = parameter * torch.sin(math.pi * x)\r\n         
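\r\n                # NOTE: the classic sine map takes r in (0, 1]; with the\r\n                # logistic-style default of 3.9 the iterates can leave [0, 1].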
       \r\n            elif system == \"cubic\":\r\n                # Cubic map: x_{n+1} = r * x_n * (1 - x_n^2)\r\n                x = parameter * x * (1 - x**2)\r\n                \r\n        # Normalize output if requested\r\n        if normalize_output:\r\n            result = ((x - x.min()) / (x.max() - x.min())) * (sigmas.max() - sigmas.min()) + sigmas.min()\r\n        else:\r\n            result = x\r\n            \r\n        return (result,)\r\n\r\n# ----- Reaction Diffusion Function -----\r\nclass sigmas_reaction_diffusion:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"system\": ([\"gray_scott\", \"fitzhugh_nagumo\", \"brusselator\"], {\"default\": \"gray_scott\"}),\r\n                \"iterations\": (\"INT\", {\"default\": 10, \"min\": 1, \"max\": 100, \"step\": 1}),\r\n                \"dt\": (\"FLOAT\", {\"default\": 0.1, \"min\": 0.01, \"max\": 1.0, \"step\": 0.01}),\r\n                \"param_a\": (\"FLOAT\", {\"default\": 0.04, \"min\": 0.01, \"max\": 0.1, \"step\": 0.001}),\r\n                \"param_b\": (\"FLOAT\", {\"default\": 0.06, \"min\": 0.01, \"max\": 0.1, \"step\": 0.001}),\r\n                \"diffusion_a\": (\"FLOAT\", {\"default\": 0.1, \"min\": 0.01, \"max\": 1.0, \"step\": 0.01}),\r\n                \"diffusion_b\": (\"FLOAT\", {\"default\": 0.05, \"min\": 0.01, \"max\": 1.0, \"step\": 0.01}),\r\n                \"normalize_output\": (\"BOOLEAN\", {\"default\": True})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, system, iterations, dt, param_a, param_b, diffusion_a, diffusion_b, normalize_output):\r\n        # Initialize a and b based on sigmas\r\n        a = (sigmas - sigmas.min()) / (sigmas.max() - sigmas.min())\r\n        b = 1.0 - a\r\n        \r\n        # Pad for diffusion calculation (periodic boundary)\r\n        a_pad = F.pad(a.unsqueeze(0).unsqueeze(0), (1, 1), mode='circular').squeeze()\r\n        b_pad = F.pad(b.unsqueeze(0).unsqueeze(0), (1, 1), mode='circular').squeeze()\r\n        \r\n        # Simple 1D reaction-diffusion\r\n        for _ in range(iterations):\r\n            # Compute Laplacian (diffusion term) as second derivative\r\n            laplacian_a = a_pad[:-2] + a_pad[2:] - 2 * a\r\n            laplacian_b = b_pad[:-2] + b_pad[2:] - 2 * b\r\n            \r\n            if system == \"gray_scott\":\r\n                # Gray-Scott model for pattern formation\r\n                # a is \"U\" (activator), b is \"V\" (inhibitor)\r\n                feed = 0.055  # feed rate\r\n                kill = 0.062  # kill rate\r\n                \r\n                # Update equations\r\n                a_new = a + dt * (diffusion_a * laplacian_a - a * b**2 + feed * (1 - a))\r\n                b_new = b + dt * (diffusion_b * laplacian_b + a * b**2 - (feed + kill) * b)\r\n                \r\n            elif system == \"fitzhugh_nagumo\":\r\n                # FitzHugh-Nagumo model (simplified)\r\n                # a is the membrane potential, b is the recovery variable\r\n                \r\n                # Update equations\r\n                a_new = a + dt * (diffusion_a * laplacian_a + a - a**3 - b + param_a)\r\n                b_new = b + dt * (diffusion_b * laplacian_b + param_b * (a - b))\r\n                \r\n            elif 
system == \"brusselator\":\r\n                # Brusselator model\r\n                # a is U, b is V\r\n                \r\n                # Update equations\r\n                a_new = a + dt * (diffusion_a * laplacian_a + 1 - (param_b + 1) * a + param_a * a**2 * b)\r\n                b_new = b + dt * (diffusion_b * laplacian_b + param_b * a - param_a * a**2 * b)\r\n            \r\n            # Update and repad\r\n            a, b = a_new, b_new\r\n            a_pad = F.pad(a.unsqueeze(0).unsqueeze(0), (1, 1), mode='circular').squeeze()\r\n            b_pad = F.pad(b.unsqueeze(0).unsqueeze(0), (1, 1), mode='circular').squeeze()\r\n            \r\n        # Use the activator component as the result\r\n        result = a\r\n        \r\n        # Normalize output if requested\r\n        if normalize_output:\r\n            result = ((result - result.min()) / (result.max() - result.min())) * (sigmas.max() - sigmas.min()) + sigmas.min()\r\n            \r\n        return (result,)\r\n\r\n# ----- Attractor Function -----\r\nclass sigmas_attractor:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"attractor\": ([\"lorenz\", \"rossler\", \"aizawa\", \"chen\", \"thomas\"], {\"default\": \"lorenz\"}),\r\n                \"iterations\": (\"INT\", {\"default\": 5, \"min\": 1, \"max\": 50, \"step\": 1}),\r\n                \"dt\": (\"FLOAT\", {\"default\": 0.01, \"min\": 0.001, \"max\": 0.1, \"step\": 0.001}),\r\n                \"component\": ([\"x\", \"y\", \"z\", \"magnitude\"], {\"default\": \"x\"}),\r\n                \"normalize_output\": (\"BOOLEAN\", {\"default\": True})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, attractor, iterations, dt, component, normalize_output):\r\n        # Initialize 3D state from sigmas\r\n        n = len(sigmas)\r\n        \r\n        # Normalize sigmas to a reasonable range for the attractor\r\n        norm_sigmas = (sigmas - sigmas.min()) / (sigmas.max() - sigmas.min()) * 2.0 - 1.0\r\n        \r\n        # Create initial state\r\n        x = norm_sigmas\r\n        y = torch.roll(norm_sigmas, 1)  # Shifted version for variety\r\n        z = torch.roll(norm_sigmas, 2)  # Another shifted version\r\n        \r\n        # Parameters for the attractors\r\n        if attractor == \"lorenz\":\r\n            sigma, rho, beta = 10.0, 28.0, 8.0/3.0\r\n        elif attractor == \"rossler\":\r\n            a, b, c = 0.2, 0.2, 5.7\r\n        elif attractor == \"aizawa\":\r\n            a, b, c, d, e, f = 0.95, 0.7, 0.6, 3.5, 0.25, 0.1\r\n        elif attractor == \"chen\":\r\n            a, b, c = 5.0, -10.0, -0.38\r\n        elif attractor == \"thomas\":\r\n            b = 0.208186\r\n            \r\n        # Run the attractor dynamics\r\n        for _ in range(iterations):\r\n            if attractor == \"lorenz\":\r\n                # Lorenz attractor\r\n                dx = sigma * (y - x)\r\n                dy = x * (rho - z) - y\r\n                dz = x * y - beta * z\r\n                \r\n            elif attractor == \"rossler\":\r\n                # Rössler attractor\r\n                dx = -y - z\r\n                dy = x + a * y\r\n                dz = b + z * (x - c)\r\n                \r\n            elif attractor == \"aizawa\":\r\n                # 
Aizawa attractor\r\n                dx = (z - b) * x - d * y\r\n                dy = d * x + (z - b) * y\r\n                dz = c + a * z - z**3/3 - (x**2 + y**2) * (1 + e * z) + f * z * x**3\r\n                \r\n            elif attractor == \"chen\":\r\n                # Chen attractor\r\n                dx = a * (y - x)\r\n                dy = (c - a) * x - x * z + c * y\r\n                dz = x * y - b * z\r\n                \r\n            elif attractor == \"thomas\":\r\n                # Thomas attractor\r\n                dx = -b * x + torch.sin(y)\r\n                dy = -b * y + torch.sin(z)\r\n                dz = -b * z + torch.sin(x)\r\n                \r\n            # Update state\r\n            x = x + dt * dx\r\n            y = y + dt * dy\r\n            z = z + dt * dz\r\n            \r\n        # Select component\r\n        if component == \"x\":\r\n            result = x\r\n        elif component == \"y\":\r\n            result = y\r\n        elif component == \"z\":\r\n            result = z\r\n        elif component == \"magnitude\":\r\n            result = torch.sqrt(x**2 + y**2 + z**2)\r\n            \r\n        # Normalize output if requested\r\n        if normalize_output:\r\n            result = ((result - result.min()) / (result.max() - result.min())) * (sigmas.max() - sigmas.min()) + sigmas.min()\r\n            \r\n        return (result,)\r\n\r\n# ----- Catmull-Rom Spline -----\r\nclass sigmas_catmull_rom:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"tension\": (\"FLOAT\", {\"default\": 0.5, \"min\": 0.0, \"max\": 1.0, \"step\": 0.01}),\r\n                \"points\": (\"INT\", {\"default\": 100, \"min\": 5, \"max\": 1000, \"step\": 5}),\r\n                \"boundary_condition\": ([\"repeat\", \"clamp\", \"mirror\"], {\"default\": \"clamp\"})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, tension, points, boundary_condition):\r\n        n = len(sigmas)\r\n        \r\n        # Need at least 4 points for Catmull-Rom interpolation\r\n        if n < 4:\r\n            # If we have fewer, just use linear interpolation\r\n            t = torch.linspace(0, 1, points, device=sigmas.device)\r\n            result = torch.zeros(points, device=sigmas.device, dtype=sigmas.dtype)\r\n            \r\n            for i in range(points):\r\n                idx = min(int(i * (n - 1) / (points - 1)), n - 2)\r\n                alpha = (i * (n - 1) / (points - 1)) - idx\r\n                result[i] = (1 - alpha) * sigmas[idx] + alpha * sigmas[idx + 1]\r\n                \r\n            return (result,)\r\n        \r\n        # Handle boundary conditions for control points\r\n        if boundary_condition == \"repeat\":\r\n            # Repeat endpoints\r\n            p0 = sigmas[0]\r\n            p3 = sigmas[-1]\r\n        elif boundary_condition == \"clamp\":\r\n            # Extrapolate\r\n            p0 = 2 * sigmas[0] - sigmas[1]\r\n            p3 = 2 * sigmas[-1] - sigmas[-2]\r\n        elif boundary_condition == \"mirror\":\r\n            # Mirror\r\n            p0 = sigmas[1]\r\n            p3 = sigmas[-2]\r\n            \r\n        # Create extended control points\r\n        control_points = torch.cat([torch.tensor([p0], device=sigmas.device), sigmas, 
torch.tensor([p3], device=sigmas.device)])\r\n        \r\n        # Compute spline\r\n        result = torch.zeros(points, device=sigmas.device, dtype=sigmas.dtype)\r\n        \r\n        # Parameter to adjust curve tension (0 = Catmull-Rom, 1 = Linear)\r\n        alpha = 1.0 - tension\r\n        \r\n        for i in range(points):\r\n            # Determine which segment we're in\r\n            t = i / (points - 1) * (n - 1)\r\n            idx = min(int(t), n - 2)\r\n            \r\n            # Normalized parameter within the segment [0, 1]\r\n            t_local = t - idx\r\n            \r\n            # Get control points for this segment\r\n            p0 = control_points[idx]\r\n            p1 = control_points[idx + 1]\r\n            p2 = control_points[idx + 2]\r\n            p3 = control_points[idx + 3]\r\n            \r\n            # Catmull-Rom basis functions\r\n            t2 = t_local * t_local\r\n            t3 = t2 * t_local\r\n            \r\n            # Compute spline point\r\n            result[i] = (\r\n                (-alpha * t3 + 2 * alpha * t2 - alpha * t_local) * p0 +\r\n                ((2 - alpha) * t3 + (alpha - 3) * t2 + 1) * p1 +\r\n                ((alpha - 2) * t3 + (3 - 2 * alpha) * t2 + alpha * t_local) * p2 +\r\n                (alpha * t3 - alpha * t2) * p3\r\n            ) * 0.5\r\n            \r\n        return (result,)\r\n\r\n# ----- Lambert W-Function -----\r\nclass sigmas_lambert_w:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"branch\": ([\"principal\", \"secondary\"], {\"default\": \"principal\"}),\r\n                \"scale\": (\"FLOAT\", {\"default\": 1.0, \"min\": 0.01, \"max\": 10.0, \"step\": 0.01}),\r\n                \"normalize_output\": (\"BOOLEAN\", {\"default\": True}),\r\n                \"max_iterations\": (\"INT\", {\"default\": 20, \"min\": 5, \"max\": 100, \"step\": 1})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, branch, scale, normalize_output, max_iterations):\r\n        # Apply scaling\r\n        x = sigmas * scale\r\n        \r\n        # Lambert W function (numerically approximated)\r\n        result = torch.zeros_like(x)\r\n        \r\n        # Process each value separately (since Lambert W is non-vectorized)\r\n        for i in range(len(x)):\r\n            xi = x[i].item()\r\n            \r\n            # Initial guess varies by branch\r\n            if branch == \"principal\":\r\n                # Valid for x >= -1/e\r\n                if xi < -1/math.e:\r\n                    xi = -1/math.e  # Clamp to domain\r\n                \r\n                # Initial guess for W₀(x)\r\n                if xi < 0:\r\n                    w = 0.0\r\n                elif xi < 1:\r\n                    w = xi * (1 - xi * (1 - 0.5 * xi))\r\n                else:\r\n                    w = math.log(xi)\r\n                    \r\n            else:  # secondary branch\r\n                # Valid for -1/e <= x < 0\r\n                if xi < -1/math.e:\r\n                    xi = -1/math.e  # Clamp to lower bound\r\n                elif xi >= 0:\r\n                    xi = -0.01  # Clamp to upper bound\r\n                \r\n                # Initial guess for W₋₁(x)\r\n                w = math.log(-xi)\r\n                
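\r\n            # W(x) solves w * exp(w) = x; W0 and W-1 are its two real branches,\r\n            # which meet at the branch point x = -1/e (where W = -1).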
\r\n            # Halley's method for numerical approximation of f(w) = w * exp(w) - x\r\n            for _ in range(max_iterations):\r\n                ew = math.exp(w)\r\n                wew = w * ew\r\n                \r\n                # If we've converged, break\r\n                if abs(wew - xi) < 1e-10:\r\n                    break\r\n                \r\n                # Halley's update:\r\n                # w_next = w - f / (e^w * (w + 1) - (w + 2) * f / (2 * (w + 1)))\r\n                wpe = w + 1  # w plus 1\r\n                div = ew * wpe - (wpe + 1) * (wew - xi) / (2 * wpe)\r\n                w_next = w - (wew - xi) / div\r\n                \r\n                # Check for convergence\r\n                if abs(w_next - w) < 1e-10:\r\n                    w = w_next\r\n                    break\r\n                    \r\n                w = w_next\r\n                \r\n            result[i] = w\r\n            \r\n        # Normalize output if requested\r\n        if normalize_output:\r\n            result = ((result - result.min()) / (result.max() - result.min())) * (sigmas.max() - sigmas.min()) + sigmas.min()\r\n            \r\n        return (result,)\r\n\r\n# ----- Zeta & Eta Functions -----\r\nclass sigmas_zeta_eta:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"function\": ([\"riemann_zeta\", \"dirichlet_eta\", \"lerch_phi\"], {\"default\": \"riemann_zeta\"}),\r\n                \"offset\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10.0, \"max\": 10.0, \"step\": 0.1}),\r\n                \"scale\": (\"FLOAT\", {\"default\": 1.0, \"min\": 0.01, \"max\": 10.0, \"step\": 0.01}),\r\n                \"normalize_output\": (\"BOOLEAN\", {\"default\": True}),\r\n                \"approx_terms\": (\"INT\", {\"default\": 100, \"min\": 10, \"max\": 1000, \"step\": 10})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, function, offset, scale, normalize_output, approx_terms):\r\n        # Apply offset and scaling\r\n        s = sigmas * scale + offset\r\n        \r\n        # Process based on function type\r\n        if function == \"riemann_zeta\":\r\n            # Riemann zeta function\r\n            # For Re(s) > 1, ζ(s) = sum(1/n^s, n=1 to infinity)\r\n            # We evaluate on the CPU with scipy and fall back to a truncated\r\n            # series where scipy cannot produce a finite value\r\n            \r\n            # Move to CPU for scipy\r\n            s_cpu = s.cpu().numpy()\r\n            \r\n            # Apply zeta function\r\n            result_np = np.zeros_like(s_cpu)\r\n            \r\n            for i, si in enumerate(s_cpu):\r\n                # Handle special values\r\n                if si == 1.0:\r\n                    # ζ(1) is the harmonic series, which diverges to infinity\r\n                    result_np[i] = float('inf')\r\n                elif si < 0 and si == int(si) and int(si) % 2 == 0:\r\n                    # ζ(-2n) = 0 for n > 0 (the trivial zeros)\r\n                    result_np[i] = 0.0\r\n                else:\r\n                    try:\r\n                        # scipy returns nan outside its domain rather than raising,\r\n                        # so route non-finite results into the fallback as well\r\n                        val = float(special.zeta(si))\r\n                        if not np.isfinite(val):\r\n                            raise ValueError\r\n                        result_np[i] = val\r\n                    except (ValueError, OverflowError):\r\n                        # Fall back to approximation for problematic values\r\n                        if si > 1:\r\n
                            # Truncated series for Re(s) > 1\r\n                            result_np[i] = sum(1.0 / np.power(n, si) for n in range(1, approx_terms + 1))\r\n                        else:\r\n                            # For s <= 1 the series diverges, and the reflection formula\r\n                            # ζ(s) = 2^s π^(s-1) sin(πs/2) Γ(1-s) ζ(1-s) is numerically\r\n                            # fragile here, so default these rare cases to 0.0\r\n                            result_np[i] = 0.0\r\n            \r\n            # Convert back to tensor\r\n            result = torch.tensor(result_np, device=sigmas.device, dtype=sigmas.dtype)\r\n            \r\n        elif function == \"dirichlet_eta\":\r\n            # Dirichlet eta function (alternating zeta function)\r\n            # η(s) = sum((-1)^(n+1)/n^s, n=1 to infinity)\r\n            \r\n            # For GPU efficiency, compute directly using the alternating series\r\n            result = torch.zeros_like(s)\r\n            \r\n            # Use a fixed number of terms for approximation\r\n            for i in range(1, approx_terms + 1):\r\n                term = torch.pow(i, -s) * (1 if i % 2 == 1 else -1)\r\n                result += term\r\n                \r\n        elif function == \"lerch_phi\":\r\n            # Lerch transcendent with fixed parameters\r\n            # Φ(z, s, a) = sum(z^n / (n+a)^s, n=0 to infinity)\r\n            # We'll use z=0.5, a=1 for simplicity\r\n            z, a = 0.5, 1.0\r\n            \r\n            result = torch.zeros_like(s)\r\n            for i in range(approx_terms):\r\n                term = torch.pow(z, i) / torch.pow(i + a, s)\r\n                result += term\r\n            \r\n        # Replace NaNs and infinities with large finite sentinels\r\n        # (torch.sign(nan) is still nan, so use nan_to_num instead)\r\n        result = torch.nan_to_num(result, nan=0.0, posinf=1e10, neginf=-1e10)\r\n        \r\n        # Normalize output if requested\r\n        if normalize_output:\r\n            result = ((result - result.min()) / (result.max() - result.min())) * (sigmas.max() - sigmas.min()) + sigmas.min()\r\n            \r\n        return (result,)\r\n\r\n# ----- Gamma & Beta Functions -----\r\nclass sigmas_gamma_beta:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"function\": ([\"gamma\", \"beta\", \"incomplete_gamma\", \"incomplete_beta\", \"log_gamma\"], {\"default\": \"gamma\"}),\r\n                \"offset\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10.0, \"max\": 10.0, \"step\": 0.1}),\r\n                \"scale\": (\"FLOAT\", {\"default\": 0.1, \"min\": 0.01, \"max\": 10.0, \"step\": 0.01}),\r\n                \"parameter_a\": (\"FLOAT\", {\"default\": 0.5, \"min\": 0.1, \"max\": 10.0, \"step\": 0.1}),\r\n                \"parameter_b\": (\"FLOAT\", {\"default\": 0.5, \"min\": 0.1, \"max\": 10.0, \"step\": 0.1}),\r\n                \"normalize_output\": (\"BOOLEAN\", {\"default\": True})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, function, offset, scale, parameter_a, 
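# parameter_a / parameter_b: shape parameters consumed by the beta-family branches below\r\n             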
parameter_b, normalize_output):\r\n        # Apply offset and scaling\r\n        x = sigmas * scale + offset\r\n        \r\n        # Convert to numpy for special functions\r\n        x_np = x.cpu().numpy()\r\n        \r\n        # Apply function\r\n        if function == \"gamma\":\r\n            # Gamma function Γ(x)\r\n            # For performance and stability, use scipy\r\n            result_np = np.zeros_like(x_np)\r\n            \r\n            for i, xi in enumerate(x_np):\r\n                # Handle special cases\r\n                if xi <= 0 and xi == int(xi):\r\n                    # Gamma has poles at non-positive integers\r\n                    result_np[i] = float('inf')\r\n                else:\r\n                    try:\r\n                        result_np[i] = float(special.gamma(xi))\r\n                    except (ValueError, OverflowError):\r\n                        # Overflow for large arguments: treat as a pole (cleaned up below)\r\n                        result_np[i] = float('inf')\r\n                        \r\n        elif function == \"log_gamma\":\r\n            # Log Gamma function log(Γ(x))\r\n            # More numerically stable for large values\r\n            result_np = np.zeros_like(x_np)\r\n            \r\n            for i, xi in enumerate(x_np):\r\n                # Handle special cases\r\n                if xi <= 0 and xi == int(xi):\r\n                    # log(Γ(x)) is undefined for non-positive integers\r\n                    result_np[i] = float('inf')\r\n                else:\r\n                    try:\r\n                        result_np[i] = float(special.gammaln(xi))\r\n                    except (ValueError, OverflowError):\r\n                        # Overflow for large arguments: treat as a pole (cleaned up below)\r\n                        result_np[i] = float('inf')\r\n                    \r\n        elif function == \"beta\":\r\n            # Beta function B(parameter_a, x)\r\n            result_np = np.zeros_like(x_np)\r\n            \r\n            for i, xi in enumerate(x_np):\r\n                try:\r\n                    result_np[i] = float(special.beta(parameter_a, xi))\r\n                except (ValueError, OverflowError):\r\n                    # Handle cases where beta is undefined\r\n                    result_np[i] = float('inf')\r\n                    \r\n        elif function == \"incomplete_gamma\":\r\n            # Regularized lower incomplete gamma function P(a, x)\r\n            result_np = np.zeros_like(x_np)\r\n            \r\n            for i, xi in enumerate(x_np):\r\n                if xi < 0:\r\n                    # Undefined for negative x\r\n                    result_np[i] = 0.0\r\n                else:\r\n                    try:\r\n                        result_np[i] = float(special.gammainc(parameter_a, xi))\r\n                    except (ValueError, OverflowError):\r\n                        result_np[i] = 1.0  # P(a, x) approaches 1 for large x\r\n                    \r\n        elif function == \"incomplete_beta\":\r\n            # Regularized incomplete beta function I(x; a, b)\r\n            result_np = np.zeros_like(x_np)\r\n            \r\n            for i, xi in enumerate(x_np):\r\n                # Clamp to [0,1], the domain of the incomplete beta\r\n                xi_clamped = min(max(xi, 0), 1)\r\n                \r\n                try:\r\n                    result_np[i] = float(special.betainc(parameter_a, parameter_b, xi_clamped))\r\n                except (ValueError, OverflowError):\r\n                    result_np[i] = 0.5  # Default for errors\r\n                    \r\n 
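       # The scipy.special calls above (gamma, gammaln, beta, gammainc, betainc)\r\n        # use standard SciPy signatures; gammainc and betainc compute the\r\n        # regularized forms, with values in [0, 1].\r\n 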
       # Convert back to tensor\r\n        result = torch.tensor(result_np, device=sigmas.device, dtype=sigmas.dtype)\r\n        \r\n        # Replace NaNs and infinities with large finite sentinels\r\n        result = torch.nan_to_num(result, nan=0.0, posinf=1e10, neginf=-1e10)\r\n        \r\n        # Normalize output if requested (nan_to_num above guarantees finite values here)\r\n        if normalize_output:\r\n            result = ((result - result.min()) / (result.max() - result.min())) * (sigmas.max() - sigmas.min()) + sigmas.min()\r\n            \r\n        return (result,)\r\n\r\n# ----- Sigma Lerp -----\r\nclass sigmas_lerp:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas_a\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"sigmas_b\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"t\": (\"FLOAT\", {\"default\": 0.5, \"min\": 0.0, \"max\": 1.0, \"step\": 0.01}),\r\n                \"ensure_length\": (\"BOOLEAN\", {\"default\": True})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas_a, sigmas_b, t, ensure_length):\r\n        if ensure_length and len(sigmas_a) != len(sigmas_b):\r\n            # Resize the shorter one to match the longer one\r\n            if len(sigmas_a) < len(sigmas_b):\r\n                sigmas_a = torch.nn.functional.interpolate(\r\n                    sigmas_a.unsqueeze(0).unsqueeze(0), \r\n                    size=len(sigmas_b), \r\n                    mode='linear'\r\n                ).squeeze(0).squeeze(0)\r\n            else:\r\n                sigmas_b = torch.nn.functional.interpolate(\r\n                    sigmas_b.unsqueeze(0).unsqueeze(0), \r\n                    size=len(sigmas_a), \r\n                    mode='linear'\r\n                ).squeeze(0).squeeze(0)\r\n        \r\n        return ((1 - t) * sigmas_a + t * sigmas_b,)\r\n\r\n# ----- Sigma InvLerp -----\r\nclass sigmas_invlerp:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"min_value\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"max_value\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, min_value, max_value):\r\n        # Guard against a zero range before dividing\r\n   
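     # Inverse lerp: map sigmas to [0, 1] via (x - min_value) / (max_value - min_value).\r\n   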
     if min_value == max_value:\r\n            max_value = min_value + 1e-5\r\n            \r\n        normalized = (sigmas - min_value) / (max_value - min_value)\r\n        # Clamp the values to be in [0, 1]\r\n        normalized = torch.clamp(normalized, 0.0, 1.0)\r\n        return (normalized,)\r\n\r\n# ----- Sigma ArcSine -----\r\nclass sigmas_arcsine:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"normalize_input\": (\"BOOLEAN\", {\"default\": True}),\r\n                \"scale_output\": (\"BOOLEAN\", {\"default\": True}),\r\n                \"out_min\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"out_max\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, normalize_input, scale_output, out_min, out_max):\r\n        if normalize_input:\r\n            # Rescale the input into [-1, 1], the domain of asin\r\n            s_min, s_max = sigmas.min(), sigmas.max()\r\n            if s_max > s_min:\r\n                sigmas = 2 * (sigmas - s_min) / (s_max - s_min) - 1\r\n            else:\r\n                sigmas = torch.zeros_like(sigmas)\r\n        else:\r\n            # Just clamp into the valid arcsin domain\r\n            sigmas = torch.clamp(sigmas, -1.0, 1.0)\r\n            \r\n        result = torch.asin(sigmas)\r\n        \r\n        if scale_output:\r\n            # ArcSine output is in range [-π/2, π/2]\r\n            # Normalize to [0, 1] and then scale to [out_min, out_max]\r\n            result = (result + math.pi/2) / math.pi\r\n            result = result * (out_max - out_min) + out_min\r\n            \r\n        return (result,)\r\n\r\n# ----- Sigma LinearSine -----\r\nclass sigmas_linearsine:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"amplitude\": (\"FLOAT\", {\"default\": 0.5, \"min\": 0.0, \"max\": 10.0, \"step\": 0.01}),\r\n                \"frequency\": (\"FLOAT\", {\"default\": 1.0, \"min\": 0.0, \"max\": 10.0, \"step\": 0.01}),\r\n                \"phase\": (\"FLOAT\", {\"default\": 0.0, \"min\": -6.28, \"max\": 6.28, \"step\": 0.01}), # -2π to 2π\r\n                \"linear_weight\": (\"FLOAT\", {\"default\": 0.5, \"min\": 0.0, \"max\": 1.0, \"step\": 0.01})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, amplitude, frequency, phase, linear_weight):\r\n        # Ramp in [0, 1], used both to index the sine and to taper its amplitude\r\n        indices = torch.linspace(0, 1, len(sigmas), device=sigmas.device)\r\n        \r\n        # Calculate sine component\r\n        sine_component = amplitude * torch.sin(2 * math.pi * frequency * indices + phase)\r\n        \r\n        # Blend the original sigmas with the ramped sine\r\n        result = linear_weight * sigmas + (1 - linear_weight) * (indices * sine_component)\r\n        \r\n        return (result,)\r\n\r\n# ----- Sigmas Append -----\r\nclass sigmas_append:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                
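# Appends `count` copies of `value`; any additional_sigmas are concatenated last\r\n                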
\"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"value\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"count\": (\"INT\", {\"default\": 1, \"min\": 1, \"max\": 100, \"step\": 1})\r\n            },\r\n            \"optional\": {\r\n                \"additional_sigmas\": (\"SIGMAS\", {\"forceInput\": False})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, value, count, additional_sigmas=None):\r\n        # Create tensor of the value to append\r\n        append_values = torch.full((count,), value, device=sigmas.device, dtype=sigmas.dtype)\r\n        \r\n        # Append the values\r\n        result = torch.cat([sigmas, append_values], dim=0)\r\n        \r\n        # If additional sigmas provided, append those as well\r\n        if additional_sigmas is not None:\r\n            result = torch.cat([result, additional_sigmas], dim=0)\r\n            \r\n        return (result,)\r\n\r\n# ----- Sigma Arccosine -----\r\nclass sigmas_arccosine:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"normalize_input\": (\"BOOLEAN\", {\"default\": True}),\r\n                \"scale_output\": (\"BOOLEAN\", {\"default\": True}),\r\n                \"out_min\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"out_max\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, normalize_input, scale_output, out_min, out_max):\r\n        if normalize_input:\r\n            sigmas = torch.clamp(sigmas, -1.0, 1.0)\r\n        else:\r\n            # Ensure values are in valid arccos domain\r\n            sigmas = torch.clamp(sigmas, -1.0, 1.0)\r\n            \r\n        result = torch.acos(sigmas)\r\n        \r\n        if scale_output:\r\n            # ArcCosine output is in range [0, π]\r\n            # Normalize to [0, 1] and then scale to [out_min, out_max]\r\n            result = result / math.pi\r\n            result = result * (out_max - out_min) + out_min\r\n            \r\n        return (result,)\r\n\r\n# ----- Sigma Arctangent -----\r\nclass sigmas_arctangent:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"scale_output\": (\"BOOLEAN\", {\"default\": True}),\r\n                \"out_min\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"out_max\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, scale_output, out_min, out_max):\r\n        result = torch.atan(sigmas)\r\n        \r\n        if scale_output:\r\n            # ArcTangent output is in range [-π/2, π/2]\r\n            # Normalize to [0, 1] and then scale to 
[out_min, out_max]\r\n            result = (result + math.pi/2) / math.pi\r\n            result = result * (out_max - out_min) + out_min\r\n            \r\n        return (result,)\r\n\r\n# ----- Sigma CrossProduct -----\r\nclass sigmas_crossproduct:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas_a\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"sigmas_b\": (\"SIGMAS\", {\"forceInput\": True}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas_a, sigmas_b):\r\n        # Ensure we have at least 3 elements in each tensor;\r\n        # pad short inputs with zeros (longer inputs are truncated below)\r\n        if len(sigmas_a) < 3:\r\n            sigmas_a = torch.nn.functional.pad(sigmas_a, (0, 3 - len(sigmas_a)))\r\n        if len(sigmas_b) < 3:\r\n            sigmas_b = torch.nn.functional.pad(sigmas_b, (0, 3 - len(sigmas_b)))\r\n        \r\n        # Take the first 3 elements of each tensor\r\n        a = sigmas_a[:3]\r\n        b = sigmas_b[:3]\r\n        \r\n        # Compute cross product\r\n        c = torch.zeros(3, device=sigmas_a.device, dtype=sigmas_a.dtype)\r\n        c[0] = a[1] * b[2] - a[2] * b[1]\r\n        c[1] = a[2] * b[0] - a[0] * b[2]\r\n        c[2] = a[0] * b[1] - a[1] * b[0]\r\n        \r\n        return (c,)\r\n\r\n# ----- Sigma DotProduct -----\r\nclass sigmas_dotproduct:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas_a\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"sigmas_b\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"normalize\": (\"BOOLEAN\", {\"default\": False})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas_a, sigmas_b, normalize):\r\n        # Ensure equal lengths by taking the minimum\r\n        min_length = min(len(sigmas_a), len(sigmas_b))\r\n        a = sigmas_a[:min_length]\r\n        b = sigmas_b[:min_length]\r\n        \r\n        if normalize:\r\n            a_norm = torch.norm(a)\r\n            b_norm = torch.norm(b)\r\n            # Avoid division by zero\r\n            if a_norm > 0 and b_norm > 0:\r\n                a = a / a_norm\r\n                b = b / b_norm\r\n        \r\n        # Compute dot product\r\n        result = torch.sum(a * b)\r\n        \r\n        # Return as a single-element tensor (torch.sum yields a 0-dim tensor)\r\n        return (result.unsqueeze(0),)\r\n\r\n# ----- Sigma Fmod -----\r\nclass sigmas_fmod:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"divisor\": (\"FLOAT\", {\"default\": 1.0, \"min\": 0.0001, \"max\": 10000.0, \"step\": 0.01})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, divisor):\r\n        # Ensure divisor is not zero\r\n        if divisor == 0:\r\n            divisor = 0.0001\r\n            \r\n        result = torch.fmod(sigmas, divisor)\r\n        return 
(result,)\r\n\r\n# ----- Sigma Frac -----\r\nclass sigmas_frac:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas):\r\n        # Get the fractional part (x - floor(x))\r\n        result = sigmas - torch.floor(sigmas)\r\n        return (result,)\r\n\r\n# ----- Sigma If -----\r\nclass sigmas_if:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"condition_sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"true_sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"false_sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"threshold\": (\"FLOAT\", {\"default\": 0.5, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"comp_type\": ([\"greater\", \"less\", \"equal\", \"not_equal\"], {\"default\": \"greater\"})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, condition_sigmas, true_sigmas, false_sigmas, threshold, comp_type):\r\n        # Make sure all three inputs have the same length\r\n        max_length = max(len(condition_sigmas), len(true_sigmas), len(false_sigmas))\r\n        \r\n        # Extend all tensors to the maximum length using interpolation\r\n        if len(condition_sigmas) != max_length:\r\n            condition_sigmas = torch.nn.functional.interpolate(\r\n                condition_sigmas.unsqueeze(0).unsqueeze(0), \r\n                size=max_length, \r\n                mode='linear'\r\n            ).squeeze(0).squeeze(0)\r\n            \r\n        if len(true_sigmas) != max_length:\r\n            true_sigmas = torch.nn.functional.interpolate(\r\n                true_sigmas.unsqueeze(0).unsqueeze(0), \r\n                size=max_length, \r\n                mode='linear'\r\n            ).squeeze(0).squeeze(0)\r\n            \r\n        if len(false_sigmas) != max_length:\r\n            false_sigmas = torch.nn.functional.interpolate(\r\n                false_sigmas.unsqueeze(0).unsqueeze(0), \r\n                size=max_length, \r\n                mode='linear'\r\n            ).squeeze(0).squeeze(0)\r\n            \r\n        # Create mask based on comparison type (match device and dtype for isclose)\r\n        threshold_t = torch.tensor(threshold, device=condition_sigmas.device, dtype=condition_sigmas.dtype)\r\n        if comp_type == \"greater\":\r\n            mask = condition_sigmas > threshold\r\n        elif comp_type == \"less\":\r\n            mask = condition_sigmas < threshold\r\n        elif comp_type == \"equal\":\r\n            mask = torch.isclose(condition_sigmas, threshold_t)\r\n        elif comp_type == \"not_equal\":\r\n            mask = ~torch.isclose(condition_sigmas, threshold_t)\r\n        \r\n        # Apply the mask to select values\r\n        result = torch.where(mask, true_sigmas, false_sigmas)\r\n        \r\n        return (result,)\r\n\r\n# ----- Sigma Logarithm2 -----\r\nclass sigmas_logarithm2:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", 
{\"forceInput\": True}),\r\n                \"handle_negative\": (\"BOOLEAN\", {\"default\": True}),\r\n                \"epsilon\": (\"FLOAT\", {\"default\": 1e-10, \"min\": 1e-15, \"max\": 0.1, \"step\": 1e-10})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, handle_negative, epsilon):\r\n        if handle_negative:\r\n            # For negative values, compute -log2(-x) and negate the result\r\n            mask_negative = sigmas < 0\r\n            mask_positive = ~mask_negative\r\n            \r\n            # Prepare positive and negative parts\r\n            pos_part = torch.log2(torch.clamp(sigmas[mask_positive], min=epsilon))\r\n            neg_part = -torch.log2(torch.clamp(-sigmas[mask_negative], min=epsilon))\r\n            \r\n            # Create result tensor\r\n            result = torch.zeros_like(sigmas)\r\n            result[mask_positive] = pos_part\r\n            result[mask_negative] = neg_part\r\n        else:\r\n            # Simply compute log2, clamping values to avoid log(0)\r\n            result = torch.log2(torch.clamp(sigmas, min=epsilon))\r\n            \r\n        return (result,)\r\n\r\n# ----- Sigma SmoothStep -----\r\nclass sigmas_smoothstep:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"edge0\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"edge1\": (\"FLOAT\", {\"default\": 1.0, \"min\": -10000.0, \"max\": 10000.0, \"step\": 0.01}),\r\n                \"mode\": ([\"smoothstep\", \"smootherstep\"], {\"default\": \"smoothstep\"})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, edge0, edge1, mode):\r\n        # Normalize the values to the range [0, 1]\r\n        t = torch.clamp((sigmas - edge0) / (edge1 - edge0), 0.0, 1.0)\r\n        \r\n        if mode == \"smoothstep\":\r\n            # Smooth step: 3t^2 - 2t^3\r\n            result = t * t * (3.0 - 2.0 * t)\r\n        else:  # smootherstep\r\n            # Smoother step: 6t^5 - 15t^4 + 10t^3\r\n            result = t * t * t * (t * (t * 6.0 - 15.0) + 10.0)\r\n            \r\n        # Scale back to the original range\r\n        result = result * (edge1 - edge0) + edge0\r\n        \r\n        return (result,)\r\n\r\n# ----- Sigma SquareRoot -----\r\nclass sigmas_squareroot:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"handle_negative\": (\"BOOLEAN\", {\"default\": False})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, handle_negative):\r\n        if handle_negative:\r\n            # For negative values, compute sqrt(-x) and negate the result\r\n            mask_negative = sigmas < 0\r\n            mask_positive = ~mask_negative\r\n            \r\n            # Prepare positive and negative parts\r\n            pos_part = torch.sqrt(sigmas[mask_positive])\r\n            neg_part = 
-torch.sqrt(-sigmas[mask_negative])\r\n            \r\n            # Create result tensor\r\n            result = torch.zeros_like(sigmas)\r\n            result[mask_positive] = pos_part\r\n            result[mask_negative] = neg_part\r\n        else:\r\n            # Only compute square root for non-negative values\r\n            # Negative values will be set to 0\r\n            result = torch.sqrt(torch.clamp(sigmas, min=0))\r\n            \r\n        return (result,)\r\n\r\n# ----- Sigma TimeStep -----\r\nclass sigmas_timestep:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"dt\": (\"FLOAT\", {\"default\": 0.1, \"min\": 0.0001, \"max\": 10.0, \"step\": 0.01}),\r\n                \"scaling\": ([\"linear\", \"quadratic\", \"sqrt\", \"log\"], {\"default\": \"linear\"}),\r\n                \"decay\": (\"FLOAT\", {\"default\": 0.0, \"min\": 0.0, \"max\": 1.0, \"step\": 0.01})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, dt, scaling, decay):\r\n        # Create time steps\r\n        timesteps = torch.arange(len(sigmas), device=sigmas.device, dtype=sigmas.dtype) * dt\r\n        \r\n        # Apply scaling\r\n        if scaling == \"quadratic\":\r\n            timesteps = timesteps ** 2\r\n        elif scaling == \"sqrt\":\r\n            timesteps = torch.sqrt(timesteps)\r\n        elif scaling == \"log\":\r\n            # Add small epsilon to avoid log(0)\r\n            timesteps = torch.log(timesteps + 1e-10)\r\n            \r\n        # Apply decay\r\n        if decay > 0:\r\n            decay_factor = torch.exp(-decay * timesteps)\r\n            timesteps = timesteps * decay_factor\r\n            \r\n        # Normalize to match the range of sigmas\r\n        timesteps = ((timesteps - timesteps.min()) / \r\n                     (timesteps.max() - timesteps.min())) * (sigmas.max() - sigmas.min()) + sigmas.min()\r\n            \r\n        return (timesteps,)\r\n\r\nclass sigmas_gaussian_cdf:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"mu\": (\"FLOAT\", {\"default\": 0.0, \"min\": -10.0, \"max\": 10.0, \"step\": 0.01}),\r\n                \"sigma\": (\"FLOAT\", {\"default\": 1.0, \"min\": 0.01, \"max\": 10.0, \"step\": 0.01}),\r\n                \"normalize_output\": (\"BOOLEAN\", {\"default\": True})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, mu, sigma, normalize_output):\r\n        # Apply Gaussian CDF transformation\r\n        result = 0.5 * (1 + torch.erf((sigmas - mu) / (sigma * math.sqrt(2))))\r\n        \r\n        # Normalize output if requested\r\n        if normalize_output:\r\n            result = ((result - result.min()) / (result.max() - result.min())) * (sigmas.max() - sigmas.min()) + sigmas.min()\r\n            \r\n        return (result,)\r\n\r\nclass sigmas_stepwise_multirate:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                
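# rates: comma-separated curve exponents, one per segment; boundaries: fractional\r\n                # split points in (0, 1). If len(boundaries) != len(rates) - 1, the segments\r\n                # are split evenly instead (see main below).\r\n                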
\"steps\": (\"INT\", {\"default\": 30, \"min\": 1, \"max\": 1000, \"step\": 1}),\r\n                \"rates\": (\"STRING\", {\"default\": \"1.0,0.5,0.25\", \"multiline\": False}),\r\n                \"boundaries\": (\"STRING\", {\"default\": \"0.3,0.7\", \"multiline\": False}),\r\n                \"start_value\": (\"FLOAT\", {\"default\": 10.0, \"min\": 0.0, \"max\": 100.0, \"step\": 0.1}),\r\n                \"end_value\": (\"FLOAT\", {\"default\": 0.01, \"min\": 0.0, \"max\": 100.0, \"step\": 0.01}),\r\n                \"pad_end\": (\"BOOLEAN\", {\"default\": True})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, steps, rates, boundaries, start_value, end_value, pad_end):\r\n        # Parse rates and boundaries\r\n        rates_list = [float(r) for r in rates.split(',')]\r\n        if len(rates_list) < 1:\r\n            rates_list = [1.0]\r\n            \r\n        boundaries_list = [float(b) for b in boundaries.split(',')]\r\n        if len(boundaries_list) != len(rates_list) - 1:\r\n            # Create equal size segments if boundaries don't match rates\r\n            boundaries_list = [i / len(rates_list) for i in range(1, len(rates_list))]\r\n        \r\n        # Convert boundaries to step indices\r\n        boundary_indices = [int(b * steps) for b in boundaries_list]\r\n        \r\n        # Create steps array\r\n        result = torch.zeros(steps)\r\n        \r\n        # Fill segments with different rates\r\n        current_idx = 0\r\n        for i, rate in enumerate(rates_list):\r\n            next_idx = boundary_indices[i] if i < len(boundary_indices) else steps\r\n            segment_length = next_idx - current_idx\r\n            if segment_length <= 0:\r\n                continue\r\n                \r\n            segment_start = start_value if i == 0 else result[current_idx-1]\r\n            segment_end = end_value if i == len(rates_list) - 1 else start_value * (1 - boundaries_list[i])\r\n            \r\n            # Apply rate to the segment\r\n            t = torch.linspace(0, 1, segment_length)\r\n            segment = segment_start + (segment_end - segment_start) * (t ** rate)\r\n            \r\n            result[current_idx:next_idx] = segment\r\n            current_idx = next_idx\r\n        \r\n        # Add padding zero at the end if requested\r\n        if pad_end:\r\n            result = torch.cat([result, torch.tensor([0.0])])\r\n            \r\n        return (result,)\r\n\r\nclass sigmas_harmonic_decay:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"steps\": (\"INT\", {\"default\": 30, \"min\": 1, \"max\": 1000, \"step\": 1}),\r\n                \"start_value\": (\"FLOAT\", {\"default\": 10.0, \"min\": 0.0, \"max\": 100.0, \"step\": 0.1}),\r\n                \"end_value\": (\"FLOAT\", {\"default\": 0.01, \"min\": 0.0, \"max\": 100.0, \"step\": 0.01}),\r\n                \"harmonic_offset\": (\"FLOAT\", {\"default\": 0.0, \"min\": 0.0, \"max\": 10.0, \"step\": 0.01}),\r\n                \"decay_rate\": (\"FLOAT\", {\"default\": 1.0, \"min\": 0.1, \"max\": 10.0, \"step\": 0.1}),\r\n                \"pad_end\": (\"BOOLEAN\", {\"default\": True})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, steps, start_value, 
end_value, harmonic_offset, decay_rate, pad_end):\r\n        # Create harmonic series: 1/(n+offset)^rate\r\n        n = torch.arange(1, steps + 1, dtype=torch.float32)\r\n        harmonic_values = 1.0 / torch.pow(n + harmonic_offset, decay_rate)\r\n        \r\n        # Normalize to [0, 1]\r\n        normalized = (harmonic_values - harmonic_values.min()) / (harmonic_values.max() - harmonic_values.min())\r\n        \r\n        # Scale to [end_value, start_value] and reverse (higher values first)\r\n        result = start_value - (start_value - end_value) * normalized\r\n        result = torch.flip(result, [0])\r\n        \r\n        # Add padding zero at the end if requested\r\n        if pad_end:\r\n            result = torch.cat([result, torch.tensor([0.0])])\r\n            \r\n        return (result,)\r\n\r\nclass sigmas_adaptive_noise_floor:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"min_noise_level\": (\"FLOAT\", {\"default\": 0.01, \"min\": 0.0, \"max\": 1.0, \"step\": 0.001}),\r\n                \"adaptation_factor\": (\"FLOAT\", {\"default\": 0.5, \"min\": 0.0, \"max\": 1.0, \"step\": 0.01}),\r\n                \"window_size\": (\"INT\", {\"default\": 3, \"min\": 1, \"max\": 10, \"step\": 1})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, min_noise_level, adaptation_factor, window_size):\r\n        # Initialize result with original sigmas\r\n        result = sigmas.clone()\r\n        \r\n        # Apply adaptive noise floor\r\n        for i in range(window_size, len(sigmas)):\r\n            # Calculate local statistics in the window\r\n            window = sigmas[i-window_size:i]\r\n            local_mean = torch.mean(window)\r\n            local_var = torch.var(window)\r\n            \r\n            # Adapt the noise floor based on local statistics\r\n            adaptive_floor = min_noise_level + adaptation_factor * local_var / (local_mean + 1e-6)\r\n            \r\n            # Apply the floor if needed\r\n            if result[i] < adaptive_floor:\r\n                result[i] = adaptive_floor\r\n        \r\n        return (result,)\r\n\r\nclass sigmas_collatz_iteration:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"iterations\": (\"INT\", {\"default\": 3, \"min\": 1, \"max\": 20, \"step\": 1}),\r\n                \"scaling_factor\": (\"FLOAT\", {\"default\": 0.1, \"min\": 0.0001, \"max\": 10.0, \"step\": 0.01}),\r\n                \"normalize_output\": (\"BOOLEAN\", {\"default\": True})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, iterations, scaling_factor, normalize_output):\r\n        # Scale input to reasonable range for Collatz\r\n        scaled_input = sigmas * scaling_factor\r\n        \r\n        # Apply Collatz iterations\r\n        result = scaled_input.clone()\r\n        \r\n        for _ in range(iterations):\r\n            # Create masks for even and odd values\r\n            even_mask = (result % 2 == 0)\r\n            odd_mask = 
~even_mask\r\n            \r\n            # Apply Collatz map: n/2 for \"even\" values, 3n+1 for \"odd\" ones\r\n            # (parity is only exact for integer-valued entries)\r\n            result[even_mask] = result[even_mask] / 2\r\n            result[odd_mask] = 3 * result[odd_mask] + 1\r\n        \r\n        # Normalize output if requested\r\n        if normalize_output:\r\n            result = ((result - result.min()) / (result.max() - result.min())) * (sigmas.max() - sigmas.min()) + sigmas.min()\r\n            \r\n        return (result,)\r\n\r\nclass sigmas_conway_sequence:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"steps\": (\"INT\", {\"default\": 20, \"min\": 1, \"max\": 50, \"step\": 1}),\r\n                \"sequence_type\": ([\"look_and_say\", \"audioactive\", \"paperfolding\", \"thue_morse\"], {\"default\": \"look_and_say\"}),\r\n                \"normalize_range\": (\"BOOLEAN\", {\"default\": True}),\r\n                \"min_value\": (\"FLOAT\", {\"default\": 0.01, \"min\": 0.0, \"max\": 10.0, \"step\": 0.01}),\r\n                \"max_value\": (\"FLOAT\", {\"default\": 10.0, \"min\": 0.0, \"max\": 50.0, \"step\": 0.1})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, steps, sequence_type, normalize_range, min_value, max_value):\r\n        if sequence_type == \"look_and_say\":\r\n            # Start with \"1\"\r\n            s = \"1\"\r\n            lengths = [1]  # Length of first term is 1\r\n            \r\n            # Generate look-and-say sequence\r\n            for _ in range(min(steps - 1, 25)):  # Limit to prevent excessive computation\r\n                next_s = \"\"\r\n                i = 0\r\n                while i < len(s):\r\n                    count = 1\r\n                    while i + 1 < len(s) and s[i] == s[i + 1]:\r\n                        i += 1\r\n                        count += 1\r\n                    next_s += str(count) + s[i]\r\n                    i += 1\r\n                s = next_s\r\n                lengths.append(len(s))\r\n            \r\n            # Convert to tensor\r\n            result = torch.tensor(lengths, dtype=torch.float32)\r\n            \r\n        elif sequence_type == \"audioactive\":\r\n            # Audioactive variant (like look-and-say, but counts digit occurrences per term)\r\n            a = [1]\r\n            for _ in range(min(steps - 1, 30)):\r\n                b = []\r\n                digit_count = {}\r\n                for digit in a:\r\n                    digit_count[digit] = digit_count.get(digit, 0) + 1\r\n                \r\n                for digit in sorted(digit_count.keys()):\r\n                    b.append(digit_count[digit])\r\n                    b.append(digit)\r\n                a = b\r\n            \r\n            result = torch.tensor(a, dtype=torch.float32)\r\n            if len(result) > steps:\r\n                result = result[:steps]\r\n            \r\n        elif sequence_type == \"paperfolding\":\r\n            # Regular paper-folding (dragon curve) sequence:\r\n            # a(n) = 1 iff the bit above the lowest set bit of n is 0\r\n            sequence = []\r\n            for i in range(1, min(steps, 30) + 1):\r\n                sequence.append(1 if ((i & -i) << 1) & i == 0 else 0)\r\n            \r\n            result = torch.tensor(sequence, dtype=torch.float32)\r\n            \r\n        elif sequence_type == \"thue_morse\":\r\n            # Thue-Morse sequence\r\n            sequence = [0]\r\n            while len(sequence) < 
steps:\r\n                sequence.extend([1 - x for x in sequence])\r\n            \r\n            result = torch.tensor(sequence, dtype=torch.float32)[:steps]\r\n        \r\n        # Normalize to desired range\r\n        if normalize_range:\r\n            if result.max() > result.min():\r\n                result = (result - result.min()) / (result.max() - result.min())\r\n                result = result * (max_value - min_value) + min_value\r\n            else:\r\n                result = torch.ones_like(result) * min_value\r\n        \r\n        return (result,)\r\n\r\nclass sigmas_gilbreath_sequence:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"steps\": (\"INT\", {\"default\": 30, \"min\": 10, \"max\": 100, \"step\": 1}),\r\n                \"levels\": (\"INT\", {\"default\": 3, \"min\": 1, \"max\": 10, \"step\": 1}),\r\n                \"normalize_range\": (\"BOOLEAN\", {\"default\": True}),\r\n                \"min_value\": (\"FLOAT\", {\"default\": 0.01, \"min\": 0.0, \"max\": 10.0, \"step\": 0.01}),\r\n                \"max_value\": (\"FLOAT\", {\"default\": 10.0, \"min\": 0.0, \"max\": 50.0, \"step\": 0.1})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, steps, levels, normalize_range, min_value, max_value):\r\n        # Generate first few prime numbers\r\n        def sieve_of_eratosthenes(limit):\r\n            sieve = [True] * (limit + 1)\r\n            sieve[0] = sieve[1] = False\r\n            for i in range(2, int(limit**0.5) + 1):\r\n                if sieve[i]:\r\n                    for j in range(i*i, limit + 1, i):\r\n                        sieve[j] = False\r\n            return [i for i in range(limit + 1) if sieve[i]]\r\n        \r\n        # Get primes\r\n        primes = sieve_of_eratosthenes(steps * 6)  # Get enough primes\r\n        primes = primes[:steps]\r\n        \r\n        # Generate Gilbreath sequence levels\r\n        sequences = [primes]\r\n        for level in range(1, levels):\r\n            prev_seq = sequences[level-1]\r\n            new_seq = [abs(prev_seq[i] - prev_seq[i+1]) for i in range(len(prev_seq)-1)]\r\n            sequences.append(new_seq)\r\n        \r\n        # Select the requested level\r\n        selected_level = min(levels-1, len(sequences)-1)\r\n        result_list = sequences[selected_level]\r\n        \r\n        # Ensure we have enough values\r\n        while len(result_list) < steps:\r\n            result_list.append(1)  # Gilbreath conjecture: eventually all 1s\r\n        \r\n        # Convert to tensor\r\n        result = torch.tensor(result_list[:steps], dtype=torch.float32)\r\n        \r\n        # Normalize to desired range\r\n        if normalize_range:\r\n            if result.max() > result.min():\r\n                result = (result - result.min()) / (result.max() - result.min())\r\n                result = result * (max_value - min_value) + min_value\r\n            else:\r\n                result = torch.ones_like(result) * min_value\r\n        \r\n        return (result,)\r\n\r\nclass sigmas_cnf_inverse:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\", {\"forceInput\": True}),\r\n                \"time_steps\": (\"INT\", {\"default\": 20, 
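# number of output sigmas produced by the flow remap\r\n                               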
\"min\": 5, \"max\": 100, \"step\": 1}),\r\n                \"flow_type\": ([\"linear\", \"quadratic\", \"sigmoid\", \"exponential\"], {\"default\": \"sigmoid\"}),\r\n                \"reverse\": (\"BOOLEAN\", {\"default\": True})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, sigmas, time_steps, flow_type, reverse):\r\n        # Create normalized time steps\r\n        t = torch.linspace(0, 1, time_steps)\r\n        \r\n        # Apply CNF flow transformation\r\n        if flow_type == \"linear\":\r\n            flow = t\r\n        elif flow_type == \"quadratic\":\r\n            flow = t**2\r\n        elif flow_type == \"sigmoid\":\r\n            flow = 1 / (1 + torch.exp(-10 * (t - 0.5)))\r\n        elif flow_type == \"exponential\":\r\n            flow = torch.exp(3 * t) - 1\r\n            flow = flow / flow.max()  # Normalize to [0,1]\r\n        \r\n        # Reverse flow if requested\r\n        if reverse:\r\n            flow = 1 - flow\r\n        \r\n        # Interpolate sigmas according to flow\r\n        # First normalize sigmas to [0,1] for interpolation\r\n        normalized_sigmas = (sigmas - sigmas.min()) / (sigmas.max() - sigmas.min())\r\n        \r\n        # Create indices for interpolation\r\n        indices = flow * (len(sigmas) - 1)\r\n        \r\n        # Linear interpolation\r\n        result = torch.zeros(time_steps, device=sigmas.device, dtype=sigmas.dtype)\r\n        for i in range(time_steps):\r\n            idx_low = int(indices[i])\r\n            idx_high = min(idx_low + 1, len(sigmas) - 1)\r\n            frac = indices[i] - idx_low\r\n            \r\n            result[i] = (1 - frac) * normalized_sigmas[idx_low] + frac * normalized_sigmas[idx_high]\r\n        \r\n        # Scale back to original sigma range\r\n        result = result * (sigmas.max() - sigmas.min()) + sigmas.min()\r\n        \r\n        return (result,)\r\n\r\nclass sigmas_riemannian_flow:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"steps\": (\"INT\", {\"default\": 30, \"min\": 5, \"max\": 100, \"step\": 1}),\r\n                \"metric_type\": ([\"euclidean\", \"hyperbolic\", \"spherical\", \"lorentzian\"], {\"default\": \"hyperbolic\"}),\r\n                \"curvature\": (\"FLOAT\", {\"default\": 1.0, \"min\": 0.1, \"max\": 10.0, \"step\": 0.1}),\r\n                \"start_value\": (\"FLOAT\", {\"default\": 10.0, \"min\": 0.1, \"max\": 50.0, \"step\": 0.1}),\r\n                \"end_value\": (\"FLOAT\", {\"default\": 0.01, \"min\": 0.0, \"max\": 10.0, \"step\": 0.01})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, steps, metric_type, curvature, start_value, end_value):\r\n        # Create parameter t in [0, 1]\r\n        t = torch.linspace(0, 1, steps)\r\n        \r\n        # Apply different Riemannian metrics\r\n        if metric_type == \"euclidean\":\r\n            # Simple linear interpolation in Euclidean space\r\n            result = start_value * (1 - t) + end_value * t\r\n            \r\n        elif metric_type == \"hyperbolic\":\r\n            # Hyperbolic space geodesic\r\n            K = -curvature  # Negative curvature for hyperbolic space\r\n            \r\n            # Convert to hyperbolic coordinates (using Poincaré disk 
model)\r\n            x_start = math.tanh(start_value / 2)\r\n            x_end = math.tanh(end_value / 2)\r\n            \r\n            # Distance in hyperbolic space (guard against zero distance)\r\n            d = math.acosh(1 + 2 * ((x_start - x_end)**2) / ((1 - x_start**2) * (1 - x_end**2)))\r\n            d = max(d, 1e-6)\r\n            \r\n            # Geodesic interpolation\r\n            lambda_t = torch.sinh(t * d) / math.sinh(d)\r\n            result = 2 * torch.atanh((1 - lambda_t) * x_start + lambda_t * x_end)\r\n            \r\n        elif metric_type == \"spherical\":\r\n            # Spherical space geodesic (great circle)\r\n            K = curvature  # Positive curvature for spherical space\r\n            \r\n            # Convert to angular coordinates\r\n            theta_start = start_value * math.sqrt(K)\r\n            theta_end = end_value * math.sqrt(K)\r\n            \r\n            # Geodesic interpolation along great circle\r\n            result = torch.sin((1 - t) * theta_start + t * theta_end) / math.sqrt(K)\r\n            \r\n        elif metric_type == \"lorentzian\":\r\n            # Lorentzian spacetime-inspired metric (time dilation effect)\r\n            # Clamp so the Lorentz factor stays real when curvature * t^2 >= 1\r\n            gamma = 1 / torch.sqrt(torch.clamp(1 - curvature * t**2, min=1e-6))\r\n            result = start_value * (1 - t) + end_value * t\r\n            result = result * gamma  # Apply time dilation\r\n        \r\n        # Ensure the values are in the desired range\r\n        result = torch.clamp(result, min=min(start_value, end_value), max=max(start_value, end_value))\r\n        \r\n        # Ensure result is decreasing if start_value > end_value\r\n        if start_value > end_value and result[0] < result[-1]:\r\n            result = torch.flip(result, [0])\r\n            \r\n        return (result,)\r\n\r\nclass sigmas_langevin_dynamics:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"steps\": (\"INT\", {\"default\": 30, \"min\": 5, \"max\": 100, \"step\": 1}),\r\n                \"start_value\": (\"FLOAT\", {\"default\": 10.0, \"min\": 0.1, \"max\": 50.0, \"step\": 0.1}),\r\n                \"end_value\": (\"FLOAT\", {\"default\": 0.01, \"min\": 0.0, \"max\": 10.0, \"step\": 0.01}),\r\n                \"temperature\": (\"FLOAT\", {\"default\": 0.5, \"min\": 0.01, \"max\": 10.0, \"step\": 0.01}),\r\n                \"friction\": (\"FLOAT\", {\"default\": 1.0, \"min\": 0.1, \"max\": 10.0, \"step\": 0.1}),\r\n                \"seed\": (\"INT\", {\"default\": 42, \"min\": 0, \"max\": 99999, \"step\": 1})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, steps, start_value, end_value, temperature, friction, seed):\r\n        # Set random seed for reproducibility\r\n        torch.manual_seed(seed)\r\n        \r\n        # Potential function (quadratic well centered at end_value)\r\n        def U(x):\r\n            return 0.5 * (x - end_value)**2\r\n        \r\n        # Gradient of the potential\r\n        def grad_U(x):\r\n            return x - end_value\r\n        \r\n        # Initialize state\r\n        x = torch.tensor([start_value], dtype=torch.float32)\r\n        v = torch.zeros(1)  # Initial velocity\r\n        \r\n        # Discretization parameters\r\n        dt = 1.0 / steps\r\n        sqrt_2dt = math.sqrt(2 * dt)\r\n        \r\n        # Storage for trajectory\r\n        trajectory = [start_value]\r\n        \r\n        # 
Langevin dynamics integration: a simplified velocity-Verlet-style scheme with\r\n        # friction and thermal noise (the noise amplitude below is a heuristic, not\r\n        # the standard sqrt(2 * friction * kT * dt) fluctuation-dissipation term)\r\n        for _ in range(steps - 1):\r\n            # Half step in velocity\r\n            v = v - dt * friction * v - dt * grad_U(x) / 2\r\n            \r\n            # Full step in position\r\n            x = x + dt * v\r\n            \r\n            # Random force (thermal noise)\r\n            noise = torch.randn(1) * sqrt_2dt * temperature\r\n            \r\n            # Another half step in velocity with noise\r\n            v = v - dt * friction * v - dt * grad_U(x) / 2 + noise\r\n            \r\n            # Store current position\r\n            trajectory.append(x.item())\r\n        \r\n        # Convert to tensor\r\n        result = torch.tensor(trajectory, dtype=torch.float32)\r\n        \r\n        # Pin the final value to the requested end point\r\n        result[-1] = end_value\r\n        \r\n        return (result,)\r\n\r\nclass sigmas_persistent_homology:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"steps\": (\"INT\", {\"default\": 30, \"min\": 5, \"max\": 100, \"step\": 1}),\r\n                \"start_value\": (\"FLOAT\", {\"default\": 10.0, \"min\": 0.1, \"max\": 50.0, \"step\": 0.1}),\r\n                \"end_value\": (\"FLOAT\", {\"default\": 0.01, \"min\": 0.0, \"max\": 10.0, \"step\": 0.01}),\r\n                \"persistence_type\": ([\"linear\", \"exponential\", \"logarithmic\", \"sigmoidal\"], {\"default\": \"exponential\"}),\r\n                \"birth_density\": (\"FLOAT\", {\"default\": 0.3, \"min\": 0.0, \"max\": 1.0, \"step\": 0.01}),\r\n                \"death_density\": (\"FLOAT\", {\"default\": 0.7, \"min\": 0.0, \"max\": 1.0, \"step\": 0.01})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, steps, start_value, end_value, persistence_type, birth_density, death_density):\r\n        # Basic filtration function (linear by default)\r\n        t = torch.linspace(0, 1, steps)\r\n        \r\n        # Persistence diagram simulation\r\n        # Create birth and death times\r\n        birth_points = int(steps * birth_density)\r\n        death_points = int(steps * death_density)\r\n        \r\n        # Filtration function based on selected type\r\n        if persistence_type == \"linear\":\r\n            filtration = t\r\n        elif persistence_type == \"exponential\":\r\n            filtration = 1 - torch.exp(-5 * t)\r\n        elif persistence_type == \"logarithmic\":\r\n            filtration = torch.log(1 + 9 * t) / math.log(10.0)\r\n        elif persistence_type == \"sigmoidal\":\r\n            filtration = 1 / (1 + torch.exp(-10 * (t - 0.5)))\r\n        \r\n        # Generate birth-death pairs\r\n        birth_indices = torch.linspace(0, steps // 2, birth_points).long()\r\n        death_indices = torch.linspace(steps // 2, steps - 1, death_points).long()\r\n        \r\n        # Create persistence barcode\r\n        barcode = torch.zeros(steps)\r\n        for b_idx in birth_indices:\r\n            for d_idx in death_indices:\r\n                if b_idx < d_idx:\r\n                    # Add a persistence feature from birth to death\r\n                    barcode[b_idx:d_idx] += 1\r\n        \r\n        # Normalize and weight the barcode\r\n        if barcode.max() > 0:\r\n            barcode = barcode / barcode.max()\r\n        \r\n        # 
Modulate the filtration function with the persistence barcode\r\n        result = filtration * (0.7 + 0.3 * barcode)\r\n        \r\n        # Scale to desired range\r\n        result = start_value + (end_value - start_value) * result\r\n        \r\n        return (result,)\r\n\r\nclass sigmas_normalizing_flows:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"steps\": (\"INT\", {\"default\": 30, \"min\": 5, \"max\": 100, \"step\": 1}),\r\n                \"start_value\": (\"FLOAT\", {\"default\": 10.0, \"min\": 0.1, \"max\": 50.0, \"step\": 0.1}),\r\n                \"end_value\": (\"FLOAT\", {\"default\": 0.01, \"min\": 0.0, \"max\": 10.0, \"step\": 0.01}),\r\n                \"flow_type\": ([\"affine\", \"planar\", \"radial\", \"realnvp\"], {\"default\": \"realnvp\"}),\r\n                \"num_transforms\": (\"INT\", {\"default\": 3, \"min\": 1, \"max\": 10, \"step\": 1}),\r\n                \"seed\": (\"INT\", {\"default\": 42, \"min\": 0, \"max\": 99999, \"step\": 1})\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\",)\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    \r\n    def main(self, steps, start_value, end_value, flow_type, num_transforms, seed):\r\n        # Set random seed for reproducibility\r\n        torch.manual_seed(seed)\r\n        \r\n        # Create base linear schedule from start_value to end_value\r\n        base_schedule = torch.linspace(start_value, end_value, steps)\r\n        \r\n        # Apply different normalizing flow transformations\r\n        if flow_type == \"affine\":\r\n            # Affine transformation: f(x) = a*x + b\r\n            result = base_schedule.clone()\r\n            for _ in range(num_transforms):\r\n                a = torch.rand(1) * 0.5 + 0.75  # Scale in [0.75, 1.25]\r\n                b = (torch.rand(1) - 0.5) * 0.2  # Shift in [-0.1, 0.1]\r\n                result = a * result + b\r\n                \r\n        elif flow_type == \"planar\":\r\n            # Planar flow: f(x) = x + u * tanh(w * x + b)\r\n            result = base_schedule.clone()\r\n            for _ in range(num_transforms):\r\n                u = torch.rand(1) * 0.4 - 0.2  # in [-0.2, 0.2]\r\n                w = torch.rand(1) * 2 - 1  # in [-1, 1]\r\n                b = torch.rand(1) * 0.2 - 0.1  # in [-0.1, 0.1]\r\n                result = result + u * torch.tanh(w * result + b)\r\n                \r\n        elif flow_type == \"radial\":\r\n            # Radial flow: f(x) = x + beta * (x - x0) / (alpha + |x - x0|)\r\n            result = base_schedule.clone()\r\n            for _ in range(num_transforms):\r\n                # Pick a random reference point within the range\r\n                idx = torch.randint(0, steps, (1,))\r\n                x0 = result[idx]\r\n                \r\n                alpha = torch.rand(1) * 0.5 + 0.5  # in [0.5, 1.0]\r\n                beta = torch.rand(1) * 0.4 - 0.2  # in [-0.2, 0.2]\r\n                \r\n                # Apply radial flow\r\n                diff = result - x0\r\n                r = torch.abs(diff)\r\n                result = result + beta * diff / (alpha + r)\r\n                \r\n        elif flow_type == \"realnvp\":\r\n            # Simplified RealNVP-inspired flow with masking\r\n            result = base_schedule.clone()\r\n            \r\n            for _ in range(num_transforms):\r\n                # Create alternating mask\r\n                
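# --- Hedged side note on the planar flow above. For scalar planar flows
# f(x) = x + u*tanh(w*x + b), f'(x) = 1 + u*w*(1 - tanh(w*x + b)**2), so f is
# strictly monotone whenever u*w > -1 (Rezende & Mohamed, 2015). The sampled
# ranges above (u in [-0.2, 0.2], w in [-1, 1]) give u*w >= -0.2, so each
# transform keeps the schedule order-preserving. Quick numeric check with the
# worst case for those ranges:
import torch

x = torch.linspace(10.0, 0.01, 30)
y = x + (-0.2) * torch.tanh(1.0 * x + 0.0)
assert ((y[:-1] - y[1:]) > 0).all()  # still strictly decreasing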
mask = torch.zeros(steps)\r\n                mask[::2] = 1  # Mask even indices\r\n                \r\n                # Generate scale and shift parameters\r\n                log_scale = torch.rand(steps) * 0.2 - 0.1  # in [-0.1, 0.1]\r\n                shift = torch.rand(steps) * 0.2 - 0.1  # in [-0.1, 0.1]\r\n                \r\n                # Apply affine coupling transformation\r\n                scale = torch.exp(log_scale * mask)\r\n                masked_shift = shift * mask\r\n                \r\n                # Transform\r\n                result = result * scale + masked_shift\r\n        \r\n        # Rescale to ensure we maintain start_value and end_value\r\n        if result[0] != start_value or result[-1] != end_value:\r\n            result = (result - result[0]) / (result[-1] - result[0]) * (end_value - start_value) + start_value\r\n        \r\n        return (result,)\r\n\r\n\r\nclass sigmas_split_value:\r\n    def __init__(self):\r\n        pass\r\n    \r\n    @classmethod\r\n    def INPUT_TYPES(s):\r\n        return {\r\n            \"required\": {\r\n                \"sigmas\": (\"SIGMAS\",),\r\n                \"split_value\": (\"FLOAT\", {\"default\": 0.875, \"min\": 0.0, \"max\": 80085.0, \"step\": 0.001}),\r\n                \"bias_split_up\": (\"BOOLEAN\", {\"default\": False, \"tooltip\": \"If True, split happens above the split value, so high_sigmas includes the split point.\"}),\r\n            }\r\n        }\r\n\r\n    FUNCTION = \"main\"\r\n    RETURN_TYPES = (\"SIGMAS\", \"SIGMAS\")\r\n    RETURN_NAMES = (\"high_sigmas\", \"low_sigmas\")\r\n    CATEGORY = \"RES4LYF/sigmas\"\r\n    DESCRIPTION = (\"Splits sigma schedule at a specific sigma value.\")\r\n\r\n    def main(self, sigmas, split_value, bias_split_up):\r\n        if len(sigmas) == 0:\r\n            return (sigmas, sigmas)\r\n        \r\n        # Find the split index\r\n        if bias_split_up:\r\n            # Find first sigma <= split_value\r\n            split_idx = None\r\n            for i, sigma in enumerate(sigmas):\r\n                if sigma <= split_value:\r\n                    split_idx = i\r\n                    break\r\n            \r\n            if split_idx is None:\r\n                # All sigmas are above split_value\r\n                return (sigmas, torch.tensor([], device=sigmas.device, dtype=sigmas.dtype))\r\n            \r\n            # high_sigmas: from start to split_idx (inclusive)\r\n            # low_sigmas: from split_idx to end\r\n            high_sigmas = sigmas[:split_idx + 1]\r\n            low_sigmas = sigmas[split_idx:]\r\n            \r\n        else:\r\n            # Find first sigma < split_value\r\n            split_idx = None\r\n            for i, sigma in enumerate(sigmas):\r\n                if sigma < split_value:\r\n                    split_idx = i\r\n                    break\r\n            \r\n            if split_idx is None:\r\n                # All sigmas are >= split_value\r\n                return (torch.tensor([], device=sigmas.device, dtype=sigmas.dtype), sigmas)\r\n            \r\n            # high_sigmas: from start to split_idx (exclusive)\r\n            # low_sigmas: from split_idx-1 to end (includes the boundary point)\r\n            high_sigmas = sigmas[:split_idx]\r\n            low_sigmas = sigmas[split_idx - 1:]\r\n        \r\n        return (high_sigmas, low_sigmas)\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\ndef get_bong_tangent_sigmas(steps, slope, pivot, start, end):\r\n    smax = ((2/pi)*atan(-slope*(0-pivot))+1)/2\r\n    smin = 
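# --- Hedged worked example of the split semantics above (values arbitrary).
# With bias_split_up=False the boundary step is duplicated into low_sigmas,
# so the two halves can be sampled back-to-back without skipping a step:
import torch

s = torch.tensor([10.0, 5.0, 1.0, 0.5, 0.0])
high, low = sigmas_split_value().main(s, split_value=0.875, bias_split_up=False)
# first sigma < 0.875 is 0.5 at index 3, so:
assert high.tolist() == [10.0, 5.0, 1.0]   # everything >= split_value
assert low.tolist()  == [1.0, 0.5, 0.0]    # starts one step early, at the boundary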
((2/pi)*atan(-slope*((steps-1)-pivot))+1)/2\r\n\r\n    srange = smax - smin\r\n    sscale = start - end\r\n\r\n    sigmas = [((((2/pi)*atan(-slope*(x-pivot))+1)/2) - smin) * (1/srange) * sscale + end for x in range(steps)]\r\n    \r\n    return sigmas\r\n\r\ndef bong_tangent_scheduler(model_sampling, steps, start=1.0, middle=0.5, end=0.0, pivot_1=0.6, pivot_2=0.6, slope_1=0.2, slope_2=0.2, pad=False):\r\n    steps += 2\r\n\r\n    midpoint = int((steps*pivot_1 + steps*pivot_2) / 2)\r\n    pivot_1 = int(steps * pivot_1)\r\n    pivot_2 = int(steps * pivot_2)\r\n\r\n    slope_1 = slope_1 / (steps/40)\r\n    slope_2 = slope_2 / (steps/40)\r\n\r\n    stage_2_len = steps - midpoint\r\n    stage_1_len = steps - stage_2_len\r\n\r\n    tan_sigmas_1 = get_bong_tangent_sigmas(stage_1_len, slope_1, pivot_1, start, middle)\r\n    tan_sigmas_2 = get_bong_tangent_sigmas(stage_2_len, slope_2, pivot_2 - stage_1_len, middle, end)\r\n    \r\n    tan_sigmas_1 = tan_sigmas_1[:-1]\r\n    if pad:\r\n        tan_sigmas_2 = tan_sigmas_2 + [0]\r\n\r\n    tan_sigmas = torch.tensor(tan_sigmas_1 + tan_sigmas_2)\r\n\r\n    return tan_sigmas\r\n\r\n"
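# --- Hedged sketch of the bong tangent curve above (illustrative only).
# get_bong_tangent_sigmas() rescales the arctan step
#     s(x) = ((2/pi) * atan(-slope * (x - pivot)) + 1) / 2
# so that x=0 maps to `start` and x=steps-1 maps to `end`; `pivot` sets where
# the S-curve bends and `slope` how sharply. Endpoint/monotonicity check:
sig = get_bong_tangent_sigmas(steps=10, slope=0.5, pivot=5, start=1.0, end=0.0)
assert abs(sig[0] - 1.0) < 1e-9 and abs(sig[-1]) < 1e-9   # endpoints hit exactly
assert all(a >= b for a, b in zip(sig, sig[1:]))          # monotone decreasing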
  },
  {
    "path": "style_transfer.py",
    "content": "\nimport torch\nimport torch.nn.functional as F\nimport torch.nn as nn\nfrom torch import Tensor, FloatTensor\nfrom typing import Optional, Callable, Tuple, Dict, List, Any, Union\n\nimport einops \nfrom einops import rearrange\nimport copy\nimport comfy\n\n\nfrom .latents import gaussian_blur_2d, median_blur_2d\n\n# WIP... not yet in use...\nclass StyleTransfer:  \n    def __init__(self,\n        style_method  = \"WCT\",\n        embedder_method = None,\n        patch_size    = 1,\n        pinv_dtype    = torch.float64,\n        dtype         = torch.float64,\n    ):\n        self.style_method  = style_method\n        \n        self.embedder_method   = None\n        self.unembedder_method = None\n\n        if embedder_method is not None:\n            self.set_embedder_method(embedder_method)\n        \n        self.patch_size    = patch_size\n        \n        #if embedder_type == \"conv2d\":\n        #    self.unembedder = self.invert_conv2d\n        self.pinv_dtype = pinv_dtype\n        self.dtype      = dtype\n        \n        self.patchify   = None\n        self.unpatchify = None\n        \n        self.orig_shape = None\n        self.grid_sizes = None\n        \n        #self.x_embed_ndim = 0\n        \n        \n\n    def set_patchify_method(self, patchify_method=None):\n        self.patchify_method = patchify_method\n\n    def set_unpatchify_method(self, unpatchify_method=None):\n        self.unpatchify_method = unpatchify_method\n        \n    def set_embedder_method(self, embedder_method):\n        self.embedder_method = copy.deepcopy(embedder_method).to(self.pinv_dtype)\n        self.W = self.embedder_method.weight\n        self.B = self.embedder_method.bias    \n        \n        if   isinstance(embedder_method, nn.Linear):\n            self.unembedder_method = self.invert_linear\n        \n        elif isinstance(embedder_method, nn.Conv2d):\n            self.unembedder_method = self.invert_conv2d\n            \n        elif isinstance(embedder_method, nn.Conv3d):\n            self.unembedder_method = self.invert_conv3d\n            \n    def set_patch_size(self, patch_size):\n        self.patch_size = patch_size\n\n    def unpatchify(self, x: Tensor) -> List[Tensor]:\n        x_arr = []\n        for i, img_size in enumerate(self.img_sizes):   #  [[64,64]]   , img_sizes: List[Tuple[int, int]]\n            pH, pW = img_size\n            x_arr.append(\n                einops.rearrange(x[i, :pH*pW].reshape(1, pH, pW, -1), 'B H W (p1 p2 C) -> B C (H p1) (W p2)',\n                    p1=self.patch_size, p2=self.patch_size)\n            )\n        x = torch.cat(x_arr, dim=0)\n        return x\n\n    def patchify(self, x: Tensor):\n        x = comfy.ldm.common_dit.pad_to_patch_size(x, (self.patch_size, self.patch_size))\n        \n        pH, pW         = x.shape[-2] // self.patch_size, x.shape[-1] // self.patch_size\n        self.img_sizes = [[pH, pW]] * x.shape[0]\n        x              = einops.rearrange(x, 'B C (H p1) (W p2) -> B (H W) (p1 p2 C)', p1=self.patch_size, p2=self.patch_size)\n        return x\n        \n        \n    def embedder(self, x):\n        if isinstance(self.embedder_method, nn.Linear):\n            x = self.patchify(x)\n        \n        self.orig_shape = x.shape\n        x = self.embedder_method(x)\n        self.grid_sizes = x.shape[2:]\n        \n        #self.x_embed_ndim = x.ndim\n        #if x.ndim > 3:\n        #    x = einops.rearrange(x, \"B C H W -> B (H W) C\")\n        \n        return x\n        \n    def unembedder(self, x):\n  
      #if self.x_embed_ndim > 3:\n        #    x = einops.rearrange(x, \"B (H W) C -> B C H W\", W=self.orig_shape[-1])\n        \n        x = self.unembedder_method(x)\n        return x\n        \n        \n    def invert_linear(self, x : torch.Tensor,) -> torch.Tensor:\n        x = x.to(self.pinv_dtype)\n        #x = (x - self.B.to(self.dtype)) @ torch.linalg.pinv(self.W.to(self.pinv_dtype)).T.to(self.dtype)\n        x = (x - self.B) @ torch.linalg.pinv(self.W).T\n        \n        return x.to(self.dtype)\n\n        \n        \n    def invert_conv2d(self, z: torch.Tensor,) -> torch.Tensor:\n        z = z.to(self.pinv_dtype)\n        conv = self.embedder_method\n        \n        B, C_in, H, W      = self.orig_shape\n        C_out, _, kH, kW   = conv.weight.shape\n        stride_h, stride_w = conv.stride\n        pad_h,    pad_w    = conv.padding\n\n        b = conv.bias.view(1, C_out, 1, 1).to(z)\n        z_nobias = z - b\n\n        W_flat = conv.weight.view(C_out, -1).to(z)  \n        W_pinv = torch.linalg.pinv(W_flat)    \n\n        Bz, Co, Hp, Wp = z_nobias.shape\n        z_flat = z_nobias.reshape(Bz, Co, -1)  \n\n        x_patches = W_pinv @ z_flat   \n\n        x_sum = F.fold(\n            x_patches,\n            output_size=(H + 2*pad_h, W + 2*pad_w),\n            kernel_size=(kH, kW),\n            stride=(stride_h, stride_w),\n        )\n        ones = torch.ones_like(x_patches)\n        count = F.fold(\n            ones,\n            output_size=(H + 2*pad_h, W + 2*pad_w),\n            kernel_size=(kH, kW),\n            stride=(stride_h, stride_w),\n        )  \n\n        x_recon = x_sum / count.clamp(min=1e-6)\n        if pad_h > 0 or pad_w > 0:\n            x_recon = x_recon[..., pad_h:pad_h+H, pad_w:pad_w+W]\n\n        return x_recon.to(self.dtype)\n\n\n\n    def invert_conv3d(self, z: torch.Tensor, ) -> torch.Tensor:\n        z = z.to(self.pinv_dtype)\n        conv = self.embedder_method\n        grid_sizes = self.grid_sizes\n\n        B, C_in, D, H, W = self.orig_shape\n        pD, pH, pW = self.patch_size\n        sD, sH, sW = pD, pH, pW\n\n        if z.ndim == 3:\n            # [B, S, C_out] -> reshape to [B, C_out, D', H', W']   \n            S = z.shape[1]\n            if grid_sizes is None:\n                Dp = D // pD\n                Hp = H // pH   # getting actual patchified dims\n                Wp = W // pW\n            else:\n                Dp, Hp, Wp = grid_sizes\n            C_out = z.shape[2]\n            z = z.transpose(1, 2).reshape(B, C_out, Dp, Hp, Wp)\n        else:\n            B2, C_out, Dp, Hp, Wp = z.shape\n            assert B2 == B, \"Batch size mismatch... ya sharked it.\"\n\n        b = conv.bias.view(1, C_out, 1, 1, 1)         # need to kncokout bias to invert via weight\n        z_nobias = z - b\n\n        # 2D filter -> pinv\n        w3 = conv.weight         # [C_out, C_in, 1, pH, pW]\n        w2 = w3.squeeze(2)                       # [C_out, C_in, pH, pW]\n        out_ch, in_ch, kH, kW = w2.shape\n        W_flat = w2.view(out_ch, -1)            # [C_out, in_ch*pH*pW]\n        W_pinv = torch.linalg.pinv(W_flat)      # [in_ch*pH*pW, C_out]\n\n        # merge depth for 2D unfold wackiness\n        z2 = z_nobias.permute(0,2,1,3,4).reshape(B*Dp, C_out, Hp, Wp)\n\n        # apply pinv ... 
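# --- Hedged note on why the pseudoinverse unembed above recovers x. If
# z = x @ W.T + b with W of shape [out, in] and out >= in (full column rank,
# the usual case for patch embeddings), then (z - b) @ pinv(W).T == x up to
# float error. Minimal standalone check with an arbitrary Linear layer:
import torch
import torch.nn as nn

proj = nn.Linear(16, 64).double()                 # tall weight: invertible
x = torch.randn(4, 16, dtype=torch.float64)
z = proj(x)
x_rec = (z - proj.bias) @ torch.linalg.pinv(proj.weight).T
assert torch.allclose(x, x_rec, atol=1e-8)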
get patch vectors\n        z_flat    = z2.reshape(B*Dp, C_out, -1)  # [B*Dp, C_out, L]\n        x_patches = W_pinv @ z_flat              # [B*Dp, in_ch*pH*pW, L]\n\n        # fold -> restore spatial frames\n        x2 = F.fold(\n            x_patches,\n            output_size=(H, W),\n            kernel_size=(pH, pW),\n            stride=(sH, sW)\n        )  # → [B*Dp, C_in, H, W]\n\n        # unmerge depth (de-depth charge)\n        x2 = x2.reshape(B, Dp, in_ch, H, W)           # [B, Dp,  C_in, H, W]\n        x_recon = x2.permute(0,2,1,3,4).contiguous()  # [B, C_in,   D, H, W]\n        return x_recon.to(self.dtype)\n\n\n\n    def adain_seq_inplace(self, content: torch.Tensor, style: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:\n        mean_c = content.mean(1, keepdim=True)\n        std_c  = content.std (1, keepdim=True).add_(eps) \n        mean_s = style.mean  (1, keepdim=True)\n        std_s  = style.std   (1, keepdim=True).add_(eps)\n\n        content.sub_(mean_c).div_(std_c).mul_(std_s).add_(mean_s)\n        return content\n\n\n\n\n\n\nclass StyleWCT:  \n    def __init__(self, dtype=torch.float64, use_svd=False,):\n        self.dtype          = dtype\n        self.use_svd        = use_svd\n        self.y0_adain_embed = None\n        self.mu_s           = None\n        self.y0_color       = None\n        self.spatial_shape  = None\n        \n    def whiten(self, f_s_centered: torch.Tensor, set=False):\n        cov = (f_s_centered.T.double() @ f_s_centered.double()) / (f_s_centered.size(0) - 1)\n\n        if self.use_svd:\n            U_svd, S_svd, Vh_svd = torch.linalg.svd(cov + 1e-5 * torch.eye(cov.size(0), dtype=cov.dtype, device=cov.device))\n            S_eig = S_svd\n            U_eig = U_svd\n        else:\n            S_eig, U_eig = torch.linalg.eigh(cov + 1e-5 * torch.eye(cov.size(0), dtype=cov.dtype, device=cov.device))\n        \n        if set:\n            S_eig_root = S_eig.clamp(min=0).sqrt() # eigenvalues -> singular values\n        else:\n            S_eig_root = S_eig.clamp(min=0).rsqrt() # inverse square root\n        \n        whiten = U_eig @ torch.diag(S_eig_root) @ U_eig.T\n        return whiten.to(f_s_centered)\n\n    def set(self, y0_adain_embed: torch.Tensor, spatial_shape=None):\n        if self.y0_adain_embed is None or self.y0_adain_embed.shape != y0_adain_embed.shape or torch.norm(self.y0_adain_embed - y0_adain_embed) > 0:\n            self.y0_adain_embed = y0_adain_embed.clone()\n            if spatial_shape is not None:\n                self.spatial_shape = spatial_shape\n            \n            f_s          = y0_adain_embed[0] # if y0_adain_embed.ndim > 4 else y0_adain_embed\n            self.mu_s    = f_s.mean(dim=0, keepdim=True)\n            f_s_centered = f_s - self.mu_s\n            \n            self.y0_color = self.whiten(f_s_centered, set=True)\n            \n    def get(self, denoised_embed: torch.Tensor):\n        for wct_i in range(denoised_embed.shape[0]):\n            f_c          = denoised_embed[wct_i]\n            mu_c         = f_c.mean(dim=0, keepdim=True)\n            f_c_centered = f_c - mu_c\n\n            whiten = self.whiten(f_c_centered)\n\n            f_c_whitened = f_c_centered @ whiten.T\n            f_cs         = f_c_whitened @ self.y0_color.T + self.mu_s\n            \n            denoised_embed[wct_i] = f_cs\n            \n        return denoised_embed\n\n\n\n\nclass WaveletStyleWCT(StyleWCT):\n    def set(self, y0_adain_embed: torch.Tensor, h_len, w_len):\n        if self.y0_adain_embed is None or 
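# --- Hedged numeric check of the whitening math above (illustrative only).
# whiten() returns U diag(S^-1/2) U^T for cov = f^T f / (n - 1); applying it
# to centered features should leave an (approximately) identity covariance,
# which the coloring step then replaces with the style's covariance.
import torch

torch.manual_seed(0)
f = torch.randn(1024, 8, dtype=torch.float64)
f = f - f.mean(dim=0, keepdim=True)
W = StyleWCT().whiten(f)                          # inverse-sqrt branch (set=False)
fw = f @ W.T
cov_w = (fw.T @ fw) / (fw.size(0) - 1)
assert torch.allclose(cov_w, torch.eye(8, dtype=torch.float64), atol=1e-2)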
self.y0_adain_embed.shape != y0_adain_embed.shape or torch.norm(self.y0_adain_embed - y0_adain_embed) > 0:\n            self.y0_adain_embed = y0_adain_embed.clone()\n            \n            B, HW, C = y0_adain_embed.shape\n            LL, _, _, _ = haar_wavelet_decompose(y0_adain_embed.contiguous().view(B, C, h_len, w_len))\n\n            B_LL, C_LL, H_LL, W_LL = LL.shape\n            #flat = rearrange(LL, 'b c h w -> b (h w) c')\n            flat = LL.contiguous().view(B_LL, H_LL * W_LL, C_LL)\n\n            f_s = flat[0]  # assuming batch size 1 or using only the first\n            self.mu_s = f_s.mean(dim=0, keepdim=True)\n            f_s_centered = f_s - self.mu_s\n            self.y0_color = self.whiten(f_s_centered, set=True)\n            #self.y0_adain_embed = flat  # cache if needed\n    \n    def get(self, denoised_embed: torch.Tensor, h_len, w_len, stylize_highfreq=False):\n\n        B, HW, C = denoised_embed.shape\n        \n        denoised_embed = denoised_embed.contiguous().view(B, C, h_len, w_len)\n        \n        for i in range(B):\n            x = denoised_embed[i:i+1]  # [1, C, H, W]\n            LL, LH, HL, HH = haar_wavelet_decompose(x)\n\n            def process_band(band):\n                Bc, Cc, Hc, Wc = band.shape\n                flat = band.contiguous().view(Bc, Hc * Wc, Cc)\n                \n                styled = super(WaveletStyleWCT, self).get(flat)\n                return styled.contiguous().view(Bc, Cc, Hc, Wc)\n\n            LL_styled = process_band(LL)\n\n            if stylize_highfreq:\n                LH_styled = process_band(LH)\n                HL_styled = process_band(HL)\n                HH_styled = process_band(HH)\n            else:\n                LH_styled, HL_styled, HH_styled = LH, HL, HH\n\n            recon = haar_wavelet_reconstruct(LL_styled, LH_styled, HL_styled, HH_styled)\n            denoised_embed[i] = recon.squeeze(0)\n\n        return denoised_embed.view(B, HW, C)\n\n\n\ndef haar_wavelet_decompose(x):\n    \"\"\"\n    Orthonormal Haar decomposition.\n    Input:  [B, C, H, W]\n    Output: LL, LH, HL, HH with shape [B, C, H//2, W//2]\n    \"\"\"\n    if x.dtype != torch.float32:\n        x = x.float()\n    \n    B, C, H, W = x.shape\n    assert H % 2 == 0 and W % 2 == 0, \"Input must have even H, W\"\n\n    # Precompute\n    norm = 1 / 2**0.5\n\n    x00 = x[:, :, 0::2, 0::2]\n    x01 = x[:, :, 0::2, 1::2]\n    x10 = x[:, :, 1::2, 0::2]\n    x11 = x[:, :, 1::2, 1::2]\n\n    LL = (x00 + x01 + x10 + x11) * norm * 0.5\n    LH = (x00 - x01 + x10 - x11) * norm * 0.5\n    HL = (x00 + x01 - x10 - x11) * norm * 0.5\n    HH = (x00 - x01 - x10 + x11) * norm * 0.5\n\n    return LL, LH, HL, HH\n\ndef haar_wavelet_reconstruct(LL, LH, HL, HH):\n    \"\"\"\n    Orthonormal inverse Haar reconstruction.\n    Input:  LL, LH, HL, HH [B, C, H, W]\n    Output: Reconstructed [B, C, H*2, W*2]\n    \"\"\"\n    norm = 1 / 2**0.5\n    B, C, H, W = LL.shape\n\n    x00 = (LL + LH + HL + HH) * norm\n    x01 = (LL - LH + HL - HH) * norm\n    x10 = (LL + LH - HL - HH) * norm\n    x11 = (LL - LH - HL + HH) * norm\n\n    out = torch.zeros(B, C, H * 2, W * 2, device=LL.device, dtype=LL.dtype)\n    out[:, :, 0::2, 0::2] = x00\n    out[:, :, 0::2, 1::2] = x01\n    out[:, :, 1::2, 0::2] = x10\n    out[:, :, 1::2, 1::2] = x11\n\n    return out\n\n\n\n\n\n\n\n\n\"\"\"\n\nclass StyleFeatures:  \n    def __init__(self, dtype=torch.float64,):\n        self.dtype = dtype\n\n    def set(self, y0_adain_embed: torch.Tensor):\n            \n    def get(self, 
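# --- Hedged round-trip check for the Haar pair above (illustrative only).
# The combined scaling works out to identity: analysis multiplies each band by
# norm * 0.5 = 1/(2*sqrt(2)), synthesis by norm = 1/sqrt(2), and each output
# pixel sums four matching terms, so (1/(2*sqrt(2))) * (1/sqrt(2)) * 4 = 1.
import torch

x = torch.randn(2, 4, 8, 8)
LL, LH, HL, HH = haar_wavelet_decompose(x)
x_rec = haar_wavelet_reconstruct(LL, LH, HL, HH)
assert x_rec.shape == x.shape
assert torch.allclose(x, x_rec, atol=1e-5)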
denoised_embed: torch.Tensor):\n\n        return \"Norpity McNerp\"\n\n\"\"\"\n\n\n\n\nclass Retrojector:  \n    def __init__(self, proj=None, patch_size=2, pinv_dtype=torch.float64, dtype=torch.float64, ENDO=False):\n        self.proj       = proj\n        self.patch_size = patch_size\n        self.pinv_dtype = pinv_dtype\n        self.dtype      = dtype\n        \n        self.LINEAR     = isinstance(proj, nn.Linear)\n        self.CONV2D     = isinstance(proj, nn.Conv2d)\n        self.CONV3D     = isinstance(proj, nn.Conv3d)\n        self.ENDO       = ENDO\n        self.W          = proj.weight.data.to(dtype=pinv_dtype).cuda()\n        \n        if self.LINEAR:\n            self.W_inv = torch.linalg.pinv(self.W.cuda())\n        elif self.CONV2D:\n            C_out, _, kH, kW = proj.weight.shape\n            W_flat = proj.weight.view(C_out, -1).to(dtype=pinv_dtype)\n            self.W_inv = torch.linalg.pinv(W_flat.cuda())\n        \n        if proj.bias is None:\n            if self.LINEAR:\n                bias_size = proj.out_features\n            else:\n                bias_size = proj.out_channels\n            self.b = torch.zeros(bias_size, dtype=pinv_dtype, device=self.W_inv.device)\n        else:\n            self.b = proj.bias.data.to(dtype=pinv_dtype).to(self.W_inv.device)\n        \n    def embed(self, img: torch.Tensor):\n        self.h = img.shape[-2] // self.patch_size\n        self.w = img.shape[-1] // self.patch_size\n        \n        img = comfy.ldm.common_dit.pad_to_patch_size(img, (self.patch_size, self.patch_size))\n        \n        if   self.CONV2D:\n            self.orig_shape = img.shape  # for unembed\n            img_embed = F.conv2d(\n                img.to(self.W), \n                weight=self.W, \n                bias=self.b, \n                stride=self.proj.stride, \n                padding=self.proj.padding\n            )\n            #img_embed = rearrange(img_embed, \"b c (h ph) (w pw) -> b (h w) (c ph pw)\", ph=self.patch_size, pw=self.patch_size) \n            img_embed = rearrange(img_embed, \"b c (h ph) (w pw) -> b (h w) (c ph pw)\", ph=1, pw=1) \n        \n        elif self.LINEAR:\n            if img.ndim == 4:\n                img = rearrange(img, \"b c (h ph) (w pw) -> b (h w) (c ph pw)\", ph=self.patch_size, pw=self.patch_size) \n            if self.ENDO:\n                img_embed = F.linear(img.to(self.b) - self.b, self.W_inv)\n            else:\n                img_embed = F.linear(img.to(self.W), self.W, self.b)\n        \n        return img_embed.to(img)\n    \n    def unembed(self, img_embed: torch.Tensor):\n        if   self.CONV2D:\n            #img_embed = rearrange(img_embed, \"b (h w) (c ph pw) -> b c (h ph) (w pw)\", h=self.h, w=self.w, ph=self.patch_size, pw=self.patch_size)\n            img_embed = rearrange(img_embed, \"b (h w) (c ph pw) -> b c (h ph) (w pw)\", h=self.h, w=self.w, ph=1, pw=1)\n            img = self.invert_conv2d(img_embed)\n        \n        elif self.LINEAR:\n            if self.ENDO:\n                img = F.linear(img_embed.to(self.W), self.W, self.b)\n            else:\n                img = F.linear(img_embed.to(self.b) - self.b, self.W_inv)\n            if img.ndim == 3:\n                img = rearrange(img, \"b (h w) (c ph pw) -> b c (h ph) (w pw)\", h=self.h, w=self.w, ph=self.patch_size, pw=self.patch_size)\n        \n        return img.to(img_embed)\n    \n    def invert_conv2d(self, z: torch.Tensor,) -> torch.Tensor:\n        z_dtype = z.dtype\n        z = z.to(self.pinv_dtype)\n        conv = 
self.proj\n        \n        B, C_in, H, W      = self.orig_shape\n        C_out, _, kH, kW   = conv.weight.shape\n        stride_h, stride_w = conv.stride\n        pad_h,    pad_w    = conv.padding\n\n        b = conv.bias.view(1, C_out, 1, 1).to(z)\n        z_nobias = z - b\n\n        #W_flat = conv.weight.view(C_out, -1).to(z)  \n        #W_pinv = torch.linalg.pinv(W_flat)    \n\n        Bz, Co, Hp, Wp = z_nobias.shape\n        z_flat = z_nobias.reshape(Bz, Co, -1)  \n\n        x_patches = self.W_inv @ z_flat   \n\n        x_sum = F.fold(\n            x_patches,\n            output_size=(H + 2*pad_h, W+ 2*pad_w),\n            kernel_size=(kH, kW),\n            stride=(stride_h, stride_w),\n        )\n        ones = torch.ones_like(x_patches)\n        count = F.fold(\n            ones,\n            output_size=(H + 2*pad_h, W + 2*pad_w),\n            kernel_size=(kH, kW),\n            stride=(stride_h, stride_w),\n        )  \n\n        x_recon = x_sum / count.clamp(min=1e-6)\n        if pad_h > 0 or pad_w > 0:\n            x_recon = x_recon[..., pad_h:pad_h+H, pad_w:pad_w+W]\n\n        return x_recon.to(z_dtype)\n    \n    def invert_patch_embedding(self, z: torch.Tensor, original_shape: torch.Size, grid_sizes: Optional[Tuple[int,int,int]] = None) -> torch.Tensor:\n\n        B, C_in, D, H, W = original_shape\n        pD, pH, pW = self.patch_size\n        sD, sH, sW = pD, pH, pW\n\n        if z.ndim == 3:\n            # [B, S, C_out] -> reshape to [B, C_out, D', H', W']\n            S = z.shape[1]\n            if grid_sizes is None:\n                Dp = D // pD\n                Hp = H // pH\n                Wp = W // pW\n            else:\n                Dp, Hp, Wp = grid_sizes\n            C_out = z.shape[2]\n            z = z.transpose(1, 2).reshape(B, C_out, Dp, Hp, Wp)\n        else:\n            B2, C_out, Dp, Hp, Wp = z.shape\n            assert B2 == B, \"Batch size mismatch... ya sharked it.\"\n\n        # kncokout bias\n        b = self.patch_embedding.bias.view(1, C_out, 1, 1, 1)\n        z_nobias = z - b\n\n        # 2D filter -> pinv\n        w3 = self.patch_embedding.weight         # [C_out, C_in, 1, pH, pW]\n        w2 = w3.squeeze(2)                       # [C_out, C_in, pH, pW]\n        out_ch, in_ch, kH, kW = w2.shape\n        W_flat = w2.view(out_ch, -1)            # [C_out, in_ch*pH*pW]\n        W_pinv = torch.linalg.pinv(W_flat)      # [in_ch*pH*pW, C_out]\n\n        # merge depth for 2D unfold wackiness\n        z2 = z_nobias.permute(0,2,1,3,4).reshape(B*Dp, C_out, Hp, Wp)\n\n        # apply pinv ... 
get patch vectors\n        z_flat    = z2.reshape(B*Dp, C_out, -1)  # [B*Dp, C_out, L]\n        x_patches = W_pinv @ z_flat              # [B*Dp, in_ch*pH*pW, L]\n\n        # fold -> spatial frames\n        x2 = F.fold(\n            x_patches,\n            output_size=(H, W),\n            kernel_size=(pH, pW),\n            stride=(sH, sW)\n        )  # → [B*Dp, C_in, H, W]\n\n        # un-merge depth\n        x2 = x2.reshape(B, Dp, in_ch, H, W)           # [B, Dp,  C_in, H, W]\n        x_recon = x2.permute(0,2,1,3,4).contiguous()  # [B, C_in,   D, H, W]\n        return x_recon\n\n\n\n\n\n\ndef invert_conv2d(\n    conv: torch.nn.Conv2d,\n    z:    torch.Tensor,\n    original_shape: torch.Size,\n) -> torch.Tensor:\n    import torch.nn.functional as F\n\n    B, C_in, H, W = original_shape\n    C_out, _, kH, kW = conv.weight.shape\n    stride_h, stride_w = conv.stride\n    pad_h,    pad_w    = conv.padding\n\n    if conv.bias is not None:\n        b = conv.bias.view(1, C_out, 1, 1).to(z)\n        z_nobias = z - b\n    else:\n        z_nobias = z\n\n    W_flat = conv.weight.view(C_out, -1).to(z)  \n    W_pinv = torch.linalg.pinv(W_flat)    \n\n    Bz, Co, Hp, Wp = z_nobias.shape\n    z_flat = z_nobias.reshape(Bz, Co, -1)  \n\n    x_patches = W_pinv @ z_flat   \n\n    x_sum = F.fold(\n        x_patches,\n        output_size=(H + 2*pad_h, W + 2*pad_w),\n        kernel_size=(kH, kW),\n        stride=(stride_h, stride_w),\n    )\n    ones = torch.ones_like(x_patches)\n    count = F.fold(\n        ones,\n        output_size=(H + 2*pad_h, W + 2*pad_w),\n        kernel_size=(kH, kW),\n        stride=(stride_h, stride_w),\n    )  \n\n    x_recon = x_sum / count.clamp(min=1e-6)\n    if pad_h > 0 or pad_w > 0:\n        x_recon = x_recon[..., pad_h:pad_h+H, pad_w:pad_w+W]\n\n    return x_recon\n\n\n\ndef adain_seq_inplace(content: torch.Tensor, style: torch.Tensor, dim=1, eps: float = 1e-7) -> torch.Tensor:\n    mean_c = content.mean(dim, keepdim=True)\n    std_c  = content.std (dim, keepdim=True).add_(eps)  # in-place add\n    mean_s = style.mean  (dim, keepdim=True)\n    std_s  = style.std   (dim, keepdim=True).add_(eps)\n\n    content.sub_(mean_c).div_(std_c).mul_(std_s).add_(mean_s)  # in-place chain\n    return content\n\ndef adain_seq(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:\n    return ((content - content.mean(1, keepdim=True)) / (content.std(1, keepdim=True) + eps)) * (style.std(1, keepdim=True) + eps) + style.mean(1, keepdim=True)\n\n\n\n\n\n\n\n\n\ndef apply_scattersort_tiled(\n    denoised_spatial : torch.Tensor, \n    y0_adain_spatial : torch.Tensor, \n    tile_h           : int, \n    tile_w           : int, \n    pad              : int,\n):\n    \"\"\"\n    Apply spatial scattersort between denoised_spatial and y0_adain_spatial\n    using local tile-wise sorted value matching.\n\n    Args:\n        denoised_spatial (Tensor): (B, C, H, W) tensor.\n        y0_adain_spatial  (Tensor): (B, C, H, W) reference tensor.\n        tile_h (int): tile height.\n        tile_w (int): tile width.\n        pad    (int): padding size to apply around tiles.\n\n    Returns:\n        denoised_embed (Tensor): (B, H*W, C) tensor after sortmatch.\n    \"\"\"\n    denoised_padded = F.pad(denoised_spatial, (pad, pad, pad, pad), mode='reflect')\n    y0_padded       = F.pad(y0_adain_spatial, (pad, pad, pad, pad), mode='reflect')\n\n    denoised_padded_out = denoised_padded.clone()\n    _, _, h_len, w_len = denoised_spatial.shape\n\n    for ix in range(pad, h_len, tile_h):\n       
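# --- Hedged round-trip check for invert_conv2d above (illustrative only).
# For a patch-embedding style conv (stride == kernel size, no padding) the
# patches don't overlap, so the pinv + F.fold reconstruction is exact whenever
# the flattened kernel matrix has full column rank (C_out >= C_in*kH*kW):
import torch
import torch.nn as nn

conv = nn.Conv2d(4, 64, kernel_size=2, stride=2).double()  # 64 >= 4*2*2
x = torch.randn(1, 4, 16, 16, dtype=torch.float64)
x_rec = invert_conv2d(conv, conv(x), x.shape)
assert torch.allclose(x, x_rec, atol=1e-8)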
 for jx in range(pad, w_len, tile_w):\n            tile    = denoised_padded[:, :, ix - pad:ix + tile_h + pad, jx - pad:jx + tile_w + pad]\n            y0_tile = y0_padded[:, :, ix - pad:ix + tile_h + pad, jx - pad:jx + tile_w + pad]\n\n            tile    = rearrange(tile,    \"b c h w -> b c (h w)\", h=tile_h + pad * 2, w=tile_w + pad * 2)\n            y0_tile = rearrange(y0_tile, \"b c h w -> b c (h w)\", h=tile_h + pad * 2, w=tile_w + pad * 2)\n\n            src_sorted, src_idx =    tile.sort(dim=-1)\n            ref_sorted, ref_idx = y0_tile.sort(dim=-1)\n\n            new_tile = tile.scatter(dim=-1, index=src_idx, src=ref_sorted.expand(src_sorted.shape))\n            new_tile = rearrange(new_tile, \"b c (h w) -> b c h w\", h=tile_h + pad * 2, w=tile_w + pad * 2)\n\n            denoised_padded_out[:, :, ix:ix + tile_h, jx:jx + tile_w] = (\n                new_tile if pad == 0 else new_tile[:, :, pad:-pad, pad:-pad]\n            )\n\n    denoised_padded_out = denoised_padded_out if pad == 0 else denoised_padded_out[:, :, pad:-pad, pad:-pad]\n    return denoised_padded_out\n\n\n\ndef apply_scattersort_masked(\n    denoised_embed         : torch.Tensor,\n    y0_adain_embed         : torch.Tensor,\n    y0_style_pos_mask      : torch.Tensor | None,\n    y0_style_pos_mask_edge : torch.Tensor | None,\n    h_len                  : int,\n    w_len                  : int\n):\n    if y0_style_pos_mask is None:\n        flatmask = torch.ones((1,1,h_len,w_len)).bool().flatten().bool()\n    else:\n        flatmask   = F.interpolate(y0_style_pos_mask, size=(h_len, w_len)).bool().flatten().cpu()\n    flatunmask = ~flatmask\n\n    if y0_style_pos_mask_edge is not None:\n        edgemask = F.interpolate(\n            y0_style_pos_mask_edge.unsqueeze(0), size=(h_len, w_len)\n        ).bool().flatten()\n        flatmask   = flatmask   & (~edgemask)\n        flatunmask = flatunmask & (~edgemask)\n\n    denoised_masked = denoised_embed[:, flatmask, :].clone()\n    y0_adain_masked = y0_adain_embed[:, flatmask, :].clone()\n\n    src_sorted, src_idx = denoised_masked.sort(dim=-2)\n    ref_sorted, ref_idx = y0_adain_masked.sort(dim=-2)\n\n    denoised_embed[:, flatmask, :] = src_sorted.scatter(dim=-2, index=src_idx, src=ref_sorted.expand(src_sorted.shape))\n\n    if (flatunmask == True).any():\n        denoised_unmasked = denoised_embed[:, flatunmask, :].clone()\n        y0_adain_unmasked = y0_adain_embed[:, flatunmask, :].clone()\n\n        src_sorted, src_idx = denoised_unmasked.sort(dim=-2)\n        ref_sorted, ref_idx = y0_adain_unmasked.sort(dim=-2)\n\n        denoised_embed[:, flatunmask, :] = src_sorted.scatter(dim=-2, index=src_idx, src=ref_sorted.expand(src_sorted.shape))\n\n    if y0_style_pos_mask_edge is not None:\n        denoised_edgemasked = denoised_embed[:, edgemask, :].clone()\n        y0_adain_edgemasked = y0_adain_embed[:, edgemask, :].clone()\n\n        src_sorted, src_idx = denoised_edgemasked.sort(dim=-2)\n        ref_sorted, ref_idx = y0_adain_edgemasked.sort(dim=-2)\n\n        denoised_embed[:, edgemask, :] = src_sorted.scatter(dim=-2, index=src_idx, src=ref_sorted.expand(src_sorted.shape))\n\n    return denoised_embed\n\n\n\n\ndef apply_scattersort(\n    denoised_embed         : torch.Tensor,\n    y0_adain_embed         : torch.Tensor,\n):\n    #src_sorted, src_idx = denoised_embed.cpu().sort(dim=-2)\n    src_idx    = denoised_embed.argsort(dim=-2)\n    ref_sorted = y0_adain_embed.sort(dim=-2)[0]\n\n    denoised_embed.scatter_(dim=-2, index=src_idx, 
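# --- Hedged mini-example of the scattersort primitive used above. It is
# sorted-value transplantation: keep the source's ranking, but swap in the
# reference's sorted values (an exact histogram match when both sides have
# the same number of elements). Tiny 1-D illustration:
import torch

src = torch.tensor([3.0, 1.0, 2.0])
ref = torch.tensor([10.0, 30.0, 20.0])
out = torch.empty_like(src).scatter_(
    -1, src.argsort(dim=-1), ref.sort(dim=-1).values)
assert out.tolist() == [30.0, 10.0, 20.0]   # ref's values, src's ordering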
src=ref_sorted.expand(ref_sorted.shape))\n\n    return denoised_embed\n\ndef apply_scattersort_spatial(\n    denoised_spatial         : torch.Tensor,\n    y0_adain_spatial         : torch.Tensor,\n):\n    denoised_embed = rearrange(denoised_spatial, \"b c h w -> b (h w) c\")\n    y0_adain_embed = rearrange(y0_adain_spatial, \"b c h w -> b (h w) c\")\n    src_sorted, src_idx = denoised_embed.sort(dim=-2)\n    ref_sorted, ref_idx = y0_adain_embed.sort(dim=-2)\n\n    denoised_embed = src_sorted.scatter(dim=-2, index=src_idx, src=ref_sorted.expand(src_sorted.shape))\n    \n    return rearrange(denoised_embed, \"b (h w) c -> b c h w\", h=denoised_spatial.shape[-2], w=denoised_spatial.shape[-1])\n\n\n\n\n\ndef apply_scattersort_spatial(\n    x_spatial : torch.Tensor,\n    y_spatial : torch.Tensor,\n):\n    x_emb = rearrange(x_spatial, \"b c h w -> b (h w) c\")\n    y_emb = rearrange(y_spatial, \"b c h w -> b (h w) c\")\n    \n    x_sorted, x_idx = x_emb.sort(dim=-2)\n    y_sorted, y_idx = y_emb.sort(dim=-2)\n\n    x_emb = x_sorted.scatter(dim=-2, index=x_idx, src=y_sorted.expand(x_sorted.shape))\n    \n    return rearrange(x_emb, \"b (h w) c -> b c h w\", h=x_spatial.shape[-2], w=x_spatial.shape[-1])\n\n\n\n\ndef apply_adain_spatial(\n    x_spatial : torch.Tensor,\n    y_spatial : torch.Tensor,\n):\n    x_emb = rearrange(x_spatial, \"b c h w -> b (h w) c\")\n    y_emb = rearrange(y_spatial, \"b c h w -> b (h w) c\")\n    \n    x_mean = x_emb.mean(-2, keepdim=True)\n    x_std  = x_emb.std (-2, keepdim=True)\n    y_mean = y_emb.mean(-2, keepdim=True)\n    y_std  = y_emb.std (-2, keepdim=True)\n\n    assert (x_std == 0).any() == 0, \"Target tensor has no variance!\"\n    assert (y_std == 0).any() == 0, \"Reference tensor has no variance!\"\n    \n    x_emb_adain = (x_emb - x_mean) / x_std\n    x_emb_adain = (x_emb_adain * y_std) + y_mean\n    \n    return x_emb_adain.reshape_as(x_spatial)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\ndef adain_patchwise(content: torch.Tensor, style: torch.Tensor, sigma: float = 1.0, kernel_size: int = None, eps: float = 1e-5) -> torch.Tensor:\n    # this one is really slow\n    B, C, H, W = content.shape\n    device     = content.device\n    dtype      = content.dtype\n\n    if kernel_size is None:\n        kernel_size = int(2 * math.ceil(3 * sigma) + 1)\n    if kernel_size % 2 == 0:\n        kernel_size += 1\n\n    pad    = kernel_size // 2\n    coords = torch.arange(kernel_size, dtype=torch.float64, device=device) - pad\n    gauss  = torch.exp(-0.5 * (coords / sigma) ** 2)\n    gauss /= gauss.sum()\n    kernel_2d = (gauss[:, None] * gauss[None, :]).to(dtype=dtype)\n\n    weight = kernel_2d.view(1, 1, kernel_size, kernel_size)\n\n    content_padded = F.pad(content, (pad, pad, pad, pad), mode='reflect')\n    style_padded   = F.pad(style,   (pad, pad, pad, pad), mode='reflect')\n    result = torch.zeros_like(content)\n\n    for i in range(H):\n        for j in range(W):\n            c_patch = content_padded[:, :, i:i + kernel_size, j:j + kernel_size]\n            s_patch =   style_padded[:, :, i:i + kernel_size, j:j + kernel_size]\n            w = weight.expand_as(c_patch)\n\n            c_mean =  (c_patch              * w).sum(dim=(-1, -2), keepdim=True)\n            c_std  = ((c_patch - c_mean)**2 * w).sum(dim=(-1, -2), keepdim=True).sqrt() + eps\n            s_mean =  (s_patch              * w).sum(dim=(-1, -2), keepdim=True)\n            s_std  = ((s_patch - s_mean)**2 * w).sum(dim=(-1, -2), keepdim=True).sqrt() + eps\n\n            normed =  (c_patch[:, :, 
pad:pad+1, pad:pad+1] - c_mean) / c_std\n            stylized = normed * s_std + s_mean\n            result[:, :, i, j] = stylized.squeeze(-1).squeeze(-1)\n\n    return result\n\n\ndef adain_patchwise_row_batch(content: torch.Tensor, style: torch.Tensor, sigma: float = 1.0, kernel_size: int = None, eps: float = 1e-5) -> torch.Tensor:\n\n    B, C, H, W = content.shape\n    device, dtype = content.device, content.dtype\n\n    if kernel_size is None:\n        kernel_size = int(2 * math.ceil(3 * sigma) + 1)\n    if kernel_size % 2 == 0:\n        kernel_size += 1\n\n    pad = kernel_size // 2\n    coords = torch.arange(kernel_size, dtype=torch.float64, device=device) - pad\n    gauss = torch.exp(-0.5 * (coords / sigma) ** 2)\n    gauss = (gauss / gauss.sum()).to(dtype)\n    kernel_2d = (gauss[:, None] * gauss[None, :])\n\n    weight = kernel_2d.view(1, 1, kernel_size, kernel_size)\n\n    content_padded = F.pad(content, (pad, pad, pad, pad), mode='reflect')\n    style_padded = F.pad(style, (pad, pad, pad, pad), mode='reflect')\n    result = torch.zeros_like(content)\n\n    for i in range(H):\n        c_row_patches = torch.stack([\n            content_padded[:, :, i:i+kernel_size, j:j+kernel_size]\n            for j in range(W)\n        ], dim=0)  # [W, B, C, k, k]\n\n        s_row_patches = torch.stack([\n            style_padded[:, :, i:i+kernel_size, j:j+kernel_size]\n            for j in range(W)\n        ], dim=0)\n\n        w = weight.expand_as(c_row_patches[0])\n\n        c_mean = (c_row_patches * w).sum(dim=(-1, -2), keepdim=True)\n        c_std  = ((c_row_patches - c_mean) ** 2 * w).sum(dim=(-1, -2), keepdim=True).sqrt() + eps\n        s_mean = (s_row_patches * w).sum(dim=(-1, -2), keepdim=True)\n        s_std  = ((s_row_patches - s_mean) ** 2 * w).sum(dim=(-1, -2), keepdim=True).sqrt() + eps\n\n        center = kernel_size // 2\n        central = c_row_patches[:, :, :, center:center+1, center:center+1]\n        normed = (central - c_mean) / c_std\n        stylized = normed * s_std + s_mean\n\n        result[:, :, i, :] = stylized.squeeze(-1).squeeze(-1).permute(1, 2, 0)  # [B,C,W]\n\n    return result\n\n\n\ndef adain_patchwise_row_batch_med(content: torch.Tensor, style: torch.Tensor, sigma: float = 1.0, kernel_size: int = None, eps: float = 1e-5, mask: torch.Tensor = None, use_median_blur: bool = False, lowpass_weight=1.0, highpass_weight=1.0) -> torch.Tensor:\n    B, C, H, W = content.shape\n    device, dtype = content.device, content.dtype\n\n    if kernel_size is None:\n        kernel_size = int(2 * math.ceil(3 * abs(sigma)) + 1)\n    if kernel_size % 2 == 0:\n        kernel_size += 1\n\n    pad = kernel_size // 2\n\n    content_padded = F.pad(content, (pad, pad, pad, pad), mode='reflect')\n    style_padded = F.pad(style, (pad, pad, pad, pad), mode='reflect')\n    result = torch.zeros_like(content)\n\n    scaling = torch.ones((B, 1, H, W), device=device, dtype=dtype)\n    sigma_scale = torch.ones((H, W), device=device, dtype=torch.float32)\n    if mask is not None:\n        with torch.no_grad():\n            padded_mask = F.pad(mask.float(), (pad, pad, pad, pad), mode=\"reflect\")\n            blurred_mask = F.avg_pool2d(padded_mask, kernel_size=kernel_size, stride=1, padding=pad)\n            blurred_mask = blurred_mask[..., pad:-pad, pad:-pad]\n            edge_proximity = blurred_mask * (1.0 - blurred_mask)\n            scaling = 1.0 - (edge_proximity / 0.25).clamp(0.0, 1.0)\n            sigma_scale = scaling[0, 0]  # assuming single-channel mask broadcasted across B, C\n\n    if 
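# --- Hedged note on the weighted statistics used by the patchwise AdaIN
# variants above: each pixel is renormalized with Gaussian-weighted local
# moments mu = sum(w*p) and sigma = sqrt(sum(w*(p - mu)**2)), where the kernel
# w sums to 1. With uniform weights these reduce to the ordinary (biased)
# mean and std, which makes a quick sanity check possible:
import torch

p = torch.randn(1, 3, 5, 5)
w = torch.full((1, 1, 5, 5), 1.0 / 25)
mu = (p * w).sum(dim=(-1, -2), keepdim=True)
sigma = ((p - mu) ** 2 * w).sum(dim=(-1, -2), keepdim=True).sqrt()
assert torch.allclose(mu, p.mean(dim=(-1, -2), keepdim=True), atol=1e-6)
assert torch.allclose(sigma, p.std(dim=(-1, -2), unbiased=False, keepdim=True), atol=1e-5)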
not use_median_blur:\n        coords = torch.arange(kernel_size, dtype=torch.float64, device=device) - pad\n        base_gauss = torch.exp(-0.5 * (coords / sigma) ** 2)\n        base_gauss = (base_gauss / base_gauss.sum()).to(dtype)\n        gaussian_table = {}\n        for s in sigma_scale.unique():\n            sig = float((sigma * s + eps).clamp(min=1e-3))\n            gauss_local = torch.exp(-0.5 * (coords / sig) ** 2)\n            gauss_local = (gauss_local / gauss_local.sum()).to(dtype)\n            kernel_2d = gauss_local[:, None] * gauss_local[None, :]\n            gaussian_table[s.item()] = kernel_2d\n\n    for i in range(H):\n        row_result = torch.zeros(B, C, W, dtype=dtype, device=device)\n        for j in range(W):\n            c_patch = content_padded[:, :, i:i+kernel_size, j:j+kernel_size]\n            s_patch = style_padded[:, :, i:i+kernel_size, j:j+kernel_size]\n\n            if use_median_blur:\n                # Median blur with residual restoration\n                unfolded_c = c_patch.reshape(B, C, -1)\n                unfolded_s = s_patch.reshape(B, C, -1)\n\n                c_median = unfolded_c.median(dim=-1, keepdim=True).values\n                s_median = unfolded_s.median(dim=-1, keepdim=True).values\n\n                center = kernel_size // 2\n                central = c_patch[:, :, center, center].view(B, C, 1)\n                residual = central - c_median\n                stylized = lowpass_weight * s_median + residual * highpass_weight\n            else:\n                k = gaussian_table[float(sigma_scale[i, j].item())]\n                local_weight = k.view(1, 1, kernel_size, kernel_size).expand(B, C, kernel_size, kernel_size)\n\n                c_mean = (c_patch * local_weight).sum(dim=(-1, -2), keepdim=True)\n                c_std = ((c_patch - c_mean) ** 2 * local_weight).sum(dim=(-1, -2), keepdim=True).sqrt() + eps\n                s_mean = (s_patch * local_weight).sum(dim=(-1, -2), keepdim=True)\n                s_std = ((s_patch - s_mean) ** 2 * local_weight).sum(dim=(-1, -2), keepdim=True).sqrt() + eps\n\n                center = kernel_size // 2\n                central = c_patch[:, :, center:center+1, center:center+1]\n                normed = (central - c_mean) / c_std\n                stylized = normed * s_std + s_mean\n\n            local_scaling = scaling[:, :, i, j].view(B, 1, 1)\n            stylized = central * (1 - local_scaling) + stylized * local_scaling\n\n            row_result[:, :, j] = stylized.squeeze(-1)\n        result[:, :, i, :] = row_result\n\n    return result\n\n\n\n\n\n\n\ndef weighted_mix_n(tensor_list, weight_list, dim=-1, offset=0):\n    assert all(t.shape == tensor_list[0].shape for t in tensor_list)\n    assert len(tensor_list) == len(weight_list)\n\n    total_weight = sum(weight_list)\n    ratios = [w / total_weight for w in weight_list]\n\n    length = tensor_list[0].shape[dim]\n    idx = torch.arange(length)\n\n    # Create a bin index tensor based on weighted slots\n    float_bins = (idx + offset) * len(ratios) / length\n    bin_idx = torch.floor(float_bins).long() % len(ratios)\n\n    # Allocate slots based on ratio using a cyclic pattern\n    counters = [0.0 for _ in ratios]\n    slots = torch.empty_like(idx)\n\n    for i in range(length):\n        # Assign to the group that's most under-allocated\n        expected = [r * (i + 1) for r in ratios]\n        errors = [expected[j] - counters[j] for j in range(len(ratios))]\n        k = max(range(len(errors)), key=lambda j: errors[j])\n        slots[i] = k\n    
    counters[k] += 1\n\n    # Create mask for each tensor\n    out = tensor_list[0].clone()\n    for i, tensor in enumerate(tensor_list):\n        mask = slots == i\n        while mask.dim() < tensor.dim():\n            mask = mask.unsqueeze(0)\n        mask = mask.expand_as(tensor)\n        out = torch.where(mask, tensor, out)\n    \n    return out\n\n\n\n\n\n\nfrom torch import vmap\n\nBLOCK_NAMES = {\"double_blocks\", \"single_blocks\", \"up_blocks\", \"middle_blocks\", \"down_blocks\", \"input_blocks\", \"output_blocks\"}\n\nDEFAULT_BLOCK_WEIGHTS_MMDIT = {\n    \"attn_norm\"    : 0.0,\n    \"attn_norm_mod\": 0.0,\n    \"attn\"         : 1.0,\n    \"attn_gated\"   : 0.0,\n    \"attn_res\"     : 1.0,\n    \"ff_norm\"      : 0.0,\n    \"ff_norm_mod\"  : 0.0,\n    \"ff\"           : 1.0,\n    \"ff_gated\"     : 0.0,\n    \"ff_res\"       : 1.0,\n    \n    \"h_tile\"       : 8,\n    \"w_tile\"       : 8,\n}\n\nDEFAULT_ATTN_WEIGHTS_MMDIT = {\n    \"q_proj\": 0.0,\n    \"k_proj\": 0.0,\n    \"v_proj\": 1.0,\n    \"q_norm\": 0.0,\n    \"k_norm\": 0.0,\n    \"out\"   : 1.0,\n    \n    \"h_tile\": 8,\n    \"w_tile\": 8,\n}\n\nDEFAULT_BASE_WEIGHTS_MMDIT = {\n    \"proj_in\" : 1.0,\n    \"proj_out\": 1.0,\n    \n    \"h_tile\"  : 8,\n    \"w_tile\"  : 8,\n}\n\nclass Stylizer:\n    buffer = {}\n    \n    CLS_WCT = StyleWCT()\n    \n    CLS_WCT2 = WaveletStyleWCT()\n    \n    def __init__(self, dtype=torch.float64, device=torch.device(\"cuda\")):\n        self.dtype = dtype\n        self.device = device\n        self.mask  = [None]\n        self.apply_to = [\"\"]\n        self.method = [\"passthrough\"]\n        self.h_tile = [-1]\n        self.w_tile = [-1]\n        \n        self.w_len   = 0\n        self.h_len   = 0\n        self.img_len = 0\n        \n        self.IMG_1ST = True\n        self.HEADS = 0\n        self.KONTEXT = 0\n    def set_mode(self, mode):\n        self.method = [mode] #[getattr(self, mode)]\n    \n    def set_weights(self, **kwargs):\n        for k, v in kwargs.items():\n            if hasattr(self, k):\n                setattr(self, k, [v])\n    \n    def set_weights_recursive(self, **kwargs):\n        for name, val in kwargs.items():\n            if hasattr(self, name):\n                setattr(self, name, [val])\n\n        for attr_name, attr_val in vars(self).items():\n            if isinstance(attr_val, Stylizer):\n                attr_val.set_weights_recursive(**kwargs)\n\n        for list_name in BLOCK_NAMES:\n            lst = getattr(self, list_name, None)\n            if isinstance(lst, list):\n                for element in lst:\n                    if isinstance(element, Stylizer):\n                        element.set_weights_recursive(**kwargs)\n    \n    def merge_weights(self, other):\n        def recursive_merge(a, b, path):\n            if isinstance(a, list) and isinstance(b, list):\n                if path in BLOCK_NAMES:\n                    out = []\n                    for i in range(max(len(a), len(b))):\n                        if i < len(a) and i < len(b):\n                            out.append(recursive_merge(a[i], b[i], path=None))\n                        elif i < len(a):\n                            out.append(a[i])\n                        else:\n                            out.append(b[i])\n                    return out\n                return a + b\n\n            if isinstance(a, dict) and isinstance(b, dict):\n                merged = dict(a)\n                for k, v_b in b.items():\n                    if k in merged:\n                     
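# --- Hedged mini-example for weighted_mix_n above (values arbitrary). The
# allocation loop assigns each slot along `dim` to whichever input is most
# under-represented so far, i.e. largest-remainder style interleaving; a 2:1
# weighting therefore draws about two slots from the first tensor for every
# one from the second:
import torch

mixed = weighted_mix_n([torch.zeros(6), torch.ones(6)], [2.0, 1.0], dim=-1)
assert mixed.shape == (6,)
assert int(mixed.sum().item()) == 2   # 2 of 6 slots taken from the second tensor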
   merged[k] = recursive_merge(merged[k], v_b, path=None)\n                    else:\n                        merged[k] = v_b\n                return merged\n\n            if hasattr(a, \"__dict__\") and hasattr(b, \"__dict__\"):\n                for attr, val_b in vars(b).items():\n                    val_a = getattr(a, attr, None)\n                    if val_a is not None:\n                        setattr(a, attr, recursive_merge(val_a, val_b, path=attr))\n                    else:\n                        setattr(a, attr, val_b)\n                return a\n            return b\n\n        for attr in vars(self):\n            if attr in BLOCK_NAMES:\n                merged = recursive_merge(getattr(self, attr), getattr(other, attr, []), path=attr)\n            elif hasattr(other, attr):\n                merged = recursive_merge(getattr(self, attr), getattr(other, attr), path=attr)\n            else:\n                continue\n            setattr(self, attr, merged)\n    \n    def set_len(self, h_len, w_len, img_slice, txt_slice, HEADS):\n        self.h_len  = h_len\n        self.w_len  = w_len\n        self.img_slice = img_slice\n        self.txt_slice = txt_slice\n        self.img_len = h_len * w_len\n        self.HEADS = HEADS\n\n    @staticmethod\n    def middle_slice(length, weight):\n        \"\"\"\n        Returns a slice object that selects the middle `weight` fraction of a dimension.\n        Example: weight=1.0 → full slice; weight=0.5 → middle 50%\n        \"\"\"\n        if weight >= 1.0:\n            return slice(None)\n        wr = int((length * (1 - weight)) // 2)\n        return slice(wr, -wr if wr > 0 else None)\n\n    @staticmethod\n    def get_outer_slice(x, weight):\n        if weight >= 0.0:\n            return x\n        length = x.shape[-2]\n        wr = int((length * (1 - (-weight))) // 2)\n        \n        return torch.cat([x[...,:wr,:], x[...,-wr:,:]], dim=-2)\n\n    @staticmethod\n    def restore_outer_slice(x, x_outer, weight):\n        if weight >= 0.0:\n            return x\n        length = x.shape[-2]\n        wr = int((length * (1 - (-weight))) // 2)\n        \n        x[...,:wr,:]  = x_outer[...,:wr,:]\n        x[...,-wr:,:] = x_outer[...,-wr:,:]\n        return x\n\n    def __call__(self, x, attr):\n        if x.shape[0] == 1 and not self.KONTEXT:\n            return x\n        \n        weight_list = getattr(self, attr)\n        weights_all_zero = all(weight == 0.0 for weight in weight_list)\n        if weights_all_zero:\n            return x\n        \n        #self.HEADS=24\n        #x_ndim = x.ndim\n        #if x_ndim == 3:\n        #    B, HW, C = x.shape\n        #    if x.shape[-2] != self.HEADS and self.HEADS != 0:\n        #        x = x.reshape(B,self.HEADS,HW,-1)\n        \n        HEAD_DIM = x.shape[1]\n        if HEAD_DIM == self.HEADS:\n            B, HEAD_DIM, HW, C = x.shape\n            x = x.reshape(B, HW, C*HEAD_DIM)\n            \n        if hasattr(self, \"KONTEXT\") and self.KONTEXT == 1:\n            x = x.reshape(2, x.shape[1] // 2, x.shape[2])\n        \n        txt_slice, img_slice, ktx_slice = self.txt_slice, self.img_slice, None\n        if hasattr(self, \"KONTEXT\") and self.KONTEXT == 2:\n            ktx_slice = self.img_slice # slice(2 * self.img_slice.start, None)\n            img_slice = slice(2 * self.img_slice.start, self.img_slice.start)\n            txt_slice = slice(None, 2 * self.txt_slice.stop)\n        \n        weights_all_one         = all(weight == 1.0           for weight in weight_list)\n        
methods_all_scattersort = all(name   == \"scattersort\" for name   in self.method)\n        masks_all_none = all(mask is None for mask in self.mask)\n        \n        if weights_all_one and methods_all_scattersort and len(weight_list) > 1 and masks_all_none:\n            buf = Stylizer.buffer\n            buf['src_idx']   = x[0:1].argsort(dim=-2)\n            buf['ref_sorted'], buf['ref_idx'] = x[1:].reshape(1, -1, x.shape[-1]).sort(dim=-2)\n            buf['src'] = buf['ref_sorted'][:,::len(weight_list)].expand_as(buf['src_idx'])    #            interleave_stride = len(weight_list)\n            \n            x[0:1] = x[0:1].scatter_(dim=-2, index=buf['src_idx'], src=buf['src'],)\n        \n        else:\n            for i, (weight, mask) in enumerate(zip(weight_list, self.mask)):\n                if mask is not None:\n                    x01 = x[0:1].clone()\n                slc = Stylizer.middle_slice(x.shape[-2], weight)\n                #slc = slice(None)\n                    \n                txt_method_name = self.method[i].removeprefix(\"tiled_\")\n                txt_method = getattr(self, txt_method_name)\n                \n                method_name = self.method[i].removeprefix(\"tiled_\") if self.img_len > x.shape[-2] or self.h_len < 0 else self.method[i]\n                method = getattr(self, method_name)\n                apply_to = self.apply_to[i]\n                if   weight == 0.0:\n                    continue\n                else: # if weight == 1.0:\n                    if weight > 0 and weight < 1:\n                        x_clone = x.clone()\n                    if   self.img_len == x.shape[-2]  or apply_to == \"img+txt\" or self.h_len < 0:\n                        x = method(x, idx=i+1, slc=slc)\n                    elif   self.img_len < x.shape[-2]:\n                        if \"img\" in apply_to:\n                            x[...,img_slice,:] = method(x[...,img_slice,:], idx=i+1, slc=slc)\n                            #if ktx_slice is not None:\n                            #    x[...,ktx_slice,:] = method(x[...,ktx_slice,:], idx=i+1)\n                            #x[:,:self.img_len,:] = method(x[:,:self.img_len,:], idx=i+1)\n                        if \"txt\" in apply_to:\n                            x[...,txt_slice,:] = txt_method(x[...,txt_slice,:], idx=i+1, slc=slc)\n                            #x[:,self.img_len:,:] = method(x[:,self.img_len:,:], idx=i+1)\n                        if not \"img\" in apply_to and not \"txt\" in apply_to:\n                            pass\n                    else:\n                        x = method(x, idx=i+1, slc=slc)\n                    if weight > 0 and weight < 1 and txt_method_name != \"scattersort\":\n                        x = torch.lerp(x_clone, x, weight)\n                #else:\n                #    x = torch.lerp(x, method(x.clone(), idx=i+1), weight)\n                \n                if mask is not None:\n                    x[0:1,...,img_slice,:] = torch.lerp(x01[...,img_slice,:], x[0:1,...,img_slice,:], mask.view(1, -1, 1))  \n                    if ktx_slice is not None:\n                        x[0:1,...,ktx_slice,:] = torch.lerp(x01[...,ktx_slice,:], x[0:1,...,ktx_slice,:], mask.view(1, -1, 1))  \n                    #x[0:1,:self.img_len] = torch.lerp(x01[:,:self.img_len], x[0:1,:self.img_len], mask.view(1, -1, 1))\n        \n        #if x_ndim == 3:\n        #    return x.view(B,HW,C)\n        if hasattr(self, \"KONTEXT\") and  self.KONTEXT == 1:\n            x = x.reshape(1, x.shape[1] * 2, x.shape[2])\n   
     \n        if HEAD_DIM == self.HEADS:\n            return x.reshape(B, HEAD_DIM, HW, C)\n        else:\n            return x\n\n\n\n    def WCT(self, x, idx=1):\n        Stylizer.CLS_WCT.set(x[idx:idx+1])\n        x[0:1] = Stylizer.CLS_WCT.get(x[0:1])\n        return x\n    \n    def WCT2(self, x, idx=1):\n        Stylizer.CLS_WCT2.set(x[idx:idx+1], self.h_len, self.w_len)\n        x[0:1] = Stylizer.CLS_WCT2.get(x[0:1], self.h_len, self.w_len)\n        return x\n\n    @staticmethod\n    def AdaIN_(x, y, eps: float = 1e-7) -> torch.Tensor:\n        mean_c = x.mean(-2, keepdim=True)\n        std_c  = x.std (-2, keepdim=True).add_(eps)  # in-place add\n        mean_s = y.mean  (-2, keepdim=True)\n        std_s  = y.std   (-2, keepdim=True).add_(eps)\n        x.sub_(mean_c).div_(std_c).mul_(std_s).add_(mean_s)  # in-place chain\n        return x\n\n    def AdaIN(self, x, idx=1, eps: float = 1e-7) -> torch.Tensor:\n        mean_c = x[0:1].mean(-2, keepdim=True)\n        std_c  = x[0:1].std (-2, keepdim=True).add_(eps)  # in-place add\n        mean_s = x[idx:idx+1].mean  (-2, keepdim=True)\n        std_s  = x[idx:idx+1].std   (-2, keepdim=True).add_(eps)\n        x[0:1].sub_(mean_c).div_(std_c).mul_(std_s).add_(mean_s)  # in-place chain\n        return x\n\n    def injection(self, x:torch.Tensor, idx=1) -> torch.Tensor:\n        x[0:1] = x[idx:idx+1]\n        return x\n    \n    @staticmethod\n    def injection_(x:torch.Tensor, y:torch.Tensor) -> torch.Tensor:\n        return y\n    \n    @staticmethod\n    def passthrough(x:torch.Tensor, idx=1) -> torch.Tensor:\n        return x\n    \n    @staticmethod\n    def decompose_magnitude_direction(x, dim=-1, eps=1e-8):\n        magnitude = x.norm(p=2, dim=dim, keepdim=True)\n        direction = x / (magnitude + eps)\n        return magnitude, direction\n\n    @staticmethod\n    def scattersort_dir_(x, y, dim=-2):\n        #buf = Stylizer.buffer\n        #buf['src_sorted'], buf['src_idx'] = x.sort(dim=-2)\n        #buf['ref_sorted'], buf['ref_idx'] = y.sort(dim=-2)\n        #mag, _ = Stylizer.decompose_magnitude_direction(buf['src_sorted'], dim)\n        #_, dir = Stylizer.decompose_magnitude_direction(buf['ref_sorted'], dim)\n        mag, _ = Stylizer.decompose_magnitude_direction(x.to(torch.float64), dim)\n        \n        buf = Stylizer.buffer\n        buf['src_idx']                    = x.argsort(dim=-2)\n        buf['ref_sorted'], buf['ref_idx'] = y   .sort(dim=-2)\n        x.scatter_(dim=-2, index=buf['src_idx'], src=buf['ref_sorted'].expand_as(buf['src_idx']))\n        \n        \n        _, dir = Stylizer.decompose_magnitude_direction(x.to(torch.float64), dim)\n        \n        return (mag * dir).to(x)\n\n\n    @staticmethod\n    def scattersort_dir2_(x, y, dim=-2):\n        #buf = Stylizer.buffer\n        #buf['src_sorted'], buf['src_idx'] = x.sort(dim=-2)\n        #buf['ref_sorted'], buf['ref_idx'] = y.sort(dim=-2)\n        #mag, _ = Stylizer.decompose_magnitude_direction(buf['src_sorted'], dim)\n        #_, dir = Stylizer.decompose_magnitude_direction(buf['ref_sorted'], dim)\n        \n        \n        buf = Stylizer.buffer\n        buf['src_sorted'], buf['src_idx'] = x.sort(dim=dim)\n        buf['ref_sorted'], buf['ref_idx'] = y.sort(dim=dim)\n        \n\n\n\n        buf['x_sub'], buf['x_sub_idx'] = buf['src_sorted'].sort(dim=-1)\n        buf['y_sub'], buf['y_sub_idx'] = buf['ref_sorted'].sort(dim=-1)\n        \n        mag, _ = Stylizer.decompose_magnitude_direction(buf['x_sub'].to(torch.float64), -1)\n        _, dir = 
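# --- Hedged check of the magnitude/direction split used by the directional
# scattersort variants above: x factors as magnitude * direction up to the
# eps guarding the division, and direction is unit-norm along `dim`:
import torch

v = torch.randn(4, 16, dtype=torch.float64)
mag, direction = Stylizer.decompose_magnitude_direction(v, dim=-1)
assert torch.allclose(mag * direction, v, atol=1e-6)
assert torch.allclose(direction.norm(dim=-1), torch.ones(4, dtype=torch.float64), atol=1e-6)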
Stylizer.decompose_magnitude_direction(buf['y_sub'].to(torch.float64), -1)\n        \n        buf['y_sub'] = (mag * dir).to(x)\n        \n        buf['ref_sorted'].scatter_(dim=-1, index=buf['y_sub_idx'], src=buf['y_sub'].expand_as(buf['y_sub_idx']))\n\n\n\n        mag, _ = Stylizer.decompose_magnitude_direction(buf['src_sorted'].to(torch.float64), dim)\n        _, dir = Stylizer.decompose_magnitude_direction(buf['ref_sorted'].to(torch.float64), dim)\n        \n        buf['ref_sorted'] = (mag * dir).to(x)\n        \n        x.scatter_(dim=dim, index=buf['src_idx'], src=buf['ref_sorted'].expand_as(buf['src_idx']))\n\n\n        return x\n\n\n    @staticmethod\n    def scattersort_dir(x, idx=1):\n        x[0:1] = Stylizer.scattersort_dir_(x[0:1], x[idx:idx+1])\n        return x\n    \n\n    @staticmethod\n    def scattersort_dir2(x, idx=1):\n        x[0:1] = Stylizer.scattersort_dir2_(x[0:1], x[idx:idx+1])\n        return x\n\n    @staticmethod\n    def scattersort_(x, y, slc=slice(None)):\n        buf = Stylizer.buffer\n        buf['src_idx']                    = x.argsort(dim=-2)\n        buf['ref_sorted'], buf['ref_idx'] = y   .sort(dim=-2)\n\n        return x.scatter_(dim=-2, index=buf['src_idx'][...,slc,:], src=buf['ref_sorted'][...,slc,:].expand_as(buf['src_idx'][...,slc,:]))\n    \n\n    @staticmethod\n    def scattersort_double(x, y):\n        buf = Stylizer.buffer\n        buf['src_sorted'], buf['src_idx'] = x.sort(dim=-2)\n        buf['ref_sorted'], buf['ref_idx'] = y.sort(dim=-2)\n        \n        buf['x_sub_idx']               = buf['src_sorted'].argsort(dim=-1)\n        buf['y_sub'], buf['y_sub_idx'] = buf['ref_sorted'].sort(dim=-1)\n        \n        x.scatter_(dim=-1, index=buf['x_sub_idx'], src=buf['y_sub'].expand_as(buf['x_sub_idx']))\n\n        return x.scatter_(dim=-2, index=buf['src_idx'], src=buf['ref_sorted'].expand_as(buf['src_idx']))\n    \n    \n    def scattersort_aoeu(self, x, idx=1, slc=slice(None)):\n        x[0:1] = Stylizer.scattersort_(x[0:1], x[idx:idx+1], slc)\n        return x\n    \n    def scattersort(self, x, idx=1, slc=slice(None)):\n        if x.shape[0] != 2:\n            x[0:1] = Stylizer.scattersort_(x[0:1], x[idx:idx+1], slc)\n            return x\n        \n        buf = Stylizer.buffer\n        buf['sorted'], buf['idx'] = x.sort(dim=-2)\n\n        return x.scatter_(dim=-2, index=buf['idx'][0:1][...,slc,:], src=buf['sorted'][1:2][...,slc,:].expand_as(buf['idx'][0:1][...,slc,:]))\n    \n\n    \n    \n    def tiled_scattersort(self, x, idx=1): #, h_tile=None, w_tile=None):\n        #if HDModel.RECON_MODE:\n        #    return denoised_embed\n        #den   = x[0:1]      [:,:self.img_len,:].view(-1, 2560, self.h_len, self.w_len)\n        #style = x[idx:idx+1][:,:self.img_len,:].view(-1, 2560, self.h_len, self.w_len)\n        #h_tile = self.h_tile[idx-1] if h_tile is None else h_tile\n        #w_tile = self.w_tile[idx-1] if w_tile is None else w_tile\n        \n        C = x.shape[-1]\n        den   = x[0:1]      [:,self.img_slice,:].reshape(-1, C, self.h_len, self.w_len)\n        style = x[idx:idx+1][:,self.img_slice,:].reshape(-1, C, self.h_len, self.w_len)\n        \n        tiles     = Stylizer.get_tiles_as_strided(den,   self.h_tile[idx-1], self.w_tile[idx-1])\n        ref_tile  = Stylizer.get_tiles_as_strided(style, self.h_tile[idx-1], self.w_tile[idx-1])\n\n        # rearrange for vmap to run on (nH, nW) ( as outer axes)\n        tiles_v    = tiles   .permute(2, 3, 0, 1, 4, 5) # (nH, nW, B, C, tile_h, tile_w)\n        ref_tile_v = 
ref_tile.permute(2, 3, 0, 1, 4, 5) # (nH, nW, B, C, tile_h, tile_w)\n\n        # vmap over spatial dims (nH, nW)... num of tiles high, num of tiles wide\n        vmap2   = torch.vmap(torch.vmap(Stylizer.apply_scattersort_per_tile, in_dims=0), in_dims=0)\n        result  = vmap2(tiles_v, ref_tile_v)  # (nH, nW, B, C, tile_h, tile_w)\n\n        # --> (B, C, nH, nW, tile_h, tile_w)\n        result = result.permute(2, 3, 0, 1, 4, 5)  # (B, C, nH, nW, tile_h, tile_w)\n\n        # in-place copy, works if result has same shape/strides as tiles... overwrites same mem location \"content\" is using\n        tiles.copy_(result)\n\n        return x\n    \n    \n    def tiled_AdaIN(self, x, idx=1):\n        #if HDModel.RECON_MODE:\n        #    return denoised_embed\n        #den   = x[0:1]      [:,:self.img_len,:].view(-1, 2560, self.h_len, self.w_len)\n        #style = x[idx:idx+1][:,:self.img_len,:].view(-1, 2560, self.h_len, self.w_len)\n        C = x.shape[-1]\n        den   = x[0:1]      [:,self.img_slice,:].reshape(-1, C, self.h_len, self.w_len)\n        style = x[idx:idx+1][:,self.img_slice,:].reshape(-1, C, self.h_len, self.w_len)\n        \n        tiles     = Stylizer.get_tiles_as_strided(den,   self.h_tile[idx-1], self.w_tile[idx-1])\n        ref_tile  = Stylizer.get_tiles_as_strided(style, self.h_tile[idx-1], self.w_tile[idx-1])\n\n        # rearrange for vmap to run on (nH, nW) (as outer axes)\n        tiles_v    = tiles   .permute(2, 3, 0, 1, 4, 5) # (nH, nW, B, C, tile_h, tile_w)\n        ref_tile_v = ref_tile.permute(2, 3, 0, 1, 4, 5) # (nH, nW, B, C, tile_h, tile_w)\n\n        # vmap over spatial dims (nH, nW)... num of tiles high, num of tiles wide\n        vmap2   = torch.vmap(torch.vmap(Stylizer.apply_AdaIN_per_tile, in_dims=0), in_dims=0)\n        result  = vmap2(tiles_v, ref_tile_v)  # (nH, nW, B, C, tile_h, tile_w)\n\n        # --> (B, C, nH, nW, tile_h, tile_w)\n        result = result.permute(2, 3, 0, 1, 4, 5)  # (B, C, nH, nW, tile_h, tile_w)\n\n        # in-place copy, works if result has same shape/strides as tiles... 
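\n        # (note: as_strided hands back views into den; as long as the reshape above returned a view\n        # of x, this write-through is what actually updates x in place)\n        # ... 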
overwrites same mem location \"content\" is using\n        tiles.copy_(result)\n\n        return x\n    \n    \n    @staticmethod\n    def get_tiles_as_strided(x, tile_h, tile_w):\n        B, C, H, W = x.shape\n        stride = x.stride()\n        nH = H // tile_h\n        nW = W // tile_w\n\n        tiles = x.as_strided(\n            size=(B, C, nH, nW, tile_h, tile_w),\n            stride=(stride[0], stride[1], stride[2] * tile_h, stride[3] * tile_w, stride[2], stride[3])\n        )\n        return tiles  # shape: (B, C, nH, nW, tile_h, tile_w)\n\n    @staticmethod\n    def apply_scattersort_per_tile(tile, ref_tile):\n        flat     = tile    .flatten(-2, -1)\n        ref_flat = ref_tile.flatten(-2, -1)\n\n        sorted_ref, _ = ref_flat  .sort(dim=-1)\n        src_sorted, src_idx = flat.sort(dim=-1)\n        \n        out = flat.scatter(dim=-1, index=src_idx, src=sorted_ref)\n        return out.view_as(tile)\n\n    @staticmethod\n    def apply_AdaIN_per_tile(tile, ref_tile, eps: float = 1e-7):\n        mean_c = tile.mean(-2, keepdim=True)\n        std_c  = tile.std (-2, keepdim=True).add_(eps)  # in-place add\n        mean_s = ref_tile.mean  (-2, keepdim=True)\n        std_s  = ref_tile.std   (-2, keepdim=True).add_(eps)\n        tile.sub_(mean_c).div_(std_c).mul_(std_s).add_(mean_s)  # in-place chain\n        return tile\n\nclass StyleMMDiT_Attn(Stylizer):\n    def __init__(self, mode):\n        super().__init__()\n        \n        self.q_proj = [0.0]\n        self.k_proj = [0.0]\n        self.v_proj = [0.0]\n\n        self.q_norm = [0.0]\n        self.k_norm = [0.0]\n        \n        self.out    = [0.0]\n\nclass StyleMMDiT_FF(Stylizer): # these hit img or joint only, never txt\n    def __init__(self, mode):\n        super().__init__()\n    \n        self.ff_1      = [0.0]\n        self.ff_1_silu = [0.0]\n        self.ff_3      = [0.0]\n        self.ff_13     = [0.0]\n        self.ff_2      = [0.0]\n        \nclass StyleMMDiT_MoE(Stylizer): # these hit img or joint only, never txt\n    def __init__(self, mode):\n        super().__init__()\n        \n        self.FF_SHARED   = StyleMMDiT_FF(mode)\n        self.FF_SEPARATE = StyleMMDiT_FF(mode)\n        \n        self.shared      = [0.0]\n        self.gate        = [False]\n        self.topk_weight = [0.0]\n\n        self.separate    = [0.0]\n        self.sum         = [0.0]\n        self.out         = [0.0]\n\n\n\n\n\nclass StyleMMDiT_SubBlock(Stylizer):\n    def __init__(self, mode):\n        super().__init__()\n        \n        self.ATTN = StyleMMDiT_Attn(mode)  # options for attn itself: qkv proj, qk norm, attn out\n\n        self.attn_norm     = [0.0]\n        self.attn_norm_mod = [0.0]\n        self.attn          = [0.0]\n        self.attn_gated    = [0.0]\n        self.attn_res      = [0.0]\n        \n        self.ff_norm       = [0.0]\n        self.ff_norm_mod   = [0.0]\n        self.ff            = [0.0]\n        self.ff_gated      = [0.0]\n        self.ff_res        = [0.0]\n        \n        self.mask = [None]\n        \n    def set_len(self, h_len, w_len, img_slice, txt_slice, HEADS):\n        super().set_len(h_len, w_len, img_slice, txt_slice, HEADS)\n        self.ATTN.set_len(h_len, w_len, img_slice, txt_slice, HEADS)\n\nclass StyleMMDiT_IMG_Block(StyleMMDiT_SubBlock):  # img or joint\n    def __init__(self, mode):\n        super().__init__(mode)\n        self.FF = StyleMMDiT_MoE(mode)  # options for MoE if img or joint\n    \n    def set_len(self, h_len, w_len, img_slice, txt_slice, HEADS):\n        
super().set_len(h_len, w_len, img_slice, txt_slice, HEADS)\n        self.FF.set_len(h_len, w_len, img_slice, txt_slice, HEADS)\n        \nclass StyleMMDiT_TXT_Block(StyleMMDiT_SubBlock):   # txt only\n    def __init__(self, mode):\n        super().__init__(mode)\n        self.FF  = StyleMMDiT_FF(mode)   # options for FF within MoE for img or joint; or for txt alone\n    \n    def set_len(self, h_len, w_len, img_slice, txt_slice, HEADS):\n        super().set_len(h_len, w_len, img_slice, txt_slice, HEADS)\n        self.FF.set_len(h_len, w_len, img_slice, txt_slice, HEADS)\n\n\n\n\n\nclass StyleMMDiT_BaseBlock:\n    def __init__(self, mode=\"passthrough\"):\n\n        self.img = StyleMMDiT_IMG_Block(mode)\n        self.txt = StyleMMDiT_TXT_Block(mode)\n        \n        self.mask      = [None]\n        self.attn_mask = [None]\n    \n    def set_len(self, h_len, w_len, img_slice, txt_slice, HEADS):\n        self.h_len  = h_len\n        self.w_len  = w_len\n        self.img_len = h_len * w_len\n        \n        self.img_slice = img_slice\n        self.txt_slice = txt_slice\n        self.HEADS = HEADS\n        \n        self.img.set_len(h_len, w_len, img_slice, txt_slice, HEADS)\n        self.txt.set_len(-1, -1, img_slice, txt_slice, HEADS)\n        \n        for i, mask in enumerate(self.mask):\n            if mask is not None and mask.ndim > 1:\n                self.mask[i] = F.interpolate(mask.unsqueeze(0), size=(h_len, w_len)).flatten().to(torch.bfloat16).cuda()\n            self.img.mask = self.mask\n        for i, mask in enumerate(self.attn_mask):\n            if mask is not None and mask.ndim > 1:\n                self.attn_mask[i] = F.interpolate(mask.unsqueeze(0), size=(h_len, w_len)).flatten().to(torch.bfloat16).cuda()\n            self.img.ATTN.mask = self.attn_mask      \n\nclass StyleMMDiT_DoubleBlock(StyleMMDiT_BaseBlock):\n    def __init__(self, mode=\"passthrough\"):\n        super().__init__(mode)\n        self.txt = StyleMMDiT_TXT_Block(mode)\n    \n    def set_len(self, h_len, w_len, img_slice, txt_slice, HEADS):\n        super().set_len(h_len, w_len, img_slice, txt_slice, HEADS)\n        self.txt.set_len(-1, -1, img_slice, txt_slice, HEADS)\n\nclass StyleMMDiT_SingleBlock(StyleMMDiT_BaseBlock):\n    def __init__(self, mode=\"passthrough\"):\n        super().__init__(mode)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nclass StyleUNet_Resample(Stylizer):\n    def __init__(self, mode):\n        super().__init__()\n        self.conv = [0.0]\n\nclass StyleUNet_Attn(Stylizer):\n    def __init__(self, mode):\n        super().__init__()\n        self.q_proj = [0.0]\n        self.k_proj = [0.0]\n        self.v_proj = [0.0]\n        self.out    = [0.0]\n\nclass StyleUNet_FF(Stylizer):\n    def __init__(self, mode):\n        super().__init__()\n        self.proj   = [0.0]\n        self.geglu  = [0.0]\n        self.linear = [0.0]\n        \nclass StyleUNet_TransformerBlock(Stylizer): \n    def __init__(self, mode):\n        super().__init__()\n        \n        self.ATTN1 = StyleUNet_Attn(mode)  # self-attn\n        self.FF    = StyleUNet_FF  (mode)  \n        self.ATTN2 = StyleUNet_Attn(mode)  # cross-attn\n\n        self.self_attn  = [0.0]\n        self.ff         = [0.0]\n        self.cross_attn = [0.0]\n        \n        self.self_attn_res  = [0.0]\n        self.cross_attn_res = [0.0]\n        self.ff_res = [0.0]\n        \n        self.norm1 = [0.0]\n        self.norm2 = [0.0]\n        self.norm3 = [0.0]\n        \n    def set_len(self, h_len, w_len, img_slice, 
txt_slice, HEADS):\n        super().set_len(h_len, w_len, img_slice, txt_slice, HEADS)\n        self.ATTN1.set_len(h_len, w_len, img_slice, txt_slice, HEADS)\n        self.ATTN2.set_len(h_len, w_len, img_slice, txt_slice, HEADS)\n\nclass StyleUNet_SpatialTransformer(Stylizer): \n    def __init__(self, mode):\n        super().__init__()\n        \n        self.TFMR = StyleUNet_TransformerBlock(mode)\n\n        self.spatial_norm_in     = [0.0]\n        self.spatial_proj_in     = [0.0]\n        self.spatial_transformer_block = [0.0]\n        self.spatial_transformer = [0.0]\n        self.spatial_proj_out    = [0.0]\n        self.spatial_res         = [0.0]\n        \n    def set_len(self, h_len, w_len, img_slice, txt_slice, HEADS):\n        super().set_len(h_len, w_len, img_slice, txt_slice, HEADS)\n        self.TFMR.set_len(h_len, w_len, img_slice, txt_slice, HEADS)\n\nclass StyleUNet_ResBlock(Stylizer):\n    def __init__(self, mode):\n        super().__init__()\n\n        self.in_norm    = [0.0]\n        self.in_silu    = [0.0]\n        self.in_conv    = [0.0]\n\n        self.emb_silu   = [0.0]\n        self.emb_linear = [0.0]\n        self.emb_res    = [0.0]\n\n        self.out_norm   = [0.0]\n        self.out_silu   = [0.0]\n        self.out_conv   = [0.0]\n        \n        self.residual   = [0.0]\n\n\nclass StyleUNet_BaseBlock(Stylizer):\n    def __init__(self, mode=\"passthrough\"):\n\n        self.resample_block = StyleUNet_Resample(mode)\n        self.res_block      = StyleUNet_ResBlock(mode)\n        self.spatial_block  = StyleUNet_SpatialTransformer(mode)\n        \n        self.resample = [0.0]\n        self.res      = [0.0]\n        self.spatial  = [0.0]\n        \n        self.mask      = [None]\n        self.attn_mask = [None]\n        \n        self.KONTEXT = 0\n\n    \n    def set_len(self, h_len, w_len, img_slice, txt_slice, HEADS):\n        self.h_len  = h_len\n        self.w_len  = w_len\n        self.img_len = h_len * w_len\n        \n        self.img_slice = img_slice\n        self.txt_slice = txt_slice\n        self.HEADS = HEADS\n        \n        self.resample_block.set_len(h_len, w_len, img_slice, txt_slice, HEADS)\n        self.res_block     .set_len(h_len, w_len, img_slice, txt_slice, HEADS)\n        self.spatial_block .set_len(h_len, w_len, img_slice, txt_slice, HEADS)\n        \n        for i, mask in enumerate(self.mask):\n            if mask is not None and mask.ndim > 1:\n                self.mask[i] = F.interpolate(mask.unsqueeze(0), size=(h_len, w_len)).flatten().to(torch.bfloat16).cuda()\n            self.resample_block.mask = self.mask\n            self.res_block.mask      = self.mask\n            self.spatial_block.mask  = self.mask\n            self.spatial_block.TFMR.mask  = self.mask\n            \n        for i, mask in enumerate(self.attn_mask):\n            if mask is not None and mask.ndim > 1:\n                self.attn_mask[i] = F.interpolate(mask.unsqueeze(0), size=(h_len, w_len)).flatten().to(torch.bfloat16).cuda()\n            self.spatial_block.TFMR.ATTN1.mask = self.attn_mask     \n            \n    def __call__(self, x, attr):\n        B, C, H, W = x.shape\n        x = super().__call__(x.reshape(B, H*W, C), attr)\n        return x.reshape(B,C,H,W)\n        \n\nclass StyleUNet_InputBlock(StyleUNet_BaseBlock):\n    def __init__(self, mode=\"passthrough\"):\n        super().__init__(mode)    \n\nclass StyleUNet_MiddleBlock(StyleUNet_BaseBlock):\n    def __init__(self, mode=\"passthrough\"):\n        super().__init__(mode)\n\nclass 
StyleUNet_OutputBlock(StyleUNet_BaseBlock):\n    def __init__(self, mode=\"passthrough\"):\n        super().__init__(mode)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nclass Style_Model(Stylizer):\n\n    def __init__(self, dtype=torch.float64, device=torch.device(\"cuda\")):\n        super().__init__(dtype, device)\n        self.guides = []\n        self.GUIDES_INITIALIZED = False\n        \n        #self.double_blocks = [StyleMMDiT_DoubleBlock() for _ in range(100)]\n        #self.single_blocks = [StyleMMDiT_SingleBlock() for _ in range(100)]\n        \n        self.h_len   = -1\n        self.w_len   = -1\n        self.img_len = -1\n        self.h_tile  = [-1]\n        self.w_tile  = [-1]\n        \n        self.proj_in  = [0.0]  # these are for img only! not sliced\n        self.proj_out = [0.0]\n        \n        self.cond_pos = [None]\n        self.cond_neg = [None]\n        \n        self.noise_mode = \"update\"\n        self.recon_lure = \"none\"\n        self.data_shock = \"none\"\n        \n        self.data_shock_start_step = 0\n        self.data_shock_end_step   = 0\n        \n        self.Retrojector = None\n        self.Endojector  = None\n        \n        self.IMG_1ST = True\n        self.HEADS = 0\n        self.KONTEXT = 0\n    def __call__(self, x, attr):\n        if x.shape[0] == 1 and not self.KONTEXT:\n            return x\n        \n        weight_list = getattr(self, attr)\n        weights_all_zero = all(weight == 0.0 for weight in weight_list)\n        if weights_all_zero:\n            return x\n        \n        \"\"\"x_ndim = x.ndim\n        if x_ndim == 4:\n            B, HEAD, HW, C = x.shape\n            \n        if x_ndim == 3:\n            B, HW, C = x.shape\n            if x.shape[-2] != self.HEADS and self.HEADS != 0:\n                x = x.reshape(B,self.HEADS,HW,-1)\"\"\"\n        \n        HEAD_DIM = x.shape[1]\n        if HEAD_DIM == self.HEADS:\n            B, HEAD_DIM, HW, C = x.shape\n            x = x.reshape(B, HW, C*HEAD_DIM)\n            \n        if self.KONTEXT == 1:\n            x = x.reshape(2, x.shape[1] // 2, x.shape[2])\n            \n        weights_all_one         = all(weight == 1.0           for weight in weight_list)\n        methods_all_scattersort = all(name   == \"scattersort\" for name   in self.method)\n        masks_all_none = all(mask is None for mask in self.mask)\n        \n        if weights_all_one and methods_all_scattersort and len(weight_list) > 1 and masks_all_none:\n            buf = Stylizer.buffer\n            buf['src_idx']   = x[0:1].argsort(dim=-2)\n            buf['ref_sorted'], buf['ref_idx'] = x[1:].reshape(1, -1, x.shape[-1]).sort(dim=-2)\n            buf['src'] = buf['ref_sorted'][:,::len(weight_list)].expand_as(buf['src_idx'])    #            interleave_stride = len(weight_list)\n            \n            x[0:1] = x[0:1].scatter_(dim=-2, index=buf['src_idx'], src=buf['src'],)\n        else:\n            for i, (weight, mask) in enumerate(zip(weight_list, self.mask)):\n                if weight > 0 and weight < 1:\n                    x_clone = x.clone()\n                if mask is not None:\n                    x01 = x[0:1].clone()\n                slc = Stylizer.middle_slice(x.shape[-2], weight)\n                \n                method = getattr(self, self.method[i])\n                if   weight == 0.0:\n                    continue\n                elif weight == 1.0:\n                    x = method(x, idx=i+1)\n                else:\n                    x = method(x, idx=i+1, slc=slc)\n                if weight > 0 
and weight < 1 and self.method[i] != \"scattersort\":\n                    x = torch.lerp(x_clone, x, weight)\n                    \n                #else:\n                #    x = torch.lerp(x, method(x.clone(), idx=i), weight)\n                \n                if mask is not None:\n                    x[0:1] = torch.lerp(x01, x[0:1], mask.view(1, -1, 1))\n        \n        #if x_ndim == 3:\n        #    return x.view(B,HW,C)\n        if self.KONTEXT == 1:\n            x = x.reshape(1, x.shape[1] * 2, x.shape[2])\n            \n        if HEAD_DIM == self.HEADS:\n            return x.reshape(B, HEAD_DIM, HW, C)\n        else:\n            return x\n\n    def set_len(self, h_len, w_len, img_slice, txt_slice, HEADS):\n        self.h_len  = h_len\n        self.w_len  = w_len\n        self.img_len = h_len * w_len\n        \n        self.img_slice = img_slice\n        self.txt_slice = txt_slice\n        self.HEADS = HEADS\n        \n        #for block in self.double_blocks:\n        #    block.set_len(h_len, w_len, img_slice, txt_slice, HEADS)\n        #for block in self.single_blocks:\n        #    block.set_len(h_len, w_len, img_slice, txt_slice, HEADS)\n        \n        for i, mask in enumerate(self.mask):\n            if mask is not None and mask.ndim > 1:\n                self.mask[i] = F.interpolate(mask.unsqueeze(0), size=(h_len, w_len)).flatten().to(torch.bfloat16).cuda()\n\n    def init_guides(self, model):\n        if not self.GUIDES_INITIALIZED:\n            if self.guides == []:\n                self.guides = None\n            elif self.guides is not None:\n                for i, latent in enumerate(self.guides):\n                    if type(latent) is dict:\n                        latent = model.inner_model.inner_model.process_latent_in(latent['samples']).to(dtype=self.dtype, device=self.device)\n                    elif type(latent) is torch.Tensor:\n                        latent = latent.to(dtype=self.dtype, device=self.device)\n                    else:\n                        latent = None\n                        #raise ValueError(f\"Invalid latent type: {type(latent)}\")\n\n                    #if self.VIDEO and latent.shape[2] == 1:\n                    #    latent = latent.repeat(1, 1, x.shape[2], 1, 1)\n\n                    self.guides[i] = latent\n                if any(g is None for g in self.guides):\n                    self.guides = None\n                    print(\"Style guide nonetype set for Kontext.\")\n                else:\n                    self.guides = torch.cat(self.guides, dim=0)\n            self.GUIDES_INITIALIZED = True\n    \n    def set_conditioning(self, positive, negative):\n        self.cond_pos = [positive]\n        self.cond_neg = [negative] \n\n    def apply_style_conditioning(self, UNCOND, base_context, base_y=None, base_llama3=None):\n\n        def get_max_token_lengths(style_conditioning, base_context, base_y=None, base_llama3=None):\n            context_max_len = base_context.shape[-2]\n            llama3_max_len  = base_llama3.shape[-2]  if base_llama3 is not None else -1\n            y_max_len       = base_y.shape[-1]       if base_y      is not None else -1\n\n            for style_cond in style_conditioning:\n                if style_cond is None:\n                    continue\n                context_max_len = max(context_max_len, style_cond[0][0].shape[-2])\n                if base_llama3 is not None:\n                    llama3_max_len  = max(llama3_max_len,  style_cond[0][1]['conditioning_llama3'].shape[-2])\n                
if base_y is not None:\n                    y_max_len       = max(y_max_len,       style_cond[0][1]['pooled_output'].shape[-1])\n\n            return context_max_len, llama3_max_len, y_max_len\n\n        def pad_to_len(x, target_len, pad_value=0.0, dim=-2):\n            if target_len < 0:\n                return x\n            cur_len = x.shape[dim]\n            if cur_len == target_len:\n                return x\n            # F.pad takes (before, after) pairs starting from the LAST dim, so skip past\n            # the dims that come after `dim` before padding the requested dim\n            dims_after = (x.ndim - 1) - (dim % x.ndim)\n            pad = [0, 0] * dims_after + [0, target_len - cur_len]\n            return F.pad(x, pad, value=pad_value)\n\n        style_conditioning = self.cond_pos if not UNCOND else self.cond_neg\n        \n        context_max_len, llama3_max_len, y_max_len = get_max_token_lengths(\n            style_conditioning = style_conditioning,\n            base_context       = base_context,\n            base_y             = base_y,\n            base_llama3        = base_llama3,\n        )\n        \n        bsz_style = len(style_conditioning)\n        \n        context = base_context.repeat(bsz_style + 1, 1, 1)\n        y = base_y.repeat(bsz_style + 1, 1)                   if base_y      is not None else None\n        llama3  =  base_llama3.repeat(bsz_style + 1, 1, 1, 1) if base_llama3 is not None else None\n\n        context = pad_to_len(context, context_max_len, dim=-2)\n        llama3  = pad_to_len(llama3, llama3_max_len, dim=-2)   if base_llama3 is not None else None\n        y       = pad_to_len(y,      y_max_len, dim=-1)        if base_y      is not None else None\n        \n        for ci, style_cond in enumerate(style_conditioning):\n            if style_cond is None:\n                continue\n            context[ci+1:ci+2] = pad_to_len(style_cond[0][0], context_max_len, dim=-2).to(context)\n            if llama3 is not None:\n                llama3 [ci+1:ci+2] = pad_to_len(style_cond[0][1]['conditioning_llama3'], llama3_max_len, dim=-2).to(llama3)\n            if y is not None:\n                y      [ci+1:ci+2] = pad_to_len(style_cond[0][1]['pooled_output'],       y_max_len, dim=-1).to(y)\n        \n        return context, y, llama3\n    \n    def WCT_data(self, denoised_embed, y0_style_embed):\n        Stylizer.CLS_WCT.set(y0_style_embed.to(denoised_embed))\n        return Stylizer.CLS_WCT.get(denoised_embed)\n\n    def WCT2_data(self, denoised_embed, y0_style_embed):\n        Stylizer.CLS_WCT2.set(y0_style_embed.to(denoised_embed))\n        return Stylizer.CLS_WCT2.get(denoised_embed)\n\n    def apply_to_data(self, denoised, y0_style=None, mode=\"none\"):\n        if mode == \"none\":\n            return denoised\n        y0_style = self.guides if y0_style is None else y0_style\n        \n        y0_style_embed = self.Retrojector.embed(y0_style)\n        denoised_embed = self.Retrojector.embed(denoised)\n        B,HW,C = y0_style_embed.shape\n        embed  = torch.cat([denoised_embed, y0_style_embed.view(1,B*HW,C)[:,::B,:]], dim=0)\n        method = getattr(self, mode)\n        if mode == \"scattersort\":\n            slc = Stylizer.middle_slice(embed.shape[-2], self.data_shock_weight)\n            embed = method(embed, slc=slc)\n        else:\n            embed  = method(embed)\n        return self.Retrojector.unembed(embed[0:1])\n\n    def apply_recon_lure(self, denoised, y0_style):\n        if self.recon_lure == \"none\":\n            return denoised\n        for i in range(denoised.shape[0]):\n            denoised[i:i+1] = self.apply_to_data(denoised[i:i+1], y0_style, self.recon_lure)\n        return denoised\n\n    def apply_data_shock(self, denoised):\n        if self.data_shock == \"none\": 
return denoised\n        datashock_ref = getattr(self, \"datashock_ref\", None)\n        if self.data_shock == \"scattersort\":\n            return self.apply_to_data(denoised, datashock_ref, self.data_shock)\n        else:\n            return torch.lerp(denoised, self.apply_to_data(denoised, datashock_ref, self.data_shock), torch.Tensor([self.data_shock_weight]).double().cuda())\n\n\n\n\nclass StyleMMDiT_Model(Style_Model):\n\n    def __init__(self, dtype=torch.float64, device=torch.device(\"cuda\")):\n        super().__init__(dtype, device)\n        self.double_blocks = [StyleMMDiT_DoubleBlock() for _ in range(100)]\n        self.single_blocks = [StyleMMDiT_SingleBlock() for _ in range(100)]\n\n    def set_len(self, h_len, w_len, img_slice, txt_slice, HEADS):\n        super().set_len(h_len, w_len, img_slice, txt_slice, HEADS)\n        for block in self.double_blocks:\n            block.set_len(h_len, w_len, img_slice, txt_slice, HEADS)\n        for block in self.single_blocks:\n            block.set_len(h_len, w_len, img_slice, txt_slice, HEADS)\n\n\nclass StyleUNet_Model(Style_Model):\n\n    def __init__(self, dtype=torch.float64, device=torch.device(\"cuda\")):\n        super().__init__(dtype, device)\n        self.input_blocks  = [StyleUNet_InputBlock()  for _ in range(100)]\n        self.middle_blocks = [StyleUNet_MiddleBlock() for _ in range(100)]\n        self.output_blocks = [StyleUNet_OutputBlock() for _ in range(100)]\n\n    def set_len(self, h_len, w_len, img_slice, txt_slice, HEADS):\n        super().set_len(h_len, w_len, img_slice, txt_slice, HEADS)\n        for block in self.input_blocks:\n            block.set_len(h_len, w_len, img_slice, txt_slice, HEADS)\n        for block in self.middle_blocks:\n            block.set_len(h_len, w_len, img_slice, txt_slice, HEADS)\n        for block in self.output_blocks:\n            block.set_len(h_len, w_len, img_slice, txt_slice, HEADS)\n\n    def __call__(self, x, attr):\n        B, C, H, W = x.shape\n        x = super().__call__(x.reshape(B, H*W, C), attr)\n        return x.reshape(B,C,H,W)\n        \n"
  },
  {
    "path": "wan/model.py",
    "content": "# original version: https://github.com/Wan-Video/Wan2.1/blob/main/wan/modules/model.py\n# Copyright 2024-2025 The Alibaba Wan Team Authors. All rights reserved.\nimport math\nfrom typing import Optional, Callable, Tuple, Dict, Any, Union\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom einops import repeat\n\nfrom comfy.ldm.modules.attention import optimized_attention, attention_pytorch\nfrom comfy.ldm.flux.layers import EmbedND\nfrom comfy.ldm.flux.math import apply_rope\nfrom comfy.ldm.modules.diffusionmodules.mmdit import RMSNorm\nimport comfy.ldm.common_dit\nimport comfy.model_management\n\nfrom ..latents import interpolate_spd\nfrom ..helper  import ExtraOptions\n\n\ndef sinusoidal_embedding_1d(dim, position):\n    # preprocess\n    assert dim % 2 == 0\n    half     = dim // 2\n    position = position.type(torch.float32)\n\n    # calculation\n    sinusoid = torch.outer(\n        position, torch.pow(10000, -torch.arange(half).to(position).div(half)))\n    x = torch.cat([torch.cos(sinusoid), torch.sin(sinusoid)], dim=1)\n    return x\n\n\n\nclass ReWanRawSelfAttention(nn.Module):\n\n    def __init__(self,\n                dim,\n                num_heads,\n                window_size        = (-1, -1),\n                qk_norm            = True,\n                eps                = 1e-6, \n                operation_settings = {}):\n        assert dim % num_heads == 0\n        super().__init__()\n        self.dim         = dim\n        self.num_heads   = num_heads\n        self.head_dim    = dim // num_heads\n        self.window_size = window_size\n        self.qk_norm     = qk_norm\n        self.eps         = eps\n\n        # layers\n        self.q = operation_settings.get(\"operations\").Linear(dim, dim, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\"))\n        self.k = operation_settings.get(\"operations\").Linear(dim, dim, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\"))\n        self.v = operation_settings.get(\"operations\").Linear(dim, dim, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\"))\n        self.o = operation_settings.get(\"operations\").Linear(dim, dim, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\"))\n        self.norm_q = RMSNorm(dim, eps=eps, elementwise_affine=True, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\")) if qk_norm else nn.Identity()\n        self.norm_k = RMSNorm(dim, eps=eps, elementwise_affine=True, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\")) if qk_norm else nn.Identity()\n\n    def forward(self, x, freqs, mask=None):\n        r\"\"\"\n        Args:\n            x(Tensor): Shape [B, L, num_heads, C / num_heads]\n            freqs(Tensor): Rope freqs, shape [1024, C / num_heads / 2]\n        \"\"\"\n        b, s, n, d = *x.shape[:2], self.num_heads, self.head_dim\n\n        # query, key, value function\n        def qkv_fn(x):\n            q = self.norm_q(self.q(x)).view(b, s, n, d)\n            k = self.norm_k(self.k(x)).view(b, s, n, d)\n            v = self.v(x).view(b, s, n * d)\n            return q, k, v\n\n        q, k, v = qkv_fn(x)\n        q, k    = apply_rope(q, k, freqs)\n        # q,k.shape = 2,14040,12,128      v.shape = 2,14040,1536\n\n        x = optimized_attention(\n            q.view(b, s, n * d),\n            k.view(b, s, n * d),\n            v,\n            heads=self.num_heads,\n   
     )\n\n        x = self.o(x)\n        return x\n\n\ndef attention_weights(q, k):\n    # implementation of in-place softmax to reduce memory req\n    scores = torch.matmul(q, k.transpose(-2, -1))\n    scores.div_(math.sqrt(q.size(-1)))\n    torch.exp(scores, out=scores)\n    summed = torch.sum(scores, dim=-1, keepdim=True)\n    scores /= summed\n    return scores.nan_to_num_(0.0, 65504., -65504.)\n\n\n\n\n\nclass ReWanSlidingSelfAttention(nn.Module):\n\n    def __init__(self,\n                dim,\n                num_heads,\n                window_size        = (-1, -1),\n                qk_norm            = True,\n                eps                = 1e-6, \n                operation_settings = {}):\n        assert dim % num_heads == 0\n        super().__init__()\n        self.dim         = dim\n        self.num_heads   = num_heads\n        self.head_dim    = dim // num_heads\n        self.window_size = window_size\n        self.qk_norm     = qk_norm\n        self.eps         = eps\n        self.winderz     = 15\n        self.winderz_type= \"standard\"\n\n        # layers\n        self.q = operation_settings.get(\"operations\").Linear(dim, dim, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\"))\n        self.k = operation_settings.get(\"operations\").Linear(dim, dim, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\"))\n        self.v = operation_settings.get(\"operations\").Linear(dim, dim, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\"))\n        self.o = operation_settings.get(\"operations\").Linear(dim, dim, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\"))\n        self.norm_q = RMSNorm(dim, eps=eps, elementwise_affine=True, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\")) if qk_norm else nn.Identity()\n        self.norm_k = RMSNorm(dim, eps=eps, elementwise_affine=True, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\")) if qk_norm else nn.Identity()\n        \n\n    def forward(self, x, freqs, mask=None, grid_sizes=None):\n        r\"\"\"\n        Args:\n            x(Tensor): Shape [B, L, num_heads, C / num_heads]\n            freqs(Tensor): Rope freqs, shape [1024, C / num_heads / 2]\n        \"\"\"\n        b, s, n, d = *x.shape[:2], self.num_heads, self.head_dim\n\n        # query, key, value function\n        def qkv_fn(x):\n            q = self.norm_q(self.q(x)).view(b, s, n, d)\n            k = self.norm_k(self.k(x)).view(b, s, n, d)\n            v = self.v(x).view(b, s, n * d)\n            return q, k, v\n\n        q, k, v = qkv_fn(x)\n        q, k    = apply_rope(q, k, freqs)\n        # q,k.shape = 2,14040,12,128      v.shape = 2,14040,1536\n    \n        img_len = grid_sizes[1] * grid_sizes[2]\n        total_frames = int(q.shape[1] // img_len)\n\n        window_size = self.winderz\n        half_window = window_size // 2\n\n        q_ = q.view(b, s, n * d)\n        k_ = k.view(b, s, n * d)\n        x_list = []\n\n        for i in range(total_frames):\n            q_start =  i      * img_len\n            q_end   = (i + 1) * img_len\n\n            # circular frame indices for key/value window\n            center = i\n            #window_indices = [(center + offset) % total_frames for offset in range(-half_window, half_window + 1)]\n            if self.winderz_type == \"standard\":\n                start = max(0, center - half_window)\n                end   = min(total_frames, 
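 # clamp the window to [0, total_frames); the edge shift below keeps it full-width\n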
center + half_window + 1)\n                # Shift window if it would be too short\n                if end - start < window_size:\n                    if start == 0:\n                        end = min(total_frames, start + window_size)\n                    elif end == total_frames:\n                        start = max(0, end - window_size)\n\n                window_indices = list(range(start, end))\n            elif self.winderz_type == \"circular\":\n                window_indices = [(center + offset) % total_frames for offset in range(-half_window, half_window + 1)]\n            \n            # frame indices to token indices\n            token_indices = []\n            for frame in window_indices:\n                start = frame * img_len\n                token_indices.extend(range(start, start + img_len))\n\n            token_indices = torch.tensor(token_indices, device=q.device)\n\n            x = optimized_attention(\n                q_[:, q_start:q_end, :],           # [B, img_len, C]\n                k_.index_select(1, token_indices), # [B, window_size * img_len, C]\n                v .index_select(1, token_indices),\n                heads=self.num_heads,\n            )\n\n            x_list.append(x)\n\n        x = torch.cat(x_list, dim=1)\n        del x_list, q, k, v, q_, k_\n\n        x = self.o(x)\n        return x\n\n\n\n\nclass ReWanT2VSlidingCrossAttention(ReWanSlidingSelfAttention):\n\n    def forward(self, x, context, context_clip=None, mask=None, grid_sizes=None):\n        r\"\"\"\n        Args:\n            x(Tensor): Shape [B, L1, C]\n            context(Tensor): Shape [B, L2, C]\n        \"\"\"\n        # compute query, key, value\n        q = self.norm_q(self.q(x))\n        k = self.norm_k(self.k(context))\n        v =             self.v(context)\n\n        img_len = grid_sizes[1] * grid_sizes[2]\n        total_frames = int(q.shape[1] // img_len)\n\n        window_size = self.winderz\n        half_window = window_size // 2\n        \n        b, s, n, d = *x.shape[:2], self.num_heads, self.head_dim\n        q_, k_ = q, k\n        #q_ = q.view(b, s, n * d)\n        #k_ = k.view(b, s, n * d)\n        x_list = []\n\n        for i in range(total_frames):\n            q_start =  i      * img_len\n            q_end   = (i + 1) * img_len\n\n            # circular frame indices for key/value window\n            center = i\n            #window_indices = [(center + offset) % total_frames for offset in range(-half_window, half_window + 1)]\n            if self.winderz_type == \"standard\":\n                start = max(0, center - half_window)\n                end   = min(total_frames, center + half_window + 1)\n                # Shift window if it would be too short\n                if end - start < window_size:\n                    if start == 0:\n                        end = min(total_frames, start + window_size)\n                    elif end == total_frames:\n                        start = max(0, end - window_size)\n\n                window_indices = list(range(start, end))\n            elif self.winderz_type == \"circular\":\n                window_indices = [(center + offset) % total_frames for offset in range(-half_window, half_window + 1)]\n            \n            # frame indices to token indices\n            token_indices = []\n            for frame in window_indices:\n                start = frame * img_len\n                token_indices.extend(range(start, start + img_len))\n\n            token_indices = torch.tensor(token_indices, device=q.device)\n\n            x = 
optimized_attention(\n                q_[:, q_start:q_end, :],           # [B, img_len, C]\n                k_, #.index_select(1, token_indices), # [B, window_size * img_len, C]\n                v , #.index_select(1, token_indices),\n                heads=self.num_heads,\n            )\n\n            x_list.append(x)\n\n        x = torch.cat(x_list, dim=1)\n        del x_list, q, k, v, q_, k_\n\n        x = self.o(x)\n        return x\n\n\n\n\nclass ReWanSelfAttention(nn.Module):\n\n    def __init__(self,\n                dim,\n                num_heads,\n                window_size        = (-1, -1),\n                qk_norm            = True,\n                eps                = 1e-6, \n                operation_settings = {}):\n        assert dim % num_heads == 0\n        super().__init__()\n        self.dim         = dim\n        self.num_heads   = num_heads\n        self.head_dim    = dim // num_heads\n        self.window_size = window_size\n        self.qk_norm     = qk_norm\n        self.eps         = eps\n\n        # layers\n        self.q = operation_settings.get(\"operations\").Linear(dim, dim, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\"))\n        self.k = operation_settings.get(\"operations\").Linear(dim, dim, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\"))\n        self.v = operation_settings.get(\"operations\").Linear(dim, dim, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\"))\n        self.o = operation_settings.get(\"operations\").Linear(dim, dim, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\"))\n        self.norm_q = RMSNorm(dim, eps=eps, elementwise_affine=True, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\")) if qk_norm else nn.Identity()\n        self.norm_k = RMSNorm(dim, eps=eps, elementwise_affine=True, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\")) if qk_norm else nn.Identity()\n        \n\n    def forward(self, x, freqs, mask=None, grid_sizes=None):\n        r\"\"\"\n        Args:\n            x(Tensor): Shape [B, L, num_heads, C / num_heads]\n            freqs(Tensor): Rope freqs, shape [1024, C / num_heads / 2]\n        \"\"\"\n        b, s, n, d = *x.shape[:2], self.num_heads, self.head_dim\n\n        # query, key, value function\n        def qkv_fn(x):\n            q = self.norm_q(self.q(x)).view(b, s, n, d)\n            k = self.norm_k(self.k(x)).view(b, s, n, d)\n            v = self.v(x).view(b, s, n * d)\n            return q, k, v\n\n        q, k, v = qkv_fn(x)\n        q, k    = apply_rope(q, k, freqs)\n        # q,k.shape = 2,14040,12,128      v.shape = 2,14040,1536\n\n        if mask is not None and mask.shape[-1] > 0:\n            #dtype = mask.dtype if mask.dtype == torch.bool else q.dtype\n            #txt_len = mask.shape[1] - mask.shape[0]\n            x = attention_pytorch(\n                q.view(b, s, n * d),\n                k.view(b, s, n * d),\n                v,\n                heads=self.num_heads,\n                mask=mask#[:,txt_len:].to(dtype)\n            )\n        else:\n            x = optimized_attention(\n                q.view(b, s, n * d),\n                k.view(b, s, n * d),\n                v,\n                heads=self.num_heads,\n            )\n\n        x = self.o(x)\n        return x\n\n\nclass ReWanT2VRawCrossAttention(ReWanSelfAttention):\n\n    def forward(self, x, context, context_clip=None, mask=None, 
grid_sizes=None):\n        r\"\"\"\n        Args:\n            x(Tensor): Shape [B, L1, C]\n            context(Tensor): Shape [B, L2, C]\n        \"\"\"\n        # compute query, key, value\n        q = self.norm_q(self.q(x))\n        k = self.norm_k(self.k(context))\n        v =             self.v(context)\n\n        x = optimized_attention(q, k, v, heads=self.num_heads, mask=None)\n\n        x = self.o(x)\n        return x\n\n\nclass ReWanT2VCrossAttention(ReWanSelfAttention):\n\n    def forward(self, x, context, context_clip=None, mask=None, grid_sizes=None):\n        r\"\"\"\n        Args:\n            x(Tensor): Shape [B, L1, C]\n            context(Tensor): Shape [B, L2, C]\n        \"\"\"\n        # compute query, key, value\n        q = self.norm_q(self.q(x))\n        k = self.norm_k(self.k(context))\n        v =             self.v(context)\n        #if mask is not None:\n        #    num_repeats = q.shape[1] // mask.shape[0]\n        #    mask = mask.repeat(num_repeats, 1)\n        # compute attention    # x.shape 2,14040,1536     q.shape 2,14040,1536     k,v.shape = 2,512,1536       mask = 14040,512   num_heads=12\n        if mask is not None: # and (mask.shape[-1] - mask.shape[-2]) == k.shape[-2]:  # need mask shape 11664,5120\n            #dtype = mask.dtype if mask.dtype == torch.bool else q.dtype\n            dtype = torch.bool\n            x = attention_pytorch(q, k, v, heads=self.num_heads, mask=mask.to(q.device).bool())\n\n            #x = attention_pytorch(q, k, v, heads=self.num_heads, mask=mask[:,:k.shape[-2]].to(q.device).bool())\n        else:\n            x = optimized_attention(q, k, v, heads=self.num_heads, mask=None)\n\n        x = self.o(x)\n        return x\n\n\nclass ReWanI2VCrossAttention(ReWanSelfAttention):   # image2video only\n\n    def __init__(self,\n                dim,\n                num_heads,\n                window_size=(-1, -1),\n                qk_norm=True,\n                eps=1e-6, operation_settings={}, ):\n        super().__init__(dim, num_heads, window_size, qk_norm, eps, operation_settings=operation_settings)\n\n        self.k_img = operation_settings.get(\"operations\").Linear(dim, dim, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\"))\n        self.v_img = operation_settings.get(\"operations\").Linear(dim, dim, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\"))\n        # self.alpha = nn.Parameter(torch.zeros((1, )))\n        self.norm_k_img = RMSNorm(dim, eps=eps, elementwise_affine=True, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\")) if qk_norm else nn.Identity()\n\n    def forward(self, x, context, context_clip=None, mask=None, grid_sizes=None):\n        r\"\"\"\n        Args:\n            x(Tensor): Shape [B, L1, C]\n            context(Tensor): Shape [B, L2, C]\n        \"\"\"\n        \"\"\"context_img = context[:, :257]\n        context     = context[:, 257:]\n        mask_clip = None\"\"\"\n        \n        context_img = context_clip\n        \n        mask_clip = None\n        if mask is not None:\n            mask_clip = F.interpolate(mask[None, None, ...].to(torch.float16), (mask.shape[0], 257 * mask.shape[1]//512), mode='nearest-exact').squeeze().to(mask.dtype)\n            \"\"\"mask_clip = []\n            for i in range(mask.shape[-1]//512):\n                mask_clip.append(mask[:,i*512:i*512 + 257])\n            mask_clip = torch.cat(mask_clip, dim=-1)\"\"\"\n\n        # compute query, key, value\n        q = 
self.norm_q(self.q(x))\n        k = self.norm_k(self.k(context))\n        v = self.v(context)\n        k_img = self.norm_k_img(self.k_img(context_img))\n        v_img = self.v_img(context_img)\n        img_x = optimized_attention(q, k_img, v_img, heads=self.num_heads, mask=mask_clip)\n        # compute attention\n        x = optimized_attention(q, k, v, heads=self.num_heads, mask=mask)\n\n        # output\n        x = x + img_x\n        x = self.o(x)\n        return x\n\n\nWAN_CROSSATTENTION_CLASSES = {\n    't2v_cross_attn': ReWanT2VCrossAttention,\n    'i2v_cross_attn': ReWanI2VCrossAttention,\n}\n\n\nclass ReWanAttentionBlock(nn.Module):\n\n    def __init__(self,\n                cross_attn_type,\n                dim,\n                ffn_dim,\n                num_heads,\n                window_size        =  (-1, -1),\n                qk_norm            =  True,\n                cross_attn_norm    =  False,\n                eps                =  1e-6, \n                operation_settings = {}):\n        super().__init__()\n        self.dim             = dim\n        self.ffn_dim         = ffn_dim\n        self.num_heads       = num_heads\n        self.window_size     = window_size\n        self.qk_norm         = qk_norm\n        self.cross_attn_norm = cross_attn_norm\n        self.eps             = eps\n\n        # layers\n        self.norm1     = operation_settings.get(\"operations\").LayerNorm(dim, eps, elementwise_affine=False, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\"))\n        self.self_attn = ReWanSelfAttention(  dim, num_heads, window_size, qk_norm,\n                                            eps, operation_settings=operation_settings)\n        self.norm3     = operation_settings.get(\"operations\").LayerNorm(\n            dim, eps,\n            elementwise_affine=True, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\")) if cross_attn_norm else nn.Identity()\n        \n        self.cross_attn = WAN_CROSSATTENTION_CLASSES[cross_attn_type](\n                                                                        dim,\n                                                                        num_heads,\n                                                                        (-1, -1),\n                                                                        qk_norm,\n                                                                        eps, \n                                                                        operation_settings=operation_settings)\n        \n        self.norm2 = operation_settings.get(\"operations\").LayerNorm(dim, eps, elementwise_affine=False, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\"))\n        self.ffn = nn.Sequential(\n            operation_settings.get(\"operations\").Linear(dim, ffn_dim, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\")), nn.GELU(approximate='tanh'),\n            operation_settings.get(\"operations\").Linear(ffn_dim, dim, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\")))\n\n        # modulation\n        self.modulation = nn.Parameter(torch.empty(1, 6, dim, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\")))\n\n    def forward(\n        self,\n        x,\n        e,\n        freqs,\n        context,\n        context_clip=None,\n        self_mask=None,\n        cross_mask=None,\n        grid_sizes = None,\n        #mask=None,\n    
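# grid_sizes: token-grid shape (frames, H, W); the sliding-window attention variants use it to recover per-frame token counts\n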
):\n        r\"\"\"\n        Args:\n            x(Tensor): Shape [B, L, C]\n            e(Tensor): Shape [B, 6, C]\n            freqs(Tensor): Rope freqs, shape [1024, C / num_heads / 2]\n        \"\"\"\n        # assert e.dtype == torch.float32\n        \n        e = (comfy.model_management.cast_to(self.modulation, dtype=x.dtype, device=x.device) + e).chunk(6, dim=1)\n        # assert e[0].dtype == torch.float32\n        # e = tuple with 6 elem, shape = 2,1,1536    # with length = 33 so 9 frames\n        # self-attention\n\n        y = self.self_attn(\n            self.norm1(x) * (1 + e[1]) + e[0],\n            freqs,\n            grid_sizes=grid_sizes,\n            mask=self_mask) # mask[:,txt_len:])\n\n        x = x + y * e[2]\n\n        # cross-attention & ffn   # x,y.shape 2,14040,1536   \n        x = x + self.cross_attn(self.norm3(x), context, context_clip=context_clip, mask=cross_mask, grid_sizes=grid_sizes,) #mask[:,:txt_len])\n        #print(\"before norm2 \", torch.cuda.memory_allocated() / 1024**3)\n        y = self.ffn(self.norm2(x) * (1 + e[4]) + e[3])\n        #print(\"after norm2 \", torch.cuda.memory_allocated() / 1024**3)\n        x = x + y * e[5]\n        return x\n\n\nclass Head(nn.Module):\n\n    def __init__(self, dim, out_dim, patch_size, eps=1e-6, operation_settings={}):\n        super().__init__()\n        self.dim        = dim\n        self.out_dim    = out_dim\n        self.patch_size = patch_size\n        self.eps        = eps\n\n        # layers\n        out_dim = math.prod(patch_size) * out_dim\n        self.norm = operation_settings.get(\"operations\").LayerNorm(dim, eps, elementwise_affine=False, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\"))\n        self.head = operation_settings.get(\"operations\").Linear   (dim, out_dim,                       device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\"))\n\n        # modulation\n        self.modulation = nn.Parameter(torch.empty(1, 2, dim, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\")))\n\n    def forward(self, x, e):\n        r\"\"\"\n        Args:\n            x(Tensor): Shape [B, L1, C]\n            e(Tensor): Shape [B, C]\n        \"\"\"\n        # assert e.dtype == torch.float32\n        e = (comfy.model_management.cast_to(self.modulation, dtype=x.dtype, device=x.device) + e.unsqueeze(1)).chunk(2, dim=1)\n        x = (self.head(self.norm(x) * (1 + e[1]) + e[0]))\n        return x\n\n\nclass MLPProj(torch.nn.Module):\n\n    def __init__(self, in_dim, out_dim, operation_settings={}):\n        super().__init__()\n\n        self.proj = torch.nn.Sequential(\n            operation_settings                 .get(\"operations\").LayerNorm(in_dim,          device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\")), operation_settings.get(\"operations\").Linear(in_dim, in_dim, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\")),\n            torch.nn.GELU(), operation_settings.get(\"operations\").Linear   (in_dim, out_dim, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\")),\n            operation_settings                 .get(\"operations\").LayerNorm(out_dim,         device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\")))\n\n    def forward(self, image_embeds):\n        clip_extra_context_tokens = self.proj(image_embeds)\n        return clip_extra_context_tokens\n\n\nclass 
ReWanModel(torch.nn.Module):\n    r\"\"\"\n    Wan diffusion backbone supporting both text-to-video and image-to-video.\n    \"\"\"\n\n    def __init__(self,\n                model_type      = 't2v',\n                patch_size      = (1, 2, 2),\n                text_len        = 512,\n                in_dim          = 16,\n                dim             = 2048,\n                ffn_dim         = 8192,\n                freq_dim        = 256,\n                text_dim        = 4096,\n                out_dim         = 16,\n                num_heads       = 16,\n                num_layers      = 32,\n                window_size     = (-1, -1),\n                qk_norm         = True,\n                cross_attn_norm = True,\n                eps             = 1e-6,\n                image_model     = None,\n                device          = None,\n                dtype           = None,\n                operations      = None,\n                ):\n        r\"\"\"\n        Initialize the diffusion model backbone.\n\n        Args:\n            model_type (`str`, *optional*, defaults to 't2v'):\n                Model variant - 't2v' (text-to-video) or 'i2v' (image-to-video)\n            patch_size (`tuple`, *optional*, defaults to (1, 2, 2)):\n                3D patch dimensions for video embedding (t_patch, h_patch, w_patch)\n            text_len (`int`, *optional*, defaults to 512):\n                Fixed length for text embeddings\n            in_dim (`int`, *optional*, defaults to 16):\n                Input video channels (C_in)\n            dim (`int`, *optional*, defaults to 2048):\n                Hidden dimension of the transformer\n            ffn_dim (`int`, *optional*, defaults to 8192):\n                Intermediate dimension in feed-forward network\n            freq_dim (`int`, *optional*, defaults to 256):\n                Dimension for sinusoidal time embeddings\n            text_dim (`int`, *optional*, defaults to 4096):\n                Input dimension for text embeddings\n            out_dim (`int`, *optional*, defaults to 16):\n                Output video channels (C_out)\n            num_heads (`int`, *optional*, defaults to 16):\n                Number of attention heads\n            num_layers (`int`, *optional*, defaults to 32):\n                Number of transformer blocks\n            window_size (`tuple`, *optional*, defaults to (-1, -1)):\n                Window size for local attention (-1 indicates global attention)\n            qk_norm (`bool`, *optional*, defaults to True):\n                Enable query/key normalization\n            cross_attn_norm (`bool`, *optional*, defaults to False):\n                Enable cross-attention normalization\n            eps (`float`, *optional*, defaults to 1e-6):\n                Epsilon value for normalization layers\n        \"\"\"\n\n        super().__init__()\n        self.dtype           = dtype\n        operation_settings   = {\"operations\": operations, \"device\": device, \"dtype\": dtype}\n\n        assert model_type in ['t2v', 'i2v']\n        self.model_type      = model_type\n\n        self.patch_size      = patch_size\n        self.text_len        = text_len\n        self.in_dim          = in_dim\n        self.dim             = dim\n        self.ffn_dim         = ffn_dim\n        self.freq_dim        = freq_dim\n        self.text_dim        = text_dim\n        self.out_dim         = out_dim\n        self.num_heads       = num_heads\n        self.num_layers      = num_layers\n        self.window_size     = 
window_size\n        self.qk_norm         = qk_norm\n        self.cross_attn_norm = cross_attn_norm\n        self.eps             = eps\n\n        # embeddings\n        self.patch_embedding = operations.Conv3d(\n            in_dim, dim, kernel_size=patch_size, stride=patch_size, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\")) #dtype=torch.float32)\n        \n        \n        self.text_embedding = nn.Sequential(\n            operations.Linear(text_dim, dim, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\")), nn.GELU(approximate='tanh'),\n            operations.Linear(dim, dim,      device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\")))\n\n        self.time_embedding = nn.Sequential(\n            operations.Linear(freq_dim, dim, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\")), nn.SiLU(), operations.Linear(dim, dim, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\")))\n        self.time_projection = nn.Sequential(nn.SiLU(), operations.Linear(dim, dim * 6, device=operation_settings.get(\"device\"), dtype=operation_settings.get(\"dtype\")))\n\n        # blocks\n        cross_attn_type = 't2v_cross_attn' if model_type == 't2v' else 'i2v_cross_attn'\n        \n        self.blocks = nn.ModuleList([\n            ReWanAttentionBlock(\n                                cross_attn_type,\n                                dim, \n                                ffn_dim, num_heads,\n                                window_size,\n                                qk_norm,\n                                cross_attn_norm,\n                                eps, \n                                operation_settings=operation_settings)\n            \n            for _ in range(num_layers)\n        ])\n\n        # head\n        self.head = Head(dim, out_dim, patch_size, eps, operation_settings=operation_settings)\n\n        d = dim // num_heads\n        self.rope_embedder = EmbedND(dim=d, theta=10000.0, axes_dim=[d - 4 * (d // 6), 2 * (d // 6), 2 * (d // 6)])\n\n        if model_type == 'i2v':\n            self.img_emb = MLPProj(1280, dim, operation_settings=operation_settings)\n        else:\n            self.img_emb = None\n\n        # style-transfer state read in forward_orig(); initialized here so the first\n        # call cannot hit an AttributeError (node wrappers may overwrite these)\n        self.style_dtype     = None\n        self.y0_adain_embed  = None\n\n\n    def invert_patch_embedding(self, z: torch.Tensor, original_shape: torch.Size, grid_sizes: Optional[Tuple[int,int,int]] = None) -> torch.Tensor:\n\n        import torch.nn.functional as F\n        B, C_in, D, H, W = original_shape\n        pD, pH, pW = self.patch_size\n        sD, sH, sW = pD, pH, pW\n\n        if z.ndim == 3:\n            # [B, S, C_out] -> reshape to [B, C_out, D', H', W']\n            S = z.shape[1]\n            if grid_sizes is None:\n                Dp = D // pD\n                Hp = H // pH\n                Wp = W // pW\n            else:\n                Dp, Hp, Wp = grid_sizes\n            C_out = z.shape[2]\n            z = z.transpose(1, 2).reshape(B, C_out, Dp, Hp, Wp)\n        else:\n            B2, C_out, Dp, Hp, Wp = z.shape\n            assert B2 == B, \"Batch size mismatch... 
ya sharked it.\"\n\n        # kncokout bias\n        b = self.patch_embedding.bias.view(1, C_out, 1, 1, 1)\n        z_nobias = z - b\n\n        # 2D filter -> pinv\n        w3 = self.patch_embedding.weight         # [C_out, C_in, 1, pH, pW]\n        w2 = w3.squeeze(2)                       # [C_out, C_in, pH, pW]\n        out_ch, in_ch, kH, kW = w2.shape\n        W_flat = w2.view(out_ch, -1)            # [C_out, in_ch*pH*pW]\n        W_pinv = torch.linalg.pinv(W_flat)      # [in_ch*pH*pW, C_out]\n\n        # merge depth for 2D unfold wackiness\n        z2 = z_nobias.permute(0,2,1,3,4).reshape(B*Dp, C_out, Hp, Wp)\n\n        # apply pinv ... get patch vectors\n        z_flat    = z2.reshape(B*Dp, C_out, -1)  # [B*Dp, C_out, L]\n        x_patches = W_pinv @ z_flat              # [B*Dp, in_ch*pH*pW, L]\n\n        # fold -> spatial frames\n        x2 = F.fold(\n            x_patches,\n            output_size=(H, W),\n            kernel_size=(pH, pW),\n            stride=(sH, sW)\n        )  # → [B*Dp, C_in, H, W]\n\n        # un-merge depth\n        x2 = x2.reshape(B, Dp, in_ch, H, W)           # [B, Dp,  C_in, H, W]\n        x_recon = x2.permute(0,2,1,3,4).contiguous()  # [B, C_in,   D, H, W]\n        return x_recon\n\n\n    def forward_orig(\n        self,\n        x,\n        t,\n        context,\n        clip_fea = None,\n        freqs    = None,\n        transformer_options = {},\n        UNCOND = False,\n    ):\n        r\"\"\"\n        Forward pass through the diffusion model\n\n        Args:\n            x (Tensor):\n                List of input video tensors with shape [B, C_in, F, H, W]\n            t (Tensor):\n                Diffusion timesteps tensor of shape [B]\n            context (List[Tensor]):\n                List of text embeddings each with shape [B, L, C]\n            seq_len (`int`):\n                Maximum sequence length for positional encoding\n            clip_fea (Tensor, *optional*):\n                CLIP image features for image-to-video mode\n            y (List[Tensor], *optional*):\n                Conditional video inputs for image-to-video mode, same shape as x\n\n        Returns:\n            List[Tensor]:\n                List of denoised video tensors with original input shapes [C_out, F, H / 8, W / 8]\n        \"\"\"\n        \n        \n        \"\"\"trash = x[:,16:,...]\n        x_slice_flip = torch.cat([x[:,:16,...], torch.flip(trash, dims=[2])], dim=1)\n        x_slice_flip = self.patch_embedding(x_slice_flip.float()).to(x.dtype) \n        x          = self.patch_embedding(x.float()).to(x.dtype)  \n        x = torch.cat([x[:,:,:9,...], x_slice_flip[:,:,9:,...]], dim=2)\"\"\"\n        \n        \"\"\"x1 = self.patch_embedding(x[:,:,:8,...].float()).to(x.dtype)\n        \n        x_slice = torch.cat([x[:,:16,8:,...], trash[:,:,0:9, ...]], dim=1)\n        \n        x2          = self.patch_embedding(x_slice.float()).to(x.dtype) \n        \n        x = torch.cat([x1, x2], dim=2)\"\"\"\n        \n\n        y0_style_pos        = transformer_options.get(\"y0_style_pos\")\n        y0_style_neg        = transformer_options.get(\"y0_style_neg\")\n        SIGMA = t[0].clone() / 1000\n        EO = transformer_options.get(\"ExtraOptions\", ExtraOptions(\"\"))\n        \n        # embeddings\n        #self.patch_embedding.to(self.time_embedding[0].weight.dtype)\n        x_orig     = x.clone()\n        #x          = self.patch_embedding(x.float()).to(self.time_embedding[0].weight.dtype)     #next line to torch.Size([1, 5120, 17, 30, 30]) from 1,36,17,30,30\n     
   x          = self.patch_embedding(x.float()).to(x.dtype)         # vram jumped from ~16-16.5 up to 17.98     gained 300mb with weights at torch.float8_e4m3fn\n        grid_sizes = x.shape[2:]\n        x          = x.flatten(2).transpose(1, 2)      # x.shape 1,32400,5120  bfloat16   316.4 MB\n\n        # time embeddings\n        e = self.time_embedding(\n            sinusoidal_embedding_1d(self.freq_dim, t).to(dtype=x[0].dtype))\n        e0 = self.time_projection(e).unflatten(1, (6, self.dim))              # e0.shape = 2,6,1536       tiny ( < 0.1 MB)\n\n        # context\n        context = self.text_embedding(context)\n\n        context_clip = None\n        if clip_fea is not None and self.img_emb is not None:\n            context_clip = self.img_emb(clip_fea)  # bs x 257 x dim\n            #context      = torch.concat([context_clip, context], dim=1)\n\n        # arguments\n        kwargs = dict(\n            e       = e0,\n            freqs   = freqs,              # 1,32400,1,64,2,2 bfloat16 15.8 MB\n            context = context,            # 1,1536,5120      bfloat16 15.0 MB\n            context_clip = context_clip,\n            grid_sizes = grid_sizes)\n\n\n\n\n\n        weight    = transformer_options['reg_cond_weight'] if 'reg_cond_weight' in transformer_options else 0.0\n        floor     = transformer_options['reg_cond_floor']  if 'reg_cond_floor'  in transformer_options else 0.0\n        \n        floor     = min(floor, weight)\n        \n        if type(weight) == float or type(weight) == int:\n            pass\n        else:\n            weight = weight.item()\n        \n        AttnMask = transformer_options.get('AttnMask')    # somewhere around here, jumped to 20.6GB\n        mask     = None\n        if AttnMask is not None and weight > 0:\n            mask                      = AttnMask.get(weight=weight) #mask_obj[0](transformer_options, weight.item())         # 32400,33936  bool   1048.6 MB\n            \n            mask_type_bool = type(mask[0][0].item()) == bool if mask is not None else False\n            if not mask_type_bool:\n                mask = mask.to(x.dtype)\n            \n            #text_len                  = context.shape[1] # mask_obj[0].text_len\n            \n            #mask[text_len:,text_len:] = torch.clamp(mask[text_len:,text_len:], min=floor.to(mask.device))   #ORIGINAL SELF-ATTN REGION BLEED\n            #reg_cond_mask = reg_cond_mask_expanded.unsqueeze(0).clone() if reg_cond_mask_expanded is not None else None\n        \n        mask_type_bool = type(mask[0][0].item()) == bool if mask is not None else False\n\n\n\n\n        txt_len = context.shape[1] # mask_obj[0].text_len\n        #txt_len = mask.shape[-1] - mask.shape[-2] if mask is not None else \"Unlogic Condition\"          #what's the point of this?\n        \n        #self_attn_mask  = mask[:, txt_len:]\n        #cross_attn_mask = mask[:,:txt_len ].bool()\n        #i = 0\n        #for block in self.blocks:\n        for i, block in enumerate(self.blocks):\n            if mask_type_bool and weight < (i / (len(self.blocks)-1)) and mask is not None:\n                mask = mask.to(x.dtype)\n            \n            #if mask_type_bool and weight < (i / (len(self.blocks)-1)) and mask is not None:\n            #    mask = mask.to(x.dtype)\n            \n            if mask is not None:\n                #if True:\n                #    x = block(x, self_mask=None, cross_mask=mask.bool(), **kwargs)\n                if mask_type_bool and floor < 0 and          (i / (len(self.blocks)-1)) < 
(-floor):    # use self-attn mask until block number\n                    x = block(x, self_mask=mask[:,txt_len:], cross_mask=mask[:,:txt_len].bool(), **kwargs)\n                elif mask_type_bool and floor > 0 and  floor < (i / (len(self.blocks)-1)):               # use self-attn mask after block number\n                    x = block(x, self_mask=mask[:,txt_len:], cross_mask=mask[:,:txt_len].bool(), **kwargs)\n                    #x = block(x, self_mask=None, cross_mask=mask[:,:txt_len].bool(), **kwargs)\n                elif floor == 0:\n                    x = block(x, self_mask=mask[:,txt_len:], cross_mask=mask[:,:txt_len].bool(), **kwargs)\n                else:\n                    #x = block(x, self_mask=mask[:,txt_len:], cross_mask=mask[:,:txt_len].bool(), **kwargs)\n                    x = block(x, self_mask=None, cross_mask=mask[:,:txt_len].bool(), **kwargs)\n            \n            else:\n                x = block(x, **kwargs)\n            #x = block(x, mask=mask, **kwargs)\n            \n            #i += 1\n\n        # head\n        x = self.head(x, e)\n\n        # unpatchify\n        eps = self.unpatchify(x, grid_sizes)\n        \n        \n        \n        \n        \n        \n        \n        \n        \n        \n        dtype = eps.dtype if self.style_dtype is None else self.style_dtype\n        pinv_dtype = torch.float32 if dtype != torch.float64 else dtype\n        W_inv = None\n        \n        \n        #if eps.shape[0] == 2 or (eps.shape[0] == 1 and not UNCOND):\n        if y0_style_pos is not None:\n            y0_style_pos_weight    = transformer_options.get(\"y0_style_pos_weight\")\n            y0_style_pos_synweight = transformer_options.get(\"y0_style_pos_synweight\")\n            y0_style_pos_synweight *= y0_style_pos_weight\n            \n            y0_style_pos = y0_style_pos.to(torch.float32)\n            x   = x_orig.clone().to(torch.float32)\n            eps = eps.to(torch.float32)\n            eps_orig = eps.clone()\n            \n            sigma = SIGMA #t_orig[0].to(torch.float32) / 1000\n            denoised = x - sigma * eps\n\n\n            img = comfy.ldm.common_dit.pad_to_patch_size(denoised, self.patch_size)\n            patch_size = self.patch_size\n\n            denoised_embed          = self.patch_embedding(img.float()) #.to(x.dtype)         # vram jumped from ~16-16.5 up to 17.98     gained 300mb with weights at torch.float8_e4m3fn\n            grid_sizes = denoised_embed.shape[2:]\n            denoised_embed          = denoised_embed.flatten(2).transpose(1, 2) \n\n\n            img_y0_adain = comfy.ldm.common_dit.pad_to_patch_size(y0_style_pos, self.patch_size)\n            patch_size = self.patch_size\n\n            y0_adain_embed          = self.patch_embedding(img_y0_adain.float()) #.to(x.dtype)         # vram jumped from ~16-16.5 up to 17.98     gained 300mb with weights at torch.float8_e4m3fn\n            grid_sizes = y0_adain_embed.shape[2:]\n            y0_adain_embed          = y0_adain_embed.flatten(2).transpose(1, 2) \n\n\n            if transformer_options['y0_style_method'] == \"AdaIN\":\n                denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                for adain_iter in range(EO(\"style_iter\", 0)):\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    #denoised_embed = (denoised_embed - b) @ torch.linalg.pinv(W.to(pinv_dtype)).T.to(dtype)\n                    denoised_embed = self.invert_patch_embedding(denoised_embed, 
x_orig.shape, grid_sizes)\n                    denoised_embed = self.patch_embedding(denoised_embed.float()) #.to(x.dtype)         # vram jumped from ~16-16.5 up to 17.98     gained 300mb with weights at torch.float8_e4m3fn\n                    grid_sizes     = denoised_embed.shape[2:]\n                    denoised_embed = denoised_embed.flatten(2).transpose(1, 2) \n                    \n                    #denoised_embed = F.linear(denoised_embed         .to(W), W, b).to(img)\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    \n                    \n                    \n                    \n            elif transformer_options['y0_style_method'] == \"WCT\":\n                if self.y0_adain_embed is None or self.y0_adain_embed.shape != y0_adain_embed.shape or torch.norm(self.y0_adain_embed - y0_adain_embed) > 0:\n                    self.y0_adain_embed = y0_adain_embed\n                    \n                    f_s          = y0_adain_embed[0].clone()\n                    self.mu_s    = f_s.mean(dim=0, keepdim=True)\n                    f_s_centered = f_s - self.mu_s\n                    \n                    cov = (f_s_centered.T.double() @ f_s_centered.double()) / (f_s_centered.size(0) - 1)\n\n                    S_eig, U_eig = torch.linalg.eigh(cov + 1e-5 * torch.eye(cov.size(0), dtype=cov.dtype, device=cov.device))\n                    S_eig_sqrt    = S_eig.clamp(min=0).sqrt() # eigenvalues -> singular values\n                    \n                    whiten = U_eig @ torch.diag(S_eig_sqrt) @ U_eig.T\n                    self.y0_color  = whiten.to(f_s_centered)\n\n                for wct_i in range(eps.shape[0]):\n                    f_c          = denoised_embed[wct_i].clone()\n                    mu_c         = f_c.mean(dim=0, keepdim=True)\n                    f_c_centered = f_c - mu_c\n                    \n                    cov = (f_c_centered.T.double() @ f_c_centered.double()) / (f_c_centered.size(0) - 1)\n\n                    S_eig, U_eig  = torch.linalg.eigh(cov + 1e-5 * torch.eye(cov.size(0), dtype=cov.dtype, device=cov.device))\n                    inv_sqrt_eig  = S_eig.clamp(min=0).rsqrt() \n                    \n                    whiten = U_eig @ torch.diag(inv_sqrt_eig) @ U_eig.T\n                    whiten = whiten.to(f_c_centered)\n\n                    f_c_whitened = f_c_centered @ whiten.T\n                    f_cs         = f_c_whitened @ self.y0_color.T + self.mu_s\n                    \n                    denoised_embed[wct_i] = f_cs\n\n            denoised_approx = self.invert_patch_embedding(denoised_embed, x_orig.shape, grid_sizes)\n            \n            denoised_approx = denoised_approx.to(eps)\n\n            eps = (x - denoised_approx) / sigma\n            #if eps.shape[0] == 2:\n            #    eps[1] = eps_orig[1] + y0_style_pos_weight * (eps[1] - eps_orig[1])\n            #    eps[0] = eps_orig[0] + y0_style_pos_synweight * (eps[0] - eps_orig[0])\n            #else:\n            #    eps[0] = eps_orig[0] + y0_style_pos_weight * (eps[0] - eps_orig[0])\n            \n            if not UNCOND:\n                if eps.shape[0] == 2:\n                    eps[1] = eps_orig[1] + y0_style_pos_weight * (eps[1] - eps_orig[1])\n                    eps[0] = eps_orig[0] + y0_style_pos_synweight * (eps[0] - eps_orig[0])\n                else:\n                    eps[0] = eps_orig[0] + y0_style_pos_weight * (eps[0] - eps_orig[0])\n            elif eps.shape[0] == 1 and UNCOND:\n                
eps[0] = eps_orig[0] + y0_style_pos_synweight * (eps[0] - eps_orig[0])\n            \n            eps = eps.float()\n        \n        \n        #if eps.shape[0] == 2 or (eps.shape[0] == 1 and UNCOND):\n        if y0_style_neg is not None:\n            y0_style_neg_weight    = transformer_options.get(\"y0_style_neg_weight\")\n            y0_style_neg_synweight = transformer_options.get(\"y0_style_neg_synweight\")\n            y0_style_neg_synweight *= y0_style_neg_weight\n            \n            y0_style_neg = y0_style_neg.to(torch.float32)\n            x   = x_orig.clone().to(torch.float32)\n            eps = eps.to(torch.float32)\n            eps_orig = eps.clone()\n            \n            sigma = SIGMA #t_orig[0].to(torch.float32) / 1000\n            denoised = x - sigma * eps\n\n\n            img = comfy.ldm.common_dit.pad_to_patch_size(denoised, self.patch_size)\n            patch_size = self.patch_size\n\n            denoised_embed          = self.patch_embedding(img.float()) #.to(x.dtype)         # vram jumped from ~16-16.5 up to 17.98     gained 300mb with weights at torch.float8_e4m3fn\n            grid_sizes = denoised_embed.shape[2:]\n            denoised_embed          = denoised_embed.flatten(2).transpose(1, 2) \n\n\n            img_y0_adain = comfy.ldm.common_dit.pad_to_patch_size(y0_style_neg, self.patch_size)\n            patch_size = self.patch_size\n\n            y0_adain_embed          = self.patch_embedding(img_y0_adain.float()) #.to(x.dtype)         # vram jumped from ~16-16.5 up to 17.98     gained 300mb with weights at torch.float8_e4m3fn\n            grid_sizes = y0_adain_embed.shape[2:]\n            y0_adain_embed          = y0_adain_embed.flatten(2).transpose(1, 2) \n\n\n            if transformer_options['y0_style_method'] == \"AdaIN\":\n                denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                for adain_iter in range(EO(\"style_iter\", 0)):\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    #denoised_embed = (denoised_embed - b) @ torch.linalg.pinv(W.to(pinv_dtype)).T.to(dtype)\n                    denoised_embed = self.invert_patch_embedding(denoised_embed, x_orig.shape, grid_sizes)\n                    denoised_embed = self.patch_embedding(denoised_embed.float()) #.to(x.dtype)         # vram jumped from ~16-16.5 up to 17.98     gained 300mb with weights at torch.float8_e4m3fn\n                    grid_sizes = denoised_embed.shape[2:]\n                    denoised_embed = denoised_embed.flatten(2).transpose(1, 2)                         \n                    \n                    #denoised_embed = F.linear(denoised_embed         .to(W), W, b).to(img)\n                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)\n                    \n                    \n                    \n                    \n            elif transformer_options['y0_style_method'] == \"WCT\":\n                if self.y0_adain_embed is None or self.y0_adain_embed.shape != y0_adain_embed.shape or torch.norm(self.y0_adain_embed - y0_adain_embed) > 0:\n                    self.y0_adain_embed = y0_adain_embed\n                    \n                    f_s          = y0_adain_embed[0].clone()\n                    self.mu_s    = f_s.mean(dim=0, keepdim=True)\n                    f_s_centered = f_s - self.mu_s\n                    \n                    cov = (f_s_centered.T.double() @ f_s_centered.double()) / (f_s_centered.size(0) - 1)\n\n                    S_eig, 
U_eig = torch.linalg.eigh(cov + 1e-5 * torch.eye(cov.size(0), dtype=cov.dtype, device=cov.device))\n                    S_eig_sqrt    = S_eig.clamp(min=0).sqrt() # eigenvalues -> singular values\n                    \n                    whiten = U_eig @ torch.diag(S_eig_sqrt) @ U_eig.T\n                    self.y0_color  = whiten.to(f_s_centered)\n\n                for wct_i in range(eps.shape[0]):\n                    f_c          = denoised_embed[wct_i].clone()\n                    mu_c         = f_c.mean(dim=0, keepdim=True)\n                    f_c_centered = f_c - mu_c\n                    \n                    cov = (f_c_centered.T.double() @ f_c_centered.double()) / (f_c_centered.size(0) - 1)\n\n                    S_eig, U_eig  = torch.linalg.eigh(cov + 1e-5 * torch.eye(cov.size(0), dtype=cov.dtype, device=cov.device))\n                    inv_sqrt_eig  = S_eig.clamp(min=0).rsqrt() \n                    \n                    whiten = U_eig @ torch.diag(inv_sqrt_eig) @ U_eig.T\n                    whiten = whiten.to(f_c_centered)\n\n                    f_c_whitened = f_c_centered @ whiten.T\n                    f_cs         = f_c_whitened @ self.y0_color.T + self.mu_s\n                    \n                    denoised_embed[wct_i] = f_cs\n\n            denoised_approx = self.invert_patch_embedding(denoised_embed, x_orig.shape, grid_sizes)\n            \n            denoised_approx = denoised_approx.to(eps)\n\n            #eps = (x - denoised_approx) / sigma\n            #eps[0] = eps_orig[0] + y0_style_neg_weight * (eps[0] - eps_orig[0])\n            #if eps.shape[0] == 2:\n            #    eps[1] = eps_orig[1] + y0_style_neg_synweight * (eps[1] - eps_orig[1])\n                \n            if UNCOND:\n                eps = (x - denoised_approx) / sigma\n                eps[0] = eps_orig[0] + y0_style_neg_weight * (eps[0] - eps_orig[0])\n                if eps.shape[0] == 2:\n                    eps[1] = eps_orig[1] + y0_style_neg_synweight * (eps[1] - eps_orig[1])\n            elif eps.shape[0] == 1 and not UNCOND:\n                eps[0] = eps_orig[0] + y0_style_neg_synweight * (eps[0] - eps_orig[0])\n            \n            eps = eps.float()\n        \n        \n        \n        \n        \n        \n        \n        \n        \n        \n        \n        \n        \n        \n        \n        \n        \n        return eps\n    \n    \n    \n    \n    \n    \n    # context.shape = 2,512,1536     x.shape = 2,14040,1536      timestep.shape      h_len=30, w_len=52    30 * 52 = 1560\n    def forward(self, x, timestep, context, clip_fea=None, transformer_options={}, **kwargs):\n        \n        \"\"\"if False: #clip_fea is not None:\n            bs, c, t, h, w = x.shape\n            x = comfy.ldm.common_dit.pad_to_patch_size(x, self.patch_size)\n            patch_size = self.patch_size    # tuple = 1,2,2,\n            \n            t_len = ((t + (patch_size[0] // 2)) // patch_size[0])\n            h_len = ((h + (patch_size[1] // 2)) // patch_size[1])\n            w_len = ((w + (patch_size[2] // 2)) // patch_size[2])\n            \n            img_ids = torch.zeros((t_len, h_len, w_len, 3), device=x.device, dtype=x.dtype)\n            \n            img_ids[:, :, :, 0] = img_ids[:, :, :, 0] + torch.linspace(0, t_len - 1, steps=t_len, device=x.device, dtype=x.dtype).reshape(-1, 1, 1)\n            img_ids[:, :, :, 1] = img_ids[:, :, :, 1] + torch.linspace(0, h_len - 1, steps=h_len, device=x.device, dtype=x.dtype).reshape(1, -1, 1)\n            img_ids[:, :, :, 2] = img_ids[:, :, 
:, 2] + torch.linspace(0, w_len - 1, steps=w_len, device=x.device, dtype=x.dtype).reshape(1, 1, -1)\n            \n            img_ids = repeat(img_ids, \"t h w c -> b (t h w) c\", b=bs)\n            # 14040 = 9 * 1560       1560 = 1536 + 24  1560/24 = 65\n            freqs = self.rope_embedder(img_ids).movedim(1, 2)\n            return self.forward_orig(x, timestep, context, clip_fea=clip_fea, freqs=freqs)[:, :, :t, :h, :w]\"\"\"\n            \n        \n        \n        \n        #x = torch.cat([x[:,:,:8,...],   torch.flip(x[:,:,8:,...], dims=[2])], dim=2)\n        \n        x_orig = x.clone()      # 1,16,36,60,60   bfloat16\n        timestep_orig = timestep.clone() # 1000    float32\n        context_orig = context.clone() # 1,512,4096 bfloat16\n        \n        \n        out_list = []\n        for i in range(len(transformer_options['cond_or_uncond'])):\n            UNCOND = transformer_options['cond_or_uncond'][i] == 1\n\n            x = x_orig.clone()\n            timestep = timestep_orig.clone()\n            context = context_orig.clone()\n\n\n            bs, c, t, h, w = x.shape\n            x = comfy.ldm.common_dit.pad_to_patch_size(x, self.patch_size)\n            patch_size = self.patch_size\n            \n\n\n            transformer_options['original_shape'] = x.shape\n            transformer_options['patch_size']     = patch_size\n            \n\n            \"\"\"if UNCOND:\n                transformer_options['reg_cond_weight'] = 0.0 # -1\n                context_tmp = context[i][None,...].clone()\"\"\"\n                \n            if UNCOND:\n                #transformer_options['reg_cond_weight'] = -1\n                #context_tmp = context[i][None,...].clone()\n                \n                transformer_options['reg_cond_weight'] = transformer_options.get(\"regional_conditioning_weight\", 0.0)\n                transformer_options['reg_cond_floor']  = transformer_options.get(\"regional_conditioning_floor\",  0.0)\n                transformer_options['reg_cond_mask_orig'] = transformer_options.get('regional_conditioning_mask_orig')\n                \n                AttnMask   = transformer_options.get('AttnMask',   None)\n                RegContext = transformer_options.get('RegContext', None)\n                \n                if AttnMask is not None and transformer_options['reg_cond_weight'] != 0.0:\n                    AttnMask.attn_mask_recast(x.dtype)\n                    context_tmp = RegContext.get().to(context.dtype)\n                    clip_fea    = RegContext.get_clip_fea()\n                    clip_fea    = clip_fea.to(x.dtype) if clip_fea is not None else None\n                    \n                    A = context[i][None,...].clone()\n                    B = context_tmp\n                    context_tmp = A.repeat(1, (B.shape[1] // A.shape[1]) + 1, 1)[:, :B.shape[1], :]\n\n                else:\n                    context_tmp = context[i][None,...].clone()\n            \n            else:\n                transformer_options['reg_cond_weight'] = transformer_options.get(\"regional_conditioning_weight\", 0.0)\n                transformer_options['reg_cond_floor']  = transformer_options.get(\"regional_conditioning_floor\", 0.0)\n                transformer_options['reg_cond_mask_orig'] = transformer_options.get('regional_conditioning_mask_orig')\n                \n                AttnMask   = transformer_options.get('AttnMask',   None)\n                RegContext = transformer_options.get('RegContext', None)\n                \n                if AttnMask is not None and transformer_options['reg_cond_weight'] != 0.0:\n                    AttnMask.attn_mask_recast(x.dtype)\n                    context_tmp  = RegContext.get()\n                    clip_fea     = RegContext.get_clip_fea()\n                    clip_fea     = clip_fea.to(x.dtype) if clip_fea is not None else None\n                else:\n                    context_tmp = context[i][None,...].clone()\n\n            if context_tmp is None:\n                context_tmp = context[i][None,...].clone()\n            context_tmp = context_tmp.to(context.dtype)\n            \n            \n            \n            t_len = ((t + (patch_size[0] // 2)) // patch_size[0])\n            h_len = ((h + (patch_size[1] // 2)) // patch_size[1])\n            w_len = ((w + (patch_size[2] // 2)) // patch_size[2])\n            \n            img_ids = torch.zeros((t_len, h_len, w_len, 3), device=x.device, dtype=x.dtype)\n            \n            img_ids[:, :, :, 0] = img_ids[:, :, :, 0] + torch.linspace(0, t_len - 1, steps=t_len, device=x.device, dtype=x.dtype).reshape(-1, 1, 1)\n            img_ids[:, :, :, 1] = img_ids[:, :, :, 1] + torch.linspace(0, h_len - 1, steps=h_len, device=x.device, dtype=x.dtype).reshape(1, -1, 1)\n            img_ids[:, :, :, 2] = img_ids[:, :, :, 2] + torch.linspace(0, w_len - 1, steps=w_len, device=x.device, dtype=x.dtype).reshape(1, 1, -1)\n            \n            img_ids = repeat(img_ids, \"t h w c -> b (t h w) c\", b=bs)\n            # 14040 = 9 * 1560       1560 = 1536 + 24  1560/24 = 65\n            freqs = self.rope_embedder(img_ids).movedim(1, 2).to(x.dtype)\n            \n            \n            \n            \n            out_x = self.forward_orig(\n                                        x          [i][None,...], \n                                        timestep   [i][None,...], \n                                        context_tmp,\n                                        clip_fea            = clip_fea,\n                                        freqs               = freqs[i][None,...],\n                                        transformer_options = transformer_options,\n                                        UNCOND              = UNCOND,\n                                        )[:, :, :t, :h, :w]\n        \n            #out_x = torch.cat([out_x[:,:,:8,...],   torch.flip(out_x[:,:,8:,...], dims=[2])], dim=2)\n            out_list.append(out_x)\n        \n        out_stack = torch.stack(out_list, dim=0).squeeze(dim=1)\n        \n\n        return out_stack\n        \n\n    def unpatchify(self, x, grid_sizes):\n        r\"\"\"\n        Reconstruct video tensors from patch embeddings.\n\n        Args:\n            x (Tensor):\n                Patchified features with shape [B, L, C_out * prod(patch_size)]\n            grid_sizes (tuple):\n                Original spatial-temporal grid dimensions before patching,\n                    (F_patches, H_patches, W_patches)\n\n        Returns:\n            Tensor:\n                Reconstructed video tensor with shape [B, C_out, F, H, W] (latent spatial dims)\n        \"\"\"\n\n        c = self.out_dim\n        u = x\n        b = u.shape[0]\n        u = u[:, :math.prod(grid_sizes)].view(b, *grid_sizes, *self.patch_size, c)\n        u = torch.einsum('bfhwpqrc->bcfphqwr', u)\n        u = u.reshape(b, c, *[i * j for i, j in zip(grid_sizes, self.patch_size)])\n\n        return u\n\n\n\n\ndef adain_seq_inplace(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:\n    mean_c = content.mean(1, keepdim=True)\n    std_c  = content.std (1, keepdim=True).add_(eps)\n    mean_s = style.mean  (1, keepdim=True)\n    std_s  = style.std   (1, keepdim=True).add_(eps)\n\n    content.sub_(mean_c).div_(std_c).mul_(std_s).add_(mean_s)\n    return content\n"
  },
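  {
    "path": "examples/invert_patch_embedding_sketch.py",
    "content": "# Hypothetical example file (illustrative sketch, not part of the original repo):\n# demonstrates the pseudo-inverse patch-embedding round trip that\n# ReWanModel.invert_patch_embedding in wan/model.py implements\n# (Conv3d patchify -> flatten -> pinv + F.fold recovery).\n# Toy dimensions; exact recovery needs dim >= C_in * pH * pW and a temporal\n# patch size of 1, matching Wan's patch_size=(1, 2, 2).\n\nimport torch\nimport torch.nn.functional as F\n\ntorch.manual_seed(0)\n\nB, C_in, D, H, W = 1, 16, 4, 8, 8\npD, pH, pW = 1, 2, 2\ndim = C_in * pH * pW   # square weight matrix -> pinv inverts exactly\n\npatch_embedding = torch.nn.Conv3d(C_in, dim, kernel_size=(pD, pH, pW), stride=(pD, pH, pW))\n\nx = torch.randn(B, C_in, D, H, W)\nz = patch_embedding(x)                                  # [B, dim, Dp, Hp, Wp]\nDp, Hp, Wp = z.shape[2:]\nz = z.flatten(2).transpose(1, 2)                        # [B, L, dim], as in forward_orig\n\n# --- inversion, mirroring invert_patch_embedding ---\nz = z.transpose(1, 2).reshape(B, dim, Dp, Hp, Wp)\nz = z - patch_embedding.bias.view(1, dim, 1, 1, 1)      # remove the bias\n\nw2 = patch_embedding.weight.squeeze(2)                  # [dim, C_in, pH, pW]; needs pD == 1\nW_pinv = torch.linalg.pinv(w2.reshape(dim, -1))         # [C_in*pH*pW, dim]\n\nz2 = z.permute(0, 2, 1, 3, 4).reshape(B * Dp, dim, -1)  # merge depth, flatten patch grid\nx_patches = W_pinv @ z2                                 # [B*Dp, C_in*pH*pW, L]\nx2 = F.fold(x_patches, output_size=(H, W), kernel_size=(pH, pW), stride=(pH, pW))\nx_recon = x2.reshape(B, Dp, C_in, H, W).permute(0, 2, 1, 3, 4)\n\nprint('max abs reconstruction error:', (x - x_recon).abs().max().item())  # ~1e-5\n"
  },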
  {
    "path": "wan/vae.py",
    "content": "# original version: https://github.com/Wan-Video/Wan2.1/blob/main/wan/modules/vae.py\n# Copyright 2024-2025 The Alibaba Wan Team Authors. All rights reserved.\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom einops import rearrange\nfrom comfy.ldm.modules.diffusionmodules.model import vae_attention\n\nimport comfy.ops\nops = comfy.ops.disable_weight_init\n\nCACHE_T = 2\n\n\nclass CausalConv3d(ops.Conv3d):\n    \"\"\"\n    Causal 3d convolusion.\n    \"\"\"\n\n    def __init__(self, *args, **kwargs):\n        super().__init__(*args, **kwargs)\n        self._padding = (self.padding[2], self.padding[2], self.padding[1],\n                         self.padding[1], 2 * self.padding[0], 0)\n        self.padding = (0, 0, 0)\n\n    def forward(self, x, cache_x=None):\n        padding = list(self._padding)\n        if cache_x is not None and self._padding[4] > 0:\n            cache_x = cache_x.to(x.device)\n            x = torch.cat([cache_x, x], dim=2)\n            padding[4] -= cache_x.shape[2]\n        x = F.pad(x, padding)\n\n        return super().forward(x)\n\n\nclass RMS_norm(nn.Module):\n\n    def __init__(self, dim, channel_first=True, images=True, bias=False):\n        super().__init__()\n        broadcastable_dims = (1, 1, 1) if not images else (1, 1)\n        shape = (dim, *broadcastable_dims) if channel_first else (dim,)\n\n        self.channel_first = channel_first\n        self.scale = dim**0.5\n        self.gamma = nn.Parameter(torch.ones(shape))\n        self.bias = nn.Parameter(torch.zeros(shape)) if bias else None\n\n    def forward(self, x):\n        return F.normalize(\n            x, dim=(1 if self.channel_first else -1)) * self.scale * self.gamma.to(x) + (self.bias.to(x) if self.bias is not None else 0)\n\n\nclass Upsample(nn.Upsample):\n\n    def forward(self, x):\n        \"\"\"\n        Fix bfloat16 support for nearest neighbor interpolation.\n        \"\"\"\n        return super().forward(x.float()).type_as(x)\n\n\nclass Resample(nn.Module):\n\n    def __init__(self, dim, mode):\n        assert mode in ('none', 'upsample2d', 'upsample3d', 'downsample2d',\n                        'downsample3d')\n        super().__init__()\n        self.dim = dim\n        self.mode = mode\n\n        # layers\n        if mode == 'upsample2d':\n            self.resample = nn.Sequential(\n                Upsample(scale_factor=(2., 2.), mode='nearest-exact'),\n                ops.Conv2d(dim, dim // 2, 3, padding=1))\n        elif mode == 'upsample3d':\n            self.resample = nn.Sequential(\n                Upsample(scale_factor=(2., 2.), mode='nearest-exact'),\n                ops.Conv2d(dim, dim // 2, 3, padding=1))\n            self.time_conv = CausalConv3d(\n                dim, dim * 2, (3, 1, 1), padding=(1, 0, 0))\n\n        elif mode == 'downsample2d':\n            self.resample = nn.Sequential(\n                nn.ZeroPad2d((0, 1, 0, 1)),\n                ops.Conv2d(dim, dim, 3, stride=(2, 2)))\n        elif mode == 'downsample3d':\n            self.resample = nn.Sequential(\n                nn.ZeroPad2d((0, 1, 0, 1)),\n                ops.Conv2d(dim, dim, 3, stride=(2, 2)))\n            self.time_conv = CausalConv3d(\n                dim, dim, (3, 1, 1), stride=(2, 1, 1), padding=(0, 0, 0))\n\n        else:\n            self.resample = nn.Identity()\n\n    def forward(self, x, feat_cache=None, feat_idx=[0]):\n        b, c, t, h, w = x.size()\n        if self.mode == 'upsample3d':\n            if feat_cache is not None:\n                
idx = feat_idx[0]\n                if feat_cache[idx] is None:\n                    feat_cache[idx] = 'Rep'\n                    feat_idx[0] += 1\n                else:\n\n                    cache_x = x[:, :, -CACHE_T:, :, :].clone()\n                    if cache_x.shape[2] < 2 and feat_cache[\n                            idx] is not None and feat_cache[idx] != 'Rep':\n                        # cache last frame of last two chunk\n                        cache_x = torch.cat([\n                            feat_cache[idx][:, :, -1, :, :].unsqueeze(2).to(\n                                cache_x.device), cache_x\n                        ],\n                                            dim=2)\n                    if cache_x.shape[2] < 2 and feat_cache[\n                            idx] is not None and feat_cache[idx] == 'Rep':\n                        cache_x = torch.cat([\n                            torch.zeros_like(cache_x).to(cache_x.device),\n                            cache_x\n                        ],\n                                            dim=2)\n                    if feat_cache[idx] == 'Rep':\n                        x = self.time_conv(x)\n                    else:\n                        x = self.time_conv(x, feat_cache[idx])\n                    feat_cache[idx] = cache_x\n                    feat_idx[0] += 1\n\n                    x = x.reshape(b, 2, c, t, h, w)\n                    x = torch.stack((x[:, 0, :, :, :, :], x[:, 1, :, :, :, :]),\n                                    3)\n                    x = x.reshape(b, c, t * 2, h, w)\n        t = x.shape[2]\n        x = rearrange(x, 'b c t h w -> (b t) c h w')\n        x = self.resample(x)\n        x = rearrange(x, '(b t) c h w -> b c t h w', t=t)\n\n        if self.mode == 'downsample3d':\n            if feat_cache is not None:\n                idx = feat_idx[0]\n                if feat_cache[idx] is None:\n                    feat_cache[idx] = x.clone()\n                    feat_idx[0] += 1\n                else:\n\n                    cache_x = x[:, :, -1:, :, :].clone()\n                    # if cache_x.shape[2] < 2 and feat_cache[idx] is not None and feat_cache[idx]!='Rep':\n                    #     # cache last frame of last two chunk\n                    #     cache_x = torch.cat([feat_cache[idx][:, :, -1, :, :].unsqueeze(2).to(cache_x.device), cache_x], dim=2)\n\n                    x = self.time_conv(\n                        torch.cat([feat_cache[idx][:, :, -1:, :, :], x], 2))\n                    feat_cache[idx] = cache_x\n                    feat_idx[0] += 1\n        return x\n\n    def init_weight(self, conv):\n        conv_weight = conv.weight\n        nn.init.zeros_(conv_weight)\n        c1, c2, t, h, w = conv_weight.size()\n        one_matrix = torch.eye(c1, c2)\n        init_matrix = one_matrix\n        nn.init.zeros_(conv_weight)\n        #conv_weight.data[:,:,-1,1,1] = init_matrix * 0.5\n        conv_weight.data[:, :, 1, 0, 0] = init_matrix  #* 0.5\n        conv.weight.data.copy_(conv_weight)\n        nn.init.zeros_(conv.bias.data)\n\n    def init_weight2(self, conv):\n        conv_weight = conv.weight.data\n        nn.init.zeros_(conv_weight)\n        c1, c2, t, h, w = conv_weight.size()\n        init_matrix = torch.eye(c1 // 2, c2)\n        #init_matrix = repeat(init_matrix, 'o ... 
-> (o 2) ...').permute(1,0,2).contiguous().reshape(c1,c2)\n        conv_weight[:c1 // 2, :, -1, 0, 0] = init_matrix\n        conv_weight[c1 // 2:, :, -1, 0, 0] = init_matrix\n        conv.weight.data.copy_(conv_weight)\n        nn.init.zeros_(conv.bias.data)\n\n\nclass ResidualBlock(nn.Module):\n\n    def __init__(self, in_dim, out_dim, dropout=0.0):\n        super().__init__()\n        self.in_dim = in_dim\n        self.out_dim = out_dim\n\n        # layers\n        self.residual = nn.Sequential(\n            RMS_norm(in_dim, images=False), nn.SiLU(),\n            CausalConv3d(in_dim, out_dim, 3, padding=1),\n            RMS_norm(out_dim, images=False), nn.SiLU(), nn.Dropout(dropout),\n            CausalConv3d(out_dim, out_dim, 3, padding=1))\n        self.shortcut = CausalConv3d(in_dim, out_dim, 1) \\\n            if in_dim != out_dim else nn.Identity()\n\n    def forward(self, x, feat_cache=None, feat_idx=[0]):\n        h = self.shortcut(x)\n        for layer in self.residual:\n            if isinstance(layer, CausalConv3d) and feat_cache is not None:\n                idx = feat_idx[0]\n                cache_x = x[:, :, -CACHE_T:, :, :].clone()\n                if cache_x.shape[2] < 2 and feat_cache[idx] is not None:\n                    # cache last frame of last two chunk\n                    cache_x = torch.cat([\n                        feat_cache[idx][:, :, -1, :, :].unsqueeze(2).to(\n                            cache_x.device), cache_x\n                    ],\n                                        dim=2)\n                x = layer(x, feat_cache[idx])\n                feat_cache[idx] = cache_x\n                feat_idx[0] += 1\n            else:\n                x = layer(x)\n        return x + h\n\n\nclass AttentionBlock(nn.Module):\n    \"\"\"\n    Causal self-attention with a single head.\n    \"\"\"\n\n    def __init__(self, dim):\n        super().__init__()\n        self.dim = dim\n\n        # layers\n        self.norm = RMS_norm(dim)\n        self.to_qkv = ops.Conv2d(dim, dim * 3, 1)\n        self.proj = ops.Conv2d(dim, dim, 1)\n        self.optimized_attention = vae_attention()\n\n    def forward(self, x):\n        identity = x\n        b, c, t, h, w = x.size()\n        x = rearrange(x, 'b c t h w -> (b t) c h w')\n        x = self.norm(x)\n        # compute query, key, value\n\n        q, k, v = self.to_qkv(x).chunk(3, dim=1)\n        x = self.optimized_attention(q, k, v)\n\n        # output\n        x = self.proj(x)\n        x = rearrange(x, '(b t) c h w-> b c t h w', t=t)\n        return x + identity\n\n\nclass Encoder3d(nn.Module):\n\n    def __init__(self,\n                 dim=128,\n                 z_dim=4,\n                 dim_mult=[1, 2, 4, 4],\n                 num_res_blocks=2,\n                 attn_scales=[],\n                 temperal_downsample=[True, True, False],\n                 dropout=0.0):\n        super().__init__()\n        self.dim = dim\n        self.z_dim = z_dim\n        self.dim_mult = dim_mult\n        self.num_res_blocks = num_res_blocks\n        self.attn_scales = attn_scales\n        self.temperal_downsample = temperal_downsample\n\n        # dimensions\n        dims = [dim * u for u in [1] + dim_mult]\n        scale = 1.0\n\n        # init block\n        self.conv1 = CausalConv3d(3, dims[0], 3, padding=1)\n\n        # downsample blocks\n        downsamples = []\n        for i, (in_dim, out_dim) in enumerate(zip(dims[:-1], dims[1:])):\n            # residual (+attention) blocks\n            for _ in range(num_res_blocks):\n                
downsamples.append(ResidualBlock(in_dim, out_dim, dropout))\n                if scale in attn_scales:\n                    downsamples.append(AttentionBlock(out_dim))\n                in_dim = out_dim\n\n            # downsample block\n            if i != len(dim_mult) - 1:\n                mode = 'downsample3d' if temperal_downsample[\n                    i] else 'downsample2d'\n                downsamples.append(Resample(out_dim, mode=mode))\n                scale /= 2.0\n        self.downsamples = nn.Sequential(*downsamples)\n\n        # middle blocks\n        self.middle = nn.Sequential(\n            ResidualBlock(out_dim, out_dim, dropout), AttentionBlock(out_dim),\n            ResidualBlock(out_dim, out_dim, dropout))\n\n        # output blocks\n        self.head = nn.Sequential(\n            RMS_norm(out_dim, images=False), nn.SiLU(),\n            CausalConv3d(out_dim, z_dim, 3, padding=1))\n\n    def forward(self, x, feat_cache=None, feat_idx=[0]):\n        if feat_cache is not None:\n            idx = feat_idx[0]\n            cache_x = x[:, :, -CACHE_T:, :, :].clone()\n            if cache_x.shape[2] < 2 and feat_cache[idx] is not None:\n                # cache last frame of last two chunk\n                cache_x = torch.cat([\n                    feat_cache[idx][:, :, -1, :, :].unsqueeze(2).to(\n                        cache_x.device), cache_x\n                ],\n                                    dim=2)\n            x = self.conv1(x, feat_cache[idx])\n            feat_cache[idx] = cache_x\n            feat_idx[0] += 1\n        else:\n            x = self.conv1(x)\n\n        ## downsamples\n        for layer in self.downsamples:\n            if feat_cache is not None:\n                x = layer(x, feat_cache, feat_idx)\n            else:\n                x = layer(x)\n\n        ## middle\n        for layer in self.middle:\n            if isinstance(layer, ResidualBlock) and feat_cache is not None:\n                x = layer(x, feat_cache, feat_idx)\n            else:\n                x = layer(x)\n\n        ## head\n        for layer in self.head:\n            if isinstance(layer, CausalConv3d) and feat_cache is not None:\n                idx = feat_idx[0]\n                cache_x = x[:, :, -CACHE_T:, :, :].clone()\n                if cache_x.shape[2] < 2 and feat_cache[idx] is not None:\n                    # cache last frame of last two chunk\n                    cache_x = torch.cat([\n                        feat_cache[idx][:, :, -1, :, :].unsqueeze(2).to(\n                            cache_x.device), cache_x\n                    ],\n                                        dim=2)\n                x = layer(x, feat_cache[idx])\n                feat_cache[idx] = cache_x\n                feat_idx[0] += 1\n            else:\n                x = layer(x)\n        return x\n\n\nclass Decoder3d(nn.Module):\n\n    def __init__(self,\n                 dim=128,\n                 z_dim=4,\n                 dim_mult=[1, 2, 4, 4],\n                 num_res_blocks=2,\n                 attn_scales=[],\n                 temperal_upsample=[False, True, True],\n                 dropout=0.0):\n        super().__init__()\n        self.dim = dim\n        self.z_dim = z_dim\n        self.dim_mult = dim_mult\n        self.num_res_blocks = num_res_blocks\n        self.attn_scales = attn_scales\n        self.temperal_upsample = temperal_upsample\n\n        # dimensions\n        dims = [dim * u for u in [dim_mult[-1]] + dim_mult[::-1]]\n        scale = 1.0 / 2**(len(dim_mult) - 2)\n\n        # init 
block\n        self.conv1 = CausalConv3d(z_dim, dims[0], 3, padding=1)\n\n        # middle blocks\n        self.middle = nn.Sequential(\n            ResidualBlock(dims[0], dims[0], dropout), AttentionBlock(dims[0]),\n            ResidualBlock(dims[0], dims[0], dropout))\n\n        # upsample blocks\n        upsamples = []\n        for i, (in_dim, out_dim) in enumerate(zip(dims[:-1], dims[1:])):\n            # residual (+attention) blocks\n            if i == 1 or i == 2 or i == 3:\n                in_dim = in_dim // 2\n            for _ in range(num_res_blocks + 1):\n                upsamples.append(ResidualBlock(in_dim, out_dim, dropout))\n                if scale in attn_scales:\n                    upsamples.append(AttentionBlock(out_dim))\n                in_dim = out_dim\n\n            # upsample block\n            if i != len(dim_mult) - 1:\n                mode = 'upsample3d' if temperal_upsample[i] else 'upsample2d'\n                upsamples.append(Resample(out_dim, mode=mode))\n                scale *= 2.0\n        self.upsamples = nn.Sequential(*upsamples)\n\n        # output blocks\n        self.head = nn.Sequential(\n            RMS_norm(out_dim, images=False), nn.SiLU(),\n            CausalConv3d(out_dim, 3, 3, padding=1))\n\n    def forward(self, x, feat_cache=None, feat_idx=[0]):\n        ## conv1\n        if feat_cache is not None:\n            idx = feat_idx[0]\n            cache_x = x[:, :, -CACHE_T:, :, :].clone()\n            if cache_x.shape[2] < 2 and feat_cache[idx] is not None:\n                # cache last frame of last two chunk\n                cache_x = torch.cat([\n                    feat_cache[idx][:, :, -1, :, :].unsqueeze(2).to(\n                        cache_x.device), cache_x\n                ],\n                                    dim=2)\n            x = self.conv1(x, feat_cache[idx])\n            feat_cache[idx] = cache_x\n            feat_idx[0] += 1\n        else:\n            x = self.conv1(x)\n\n        ## middle\n        for layer in self.middle:\n            if isinstance(layer, ResidualBlock) and feat_cache is not None:\n                x = layer(x, feat_cache, feat_idx)\n            else:\n                x = layer(x)\n\n        ## upsamples\n        for layer in self.upsamples:\n            if feat_cache is not None:\n                x = layer(x, feat_cache, feat_idx)\n            else:\n                x = layer(x)\n\n        ## head\n        for layer in self.head:\n            if isinstance(layer, CausalConv3d) and feat_cache is not None:\n                idx = feat_idx[0]\n                cache_x = x[:, :, -CACHE_T:, :, :].clone()\n                if cache_x.shape[2] < 2 and feat_cache[idx] is not None:\n                    # cache last frame of last two chunk\n                    cache_x = torch.cat([\n                        feat_cache[idx][:, :, -1, :, :].unsqueeze(2).to(\n                            cache_x.device), cache_x\n                    ],\n                                        dim=2)\n                x = layer(x, feat_cache[idx])\n                feat_cache[idx] = cache_x\n                feat_idx[0] += 1\n            else:\n                x = layer(x)\n        return x\n\n\ndef count_conv3d(model):\n    count = 0\n    for m in model.modules():\n        if isinstance(m, CausalConv3d):\n            count += 1\n    return count\n\n\nclass WanVAE(nn.Module):\n\n    def __init__(self,\n                 dim=128,\n                 z_dim=4,\n                 dim_mult=[1, 2, 4, 4],\n                 num_res_blocks=2,\n              
   attn_scales=[],\n                 temperal_downsample=[True, True, False],\n                 dropout=0.0):\n        super().__init__()\n        self.dim = dim\n        self.z_dim = z_dim\n        self.dim_mult = dim_mult\n        self.num_res_blocks = num_res_blocks\n        self.attn_scales = attn_scales\n        self.temperal_downsample = temperal_downsample\n        self.temperal_upsample = temperal_downsample[::-1]\n\n        # modules\n        self.encoder = Encoder3d(dim, z_dim * 2, dim_mult, num_res_blocks,\n                                 attn_scales, self.temperal_downsample, dropout)\n        self.conv1 = CausalConv3d(z_dim * 2, z_dim * 2, 1)\n        self.conv2 = CausalConv3d(z_dim, z_dim, 1)\n        self.decoder = Decoder3d(dim, z_dim, dim_mult, num_res_blocks,\n                                 attn_scales, self.temperal_upsample, dropout)\n\n    def forward(self, x):\n        mu, log_var = self.encode(x)\n        z = self.reparameterize(mu, log_var)\n        x_recon = self.decode(z)\n        return x_recon, mu, log_var\n\n    def encode(self, x):\n        self.clear_cache()\n        ## cache\n        t = x.shape[2]\n        iter_ = 1 + (t - 1) // 4\n        ## split the x given to encode along the time axis into chunks of 1, 4, 4, 4, ... frames\n        for i in range(iter_):\n            self._enc_conv_idx = [0]\n            if i == 0:\n                out = self.encoder(\n                    x[:, :, :1, :, :],\n                    feat_cache=self._enc_feat_map,\n                    feat_idx=self._enc_conv_idx)\n            else:\n                out_ = self.encoder(\n                    x[:, :, 1 + 4 * (i - 1):1 + 4 * i, :, :],\n                    feat_cache=self._enc_feat_map,\n                    feat_idx=self._enc_conv_idx)\n                out = torch.cat([out, out_], 2)\n        mu, log_var = self.conv1(out).chunk(2, dim=1)\n        self.clear_cache()\n        # NOTE: only the mean is returned; log_var is discarded. forward() and\n        # sample() above/below still unpack two values, so they are unused vestiges\n        # of the original repo unless encode() is changed to return (mu, log_var).\n        return mu\n\n    def decode(self, z):\n        self.clear_cache()\n        # z: [b,c,t,h,w]\n\n        iter_ = z.shape[2]\n        x = self.conv2(z)\n        for i in range(iter_):\n            self._conv_idx = [0]\n            if i == 0:\n                out = self.decoder(\n                    x[:, :, i:i + 1, :, :],\n                    feat_cache=self._feat_map,\n                    feat_idx=self._conv_idx)\n            else:\n                out_ = self.decoder(\n                    x[:, :, i:i + 1, :, :],\n                    feat_cache=self._feat_map,\n                    feat_idx=self._conv_idx)\n                out = torch.cat([out, out_], 2)\n        self.clear_cache()\n        return out\n\n    def reparameterize(self, mu, log_var):\n        std = torch.exp(0.5 * log_var)\n        eps = torch.randn_like(std)\n        return eps * std + mu\n\n    def sample(self, imgs, deterministic=False):\n        mu, log_var = self.encode(imgs)\n        if deterministic:\n            return mu\n        std = torch.exp(0.5 * log_var.clamp(-30.0, 20.0))\n        return mu + std * torch.randn_like(std)\n\n    def clear_cache(self):\n        self._conv_num = count_conv3d(self.decoder)\n        self._conv_idx = [0]\n        self._feat_map = [None] * self._conv_num\n        #cache encode\n        self._enc_conv_num = count_conv3d(self.encoder)\n        self._enc_conv_idx = [0]\n        self._enc_feat_map = [None] * self._enc_conv_num\n"
  },
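  {
    "path": "examples/causal_conv_cache_sketch.py",
    "content": "# Hypothetical example file (illustrative sketch, not part of the original repo):\n# shows the temporal caching idea behind CausalConv3d / CACHE_T in wan/vae.py.\n# Running a causally padded temporal conv chunk by chunk, carrying the last\n# CACHE_T input frames as a cache, matches running the whole clip at once.\n# Toy shapes; the real encoder splits frames as 1, 4, 4, 4, ...\n\nimport torch\nimport torch.nn.functional as F\n\ntorch.manual_seed(0)\n\nCACHE_T = 2   # kernel_size - 1 frames of temporal context\nconv = torch.nn.Conv3d(4, 4, kernel_size=(3, 1, 1))\nx = torch.randn(1, 4, 9, 2, 2)   # [B, C, T, H, W]\n\n# full clip: causal padding of (kernel - 1) frames at the front only\nfull = conv(F.pad(x, (0, 0, 0, 0, 2, 0)))\n\n# chunked: pad only the first chunk; afterwards prepend the cached tail frames\nouts, cache = [], None\nfor t0 in range(0, x.shape[2], 4):\n    chunk = x[:, :, t0:t0 + 4]\n    if cache is None:\n        chunk = F.pad(chunk, (0, 0, 0, 0, 2, 0))\n    else:\n        chunk = torch.cat([cache, chunk], dim=2)\n    cache = x[:, :, max(0, t0 + 4 - CACHE_T):t0 + 4]   # last CACHE_T input frames\n    outs.append(conv(chunk))\n\nchunked = torch.cat(outs, dim=2)\nprint('chunked == full:', torch.allclose(full, chunked, atol=1e-6))\n"
  },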
  {
    "path": "web/js/RES4LYF_dynamicWidgets.js",
    "content": "import { app } from \"../../scripts/app.js\";\nimport { ComfyWidgets } from \"../../scripts/widgets.js\";\n\nlet RESDEBUG = false;\nlet TOP_CLOWNDOG = true;\nlet DISPLAY_CATEGORY = true;\n\nlet nodeCounter = 1;\nconst processedNodeMap = new WeakMap();\n\nconst originalGetNodeTypesCategories = typeof LiteGraph.getNodeTypesCategories === 'function' ? LiteGraph.getNodeTypesCategories : null;\n\n// Override the getNodeTypesCategories method if it exists\nif (originalGetNodeTypesCategories) {\n    LiteGraph.getNodeTypesCategories = function(filter) {\n        if (TOP_CLOWNDOG == false) {\n            return originalGetNodeTypesCategories.call(this, filter);\n        }\n        \n        try {\n            // Get the original categories\n            const categories = originalGetNodeTypesCategories.call(this, filter);\n            \n            categories.sort((a, b) => {\n                const isARes4Lyf = a.startsWith(\"RES4LYF\");\n                const isBRes4Lyf = b.startsWith(\"RES4LYF\");\n                if (isARes4Lyf && !isBRes4Lyf) return -1;\n                if (!isARes4Lyf && isBRes4Lyf) return 1;\n\n                // Do the other auto sorting if enabled\n                if (LiteGraph.auto_sort_node_types) {\n                    return a.localeCompare(b);\n                }\n                return 0;\n            });\n            return categories;\n        } catch (error) {\n            return originalGetNodeTypesCategories.call(this, filter);\n        }\n    };\n}\n\nfunction debugLog(...args) {\n    let force = false;\n    if (typeof args[args.length - 1] === \"boolean\") {\n        force = args.pop();\n    }\n    if (RESDEBUG || force) {\n        console.log(...args);\n        \n        // Attempt to post the log text to the Python backend\n        const logText = args.join(' ');\n        fetch('/reslyf/log', {\n            method: 'POST',\n            headers: {\n                'Content-Type': 'application/json'\n            },\n            body: JSON.stringify({ log: logText })\n        }).catch(error => {\n            console.error('Error posting log to backend:', error);\n        });\n    }\n}\n\nconst resDebugLog = debugLog;\n\n// Adapted from essentials.DisplayAny from ComfyUI_essentials\napp.registerExtension({\n    name: \"Comfy.RES4LYF.DisplayInfo\",\n    async beforeRegisterNodeDef(nodeType, nodeData, app) {\n        if (!nodeData?.category?.startsWith(\"RES4LYF\")) {\n            return;\n        }\n\n        if (nodeData.name === \"Latent Display State Info\") {\n            const onExecuted = nodeType.prototype.onExecuted;\n\n            nodeType.prototype.onExecuted = function (message) {\n                onExecuted?.apply(this, arguments);\n\n                if (this.widgets && this.widgets.length === 0) {\n\t\t\t\t\tfor (let i = 1; i < this.widgets.length; i++) {\n\t\t\t\t\t\tthis.widgets[i].onRemove?.();\n\t\t\t\t\t}\n\t\t\t\t\tthis.widgets.length = 0;\n\t\t\t\t}\n\n                // Check if the \"text\" widget already exists.\n                let textWidget = this.widgets && this.widgets.length > 0 && this.widgets.find(w => w.name === \"displaytext\");\n                if (!textWidget) {\n                    textWidget = ComfyWidgets[\"STRING\"](this, \"displaytext\", [\"STRING\", { multiline: true }], app).widget;\n                    textWidget.inputEl.readOnly = true;\n                    textWidget.inputEl.style.border = \"none\";\n                    textWidget.inputEl.style.backgroundColor = \"transparent\";\n                }\n        
        textWidget.value = message[\"text\"].join(\"\");\n            };\n        }\n    },\n});\n\napp.registerExtension({\n    name: \"Comfy.RES4LYF.DynamicWidgets\",\n\n    async setup(app) {\n        app.ui.settings.addSetting({\n            id: \"RES4LYF.topClownDog\",\n            name: \"RES4LYF: Top ClownDog\",\n            defaultValue: true,\n            type: \"boolean\",\n            options: [\n                { value: true, text: \"On\" },\n                { value: false, text: \"Off\" },\n            ],\n            onChange: (value) => {\n                TOP_CLOWNDOG = value;\n                debugLog(`Top ClownDog ${value ? \"enabled\" : \"disabled\"}`);\n                \n                // Send to backend\n                fetch('/reslyf/settings', {\n                    method: 'POST',\n                    headers: {\n                        'Content-Type': 'application/json'\n                    },\n                    body: JSON.stringify({\n                        setting: \"topClownDog\",\n                        value: value\n                    })\n                }).catch(error => {\n                    debugLog(`Error updating topClownDog setting: ${error}`);\n                });\n            },\n        });\n                \n        app.ui.settings.addSetting({\n            id: \"RES4LYF.enableDebugLogs\",\n            name: \"RES4LYF: Enable debug logging to console\",\n            defaultValue: false,\n            type: \"boolean\",\n            options: [\n                { value: true, text: \"On\" },\n                { value: false, text: \"Off\" },\n            ],\n            onChange: (value) => {\n                RESDEBUG = value;\n                debugLog(`Debug logging ${value ? \"enabled\" : \"disabled\"}`);\n                \n                // Send to backend\n                fetch('/reslyf/settings', {\n                    method: 'POST',\n                    headers: {\n                        'Content-Type': 'application/json'\n                    },\n                    body: JSON.stringify({\n                        setting: \"enableDebugLogs\",\n                        value: value\n                    })\n                }).catch(error => {\n                    debugLog(`Error updating enableDebugLogs setting: ${error}`);\n                });\n            },\n        });\n        \n        app.ui.settings.addSetting({\n            id: \"RES4LYF.displayCategory\",\n            name: \"RES4LYF: Display Category in Sampler Names (requires browser refresh)\",\n            defaultValue: true,\n            type: \"boolean\",\n            options: [\n                { value: true, text: \"On\" },\n                { value: false, text: \"Off\" },\n            ],\n            onChange: (value) => {\n                DISPLAY_CATEGORY = value;\n                resDebugLog(`Display Category ${value ? 
\"enabled\" : \"disabled\"}`);\n                \n                // Send to backend\n                fetch('/reslyf/settings', {\n                    method: 'POST',\n                    headers: {\n                        'Content-Type': 'application/json'\n                    },\n                    body: JSON.stringify({\n                        setting: \"displayCategory\",\n                        value: value\n                    })\n                }).catch(error => {\n                    resDebugLog(`Error updating displayCategory setting: ${error}`);\n                });\n            },\n        });\n        \n    },\n\n\n    nodeCreated(node) {\n        if (NODES_WITH_EXPANDABLE_OPTIONS.includes(node.comfyClass)) {\n            //debugLog(`Setting up expandable options for ${node.comfyClass}`, true);\n            setupExpandableOptions(node);\n        }\n    }\n});\n\nconst NODES_WITH_EXPANDABLE_OPTIONS = [\n    \"ClownsharKSampler_Beta\",\n    \"ClownsharkChainsampler_Beta\",\n    \"SharkChainsampler_Beta\",\n\n\n    \"ClownSampler_Beta\",\n    \"ClownSamplerAdvanced_Beta\",\n\n    \"SharkSampler\",\n    \"SharkSampler_Beta\",\n    \"SharkSamplerAdvanced_Beta\",\n\n    \"ClownOptions_Combine\",\n]\n\nfunction setupExpandableOptions(node) {\n    if (!processedNodeMap.has(node)) {\n        processedNodeMap.set(node, ++nodeCounter);\n        //debugLog(`Assigned ID ${nodeCounter} to node ${node.comfyClass}`);\n    } else {\n        //debugLog(`Node ${node.comfyClass} already processed with ID ${processedNodeMap.get(node)} - skipping`);\n        return;\n    }\n        \n    const originalOnConnectionsChange = node.onConnectionsChange;\n    \n    const hasOptionsInput = node.inputs.some(input => input.name === \"options\");\n    if (!hasOptionsInput) {\n        //debugLog(`Node ${node.comfyClass} doesn't have an options input - skipping`);\n        return;\n    }\n    \n    node.onConnectionsChange = function(type, index, connected, link_info) {\n        if (originalOnConnectionsChange) {\n            originalOnConnectionsChange.call(this, type, index, connected, link_info);\n        }\n        \n        if (type === LiteGraph.INPUT && !connected) {\n            const input = this.inputs[index];\n            if (!input || !input.name.startsWith(\"options\")) {\n                return;\n            }\n            \n            //debugLog(`Options input disconnected: ${input.name}`);\n            \n            // setTimeout to let the graph update first\n            setTimeout(() => {\n                cleanupOptionsInputs(this);\n            }, 100);\n            return;\n        }\n        \n        if (type === LiteGraph.INPUT && connected && link_info) {\n            const input = this.inputs[index];\n            if (!input || !input.name.startsWith(\"options\")) {\n                return;\n            }\n            \n            let hasEmptyOptions = false;\n            for (let i = 0; i < this.inputs.length; i++) {\n                const input = this.inputs[i];\n                if (input.name.startsWith(\"options\") && input.link === null) {\n                    hasEmptyOptions = true;\n                    break;\n                }\n            }\n            \n            if (!hasEmptyOptions) {\n                //debugLog(`All options inputs are connected, adding a new one`);\n                \n                // Find the highest index number in existing options inputs\n                let maxIndex = 0;\n                for (let i = 0; i < this.inputs.length; i++) {\n         
           const input = this.inputs[i];\n                    if (input.name === \"options\") {\n                        continue; // Skip the base \"options\" input\n                    } else if (input.name.startsWith(\"options \")) {\n                        const match = input.name.match(/options (\\d+)/);\n                        if (match) {\n                            const index = parseInt(match[1]) - 1;\n                            maxIndex = Math.max(maxIndex, index);\n                        }\n                    }\n                }\n                \n                const newName = maxIndex === 0 ? \"options 2\" : `options ${maxIndex + 2}`;\n                this.addInput(newName, \"OPTIONS\");\n                //debugLog(`Created new options input: ${newName}`);\n                \n                this.setDirtyCanvas(true, true);\n            }\n        }\n    };\n\n    const optionsInputs = node.inputs.filter(input => \n        input.name.startsWith(\"options\")\n    );\n    \n    const baseOptionsInput = optionsInputs.find(input => input.name === \"options\");\n    const hasOptionsWithIndex = optionsInputs.some(input => input.name !== \"options\");\n    \n    // if (baseOptionsInput && !hasOptionsWithIndex) {\n    //     debugLog(`Adding initial options 1 input to ${node.comfyClass}`);\n    //     node.addInput(\"options 1\", \"OPTIONS\");\n    //     node.setDirtyCanvas(true, true);\n    // }\n    \n    const originalOnConfigure = node.onConfigure;\n    node.onConfigure = function(info) {\n        if (originalOnConfigure) {\n            originalOnConfigure.call(this, info);\n        }\n        \n        let hasEmptyOptions = false;\n        for (let i = 0; i < this.inputs.length; i++) {\n            const input = this.inputs[i];\n            if (input.name.startsWith(\"options\") && input.link === null) {\n                hasEmptyOptions = true;\n                break;\n            }\n        }\n        \n        if (!hasEmptyOptions && this.inputs.some(i => i.name.startsWith(\"options\"))) {\n            let maxIndex = 0;\n            for (let i = 0; i < this.inputs.length; i++) {\n                const input = this.inputs[i];\n                if (input.name === \"options\") {\n                    continue;\n                } else if (input.name.startsWith(\"options \")) {\n                    const match = input.name.match(/options (\\d+)/);\n                    if (match) {\n                        const index = parseInt(match[1]) - 1;\n                        maxIndex = Math.max(maxIndex, index);\n                    }\n                }\n            }\n            \n            const newName = maxIndex === 0 ? 
\"options 2\" : `options ${maxIndex + 2}`;\n            this.addInput(newName, \"OPTIONS\");\n        }\n    };\n\n    function cleanupOptionsInputs(node) {\n        const optionsInputs = [];\n        for (let i = 0; i < node.inputs.length; i++) {\n            const input = node.inputs[i];\n            if (input.name.startsWith(\"options\")) {\n                optionsInputs.push({\n                    index: i,\n                    name: input.name,\n                    connected: input.link !== null,\n                    isBase: input.name === \"options\"\n                });\n            }\n        }\n        \n        const baseInput = optionsInputs.find(info => info.isBase);\n        const nonBaseInputs = optionsInputs.filter(info => !info.isBase);\n        \n        let needsRenumbering = false;\n        \n        if (baseInput && !baseInput.connected && nonBaseInputs.every(info => !info.connected)) {\n            nonBaseInputs.sort((a, b) => b.index - a.index);\n            \n            for (const inputInfo of nonBaseInputs) {\n                //debugLog(`Removing unnecessary options input: ${inputInfo.name} (index ${inputInfo.index})`);\n                node.removeInput(inputInfo.index);\n                needsRenumbering = true;\n            }\n            \n            node.setDirtyCanvas(true, true);\n            return;\n        }\n        \n        const disconnectedInputs = nonBaseInputs.filter(info => !info.connected);\n        \n        if (disconnectedInputs.length > 1) {\n            disconnectedInputs.sort((a, b) => b.index - a.index);\n            \n            for (let i = 1; i < disconnectedInputs.length; i++) {\n                //debugLog(`Removing unnecessary options input: ${disconnectedInputs[i].name} (index ${disconnectedInputs[i].index})`);\n                node.removeInput(disconnectedInputs[i].index);\n                needsRenumbering = true;\n            }\n        }\n        \n        const hasConnectedOptions = optionsInputs.some(info => info.connected);\n        const hasEmptyOptions = optionsInputs.some(info => !info.connected && !info.isBase);\n        \n        if (hasConnectedOptions && !hasEmptyOptions) {\n            node.addInput(\"options temp\", \"OPTIONS\");\n            //debugLog(`Added new empty options input`);\n            needsRenumbering = true;\n        }\n        \n        if (needsRenumbering) {\n            renumberOptionsInputs(node);\n            node.setDirtyCanvas(true, true);\n        }\n    }\n    \n    function renumberOptionsInputs(node) {\n        const optionsInfo = [];\n        for (let i = 0; i < node.inputs.length; i++) {\n            const input = node.inputs[i];\n            if (input.name.startsWith(\"options\")) {\n                if (input.name === \"options\") {\n                    continue;\n                }\n                \n                optionsInfo.push({\n                    index: i,\n                    connected: input.link !== null,\n                    name: input.name\n                });\n            }\n        }\n        \n        optionsInfo.sort((a, b) => {\n            if (a.connected !== b.connected) {\n                return b.connected ? 
1 : -1; // Connected inputs first\n            }\n            return a.index - b.index;\n        });\n        \n        for (let i = 0; i < optionsInfo.length; i++) {\n            const inputInfo = optionsInfo[i];\n            const newName = `options ${i + 2}`;\n            \n            if (inputInfo.name !== newName) {\n                //debugLog(`Renaming ${inputInfo.name} to ${newName}`);\n                node.inputs[inputInfo.index].name = newName;\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "web/js/conditioningToBase64.js",
    "content": "import { app } from \"../../../scripts/app.js\";\nimport { ComfyWidgets } from \"../../../scripts/widgets.js\";\n\n// Displays input text on a node\napp.registerExtension({\n\tname: \"res4lyf.ConditioningToBase64\",\n\tasync beforeRegisterNodeDef(nodeType, nodeData, app) {\n\t\tif (nodeData.name === \"ConditioningToBase64\") {\n\t\t\tfunction populate(text) {\n\t\t\t\tif (this.widgets) {\n\t\t\t\t\tfor (let i = 1; i < this.widgets.length; i++) {\n\t\t\t\t\t\tthis.widgets[i].onRemove?.();\n\t\t\t\t\t}\n\t\t\t\t\tthis.widgets.length = 1;\n\t\t\t\t}\n\n\t\t\t\tconst v = [...text];\n\t\t\t\tif (!v[0]) {\n\t\t\t\t\tv.shift();\n\t\t\t\t}\n\t\t\t\tfor (const list of v) {\n\t\t\t\t\tconst w = ComfyWidgets[\"STRING\"](this, \"text2\", [\"STRING\", { multiline: true }], app).widget;\n\t\t\t\t\tw.inputEl.readOnly = true;\n\t\t\t\t\tw.inputEl.style.opacity = 0.6;\n\t\t\t\t\tw.value = list;\n\t\t\t\t}\n\n\t\t\t\trequestAnimationFrame(() => {\n\t\t\t\t\tconst sz = this.computeSize();\n\t\t\t\t\tif (sz[0] < this.size[0]) {\n\t\t\t\t\t\tsz[0] = this.size[0];\n\t\t\t\t\t}\n\t\t\t\t\tif (sz[1] < this.size[1]) {\n\t\t\t\t\t\tsz[1] = this.size[1];\n\t\t\t\t\t}\n\t\t\t\t\tthis.onResize?.(sz);\n\t\t\t\t\tapp.graph.setDirtyCanvas(true, false);\n\t\t\t\t});\n\t\t\t}\n\n\t\t\t// When the node is executed we will be sent the input text, display this in the widget\n\t\t\tconst onExecuted = nodeType.prototype.onExecuted;\n\t\t\tnodeType.prototype.onExecuted = function (message) {\n\t\t\t\tonExecuted?.apply(this, arguments);\n\t\t\t\tpopulate.call(this, message.text);\n\t\t\t};\n\n\t\t\tconst onConfigure = nodeType.prototype.onConfigure;\n\t\t\tnodeType.prototype.onConfigure = function () {\n\t\t\t\tonConfigure?.apply(this, arguments);\n\t\t\t\tif (this.widgets_values?.length) {\n\t\t\t\t\tpopulate.call(this, this.widgets_values.slice(+this.widgets_values.length > 1));\n\t\t\t\t}\n\t\t\t};\n\t\t}\n\t},\n});\n"
  },
  {
    "path": "web/js/res4lyf.default.json",
    "content": "{\n    \"name\": \"RES4LYF\",\n    \"topClownDog\": true,\n    \"enableDebugLogs\": false,\n    \"displayCategory\": true\n}"
  }
]