Full Code of ClownsharkBatwing/RES4LYF for AI

Tokens: 1.0M
Symbols: 3031
Repository: ClownsharkBatwing/RES4LYF
Branch: main
Commit: 0dc91c00c4c3
Files: 134
Total size: 3.9 MB

Directory structure:
gitextract__f8xgulp/

├── .gitignore
├── LICENSE
├── README.md
├── __init__.py
├── attention_masks.py
├── aura/
│   └── mmdit.py
├── beta/
│   ├── __init__.py
│   ├── constants.py
│   ├── deis_coefficients.py
│   ├── noise_classes.py
│   ├── phi_functions.py
│   ├── rk_coefficients_beta.py
│   ├── rk_guide_func_beta.py
│   ├── rk_method_beta.py
│   ├── rk_noise_sampler_beta.py
│   ├── rk_sampler_beta.py
│   ├── samplers.py
│   └── samplers_extensions.py
├── chroma/
│   ├── layers.py
│   ├── math.py
│   └── model.py
├── conditioning.py
├── example_workflows/
│   ├── chroma regional antiblur.json
│   ├── chroma txt2img.json
│   ├── comparison ksampler vs csksampler chain workflows.json
│   ├── flux faceswap sync pulid.json
│   ├── flux faceswap sync.json
│   ├── flux faceswap.json
│   ├── flux inpaint area.json
│   ├── flux inpaint bongmath.json
│   ├── flux inpainting.json
│   ├── flux regional antiblur.json
│   ├── flux regional redux (2 zone).json
│   ├── flux regional redux (3 zone, nested).json
│   ├── flux regional redux (3 zone, overlapping).json
│   ├── flux regional redux (3 zones).json
│   ├── flux style antiblur.json
│   ├── flux style transfer gguf.json
│   ├── flux upscale thumbnail large multistage.json
│   ├── flux upscale thumbnail large.json
│   ├── flux upscale thumbnail widescreen.json
│   ├── hidream guide data projection.json
│   ├── hidream guide epsilon projection.json
│   ├── hidream guide flow.json
│   ├── hidream guide fully_pseudoimplicit.json
│   ├── hidream guide lure.json
│   ├── hidream guide pseudoimplicit.json
│   ├── hidream hires fix.json
│   ├── hidream regional 3 zones.json
│   ├── hidream regional antiblur.json
│   ├── hidream style antiblur.json
│   ├── hidream style transfer txt2img.json
│   ├── hidream style transfer v2.json
│   ├── hidream style transfer.json
│   ├── hidream txt2img.json
│   ├── hidream unsampling data WF.json
│   ├── hidream unsampling data.json
│   ├── hidream unsampling pseudoimplicit.json
│   ├── hidream unsampling.json
│   ├── intro to clownsampling.json
│   ├── sd35 medium unsampling data.json
│   ├── sd35 medium unsampling.json
│   ├── sdxl regional antiblur.json
│   ├── sdxl style transfer.json
│   ├── style transfer.json
│   ├── ultracascade txt2img style transfer.json
│   ├── ultracascade txt2img.json
│   ├── wan img2vid 720p (fp8 fast).json
│   ├── wan txt2img (fp8 fast).json
│   └── wan vid2vid.json
├── flux/
│   ├── controlnet.py
│   ├── layers.py
│   ├── math.py
│   ├── model.py
│   └── redux.py
├── helper.py
├── helper_sigma_preview_image_preproc.py
├── hidream/
│   └── model.py
├── images.py
├── latent_images.py
├── latents.py
├── legacy/
│   ├── __init__.py
│   ├── conditioning.py
│   ├── constants.py
│   ├── deis_coefficients.py
│   ├── flux/
│   │   ├── controlnet.py
│   │   ├── layers.py
│   │   ├── math.py
│   │   ├── model.py
│   │   └── redux.py
│   ├── helper.py
│   ├── latents.py
│   ├── legacy_sampler_rk.py
│   ├── legacy_samplers.py
│   ├── models.py
│   ├── noise_classes.py
│   ├── noise_sigmas_timesteps_scaling.py
│   ├── phi_functions.py
│   ├── rk_coefficients.py
│   ├── rk_guide_func.py
│   ├── rk_method.py
│   ├── rk_sampler.py
│   ├── samplers.py
│   ├── samplers_extensions.py
│   ├── samplers_tiled.py
│   ├── sigmas.py
│   └── tiling.py
├── lightricks/
│   ├── model.py
│   ├── symmetric_patchifier.py
│   └── vae/
│       ├── causal_conv3d.py
│       ├── causal_video_autoencoder.py
│       ├── conv_nd_factory.py
│       ├── dual_conv3d.py
│       └── pixel_norm.py
├── loaders.py
├── misc_scripts/
│   └── replace_metadata.py
├── models.py
├── nodes_latents.py
├── nodes_misc.py
├── nodes_precision.py
├── requirements.txt
├── res4lyf.py
├── rk_method_beta.py
├── samplers_extensions.py
├── sd/
│   ├── attention.py
│   └── openaimodel.py
├── sd35/
│   └── mmdit.py
├── sigmas.py
├── style_transfer.py
├── wan/
│   ├── model.py
│   └── vae.py
└── web/
    └── js/
        ├── RES4LYF_dynamicWidgets.js
        ├── conditioningToBase64.js
        └── res4lyf.default.json

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
__pycache__/
.idea/
.vscode/
.tmp
.cache
tests/
/*.json
*.config.json

================================================
FILE: LICENSE
================================================
The use of this software or any derivative work for the purpose of 
providing a commercial service, such as (but not limited to) an
AI image generation service, is strictly prohibited without obtaining 
permission and/or a separate commercial license from the copyright holder. 
This includes any service that charges users directly or indirectly for 
access to this software's functionality, whether standalone or integrated 
into a larger product.

                    GNU AFFERO GENERAL PUBLIC LICENSE
                       Version 3, 19 November 2007

 Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

  The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.

  The licenses for most software and other practical works are designed
to take away your freedom to share and change the works.  By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.

  When we speak of free software, we are referring to freedom, not
price.  Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.

  Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.

  A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate.  Many developers of free software are heartened and
encouraged by the resulting cooperation.  However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.

  The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community.  It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server.  Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.

  An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals.  This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.

  The precise terms and conditions for copying, distribution and
modification follow.

                       TERMS AND CONDITIONS

  0. Definitions.

  "This License" refers to version 3 of the GNU Affero General Public License.

  "Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.

  "The Program" refers to any copyrightable work licensed under this
License.  Each licensee is addressed as "you".  "Licensees" and
"recipients" may be individuals or organizations.

  To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy.  The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.

  A "covered work" means either the unmodified Program or a work based
on the Program.

  To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy.  Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.

  To "convey" a work means any kind of propagation that enables other
parties to make or receive copies.  Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.

  An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License.  If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.

  1. Source Code.

  The "source code" for a work means the preferred form of the work
for making modifications to it.  "Object code" means any non-source
form of a work.

  A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.

  The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form.  A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.

  The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities.  However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work.  For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.

  The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.

  The Corresponding Source for a work in source code form is that
same work.

  2. Basic Permissions.

  All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met.  This License explicitly affirms your unlimited
permission to run the unmodified Program.  The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work.  This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.

  You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force.  You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright.  Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.

  Conveying under any other circumstances is permitted solely under
the conditions stated below.  Sublicensing is not allowed; section 10
makes it unnecessary.

  3. Protecting Users' Legal Rights From Anti-Circumvention Law.

  No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.

  When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.

  4. Conveying Verbatim Copies.

  You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.

  You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.

  5. Conveying Modified Source Versions.

  You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:

    a) The work must carry prominent notices stating that you modified
    it, and giving a relevant date.

    b) The work must carry prominent notices stating that it is
    released under this License and any conditions added under section
    7.  This requirement modifies the requirement in section 4 to
    "keep intact all notices".

    c) You must license the entire work, as a whole, under this
    License to anyone who comes into possession of a copy.  This
    License will therefore apply, along with any applicable section 7
    additional terms, to the whole of the work, and all its parts,
    regardless of how they are packaged.  This License gives no
    permission to license the work in any other way, but it does not
    invalidate such permission if you have separately received it.

    d) If the work has interactive user interfaces, each must display
    Appropriate Legal Notices; however, if the Program has interactive
    interfaces that do not display Appropriate Legal Notices, your
    work need not make them do so.

  A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit.  Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.

  6. Conveying Non-Source Forms.

  You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:

    a) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by the
    Corresponding Source fixed on a durable physical medium
    customarily used for software interchange.

    b) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by a
    written offer, valid for at least three years and valid for as
    long as you offer spare parts or customer support for that product
    model, to give anyone who possesses the object code either (1) a
    copy of the Corresponding Source for all the software in the
    product that is covered by this License, on a durable physical
    medium customarily used for software interchange, for a price no
    more than your reasonable cost of physically performing this
    conveying of source, or (2) access to copy the
    Corresponding Source from a network server at no charge.

    c) Convey individual copies of the object code with a copy of the
    written offer to provide the Corresponding Source.  This
    alternative is allowed only occasionally and noncommercially, and
    only if you received the object code with such an offer, in accord
    with subsection 6b.

    d) Convey the object code by offering access from a designated
    place (gratis or for a charge), and offer equivalent access to the
    Corresponding Source in the same way through the same place at no
    further charge.  You need not require recipients to copy the
    Corresponding Source along with the object code.  If the place to
    copy the object code is a network server, the Corresponding Source
    may be on a different server (operated by you or a third party)
    that supports equivalent copying facilities, provided you maintain
    clear directions next to the object code saying where to find the
    Corresponding Source.  Regardless of what server hosts the
    Corresponding Source, you remain obligated to ensure that it is
    available for as long as needed to satisfy these requirements.

    e) Convey the object code using peer-to-peer transmission, provided
    you inform other peers where the object code and Corresponding
    Source of the work are being offered to the general public at no
    charge under subsection 6d.

  A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.

  A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling.  In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage.  For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product.  A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.

  "Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source.  The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.

  If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information.  But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).

  The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed.  Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.

  Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.

  7. Additional Terms.

  "Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law.  If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.

  When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it.  (Additional permissions may be written to require their own
removal in certain cases when you modify the work.)  You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.

  Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:

    a) Disclaiming warranty or limiting liability differently from the
    terms of sections 15 and 16 of this License; or

    b) Requiring preservation of specified reasonable legal notices or
    author attributions in that material or in the Appropriate Legal
    Notices displayed by works containing it; or

    c) Prohibiting misrepresentation of the origin of that material, or
    requiring that modified versions of such material be marked in
    reasonable ways as different from the original version; or

    d) Limiting the use for publicity purposes of names of licensors or
    authors of the material; or

    e) Declining to grant rights under trademark law for use of some
    trade names, trademarks, or service marks; or

    f) Requiring indemnification of licensors and authors of that
    material by anyone who conveys the material (or modified versions of
    it) with contractual assumptions of liability to the recipient, for
    any liability that these contractual assumptions directly impose on
    those licensors and authors.

  All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10.  If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term.  If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.

  If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.

  Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.

  8. Termination.

  You may not propagate or modify a covered work except as expressly
provided under this License.  Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).

  However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.

  Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.

  Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License.  If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.

  9. Acceptance Not Required for Having Copies.

  You are not required to accept this License in order to receive or
run a copy of the Program.  Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance.  However,
nothing other than this License grants you permission to propagate or
modify any covered work.  These actions infringe copyright if you do
not accept this License.  Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.

  10. Automatic Licensing of Downstream Recipients.

  Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License.  You are not responsible
for enforcing compliance by third parties with this License.

  An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations.  If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.

  You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License.  For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.

  11. Patents.

  A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based.  The
work thus licensed is called the contributor's "contributor version".

  A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version.  For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.

  Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.

  In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement).  To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.

  If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients.  "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.

  If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.

  A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License.  You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.

  Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.

  12. No Surrender of Others' Freedom.

  If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License.  If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all.  For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.

  13. Remote Network Interaction; Use with the GNU General Public License.

  Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software.  This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.

  Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work.  The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.

  14. Revised Versions of this License.

  The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time.  Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.

  Each version is given a distinguishing version number.  If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation.  If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.

  If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.

  Later license versions may give you additional or different
permissions.  However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.

  15. Disclaimer of Warranty.

  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

  16. Limitation of Liability.

  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.

  17. Interpretation of Sections 15 and 16.

  If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.

                     END OF TERMS AND CONDITIONS

            How to Apply These Terms to Your New Programs

  If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.

  To do so, attach the following notices to the program.  It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.

    <one line to give the program's name and a brief idea of what it does.>
    Copyright (C) <year>  <name of author>

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU Affero General Public License as published
    by the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU Affero General Public License for more details.

    You should have received a copy of the GNU Affero General Public License
    along with this program.  If not, see <https://www.gnu.org/licenses/>.

Also add information on how to contact you by electronic and paper mail.

  If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source.  For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code.  There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.

  You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.


================================================
FILE: README.md
================================================
# SUPERIOR SAMPLING WITH RES4LYF: THE POWER OF BONGMATH

RES_3M vs. Uni-PC (WAN). Typically only 20 steps are needed with RES samplers. Far more are needed with Uni-PC and other common samplers, and they never reach the same level of quality.

![res_3m_vs_unipc_1](https://github.com/user-attachments/assets/9321baf9-2d68-4fe8-9427-fcf0609bd02b)
![res_3m_vs_unipc_2](https://github.com/user-attachments/assets/d7ab48e4-51dd-4fa7-8622-160c8f9e33d6)


# INSTALLATION

If you are using a venv, you will first need to activate it from within your ComfyUI folder (the one that contains your "venv" folder):

_Linux:_

source venv/bin/activate

_Windows:_

venv\Scripts\activate

_Then, "cd" into your "custom_nodes" folder and run the following commands:_

git clone https://github.com/ClownsharkBatwing/RES4LYF/

cd RES4LYF

_If you are using a venv, run this command:_

pip install -r requirements.txt

_Alternatively, if you are using the portable version of ComfyUI, you will need to replace "pip" with the path to your embedded pip executable. For example, on Windows:_

X:\path\to\your\comfy_portable_folder\python_embedded\Scripts\pip.exe install -r requirements.txt


# IMPORTANT UPDATE INFO

The previous versions will remain available but with "Legacy" prepended to their names.

If you wish to use the sampler menu shown below, you will need to install https://github.com/rgthree/rgthree-comfy (which I highly recommend you have regardless).

![image](https://github.com/user-attachments/assets/b36360bb-a59e-4654-aed7-6b6f53673826)

If these menus do not show up after restarting ComfyUI and refreshing the page (hit F5, not just "r"), verify that they are enabled in the rgthree settings (click the gear in the bottom left of ComfyUI, select rgthree, and ensure "Auto Nest Subdirectories" is checked):

![image](https://github.com/user-attachments/assets/db46fc90-df1a-4d1c-b6ed-c44d26b8a9b3)


# NEW VERSION DOCUMENTATION

I have prepared a detailed explanation of many sampling concepts, with examples, in this workflow. There are also many tips and explanations of parameters, and all of the most important nodes are laid out for you to see. Some new workflow-enhancing tricks like "chainsamplers" are demonstrated, and **regional AND temporal prompting** are explained (supporting Flux, HiDream, SD3.5, AuraFlow, and WAN - you can even change the conditioning on a frame-by-frame basis!).

[example_workflows/intro to clownsampling.json](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/example_workflows/intro%20to%20clownsampling.json)

![intro to clownsampling](https://github.com/user-attachments/assets/40c23993-c70e-4a71-9207-4cee4b7e71e0)



# STYLE TRANSFER

Supported models: HiDream, Flux, Chroma, AuraFlow, SD1.5, SDXL, SD3.5, LTXV, and WAN. Also supported: Stable Cascade (and UltraPixel), which has an excellent understanding of style (https://github.com/ClownsharkBatwing/UltraCascade).

Currently, the best results are with HiDream or Chroma, or Flux with a style lora (Flux Dev is very lacking in style knowledge). Include some mention of the style you wish to use in the prompt. (Try with the guide off to confirm the prompt is not doing the heavy lifting!)

![image](https://github.com/user-attachments/assets/a62593fa-b104-4347-bf69-e1e50217ce2d)


For example, the prompt for the below was simply "a gritty illustration of a japanese woman with traditional hair in traditional clothes". Mostly you just need to make clear whether it's supposed to be a photo or an illustration, etc. so that the conditioning isn't fighting the style guide (every model has its inherent biases).

![image](https://github.com/user-attachments/assets/e872e258-c786-4475-8369-c8487ee5ec72)

**COMPOSITION GUIDE; OUTPUT; STYLE GUIDE**

![style example](https://github.com/user-attachments/assets/4970c6ea-d142-4e4e-967a-59ff93528840)

![image](https://github.com/user-attachments/assets/fb071885-48b8-4698-9288-63a2866cb67b)

# KILL FLUX BLUR (and HiDream blur)

**Consecutive seeds, no cherrypicking.**

![antiblur](https://github.com/user-attachments/assets/5bc0e1e3-82e1-4ccc-8d39-64a939815e57)


# REGIONAL CONDITIONING

Unlimited zones! Over 10 zones have been used in one image before. 

Currently supported models: HiDream, Flux, Chroma, SD3.5, SD1.5, SDXL, AuraFlow, and WAN.

Masks can be drawn freely, or more traditional rigid ones may be used, such as in this example:

![image](https://github.com/user-attachments/assets/edfb076a-78e2-4077-b53f-3e8bab07040a)

![ComfyUI_16020_](https://github.com/user-attachments/assets/5f45cdcb-f879-43ca-bcf4-bcae60aa4bbc)

![ComfyUI_12157_](https://github.com/user-attachments/assets/b9e385d2-3359-4a13-99b9-4a7243863b0d)

![ComfyUI_12039_](https://github.com/user-attachments/assets/6d36ae62-ce8c-41e3-b52c-823e9c1b1d50)


# TEMPORAL CONDITIONING

Unlimited zones! Ability to change the prompt for each frame.

Currently supported models: WAN.

![image](https://github.com/user-attachments/assets/743bc972-cfbf-45a8-8745-d6ca1a6b0bab)

![temporal conditioning 09580](https://github.com/user-attachments/assets/eef0e04c-d1b2-49b7-a1ca-f8cb651dd3a7)

# VIDEO 2 VIDEO EDITING

Viable with any video model, demo with WAN:

![wan vid2vid compressed](https://github.com/user-attachments/assets/431c30f7-339e-4b86-8d02-6180b09b15b2)

# PREVIOUS VERSION NODE DOCUMENTATION

At the heart of this repository is the "ClownsharKSampler", which was specifically designed to support both rectified flow and probability flow models. It features 69 different selectable samplers (44 explicit, 18 fully implicit, 7 diagonally implicit), all available in both ODE and SDE modes with 20 noise types, 9 noise scaling modes, and options for implicit Runge-Kutta sampling refinement steps. Several new explicit samplers are implemented, most notably RES_2M, RES_3S, and RES_5S. Additionally, img2img capabilities include both latent image guidance and unsampling/resampling (via new forms of rectified noise inversion).

A particular emphasis of this project has been to facilitate modulating parameters vs. time, which can facilitate large gains in image quality from the sampling process. To this end, a wide variety of sigma, latent, and noise manipulation nodes are included. 

Much of this work remains experimental and is subject to further changes.

# ClownSampler
![image](https://github.com/user-attachments/assets/f787ad74-0d95-4d8f-84b6-af4c4c1ac5e5)

# SharkSampler
![image](https://github.com/user-attachments/assets/299c9285-b298-4452-b0dd-48ae425ce30a)

# ClownsharKSampler
![image](https://github.com/user-attachments/assets/430fb77a-7353-4b40-acb6-cbd33392f7fc)

This is an all-in-one sampling node designed for convenience without compromising on control or quality. 

There are several key sections to the parameters which will be explained below.

## INPUTS
![image](https://github.com/user-attachments/assets/e8fe825d-2fb1-4e93-874c-89fb73ba68f7)

The only two mandatory inputs here are "model" and "latent_image". 

**POSITIVE and NEGATIVE:** If you connect nothing to either of these inputs, the node will automatically generate null conditioning. If you are unsampling, you don't need to hook up any conditioning at all (in which case set CFG = 1.0). In most cases, merely using the positive conditioning will suffice, unless you really need to use a specific negative prompt.

**SIGMAS:** If a sigmas scheduler node is connected to this input, it will override the scheduler and steps settings chosen within the node.

## NOISE SETTINGS
![image](https://github.com/user-attachments/assets/caaa41a4-5afa-4c3c-8fb2-003b9a6b2578)

**NOISE_TYPE_INIT:** This sets the initial noise type applied to the latent image. 

**NOISE_TYPE_SDE:** This sets the noise type used during SDE sampling. Note that SDE sampling is identical to ODE sampling in most ways - the difference is that noise is added after each step. It's like a form of carefully controlled continuous noise injection.

**NOISE_MODE_SDE:** This determines what method is used for scaling the amount of noise to be added based on the "eta" setting below. They are listed in order of strength of the effect. 

**ETA:** This controls how much noise is added after each step. Note that for most of the noise modes, anything equal to or greater than 1.0 will trigger internal scaling to prevent NaN errors. The exception is the noise mode "exp" which allows for settings far above 1.0. 
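For intuition, here is the standard ancestral/SDE step split used by many k-diffusion-style samplers. This is a sketch only - RES4LYF's noise modes apply their own scalings on top of this idea - but it shows how an eta parameter divides a step between a deterministic part and freshly injected noise:

```python
import math

def ancestral_step(sigma, sigma_next, eta=1.0):
    """Split one step into a deterministic target (sigma_down) and an
    injected-noise amount (sigma_up), scaled by eta.
    eta=0 recovers pure ODE sampling; values near or above 1.0 are
    typically clamped to avoid overshooting sigma_next."""
    sigma_up = min(sigma_next,
                   eta * math.sqrt(sigma_next**2 * (sigma**2 - sigma_next**2) / sigma**2))
    sigma_down = math.sqrt(sigma_next**2 - sigma_up**2)
    return sigma_down, sigma_up
```

With eta=0, sigma_up is zero and the sampler behaves as a pure ODE; larger eta trades determinism for renoising, which is what the different noise modes rescale in different ways.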

**NOISE_SEED:** Largely identical to the setting in KSampler. Set to -1 to have it increment the most recently used seed (by the workflow) by 1.

**CONTROL_AFTER_GENERATE:** Self-explanatory. I recommend setting this to "fixed" or "increment" (that way you don't have to reload the workflow to regenerate something; you can just decrement the seed by one).

## SAMPLER SETTINGS
![image](https://github.com/user-attachments/assets/d5ef0bef-7388-44f0-a119-220beec9883d)

**SAMPLER_MODE:** In virtually all situations, use "standard". However, if you are unsampling, set to "unsample", and if you are resampling (the stage after unsampling), set to "resample". Both of these modes will disable noise addition within ComfyUI, which is essential for these methods to work properly. 

**SAMPLER_NAME:** This is used similarly to the KSampler setting. This selects the explicit sampler type. Note the use of numbers and letters at the end of each sampler name: "2m, 3m, 2s, 3s, 5s, etc." 

Samplers that end in "s" use substeps between each step. One ending with "2s" has two stages per step, therefore costs two model calls per step (Euler costs one - model calls are what determine inference time). "3s" would take three model calls per step, and therefore take three times as long to run as Euler. However, the increase in accuracy can be very dramatic, especially when using noise (SDE sampling). The "res" family of samplers are particularly notable (they are effectively refinements of the dpmpp family, with new, higher order, much more accurate versions implemented here).

Samplers that end in "m" are "multistep" samplers, which instead of issuing new model calls for substeps, recycle previous steps as estimations for these substeps. They're less accurate, but all run at Euler speed (one model call per step). Sometimes this can be an advantage, as multistep samplers tend to converge more linearly toward a target image. This can be useful for img2img transformations, unsampling, or when using latent image guides.
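The cost difference can be sketched with a toy helper (hypothetical, not part of the repo): "s" samplers pay one model call per stage, while "m" (multistep) samplers and Euler stay at one call per step.

```python
def model_calls(steps, sampler_name):
    """Rough inference cost in model calls for a sampler name like
    'res_2m', 'res_3s', or 'euler' (illustrative naming convention only)."""
    if sampler_name.endswith("s"):
        stages = int(sampler_name.rstrip("s").split("_")[-1])
        return steps * stages  # one model call per stage, per step
    return steps  # multistep ('m') and single-stage samplers run at Euler speed
```

So at 20 steps, res_3s costs as much as 60 Euler steps, while res_2m still costs only 20.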

**IMPLICIT_SAMPLER_NAME:** This is very useful with SD3.5 Medium for improving coherence, reducing artifacts and mutations, etc. It may be difficult to use with a model like Flux unless you plan on setting up a queue of generations and walking away. It will use the explicit step type as a predictor for each of the implicit substeps, so if you choose a slow explicit sampler, you will be waiting a long time. Euler, res_2m, deis_2m, etc. will often suffice as a predictor for implicit sampling, though any sampler may be used. Try "res_5s" as your explicit sampler type, and "gauss-legendre_5s", if you wish to demonstrate your commitment to climate change (and image quality).

Setting this to "none" has the same effect as setting implicit_steps = 0.

## SCHEDULER AND DENOISE SETTINGS
![image](https://github.com/user-attachments/assets/b89d3956-1734-4368-8bb4-429b9989cd4d)

These are identical in most ways to the settings by the same name in KSampler. 

**SCHEDULER:** There is one extra sigma scheduler offered by default: "beta57" which is the beta schedule with modified parameters (alpha = 0.5, beta = 0.7).

**IMPLICIT_STEPS:** This controls the number of implicit steps to run. Note that it will double, triple, etc. the runtime as you increase the stepcount. Typically, gains diminish quickly after 2-3 implicit steps.

**DENOISE:** This is identical to the KSampler setting. Controls the amount of noise removed from the image. Note that with this method, the effect will change significantly depending on your choice of scheduler.

**DENOISE_ALT:** Instead of splitting the sigma schedule like "denoise", this multiplies them. The results are different, but track more closely from one scheduler to another when using the same value. This can be particularly useful for img2img workflows.
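A simplified sketch of the distinction (assumed behavior, for illustration; the actual nodes operate on full sigma schedules): "denoise" keeps only the tail fraction of the schedule, while "denoise_alt" multiplies every sigma by the same factor.

```python
def apply_denoise(sigmas, denoise):
    """Truncate the schedule to its final fraction, KSampler-style."""
    total = len(sigmas) - 1
    keep = max(1, round(total * denoise))
    return sigmas[-(keep + 1):]

def apply_denoise_alt(sigmas, denoise_alt):
    """Scale all sigmas instead of splitting the schedule."""
    return [s * denoise_alt for s in sigmas]
```

Because multiplication preserves the shape of the schedule, a given denoise_alt value behaves more consistently across different schedulers than a denoise split does.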

**CFG:** This is identical to the KSampler setting. Typically, you'll set this to 1.0 (to disable it) when using Flux, if you're using Flux guidance. However, the effect is quite nice when using dedistilled models if you use "CLIP Text Encode" without any Flux guidance, and set CFG to 3.0. 

If you've never quite understood CFG, you can think of it this way. Imagine you're walking down the street and see what looks like an enticing music festival in the distance (your positive conditioning). You're on the fence about attending, but then, suddenly, a horde of pickleshark cannibals come storming out of a nearby bar (your negative conditioning). Together, the two team up to drive you toward the music festival. That's CFG.
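In code, that tug-of-war is a single extrapolation - the standard classifier-free guidance formula:

```python
def cfg_combine(cond_pred, uncond_pred, cfg):
    """Classifier-free guidance: start from the negative prediction and
    push past the positive one by a factor of cfg.
    cfg=1.0 returns the positive prediction unchanged (CFG disabled)."""
    return uncond_pred + cfg * (cond_pred - uncond_pred)
```

Setting cfg above 1.0 amplifies the difference between the positive and negative predictions, which is why it strengthens prompt adherence (and why cfg=1.0 is the "off" setting for guidance-distilled models like Flux).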

## SHIFT SETTINGS
![image](https://github.com/user-attachments/assets/e9a2e2d7-be5c-4b63-8647-275409600b56)

These are present for convenience as they are used in virtually every workflow.

**SHIFT:** This is the same as "shift" for the ModelSampling nodes for SD3.5, AuraFlow, etc., and is equivalent to "max_shift" for Flux. Set this value to -1 to disable setting shift (or max_shift) within the node.

**BASE_SHIFT:** This is only used by Flux. Set this value to -1 to disable setting base_shift within the node.

**SHIFT_SCALING:** This changes how the shift values are calculated. "exponential" is the default used by Flux, whereas "linear" is the default used by SD3.5 and AuraFlow. In most cases, "exponential" leads to better results, though "linear" has some niche uses. 
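For intuition, here is a sketch of the commonly used flow-model shift math (assumed behavior based on how ComfyUI computes shift for Flux and SD3; the node's exact formulas, names, and constants may differ): the shift warps every sigma toward higher noise, and the scaling mode decides whether the resolution-interpolated value is used directly ("linear") or exponentiated ("exponential").

```python
import math

def shift_sigma(sigma, shift):
    """Warp a single sigma toward higher noise; shift=1.0 is the identity."""
    return shift * sigma / (1 + (shift - 1) * sigma)

def resolution_shift(seq_len, base_shift=0.5, max_shift=1.15, exponential=True):
    """Interpolate mu with the latent sequence length (Flux-style), then map
    it to a shift value. Function names and constants here are illustrative."""
    mu = base_shift + (max_shift - base_shift) * (seq_len - 256) / (4096 - 256)
    return math.exp(mu) if exponential else mu
```

The exponential mapping grows the shift much faster with resolution, which matches the observation above that it usually gives stronger (and often better) results than linear scaling.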

# Sampler and noise mode list

## Explicit samplers
Bolded samplers are added as options to the sampler dropdown in ComfyUI (an ODE and SDE version for each).

**res_2m**

**res_2/3/5s**

**deis_2/3/4m**

ralston_2/3/4s

dpmpp_2/3m

dpmpp_sde_2s

dpmpp_2/3s

midpoint_2s

heun_2/3s

houwen-wray_3s

kutta_3s

ssprk3_3s

rk38_4s

rk4_4s

dormand-prince_6s

dormand-prince_13s

bogacki-shampine_7s

ddim

euler

## Fully Implicit Samplers

gauss-legendre_2/3/4/5s

radau_(i/ii)a_2/3s

lobatto_iii(a/b/c/d/star)_2/3s

## Diagonally Implicit Samplers

kraaijevanger_spijker_2s

qin_zhang_2s

pareschi_russo_2s

pareschi_russo_alt_2s

crouzeix_2/3s

irk_exp_diag_2s (features an exponential integrator)

# PREVIOUS FLUX WORKFLOWS

## TXT2IMG:
This uses my amateur cell phone lora, which is freely available (https://huggingface.co/ClownsharkBatwing/CSBW_Style/blob/main/amateurphotos_1_amateurcellphonephoto_recapt2.safetensors). It significantly reduces the plastic, blurred look of Flux Dev.
![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/txt2img%20flux.png)
![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/txt2img%20WF%20flux.png)

## INPAINTING:

![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/inpainting%20flux.png)
![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/inpainting%20WF%20flux.png)

## UNSAMPLING (Dual guides with masks):

![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/txt2img%20dual%20guides%20masked%20flux.png)
![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/txt2img%20dual%20guides%20masked%20WF%20flux.png)

# PREVIOUS WORKFLOWS
**THE FOLLOWING WORKFLOWS ARE FOR A PREVIOUS VERSION OF THE NODE.** 
These will still work! You will, however, need to manually delete and recreate the sampler and guide nodes and input the settings as they appear in the screenshots. The layout of the nodes has been changed slightly. To replicate their behavior precisely, add to the new extra_options box in ClownsharKSampler: truncate_conditioning=true (if that setting was used in the screenshot for the node).

![image](https://github.com/user-attachments/assets/a55ec484-1339-45a2-bcc4-76934f4648d4)

**TXT2IMG Workflow:** 

![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/txt2img%20SD35M%20output.png)

![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/txt2img%20SD35M.png)

**TXT2IMG Workflow (Latent Image Guides):**
![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/txt2img%20guided%20SD35M%20output.png)

![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/txt2img%20guided%20SD35M.png)

Input image:
https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/txt2img%20guided%20SD35M%20input.png

**TXT2IMG Workflow (Dual Guides with Masking):**
![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/txt2img%20dual%20guides%20with%20mask%20SD35M%20output.png)

![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/txt2img%20dual%20guides%20with%20mask%20SD35M.png)

Input images and mask:
https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/txt2img%20dual%20guides%20with%20mask%20SD35M%20input1.png
https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/txt2img%20dual%20guides%20with%20mask%20SD35M%20input2.png
https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/txt2img%20dual%20guides%20with%20mask%20SD35M%20mask.png

**IMG2IMG Workflow (Unsampling):** 

![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/img2img%20unsampling%20SD35L%20output.png)

![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/img2img%20unsampling%20SD35L.png)

Input image:
https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/img2img%20unsampling%20SD35L%20input.png

**IMG2IMG Workflow (Unsampling with SDXL):**

![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/img2img%20unsampling%20SDXL%20output.png)

![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/img2img%20unsampling%20SDXL.png)

Input image:
https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/img2img%20unsampling%20SDXL%20input.png

**IMG2IMG Workflow (Unsampling with latent image guide):**

![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/img2img%20guided%20unsampling%20SD35M%20output.png)

![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/img2img%20guided%20unsampling%20SD35M.png)

Input image:
https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/img2img%20guided%20unsampling%20SD35M%20input.png

**IMG2IMG Workflow (Unsampling with dual latent image guides and masking):**

![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/img2img%20dual%20guided%20masked%20unsampling%20SD35M%20output.png)

![image](https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/img2img%20dual%20guided%20masked%20unsampling%20SD35M.png)

Input images and mask:
https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/img2img%20dual%20guided%20masked%20unsampling%20SD35M%20input1.png
https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/img2img%20dual%20guided%20masked%20unsampling%20SD35M%20input2.png
https://github.com/ClownsharkBatwing/RES4LYF/blob/main/workflows/img2img%20dual%20guided%20masked%20unsampling%20SD35M%20mask.png


================================================
FILE: __init__.py
================================================
import importlib
import os

from . import loaders
from . import sigmas
from . import conditioning
from . import images
from . import models
from . import helper_sigma_preview_image_preproc
from . import nodes_misc

from . import nodes_latents
from . import nodes_precision


import torch
from math import *


from comfy.samplers import SchedulerHandler, SCHEDULER_HANDLERS, SCHEDULER_NAMES
new_scheduler_name = "bong_tangent"
if new_scheduler_name not in SCHEDULER_HANDLERS:
    bong_tangent_handler = SchedulerHandler(handler=sigmas.bong_tangent_scheduler, use_ms=True)
    SCHEDULER_HANDLERS[new_scheduler_name] = bong_tangent_handler
    SCHEDULER_NAMES.append(new_scheduler_name)


from .res4lyf import RESplain

#torch.use_deterministic_algorithms(True)
#torch.backends.cudnn.deterministic = True
#torch.backends.cudnn.benchmark = False

res4lyf.init()

discard_penultimate_sigma_samplers = set((
))


def add_samplers():
    from comfy.samplers import KSampler, k_diffusion_sampling
    if hasattr(KSampler, "DISCARD_PENULTIMATE_SIGMA_SAMPLERS"):
        KSampler.DISCARD_PENULTIMATE_SIGMA_SAMPLERS |= discard_penultimate_sigma_samplers
    added = 0
    for sampler in extra_samplers: #getattr(self, "sample_{}".format(extra_samplers))
        if sampler not in KSampler.SAMPLERS:
            try:
                idx = KSampler.SAMPLERS.index("uni_pc_bh2") # *should* be last item in samplers list
                KSampler.SAMPLERS.insert(idx+1, sampler) # add custom samplers (presumably) to end of list
                setattr(k_diffusion_sampling, "sample_{}".format(sampler), extra_samplers[sampler])
                added += 1
            except ValueError as _err:
                pass
    if added > 0:
        import importlib
        importlib.reload(k_diffusion_sampling)

extra_samplers = {}

extra_samplers = dict(reversed(extra_samplers.items()))

NODE_CLASS_MAPPINGS = {

    "FluxLoader"                          : loaders.FluxLoader,
    "SD35Loader"                          : loaders.SD35Loader,
    "ClownModelLoader"                    : loaders.RES4LYFModelLoader,
    

    "TextBox1"                            : nodes_misc.TextBox1,
    "TextBox2"                            : nodes_misc.TextBox2,
    "TextBox3"                            : nodes_misc.TextBox3,
    
    "TextConcatenate"                     : nodes_misc.TextConcatenate,
    "TextBoxConcatenate"                  : nodes_misc.TextBoxConcatenate,
    
    "TextLoadFile"                        : nodes_misc.TextLoadFile,
    "TextShuffle"                         : nodes_misc.TextShuffle,
    "TextShuffleAndTruncate"              : nodes_misc.TextShuffleAndTruncate,
    "TextTruncateTokens"                  : nodes_misc.TextTruncateTokens,

    "SeedGenerator"                       : nodes_misc.SeedGenerator,
    
    "ClownRegionalConditioning"           : conditioning.ClownRegionalConditioning,
    "ClownRegionalConditionings"          : conditioning.ClownRegionalConditionings,
    
    "ClownRegionalConditioning2"          : conditioning.ClownRegionalConditioning2,
    "ClownRegionalConditioning3"          : conditioning.ClownRegionalConditioning3,
    
    "ClownRegionalConditioning_AB"        : conditioning.ClownRegionalConditioning_AB,
    "ClownRegionalConditioning_ABC"       : conditioning.ClownRegionalConditioning_ABC,

    "CLIPTextEncodeFluxUnguided"          : conditioning.CLIPTextEncodeFluxUnguided,
    "ConditioningOrthoCollin"             : conditioning.ConditioningOrthoCollin,

    "ConditioningAverageScheduler"        : conditioning.ConditioningAverageScheduler,
    "ConditioningMultiply"                : conditioning.ConditioningMultiply,
    "ConditioningAdd"                     : conditioning.ConditioningAdd,
    "Conditioning Recast FP64"            : conditioning.Conditioning_Recast64,
    "StableCascade_StageB_Conditioning64" : conditioning.StableCascade_StageB_Conditioning64,
    "ConditioningZeroAndTruncate"         : conditioning.ConditioningZeroAndTruncate,
    "ConditioningTruncate"                : conditioning.ConditioningTruncate,
    "StyleModelApplyStyle"                : conditioning.StyleModelApplyStyle,
    "CrossAttn_EraseReplace_HiDream"      : conditioning.CrossAttn_EraseReplace_HiDream,

    "ConditioningDownsample (T5)"         : conditioning.ConditioningDownsampleT5,

    "ConditioningToBase64"                : conditioning.ConditioningToBase64,
    "Base64ToConditioning"                : conditioning.Base64ToConditioning,
    
    "ConditioningBatch4"                  : conditioning.ConditioningBatch4,
    "ConditioningBatch8"                  : conditioning.ConditioningBatch8,
    
    "TemporalMaskGenerator"               : conditioning.TemporalMaskGenerator,
    "TemporalSplitAttnMask"               : conditioning.TemporalSplitAttnMask,
    "TemporalSplitAttnMask (Midframe)"    : conditioning.TemporalSplitAttnMask_Midframe,
    "TemporalCrossAttnMask"               : conditioning.TemporalCrossAttnMask,



    "Set Precision"                       : nodes_precision.set_precision,
    "Set Precision Universal"             : nodes_precision.set_precision_universal,
    "Set Precision Advanced"              : nodes_precision.set_precision_advanced,
    
    "LatentUpscaleWithVAE"                : helper_sigma_preview_image_preproc.LatentUpscaleWithVAE,
    
    "LatentNoised"                        : nodes_latents.LatentNoised,
    "LatentNoiseList"                     : nodes_latents.LatentNoiseList,
    "AdvancedNoise"                       : nodes_latents.AdvancedNoise,

    "LatentNoiseBatch_perlin"             : nodes_latents.LatentNoiseBatch_perlin,
    "LatentNoiseBatch_fractal"            : nodes_latents.LatentNoiseBatch_fractal,
    "LatentNoiseBatch_gaussian"           : nodes_latents.LatentNoiseBatch_gaussian,
    "LatentNoiseBatch_gaussian_channels"  : nodes_latents.LatentNoiseBatch_gaussian_channels,
    
    "LatentBatch_channels"                : nodes_latents.LatentBatch_channels,
    "LatentBatch_channels_16"             : nodes_latents.LatentBatch_channels_16,
    
    "Latent Get Channel Means"            : nodes_latents.latent_get_channel_means,
    
    "Latent Match Channelwise"            : nodes_latents.latent_channelwise_match,
    
    "Latent to RawX"                      : nodes_latents.latent_to_raw_x,
    "Latent Clear State Info"             : nodes_latents.latent_clear_state_info,
    "Latent Replace State Info"           : nodes_latents.latent_replace_state_info,
    "Latent Display State Info"           : nodes_latents.latent_display_state_info,
    "Latent Transfer State Info"          : nodes_latents.latent_transfer_state_info,
    "Latent TrimVideo State Info"         : nodes_latents.TrimVideoLatent_state_info,
    "Latent to Cuda"                      : nodes_latents.latent_to_cuda,
    "Latent Batcher"                      : nodes_latents.latent_batch,
    "Latent Normalize Channels"           : nodes_latents.latent_normalize_channels,
    "Latent Channels From To"             : nodes_latents.latent_mean_channels_from_to,



    "LatentPhaseMagnitude"                : nodes_latents.LatentPhaseMagnitude,
    "LatentPhaseMagnitudeMultiply"        : nodes_latents.LatentPhaseMagnitudeMultiply,
    "LatentPhaseMagnitudeOffset"          : nodes_latents.LatentPhaseMagnitudeOffset,
    "LatentPhaseMagnitudePower"           : nodes_latents.LatentPhaseMagnitudePower,
    
    "MaskFloatToBoolean"                  : nodes_latents.MaskFloatToBoolean,
    
    "MaskToggle"                          : nodes_latents.MaskToggle,
    "MaskEdge"                            : nodes_latents.MaskEdge,
    #"MaskEdgeRatio"                       : nodes_latents.MaskEdgeRatio,

    "Frames Masks Uninterpolate"          : nodes_latents.Frames_Masks_Uninterpolate,
    "Frames Masks ZeroOut"                : nodes_latents.Frames_Masks_ZeroOut,
    "Frames Latent ReverseOrder"          : nodes_latents.Frames_Latent_ReverseOrder,

    
    "EmptyLatentImage64"                  : nodes_latents.EmptyLatentImage64,
    "EmptyLatentImageCustom"              : nodes_latents.EmptyLatentImageCustom,
    "StableCascade_StageC_VAEEncode_Exact": nodes_latents.StableCascade_StageC_VAEEncode_Exact,
    
    
    
    "PrepForUnsampling"                   : helper_sigma_preview_image_preproc.VAEEncodeAdvanced,
    "VAEEncodeAdvanced"                   : helper_sigma_preview_image_preproc.VAEEncodeAdvanced,
    "VAEStyleTransferLatent"              : helper_sigma_preview_image_preproc.VAEStyleTransferLatent,
    
    "SigmasPreview"                       : helper_sigma_preview_image_preproc.SigmasPreview,
    "SigmasSchedulePreview"               : helper_sigma_preview_image_preproc.SigmasSchedulePreview,


    "TorchCompileModelFluxAdv"            : models.TorchCompileModelFluxAdvanced,
    "TorchCompileModelAura"               : models.TorchCompileModelAura,
    "TorchCompileModelSD35"               : models.TorchCompileModelSD35,
    "TorchCompileModels"                  : models.TorchCompileModels,
    "ClownpileModelWanVideo"              : models.ClownpileModelWanVideo,


    "ModelTimestepPatcher"                : models.ModelSamplingAdvanced,
    "ModelSamplingAdvanced"               : models.ModelSamplingAdvanced,
    "ModelSamplingAdvancedResolution"     : models.ModelSamplingAdvancedResolution,
    "FluxGuidanceDisable"                 : models.FluxGuidanceDisable,

    "ReWanPatcher"                        : models.ReWanPatcher,
    "ReFluxPatcher"                       : models.ReFluxPatcher,
    "ReChromaPatcher"                     : models.ReChromaPatcher,
    "ReSD35Patcher"                       : models.ReSD35Patcher,
    "ReAuraPatcher"                       : models.ReAuraPatcher,
    "ReLTXVPatcher"                       : models.ReLTXVPatcher,
    "ReHiDreamPatcher"                    : models.ReHiDreamPatcher,
    "ReSDPatcher"                         : models.ReSDPatcher,
    "ReReduxPatcher"                      : models.ReReduxPatcher,
    
    "ReWanPatcherAdvanced"                : models.ReWanPatcherAdvanced,
    "ReFluxPatcherAdvanced"               : models.ReFluxPatcherAdvanced,
    "ReChromaPatcherAdvanced"             : models.ReChromaPatcherAdvanced,
    "ReSD35PatcherAdvanced"               : models.ReSD35PatcherAdvanced,
    "ReAuraPatcherAdvanced"               : models.ReAuraPatcherAdvanced,
    "ReLTXVPatcherAdvanced"               : models.ReLTXVPatcherAdvanced,

    
    "ReHiDreamPatcherAdvanced"            : models.ReHiDreamPatcherAdvanced,
    
    "LayerPatcher"                        : loaders.LayerPatcher,
    
    "FluxOrthoCFGPatcher"                 : models.FluxOrthoCFGPatcher,

    
    "UNetSave"                            : models.UNetSave,



    "Sigmas Recast"                       : sigmas.set_precision_sigmas,
    "Sigmas Noise Inversion"              : sigmas.sigmas_noise_inversion,
    "Sigmas From Text"                    : sigmas.sigmas_from_text, 

    "Sigmas Variance Floor"               : sigmas.sigmas_variance_floor,
    "Sigmas Truncate"                     : sigmas.sigmas_truncate,
    "Sigmas Start"                        : sigmas.sigmas_start,
    "Sigmas Split"                        : sigmas.sigmas_split,
    "Sigmas Split Value"                  : sigmas.sigmas_split_value,
    "Sigmas Concat"                       : sigmas.sigmas_concatenate,
    "Sigmas Pad"                          : sigmas.sigmas_pad,
    "Sigmas Unpad"                        : sigmas.sigmas_unpad,
    
    "Sigmas SetFloor"                     : sigmas.sigmas_set_floor,
    "Sigmas DeleteBelowFloor"             : sigmas.sigmas_delete_below_floor,
    "Sigmas DeleteDuplicates"             : sigmas.sigmas_delete_consecutive_duplicates,
    "Sigmas Cleanup"                      : sigmas.sigmas_cleanup,
    
    "Sigmas Mult"                         : sigmas.sigmas_mult,
    "Sigmas Modulus"                      : sigmas.sigmas_modulus,
    "Sigmas Quotient"                     : sigmas.sigmas_quotient,
    "Sigmas Add"                          : sigmas.sigmas_add,
    "Sigmas Power"                        : sigmas.sigmas_power,
    "Sigmas Abs"                          : sigmas.sigmas_abs,
    
    "Sigmas2 Mult"                        : sigmas.sigmas2_mult,
    "Sigmas2 Add"                         : sigmas.sigmas2_add,
    
    "Sigmas Rescale"                      : sigmas.sigmas_rescale,
    "Sigmas Count"                        : sigmas.sigmas_count,
    "Sigmas Resample"                     : sigmas.sigmas_interpolate,

    "Sigmas Math1"                        : sigmas.sigmas_math1,
    "Sigmas Math3"                        : sigmas.sigmas_math3,

    "Sigmas Iteration Karras"             : sigmas.sigmas_iteration_karras,
    "Sigmas Iteration Polyexp"            : sigmas.sigmas_iteration_polyexp,

    # New Sigma Nodes
    "Sigmas Lerp"                         : sigmas.sigmas_lerp,
    "Sigmas InvLerp"                      : sigmas.sigmas_invlerp,
    "Sigmas ArcSine"                      : sigmas.sigmas_arcsine,
    "Sigmas LinearSine"                   : sigmas.sigmas_linearsine,
    "Sigmas Append"                       : sigmas.sigmas_append,
    "Sigmas ArcCosine"                    : sigmas.sigmas_arccosine,
    "Sigmas ArcTangent"                   : sigmas.sigmas_arctangent,
    "Sigmas CrossProduct"                 : sigmas.sigmas_crossproduct,
    "Sigmas DotProduct"                   : sigmas.sigmas_dotproduct,
    "Sigmas Fmod"                         : sigmas.sigmas_fmod,
    "Sigmas Frac"                         : sigmas.sigmas_frac,
    "Sigmas If"                           : sigmas.sigmas_if,
    "Sigmas Logarithm2"                   : sigmas.sigmas_logarithm2,
    "Sigmas SmoothStep"                   : sigmas.sigmas_smoothstep,
    "Sigmas SquareRoot"                   : sigmas.sigmas_squareroot,
    "Sigmas TimeStep"                     : sigmas.sigmas_timestep,
    "Sigmas Sigmoid"                      : sigmas.sigmas_sigmoid,
    "Sigmas Easing"                       : sigmas.sigmas_easing,
    "Sigmas Hyperbolic"                   : sigmas.sigmas_hyperbolic,
    "Sigmas Gaussian"                     : sigmas.sigmas_gaussian,
    "Sigmas Percentile"                   : sigmas.sigmas_percentile,
    "Sigmas KernelSmooth"                 : sigmas.sigmas_kernel_smooth,
    "Sigmas QuantileNorm"                 : sigmas.sigmas_quantile_norm,
    "Sigmas AdaptiveStep"                 : sigmas.sigmas_adaptive_step,
    "Sigmas Chaos"                        : sigmas.sigmas_chaos,
    "Sigmas ReactionDiffusion"            : sigmas.sigmas_reaction_diffusion,
    "Sigmas Attractor"                    : sigmas.sigmas_attractor,
    "Sigmas CatmullRom"                   : sigmas.sigmas_catmull_rom,
    "Sigmas LambertW"                     : sigmas.sigmas_lambert_w,
    "Sigmas ZetaEta"                      : sigmas.sigmas_zeta_eta,
    "Sigmas GammaBeta"                    : sigmas.sigmas_gamma_beta,
    
    
    "Sigmas GaussianCDF"                  : sigmas.sigmas_gaussian_cdf,
    "Sigmas StepwiseMultirate"            : sigmas.sigmas_stepwise_multirate,
    "Sigmas HarmonicDecay"                : sigmas.sigmas_harmonic_decay,
    "Sigmas AdaptiveNoiseFloor"           : sigmas.sigmas_adaptive_noise_floor,
    "Sigmas CollatzIteration"             : sigmas.sigmas_collatz_iteration,
    "Sigmas ConwaySequence"               : sigmas.sigmas_conway_sequence,
    "Sigmas GilbreathSequence"            : sigmas.sigmas_gilbreath_sequence,
    "Sigmas CNFInverse"                   : sigmas.sigmas_cnf_inverse,
    "Sigmas RiemannianFlow"               : sigmas.sigmas_riemannian_flow,
    "Sigmas LangevinDynamics"             : sigmas.sigmas_langevin_dynamics,
    "Sigmas PersistentHomology"           : sigmas.sigmas_persistent_homology,
    "Sigmas NormalizingFlows"             : sigmas.sigmas_normalizing_flows,
    
    "ClownScheduler"                      : sigmas.ClownScheduler, # for modulating parameters
    "Tan Scheduler"                       : sigmas.tan_scheduler,
    "Tan Scheduler 2"                     : sigmas.tan_scheduler_2stage,
    "Tan Scheduler 2 Simple"              : sigmas.tan_scheduler_2stage_simple,
    "Constant Scheduler"                  : sigmas.constant_scheduler,
    "Linear Quadratic Advanced"           : sigmas.linear_quadratic_advanced,
    
    "SetImageSizeWithScale"               : nodes_misc.SetImageSizeWithScale,
    "SetImageSize"                        : nodes_misc.SetImageSize,
    
    "Mask Bounding Box Aspect Ratio"      : images.MaskBoundingBoxAspectRatio,
    
    
    "Image Get Color Swatches"            : images.Image_Get_Color_Swatches,
    "Masks From Color Swatches"           : images.Masks_From_Color_Swatches,
    "Masks From Colors"                   : images.Masks_From_Colors,
    
    "Masks Unpack 4"                      : images.Masks_Unpack4,
    "Masks Unpack 8"                      : images.Masks_Unpack8,
    "Masks Unpack 16"                     : images.Masks_Unpack16,

    
    "Image Sharpen FS"                    : images.ImageSharpenFS,
    "Image Channels LAB"                  : images.Image_Channels_LAB,
    "Image Median Blur"                   : images.ImageMedianBlur,
    "Image Gaussian Blur"                 : images.ImageGaussianBlur,

    "Image Pair Split"                    : images.Image_Pair_Split,
    "Image Crop Location Exact"           : images.Image_Crop_Location_Exact,
    "Film Grain"                          : images.Film_Grain,
    "Frequency Separation Linear Light"   : images.Frequency_Separation_Linear_Light,
    "Frequency Separation Hard Light"     : images.Frequency_Separation_Hard_Light,
    "Frequency Separation Hard Light LAB" : images.Frequency_Separation_Hard_Light_LAB,
    
    "Frame Select"                        : images.Frame_Select,
    "Frames Slice"                        : images.Frames_Slice,
    "Frames Concat"                       : images.Frames_Concat,
    
    "Mask Sketch"                         : images.MaskSketch,
    
    "Image Grain Add"                     : images.Image_Grain_Add,
    "Image Repeat Tile To Size"           : images.ImageRepeatTileToSize,

    "Frames Concat Masks"                 : nodes_latents.Frames_Concat_Masks,


    "Frame Select Latent"                 : nodes_latents.Frame_Select_Latent,
    "Frames Slice Latent"                 : nodes_latents.Frames_Slice_Latent,
    "Frames Concat Latent"                : nodes_latents.Frames_Concat_Latent,


    "Frame Select Latent Raw"             : nodes_latents.Frame_Select_Latent_Raw,
    "Frames Slice Latent Raw"             : nodes_latents.Frames_Slice_Latent_Raw,
    "Frames Concat Latent Raw"            : nodes_latents.Frames_Concat_Latent_Raw,



}


NODE_DISPLAY_NAME_MAPPINGS = {
    
}


WEB_DIRECTORY = "./web/js"



flags = {
    "zampler"        : False,
    "beta_samplers"  : False,
    "legacy_samplers": False,
}


file_path = os.path.join(os.path.dirname(__file__), "zampler_test_code.txt")
if os.path.exists(file_path):
    try:
        from .zampler import add_zamplers
        NODE_CLASS_MAPPINGS, extra_samplers = add_zamplers(NODE_CLASS_MAPPINGS, extra_samplers)
        flags["zampler"] = True
        RESplain("Importing zampler.")
    except ImportError:
        try:
            import importlib
            for module_name in ["RES4LYF.zampler", "res4lyf.zampler"]:
                try:
                    zampler_module = importlib.import_module(module_name)
                    add_zamplers = zampler_module.add_zamplers
                    NODE_CLASS_MAPPINGS, extra_samplers = add_zamplers(NODE_CLASS_MAPPINGS, extra_samplers)
                    flags["zampler"] = True
                    RESplain(f"Importing zampler via {module_name}.")
                    break
                except ImportError:
                    continue
            else:
                raise ImportError("Zampler module not found in any path")
        except Exception as e:
            print(f"(RES4LYF) Failed to import zamplers: {e}")



try:
    from .beta import add_beta
    NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS, extra_samplers = add_beta(NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS, extra_samplers)
    flags["beta_samplers"] = True
    RESplain("Importing beta samplers.")
except ImportError:
    try:
        import importlib
        for module_name in ["RES4LYF.beta", "res4lyf.beta"]:
            try:
                beta_module = importlib.import_module(module_name)
                add_beta = beta_module.add_beta
                NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS, extra_samplers = add_beta(NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS, extra_samplers)
                flags["beta_samplers"] = True
                RESplain(f"Importing beta samplers via {module_name}.")
                break
            except ImportError:
                continue
        else:
            raise ImportError("Beta module not found in any path")
    except Exception as e:
        print(f"(RES4LYF) Failed to import beta samplers: {e}")



try:
    from .legacy import add_legacy
    NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS, extra_samplers = add_legacy(NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS, extra_samplers)
    flags["legacy_samplers"] = True
    RESplain("Importing legacy samplers.")
except ImportError:
    try:
        import importlib
        for module_name in ["RES4LYF.legacy", "res4lyf.legacy"]:
            try:
                legacy_module = importlib.import_module(module_name)
                add_legacy = legacy_module.add_legacy
                NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS, extra_samplers = add_legacy(NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS, extra_samplers)
                flags["legacy_samplers"] = True
                RESplain(f"Importing legacy samplers via {module_name}.")
                break
            except ImportError:
                continue
        else:
            raise ImportError("Legacy module not found in any path")
    except Exception as e:
        print(f"(RES4LYF) Failed to import legacy samplers: {e}")
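The zampler, beta, and legacy loader blocks above all share one pattern: try a package-relative import first, then fall back to absolute imports under the names the node pack may be installed as. A generic sketch of that pattern (the `load_with_fallback` helper and the candidate list are illustrative, not part of this codebase):

```python
import importlib

def load_with_fallback(candidates):
    # Try each module path in order; return the first one that imports.
    for module_name in candidates:
        try:
            return importlib.import_module(module_name)
        except ImportError:
            continue
    raise ImportError(f"none of {candidates} could be imported")

# Example: a bogus name first, then a stdlib module that exists.
mod = load_with_fallback(["definitely_not_a_module_xyz", "json"])
```

The `for ... else: raise` shape used in the blocks above is equivalent: the `else` clause only runs when the loop exhausts the candidates without a `break`.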


add_samplers()


__all__ = ["NODE_CLASS_MAPPINGS", "NODE_DISPLAY_NAME_MAPPINGS", "WEB_DIRECTORY"]






================================================
FILE: attention_masks.py
================================================
import torch
import torch.nn.functional as F

from torch  import Tensor
from typing import Optional, Callable, Tuple, Dict, Any, Union, TYPE_CHECKING, TypeVar

from einops import rearrange

import copy
import base64

import comfy.supported_models
import node_helpers
import gc


from .sigmas  import get_sigmas

from .helper  import initialize_or_scale, precision_tool, get_res4lyf_scheduler_list
from .latents import get_orthogonal, get_collinear, get_edge_mask, checkerboard_variable
from .res4lyf import RESplain
from .beta.constants import MAX_STEPS



def fp_not(tensor):
    return 1 - tensor

def fp_or(tensor1, tensor2):
    return torch.maximum(tensor1, tensor2)

def fp_and(tensor1, tensor2):
    return torch.minimum(tensor1, tensor2)

def fp_and2(tensor1, tensor2):
    triu = torch.triu(torch.ones_like(tensor1))
    tril = torch.tril(torch.ones_like(tensor2))
    triu.diagonal().fill_(0.0)
    tril.diagonal().fill_(0.0)
    new_tensor = tensor1 * triu + tensor2 * tril
    new_tensor.diagonal().fill_(1.0)
    
    return new_tensor
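The `fp_not` / `fp_or` / `fp_and` helpers implement fuzzy (soft) boolean logic on mask values in [0, 1]: complement is `1 - x`, OR is the elementwise maximum, AND is the elementwise minimum, so hard 0/1 masks reduce to ordinary boolean logic while gradient masks blend smoothly. A scalar sanity check of those semantics (pure Python, for illustration only):

```python
def fp_not(x):    return 1 - x
def fp_or(a, b):  return max(a, b)
def fp_and(a, b): return min(a, b)

# On hard {0, 1} values these match ordinary boolean logic...
assert fp_or(0, 1) == 1 and fp_and(0, 1) == 0 and fp_not(1) == 0
# ...and on soft values they behave like fuzzy set union/intersection.
assert fp_or(0.25, 0.75) == 0.75
assert fp_and(0.25, 0.75) == 0.25
assert fp_not(0.25) == 0.75
```

The tensor versions in this file behave identically per element via `torch.maximum` / `torch.minimum`.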



class CoreAttnMask:
    def __init__(self, mask, mask_type=None, start_sigma=None, end_sigma=None, start_block=0, end_block=-1, idle_device='cpu', work_device='cuda'):
        self.mask        = mask.to(idle_device)
        self.start_sigma = start_sigma
        self.end_sigma   = end_sigma
        self.start_block = start_block
        self.end_block   = end_block
        self.work_device = work_device
        self.idle_device = idle_device
        self.mask_type   = mask_type
    
    def set_sigma_range(self, start_sigma, end_sigma):
        self.start_sigma = start_sigma
        self.end_sigma   = end_sigma
        
    def set_block_range(self, start_block, end_block):
        self.start_block = start_block
        self.end_block   = end_block

    def __call__(self, weight=1.0, mask_type=None, transformer_options=None, block_idx=0):
        """
        Return the mask if block_idx and the sigma (read from transformer_options) are both
        in range, else return None. If no sigma range is set, the mask is always returned.
        """
        if block_idx < self.start_block:
            return None
        if block_idx > self.end_block and self.end_block > 0:
            return None
        
        mask_type = self.mask_type if mask_type is None else mask_type
        
        if transformer_options is None:
            return self.mask.to(self.work_device) * weight if mask_type.startswith("gradient") else self.mask.to(self.work_device) > 0

        if self.start_sigma is not None and self.end_sigma is not None:
            sigma = transformer_options['sigmas'][0].to(self.start_sigma.device)
            if self.start_sigma >= sigma > self.end_sigma:
                return self.mask.to(self.work_device) * weight if mask_type.startswith("gradient") else self.mask.to(self.work_device) > 0
        else:
            return self.mask.to(self.work_device) * weight if mask_type.startswith("gradient") else self.mask.to(self.work_device) > 0
        
        return None
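`__call__` gates the mask twice: the block index must fall in `[start_block, end_block]` (with `end_block <= 0` meaning open-ended), and the current sigma must fall in the half-open interval `(end_sigma, start_sigma]` (sigmas decrease as sampling proceeds). A minimal scalar sketch of that gate (the `mask_active` helper is illustrative, not part of this file):

```python
def mask_active(block_idx, sigma,
                start_block=0, end_block=-1,
                start_sigma=None, end_sigma=None):
    # Block gate: end_block <= 0 means "no upper bound".
    if block_idx < start_block:
        return False
    if end_block > 0 and block_idx > end_block:
        return False
    # Sigma gate: active while end_sigma < sigma <= start_sigma.
    if start_sigma is not None and end_sigma is not None:
        return start_sigma >= sigma > end_sigma
    return True

assert mask_active(3, 0.5)                                      # no ranges set
assert mask_active(3, 0.5, start_sigma=1.0, end_sigma=0.2)      # mid-schedule
assert not mask_active(3, 0.1, start_sigma=1.0, end_sigma=0.2)  # past end_sigma
assert not mask_active(3, 0.5, start_block=5)                   # before start_block
```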



class BaseAttentionMask:
    def __init__(self, mask_type="gradient", edge_width=0, edge_width_list=None, use_self_attn_mask_list=None, dtype=torch.float16):
        self.t                    = 1
        self.img_len              = 0
        self.text_len             = 0
        self.text_off             = 0

        self.h                    = 0
        self.w                    = 0
    
        self.text_register_tokens = 0
        
        self.context_lens         = []
        self.context_lens_list    = []
        self.masks                = []
        
        self.num_regions          = 0
        
        self.attn_mask            = None
        self.mask_type            = mask_type
        self.edge_width           = edge_width
        
        self.edge_width_list      = edge_width_list
        self.use_self_attn_mask_list = use_self_attn_mask_list
        
        if mask_type == "gradient":
            self.dtype            = dtype
        else:
            self.dtype            = torch.bool


    def set_latent(self, latent):
        if latent.ndim == 4:
            self.b, self.c, self.h, self.w = latent.shape
            
        elif latent.ndim == 5:
            self.b, self.c, self.t, self.h, self.w = latent.shape
            
        #if not isinstance(self.model_config, comfy.supported_models.Stable_Cascade_C):
        self.h //= 2  # 16x16 PE, patch_size = 2: 1024x1024 RGB -> 128x128 16ch latent -> 64x64 img tokens
        self.w //= 2
        
        self.img_len = self.h * self.w        
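`set_latent` halves the latent height and width because the transformer patchifies with `patch_size = 2`, so (assuming the usual 8x VAE downscale, as in the comment above) a 1024x1024 image becomes a 128x128 latent and then a 64x64 = 4096-token image sequence. The arithmetic, as a hypothetical helper:

```python
def img_token_count(pixel_h, pixel_w, vae_factor=8, patch_size=2):
    # Pixels -> latent via the VAE downscale, latent -> tokens via patchify.
    h = pixel_h // vae_factor // patch_size
    w = pixel_w // vae_factor // patch_size
    return h * w

assert img_token_count(1024, 1024) == 64 * 64   # 4096 image tokens
assert img_token_count(512, 768)   == 32 * 48   # non-square case
```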

    def add_region(self, context, mask):
        self.context_lens.append(context.shape[-2])
        self.masks       .append(mask)
        
        self.text_len = sum(self.context_lens)
        self.text_off = self.text_len
        
        self.num_regions += 1
        
    def add_region_sizes(self, context_size_list, mask):
        
        self.context_lens     .append(sum(context_size_list))
        self.context_lens_list.append(    context_size_list)
        self.masks            .append(mask)
        
        self.text_len = sum(sum(sublist) for sublist in self.context_lens_list)
        self.text_off = self.text_len

        self.num_regions += 1
        
    def add_regions(self, contexts, masks):
        for context, mask in zip(contexts, masks):
            self.add_region(context, mask)
    
    def clear_regions(self):
        self.context_lens  = []
        self.masks         = []
        self.text_len      = 0
        self.text_off      = 0
        self.num_regions   = 0
        
    def generate(self):
        print("Initializing ergosphere.")
        
    def get(self, **kwargs):
        return self.attn_mask(**kwargs)
    
    def attn_mask_recast(self, dtype):
        if self.attn_mask.mask.dtype != dtype:
            self.attn_mask.mask = self.attn_mask.mask.to(dtype)




class FullAttentionMask(BaseAttentionMask):
    def generate(self, mask_type=None, dtype=None):
        mask_type = self.mask_type if mask_type is None else mask_type
        dtype     = self.dtype     if dtype     is None else dtype
        text_off  = self.text_off
        text_len  = self.text_len
        img_len   = self.img_len
        t         = self.t
        h         = self.h
        w         = self.w
        
        if self.edge_width_list is None:
            self.edge_width_list = [self.edge_width] * self.num_regions
        
        attn_mask = torch.zeros((text_off+t*img_len, text_len+t*img_len), dtype=dtype)
        
        #cross_self_mask = torch.zeros((t*img_len, t*img_len), dtype=torch.float16)
        
        prev_len = 0
        for context_len, mask in zip(self.context_lens, self.masks):
            
            img2txt_mask    = F.interpolate(mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, context_len)
            img2txt_mask_sq = F.interpolate(mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)

            curr_len = prev_len + context_len
            
            attn_mask[prev_len:curr_len, prev_len:curr_len] = 1.0                                         # self             TXT 2 TXT
            attn_mask[prev_len:curr_len, text_len:        ] = img2txt_mask.transpose(-1, -2).repeat(1,t)  # cross            TXT 2 regional IMG    # txt2img_mask
            attn_mask[text_off:        , prev_len:curr_len] = img2txt_mask.repeat(t,1)                    # cross   regional IMG 2 TXT

            attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], fp_and(img2txt_mask_sq.repeat(t,t), img2txt_mask_sq.transpose(-1, -2).repeat(t,t))) # img2txt_mask_sq, txt2img_mask_sq
            
            #cross_self_mask[:,:] = fp_or(cross_self_mask, fp_and(img2txt_mask_sq.repeat(t,t), (1-img2txt_mask_sq).transpose(-1, -2).repeat(t,t)))
            
            prev_len = curr_len
            
        if self.mask_type.endswith(("_masked", "_A", "_AB", "_AC", "_A,unmasked")):
            img2txt_mask_sq = F.interpolate(self.masks[0].unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)
            attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)
        
        if self.mask_type.endswith(("_unmasked", "_C", "_BC", "_AC", "_B,unmasked", "_A,unmasked")):
            img2txt_mask_sq = F.interpolate(self.masks[-1].unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)
            attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)
            
        if self.mask_type.endswith(("_B", "_AB", "_BC", "_B,unmasked")):
            img2txt_mask_sq = F.interpolate(self.masks[1].unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)
            attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)
            
        if self.edge_width > 0:
            edge_mask = torch.zeros_like(self.masks[0])
            for mask in self.masks:
                edge_mask = fp_or(edge_mask, get_edge_mask(mask, dilation=self.edge_width))
                
            img2txt_mask_sq = F.interpolate(edge_mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)
            attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)
            
        elif self.edge_width_list is not None:
            edge_mask = torch.zeros_like(self.masks[0])
            
            for mask, edge_width in zip(self.masks, self.edge_width_list):
                if edge_width != 0:
                    edge_mask_new = get_edge_mask(mask, dilation=abs(edge_width))
                    edge_mask     = fp_or(edge_mask, fp_and(edge_mask_new, mask)) #fp_and here is to ensure edge_mask only grows into the region for current mask
                    
                    img2txt_mask_sq = F.interpolate(edge_mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)
                    attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)
            
        if self.use_self_attn_mask_list is not None:
            for mask, use_self_attn_mask in zip(self.masks, self.use_self_attn_mask_list):
                if not use_self_attn_mask:
                    img2txt_mask_sq = F.interpolate(mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)
                    attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)
        
        
        #cmask = torch.zeros((text_len+t*img_len), dtype=torch.bfloat16)
        #cmask[text_len:] = cross_self_mask #cmask[text_len:] + 0.25 * cross_self_mask
        
        #self.cross_self_mask = CoreAttnMask(cmask[None,None,...,None],     mask_type=mask_type)   # shape: 1, 1, txt_len+img_len, 1
        #self.cross_self_mask = CoreAttnMask(cross_self_mask[None,None,...,None],     mask_type=mask_type)   # shape: 1, 1, txt_len+img_len, 1
        #self.cross_self_mask = CoreAttnMask(cross_self_mask[None,None,...,None],     mask_type=mask_type)   # shape: 1, 1, txt_len+img_len, 1
        
        """
        cross_self_mask = F.interpolate(self.masks[0].unsqueeze(0).to(torch.bfloat16), (h, w), mode='nearest-exact').to(torch.bfloat16).flatten()#.unsqueeze(1) # .repeat(1, img_len)

        edge_mask = get_edge_mask(self.masks[0], dilation=80)
        edge_mask = F.interpolate(edge_mask.unsqueeze(0).to(torch.bfloat16), (h, w), mode='nearest-exact').flatten().unsqueeze(1).repeat(1, img_len)

        attn_mask[text_off:, text_len:] = F.interpolate((1-self.masks[0]).unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)
        attn_mask = attn_mask.to(torch.bfloat16)

        edge_mask = edge_mask.to(torch.bfloat16)"""

        self.cross_self_mask = CoreAttnMask(torch.zeros_like(img2txt_mask_sq).to(torch.bfloat16).squeeze(),     mask_type=mask_type)
        
        self.attn_mask       = CoreAttnMask(attn_mask, mask_type=mask_type)
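`generate` fills one `(text+img, text+img)` matrix: a block-diagonal TXT-to-TXT self block per region, TXT/IMG cross blocks weighted by each region's mask, and an IMG-to-IMG block that ORs in pixel pairs belonging to the same region. A toy version with two regions and a two-pixel "image" (sizes and masks are illustrative):

```python
import torch

ctx_lens = [2, 3]                       # tokens per regional prompt
masks    = [torch.tensor([1., 0.]),     # region A owns pixel 0
            torch.tensor([0., 1.])]     # region B owns pixel 1
text_len = sum(ctx_lens)
img_len  = 2
M = torch.zeros(text_len + img_len, text_len + img_len)

prev = 0
for ctx_len, m in zip(ctx_lens, masks):
    cur   = prev + ctx_len
    cross = m.unsqueeze(1).repeat(1, ctx_len)       # img2txt block
    sq    = m.unsqueeze(1).repeat(1, img_len)       # img2img block
    M[prev:cur, prev:cur]  = 1.0                    # self:  TXT 2 TXT
    M[prev:cur, text_len:] = cross.T                # cross: TXT 2 regional IMG
    M[text_len:, prev:cur] = cross                  # cross: regional IMG 2 TXT
    M[text_len:, text_len:] = torch.maximum(        # fp_or of per-region
        M[text_len:, text_len:],
        torch.minimum(sq, sq.T))                    # fp_and: same-region pixels
    prev = cur

# Pixel 0 attends only to region A's tokens, not region B's.
assert M[text_len + 0, :2].tolist()  == [1., 1.]
assert M[text_len + 0, 2:5].tolist() == [0., 0., 0.]
# Cross-region img2img attention is masked out.
assert M[text_len + 0, text_len + 1] == 0.0
```

The real `generate` additionally re-enables self-attention for selected regions (the `_A` / `_B` / `_unmasked` suffixes) and ORs edge bands back in to reduce seams.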









class FullAttentionMaskHiDream(BaseAttentionMask):
    def generate(self, mask_type=None, dtype=None):
        mask_type = self.mask_type if mask_type is None else mask_type
        dtype     = self.dtype     if dtype     is None else dtype
        text_off  = self.text_off
        text_len  = self.text_len
        img_len   = self.img_len
        t         = self.t
        h         = self.h
        w         = self.w
        
        if self.edge_width_list is None:
            self.edge_width_list = [self.edge_width] * self.num_regions
        
        attn_mask = torch.zeros((text_off+t*img_len, text_len+t*img_len), dtype=dtype)
        reg_num  = 0
        prev_len = 0
        for context_len, mask in zip(self.context_lens, self.masks):

            img2txt_mask_sq = F.interpolate(mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)

            curr_len = prev_len + context_len
            
            attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], fp_and(img2txt_mask_sq.repeat(t,t), img2txt_mask_sq.transpose(-1,-2).repeat(t,t))) # img2txt_mask_sq, txt2img_mask_sq
            
            prev_len = curr_len
            reg_num += 1
        
        self.self_attn_mask = attn_mask[text_off:, text_len:].clone()
        
        if self.mask_type.endswith(("_masked", "_A", "_AB", "_AC", "_A,unmasked")):
            img2txt_mask_sq = F.interpolate(self.masks[0].unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)
            attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)
        
        if self.mask_type.endswith(("_unmasked", "_C", "_BC", "_AC", "_B,unmasked", "_A,unmasked")):
            img2txt_mask_sq = F.interpolate(self.masks[-1].unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)
            attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)
            
        if self.mask_type.endswith(("_B", "_AB", "_BC", "_B,unmasked")):
            img2txt_mask_sq = F.interpolate(self.masks[1].unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)
            attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)
        
        if   self.edge_width > 0:
            edge_mask = torch.zeros_like(self.masks[0])
            for mask in self.masks:
                edge_mask_new = get_edge_mask(mask, dilation=abs(self.edge_width))
                edge_mask = fp_or(edge_mask, edge_mask_new)
                
            img2txt_mask_sq = F.interpolate(edge_mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)
            attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)
            
        elif self.edge_width < 0: # edge masks using cross-attn too
            edge_mask = torch.zeros_like(self.masks[0])
            for mask in self.masks:
                edge_mask = fp_or(edge_mask, get_edge_mask(mask, dilation=abs(self.edge_width)))
                
            img2txt_mask_sq = F.interpolate(edge_mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)
            attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)
        
        elif self.edge_width_list is not None: # reached when edge_width == 0; edge_width_list is always populated above, so per-region widths apply here
            edge_mask = torch.zeros_like(self.masks[0])
            
            for mask, edge_width in zip(self.masks, self.edge_width_list):
                if edge_width != 0:
                    edge_mask_new = get_edge_mask(mask, dilation=abs(edge_width))
                    edge_mask     = fp_or(edge_mask, fp_and(edge_mask_new, mask)) #fp_and here is to ensure edge_mask only grows into the region for current mask
                    
                    img2txt_mask_sq = F.interpolate(edge_mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)
                    attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)
            
        if self.use_self_attn_mask_list is not None:
            for mask, use_self_attn_mask in zip(self.masks, self.use_self_attn_mask_list):
                if not use_self_attn_mask:
                    img2txt_mask_sq = F.interpolate(mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)
                    attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)

        text_len_t5     = sum(sublist[0] for sublist in self.context_lens_list)
        img2txt_mask_t5 = torch.empty((img_len, text_len_t5)).to(attn_mask)
        offset_t5_start = 0
        reg_num_slice   = 0
        for context_len, mask_slice, edge_width in zip(self.context_lens, self.masks, self.edge_width_list):
            if self.edge_width < 0: # edge masks using cross-attn too
                mask_slice = fp_or(mask_slice, get_edge_mask(mask_slice, dilation=abs(self.edge_width)))
            if edge_width < 0: # edge masks using cross-attn too
                mask_slice = fp_or(mask_slice, get_edge_mask(mask_slice, dilation=abs(edge_width)))
            
            slice_len     = self.context_lens_list[reg_num_slice][0]
            offset_t5_end = offset_t5_start + slice_len
            
            img2txt_mask_slice = F.interpolate(mask_slice.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, slice_len)
            
            img2txt_mask_t5[:, offset_t5_start:offset_t5_end] = img2txt_mask_slice
            
            offset_t5_start = offset_t5_end
            reg_num_slice += 1
        
        text_len_llama     = sum(sublist[1] for sublist in self.context_lens_list)
        img2txt_mask_llama = torch.empty((img_len, text_len_llama)).to(attn_mask)
        offset_llama_start = 0
        reg_num_slice      = 0
        for context_len, mask_slice, edge_width in zip(self.context_lens, self.masks, self.edge_width_list):
            if self.edge_width < 0: # edge masks using cross-attn too
                mask_slice = fp_or(mask_slice, get_edge_mask(mask_slice, dilation=abs(self.edge_width)))
            if edge_width < 0: # edge masks using cross-attn too
                mask_slice = fp_or(mask_slice, get_edge_mask(mask_slice, dilation=abs(edge_width)))
                
            slice_len        = self.context_lens_list[reg_num_slice][1]
            offset_llama_end = offset_llama_start + slice_len
            
            img2txt_mask_slice = F.interpolate(mask_slice.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, slice_len)
            
            img2txt_mask_llama[:, offset_llama_start:offset_llama_end] = img2txt_mask_slice
            
            offset_llama_start = offset_llama_end
            reg_num_slice += 1
        
        img2txt_mask = torch.cat([img2txt_mask_t5, img2txt_mask_llama.repeat(1,2)], dim=-1)
        
        attn_mask[:-text_off , :-text_len ] = attn_mask[text_off:, text_len:].clone()
        attn_mask[:-text_off ,  -text_len:] = img2txt_mask
        attn_mask[ -text_off:, :-text_len ] = img2txt_mask.transpose(-2,-1)

        attn_mask[img_len:, img_len:] = 1.0   # txt -> txt "self-cross" attn is critical with hidream in most cases. checkerboard strategies are generally poo
        
        # mask cross attention between text embeds (note: this overwrites the full txt -> txt fill above with a block-checkerboard pattern)
        flat = [v for group in zip(*self.context_lens_list) for v in group]
        checkvar = checkerboard_variable(flat)
        attn_mask[img_len:, img_len:] = checkvar
        
        self.attn_mask = CoreAttnMask(attn_mask, mask_type=mask_type)



    def gen_edge_mask(self, block_idx):
        mask_type = self.mask_type
        dtype     = self.dtype     
        text_off  = self.text_off
        text_len  = self.text_len
        img_len   = self.img_len
        t         = self.t
        h         = self.h
        w         = self.w
        
        if self.edge_width_list is None:
            return self.attn_mask.mask
        else:
            #attn_mask = self.attn_mask.mask.clone()
            attn_mask = torch.zeros_like(self.attn_mask.mask)
            attn_mask[text_off:, text_len:] = self.self_attn_mask.clone()
            edge_mask = torch.zeros_like(self.masks[0])
            
            for mask, edge_width in zip(self.masks, self.edge_width_list):
                #edge_width *= (block_idx/48)
                edge_width = int(edge_width * torch.rand(1).item())   # randomize the dilation width per call
                if edge_width != 0:
                    edge_mask_new = get_edge_mask(mask, dilation=abs(edge_width))
                    edge_mask     = fp_or(edge_mask, fp_and(edge_mask_new, mask)) #fp_and here is to ensure edge_mask only grows into the region for current mask
                    
                    img2txt_mask_sq = F.interpolate(edge_mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)
                    
                    attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)


            if self.use_self_attn_mask_list is not None:
                for mask, use_self_attn_mask in zip(self.masks, self.use_self_attn_mask_list):
                    if not use_self_attn_mask:
                        img2txt_mask_sq = F.interpolate(mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, img_len)
                        attn_mask[text_off:, text_len:] = fp_or(attn_mask[text_off:, text_len:], img2txt_mask_sq)

            text_len_t5     = sum(sublist[0] for sublist in self.context_lens_list)
            img2txt_mask_t5 = torch.empty((img_len, text_len_t5)).to(attn_mask)
            offset_t5_start = 0
            reg_num_slice   = 0
            for context_len, mask_slice, edge_width in zip(self.context_lens, self.masks, self.edge_width_list):
                if self.edge_width < 0: # edge masks using cross-attn too
                    mask_slice = fp_or(mask_slice, get_edge_mask(mask_slice, dilation=abs(self.edge_width)))
                if edge_width < 0: # edge masks using cross-attn too
                    mask_slice = fp_or(mask_slice, get_edge_mask(mask_slice, dilation=abs(edge_width)))
                
                slice_len     = self.context_lens_list[reg_num_slice][0]
                offset_t5_end = offset_t5_start + slice_len
                
                img2txt_mask_slice = F.interpolate(mask_slice.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, slice_len)
                
                img2txt_mask_t5[:, offset_t5_start:offset_t5_end] = img2txt_mask_slice
                
                offset_t5_start = offset_t5_end
                reg_num_slice += 1
            
            text_len_llama     = sum(sublist[1] for sublist in self.context_lens_list)
            img2txt_mask_llama = torch.empty((img_len, text_len_llama)).to(attn_mask)
            offset_llama_start = 0
            reg_num_slice      = 0
            for context_len, mask_slice, edge_width in zip(self.context_lens, self.masks, self.edge_width_list):
                if self.edge_width < 0: # edge masks using cross-attn too
                    mask_slice = fp_or(mask_slice, get_edge_mask(mask_slice, dilation=abs(self.edge_width)))
                if edge_width < 0: # edge masks using cross-attn too
                    mask_slice = fp_or(mask_slice, get_edge_mask(mask_slice, dilation=abs(edge_width)))
                    
                slice_len        = self.context_lens_list[reg_num_slice][1]
                offset_llama_end = offset_llama_start + slice_len
                
                img2txt_mask_slice = F.interpolate(mask_slice.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, slice_len)
                
                img2txt_mask_llama[:, offset_llama_start:offset_llama_end] = img2txt_mask_slice
                
                offset_llama_start = offset_llama_end
                reg_num_slice += 1
            
            img2txt_mask = torch.cat([img2txt_mask_t5, img2txt_mask_llama.repeat(1,2)], dim=-1)
            
            attn_mask[:-text_off , :-text_len ] = attn_mask[text_off:, text_len:].clone()
            attn_mask[:-text_off ,  -text_len:] = img2txt_mask
            attn_mask[ -text_off:, :-text_len ] = img2txt_mask.transpose(-2,-1)

            attn_mask[img_len:, img_len:] = 1.0   # txt -> txt "self-cross" attn is critical with hidream in most cases. checkerboard strategies are generally poo
            
            # mask cross attention between text embeds (note: this overwrites the full txt -> txt fill above with a block-checkerboard pattern)
            flat = [v for group in zip(*self.context_lens_list) for v in group]
            checkvar = checkerboard_variable(flat)
            attn_mask[img_len:, img_len:] = checkvar
            
            return attn_mask.to('cuda')   # NOTE: hardcoded device; assumes CUDA is available
        
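# The generate() method above assembles one square mask over the joint [image ‖ text] token
# sequence: the img->img block is a union of per-region self-attention squares, the img->txt
# and txt->img blocks gate each image token to its region's context slice, and the txt->txt
# block is then filled (and checkerboarded by checkerboard_variable). A minimal pure-Python
# sketch of that block layout, with toy sizes and elementwise max/min standing in for
# fp_or/fp_and (an assumption about those helpers, not their confirmed implementation):

```python
# Toy sketch of the joint-mask layout built by FullAttentionMaskHiDream.generate().
img_len, text_len = 4, 4                      # 4 image tokens, 2 regions x 2 text tokens
region_masks   = [[1, 1, 0, 0], [0, 0, 1, 1]] # flattened spatial masks per region
context_slices = [(0, 2), (2, 4)]             # each region owns a slice of the text tokens

n = img_len + text_len
attn = [[0.0] * n for _ in range(n)]

for m, (lo, hi) in zip(region_masks, context_slices):
    for i in range(img_len):
        for j in range(img_len):              # img->img: union of per-region squares
            attn[i][j] = max(attn[i][j], min(m[i], m[j]))
        for j in range(lo, hi):               # img->txt / txt->img: region gating
            attn[i][img_len + j] = max(attn[i][img_len + j], m[i])
            attn[img_len + j][i] = max(attn[img_len + j][i], m[i])

for i in range(text_len):                     # txt->txt: full fill (before checkerboarding)
    for j in range(text_len):
        attn[img_len + i][img_len + j] = 1.0
```

# With these toy masks, image token 0 (region A) can attend to image token 1 but not to
# token 2, and only to region A's text slice.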
        
class RegionalContext:
    def __init__(self, idle_device='cpu', work_device='cuda'):
        self.context  = None
        self.clip_fea = None
        self.llama3   = None
        self.context_list = []
        self.clip_fea_list = []
        self.clip_pooled_list = []
        self.llama3_list = []
        self.t5_list     = []
        self.pooled_output = None
        self.idle_device = idle_device
        self.work_device = work_device
    
    def add_region(self, context, pooled_output=None, clip_fea=None):
        if self.context is not None:
            self.context = torch.cat([self.context, context], dim=1)
        else:
            self.context = context
        self.context_list.append(context)
            
        if pooled_output is not None:
            self.clip_pooled_list.append(pooled_output)
        
        if clip_fea is not None:
            if self.clip_fea is not None:
                self.clip_fea = torch.cat([self.clip_fea, clip_fea], dim=1)
            else:
                self.clip_fea = clip_fea
            self.clip_fea_list.append(clip_fea)
        


    def add_region_clip_fea(self, clip_fea):
        if self.clip_fea is not None:
            self.clip_fea = torch.cat([self.clip_fea, clip_fea], dim=1)
        else:
            self.clip_fea = clip_fea
        self.clip_fea_list.append(clip_fea)

    def add_region_llama3(self, llama3):
        if self.llama3 is not None:
            self.llama3 = torch.cat([self.llama3, llama3], dim=-2)   # base shape 1,32,128,4096
        else:
            self.llama3 = llama3
            
    def add_region_hidream(self, t5, llama3):
        self.t5_list    .append(t5)
        self.llama3_list.append(llama3)

    def clear_regions(self):
        if self.context is not None:
            del self.context
            self.context = None
        if self.clip_fea is not None:
            del self.clip_fea
            self.clip_fea = None
        if self.llama3 is not None:
            del self.llama3
            self.llama3 = None
            
        del self.t5_list
        del self.llama3_list
        self.t5_list     = []
        self.llama3_list = []

    def get(self):
        return self.context.to(self.work_device)

    def get_clip_fea(self):
        if self.clip_fea is not None:
            return self.clip_fea.to(self.work_device)
        else:
            return None

    def get_llama3(self):
        if self.llama3 is not None:
            return self.llama3.to(self.work_device)
        else:
            return None


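# RegionalContext.add_region above grows self.context by concatenating each region's
# embeddings along the sequence dimension (dim=1), so the joint context length is the sum
# of the per-region lengths. A hedged sketch of that bookkeeping with plain nested lists
# standing in for (1, n, d) tensors:

```python
# Plain-list sketch of RegionalContext.add_region's dim=1 concatenation.
def add_region(context, region):
    # torch.cat([context, region], dim=1) == extending the sequence axis
    return region if context is None else [context[0] + region[0]]

ctx = None
for n_tokens in (3, 2):                      # two regions with 3 and 2 tokens each
    region = [[[0.0, 0.0]] * n_tokens]       # (1, n_tokens, 2) placeholder embeddings
    ctx = add_region(ctx, region)

seq_len = len(ctx[0])                        # joint sequence length = 3 + 2
```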
class CrossAttentionMask(BaseAttentionMask):
    def generate(self, mask_type=None, dtype=None):
        mask_type = self.mask_type if mask_type is None else mask_type
        dtype     = self.dtype     if dtype     is None else dtype
        text_off  = self.text_off
        text_len  = self.text_len
        img_len   = self.img_len
        t         = self.t
        h         = self.h
        w         = self.w
        
        cross_attn_mask = torch.zeros((t * img_len,    text_len), dtype=dtype)
    
        prev_len = 0
        for context_len, mask in zip(self.context_lens, self.masks):

            cross_mask, self_mask = None, None
            if mask.ndim == 6:
                mask.squeeze_(0)
            if mask.ndim == 3:
                t_mask = mask.shape[0]
            elif mask.ndim == 4:
                if mask.shape[0] > 1:

                    cross_mask = mask[0]
                    if cross_mask.shape[-3] > self.t:
                        cross_mask = cross_mask[:self.t,...]
                    elif cross_mask.shape[-3] < self.t:
                        cross_mask = F.pad(cross_mask.permute(1,2,0), [0,self.t-cross_mask.shape[-3]], value=0).permute(2,0,1)

                    t_mask = self.t
                else:
                    t_mask = mask.shape[-3]
                    mask.squeeze_(0)
            elif mask.ndim == 5:
                t_mask = mask.shape[-3]
            else:
                t_mask = 1
                mask.unsqueeze_(0)
                
            if cross_mask is not None:
                img2txt_mask    = F.interpolate(cross_mask.unsqueeze(0).unsqueeze(0).to(torch.float16), (t_mask, h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1)
            else:
                img2txt_mask    = F.interpolate(      mask.unsqueeze(0).unsqueeze(0).to(torch.float16), (t_mask, h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1)
            
            if t_mask == 1: # a single-frame mask must be tiled across its context tokens (slice assignment below would broadcast the size-1 dim anyway)
                img2txt_mask = img2txt_mask.repeat(1, context_len)   

            curr_len = prev_len + context_len
            
            if t_mask == 1:
                cross_attn_mask[:, prev_len:curr_len] = img2txt_mask.repeat(t,1)
            else:
                cross_attn_mask[:, prev_len:curr_len] = img2txt_mask
            
            prev_len = curr_len
                            
        self.attn_mask = CoreAttnMask(cross_attn_mask, mask_type=mask_type)



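# CrossAttentionMask.generate above writes each region's flattened spatial mask into its
# own column slice [prev_len:curr_len] of a (t*img_len, text_len) matrix, so an image token
# attends only to the text tokens of regions that cover it. A pure-Python sketch with
# illustrative toy sizes:

```python
# Toy sketch of CrossAttentionMask's per-region column assignment.
img_len       = 4
context_lens  = [2, 3]                        # tokens per region prompt
region_masks  = [[1, 1, 0, 0], [0, 0, 1, 1]]  # flattened spatial masks

text_len = sum(context_lens)
cross = [[0.0] * text_len for _ in range(img_len)]

prev_len = 0
for context_len, mask in zip(context_lens, region_masks):
    curr_len = prev_len + context_len
    for i in range(img_len):
        for j in range(prev_len, curr_len):   # img2txt_mask.repeat(1, context_len)
            cross[i][j] = float(mask[i])
    prev_len = curr_len
```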

class SplitAttentionMask(BaseAttentionMask):
    def generate(self, mask_type=None, dtype=None):
        mask_type = self.mask_type if mask_type is None else mask_type
        dtype     = self.dtype     if dtype     is None else dtype
        text_off  = self.text_off
        text_len  = self.text_len
        img_len   = self.img_len
        t         = self.t
        h         = self.h
        w         = self.w
        
        if self.edge_width_list is None:
            self.edge_width_list = [self.edge_width] * self.num_regions
        
        cross_attn_mask = torch.zeros((t * img_len,    text_len), dtype=dtype)
        self_attn_mask  = torch.zeros((t * img_len, t * img_len), dtype=dtype)
    
        prev_len = 0
        self_masks = []
        for context_len, mask in zip(self.context_lens, self.masks):

            cross_mask, self_mask = None, None
            if mask.ndim == 6:
                mask.squeeze_(0)
            if mask.ndim == 3:
                t_mask = mask.shape[0]
            elif mask.ndim == 4:

                if mask.shape[0] > 1:
                    cross_mask = mask[0]
                    if cross_mask.shape[-3] > self.t:
                        cross_mask = cross_mask[:self.t,...]
                    elif cross_mask.shape[-3] < self.t:
                        cross_mask = F.pad(cross_mask.permute(1,2,0), [0,self.t-cross_mask.shape[-3]], value=0).permute(2,0,1)

                    self_mask = mask[1]
                    if self_mask.shape[-3] > self.t:
                        self_mask = self_mask[:self.t,...]
                    elif self_mask.shape[-3] < self.t:
                        self_mask = F.pad(self_mask.permute(1,2,0), [0,self.t-self_mask.shape[-3]], value=0).permute(2,0,1)

                    t_mask = self.t
                else:
                    t_mask = mask.shape[-3]
                    mask.squeeze_(0)
            elif mask.ndim == 5:
                t_mask = mask.shape[-3]
            else:
                t_mask = 1
                mask.unsqueeze_(0)
                
            if cross_mask is not None:
                img2txt_mask    = F.interpolate(cross_mask.unsqueeze(0).unsqueeze(0).to(torch.float16), (t_mask, h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1)
            else:
                img2txt_mask    = F.interpolate(      mask.unsqueeze(0).unsqueeze(0).to(torch.float16), (t_mask, h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1)
            
            if t_mask == 1: # a single-frame mask must be tiled across its context tokens (slice assignment below would broadcast the size-1 dim anyway)
                img2txt_mask = img2txt_mask.repeat(1, context_len)   

            curr_len = prev_len + context_len
            
            if t_mask == 1:
                cross_attn_mask[:, prev_len:curr_len] = img2txt_mask.repeat(t,1)
            else:
                cross_attn_mask[:, prev_len:curr_len] = img2txt_mask
            
            if self_mask is not None:
                img2txt_mask_sq = F.interpolate(self_mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, t_mask * img_len)
            else:
                img2txt_mask_sq = F.interpolate(     mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, t_mask * img_len)
            self_masks.append(img2txt_mask_sq)
            
            if t_mask > 1:
                self_attn_mask = fp_or(self_attn_mask, fp_and(img2txt_mask_sq, img2txt_mask_sq.transpose(-1,-2)))
            else:
                self_attn_mask = fp_or(self_attn_mask, fp_and(img2txt_mask_sq, img2txt_mask_sq.transpose(-1,-2)).repeat(t,t))   # AND with the transpose first, then tile the region square across frames
            
            prev_len = curr_len

        if self.mask_type.endswith(("_masked", "_A", "_AB", "_AC", "_A,unmasked")):
            self_attn_mask = fp_or(self_attn_mask, self_masks[0])
        
        if self.mask_type.endswith(("_unmasked", "_C", "_BC", "_AC", "_B,unmasked", "_A,unmasked")):
            self_attn_mask = fp_or(self_attn_mask, self_masks[-1])
            
        if self.mask_type.endswith(("_B", "_AB", "_BC", "_B,unmasked")):
            self_attn_mask = fp_or(self_attn_mask, self_masks[1])
            
        if   self.edge_width > 0:
            edge_mask = torch.zeros_like(self.masks[0])
            for mask in self.masks:
                edge_mask_new = get_edge_mask(mask, dilation=abs(self.edge_width))
                edge_mask = fp_or(edge_mask, edge_mask_new)
            
            img2txt_mask_sq = F.interpolate(edge_mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, t_mask * img_len)
            self_attn_mask = fp_or(self_attn_mask, img2txt_mask_sq)
            
        elif self.edge_width_list is not None: # reached when edge_width <= 0; edge_width_list is always populated above, so per-region widths apply here
            edge_mask = torch.zeros_like(self.masks[0])
            
            for mask, edge_width in zip(self.masks, self.edge_width_list):
                if edge_width != 0:
                    edge_mask_new = get_edge_mask(mask, dilation=abs(edge_width))
                    edge_mask     = fp_or(edge_mask, fp_and(edge_mask_new, mask)) #fp_and here is to ensure edge_mask only grows into the region for current mask
                    
                    img2txt_mask_sq = F.interpolate(edge_mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, t_mask * img_len)
                    self_attn_mask = fp_or(self_attn_mask, img2txt_mask_sq)
            
        if self.use_self_attn_mask_list is not None:
            for mask, use_self_attn_mask in zip(self.masks, self.use_self_attn_mask_list):
                if not use_self_attn_mask:
                    img2txt_mask_sq = F.interpolate(mask.unsqueeze(0).to(torch.float16), (h, w), mode='nearest-exact').to(dtype).flatten().unsqueeze(1).repeat(1, t_mask * img_len)
                    self_attn_mask = fp_or(self_attn_mask, img2txt_mask_sq)
        
        
        attn_mask = torch.cat([cross_attn_mask, self_attn_mask], dim=1)
        
        self.attn_mask = CoreAttnMask(attn_mask, mask_type=mask_type)


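# Both mask classes above combine float masks with fp_or/fp_and. Assuming these behave as
# elementwise max/min on masks in [0, 1] (a fuzzy-logic reading of OR/AND — an assumption,
# not the confirmed implementation), the key identity used at the self-attention step is
# that ANDing a broadcast column mask with its transpose keeps only token pairs that are
# both inside the region:

```python
# Hypothetical elementwise max/min stand-ins for fp_or/fp_and (assumption).
def fp_or(a, b):  return [[max(x, y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]
def fp_and(a, b): return [[min(x, y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

m = [1.0, 1.0, 0.0]                 # flattened region mask
col = [[v] * 3 for v in m]          # mask broadcast down columns
row = [list(m) for _ in m]          # its transpose
square = fp_and(col, row)           # token pairs inside the region only
```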

================================================
FILE: aura/mmdit.py
================================================
#AuraFlow MMDiT
#Originally written by the AuraFlow Authors

import math

import torch
import torch.nn as nn
import torch.nn.functional as F

#from comfy.ldm.modules.attention import optimized_attention
from comfy.ldm.modules.attention import attention_pytorch

import comfy.ops
import comfy.ldm.common_dit

from ..helper import ExtraOptions

from typing import Dict, Optional, Tuple, List
from ..latents import slerp_tensor, interpolate_spd, tile_latent, untile_latent, gaussian_blur_2d, median_blur_2d
from ..style_transfer import apply_scattersort_masked, apply_scattersort_tiled, adain_seq_inplace, adain_patchwise_row_batch_med, adain_patchwise_row_batch
from einops import rearrange


def modulate(x, shift, scale):
    return x * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)

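# modulate() above applies AdaLN-style conditioning: each channel is scaled by (1 + scale)
# and shifted, with shift/scale broadcast across the sequence dimension via unsqueeze(1).
# A scalar sketch of the same arithmetic:

```python
# Per-channel arithmetic of modulate(): x * (1 + scale) + shift.
def modulate_scalar(x, shift, scale):
    return x * (1.0 + scale) + shift

identity = modulate_scalar(0.5, 0.0, 0.0)   # zero shift/scale leaves the activation unchanged
shifted  = modulate_scalar(0.5, 1.0, 1.0)   # 0.5 * 2 + 1
```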

def find_multiple(n: int, k: int) -> int:
    if n % k == 0:
        return n
    return n + k - (n % k)

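# find_multiple() rounds n up to the next multiple of k; the MLP class below uses it to pad
# the SwiGLU hidden width (2/3 of hidden_dim) to a multiple of 256. Worked through for an
# illustrative dim of 3072 (the 3072 figure appears in shape comments later in this file):

```python
def find_multiple(n: int, k: int) -> int:   # as defined above
    if n % k == 0:
        return n
    return n + k - (n % k)

dim        = 3072                             # illustrative model width
hidden_dim = 4 * dim                          # 12288
n_hidden   = int(2 * hidden_dim / 3)          # 8192
n_hidden   = find_multiple(n_hidden, 256)     # already a multiple of 256: stays 8192
```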

class MLP(nn.Module): # not executed directly with ReAura?
    def __init__(self, dim, hidden_dim=None, dtype=None, device=None, operations=None) -> None:
        super().__init__()
        if hidden_dim is None:
            hidden_dim = 4 * dim

        n_hidden = int(2 * hidden_dim / 3)
        n_hidden = find_multiple(n_hidden, 256)

        self.c_fc1 = operations.Linear(dim, n_hidden, bias=False, dtype=dtype, device=device)
        self.c_fc2 = operations.Linear(dim, n_hidden, bias=False, dtype=dtype, device=device)
        self.c_proj = operations.Linear(n_hidden, dim, bias=False, dtype=dtype, device=device)

    #@torch.compile(mode="default", dynamic=False, fullgraph=False, backend="inductor")
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.silu(self.c_fc1(x)) * self.c_fc2(x)
        x = self.c_proj(x)
        return x


class MultiHeadLayerNorm(nn.Module):
    def __init__(self, hidden_size=None, eps=1e-5, dtype=None, device=None):
        # Copy pasta from https://github.com/huggingface/transformers/blob/e5f71ecaae50ea476d1e12351003790273c4b2ed/src/transformers/models/cohere/modeling_cohere.py#L78

        super().__init__()
        self.weight = nn.Parameter(torch.empty(hidden_size, dtype=dtype, device=device))
        self.variance_epsilon = eps

    #@torch.compile(mode="default", dynamic=False, fullgraph=False, backend="inductor")
    def forward(self, hidden_states):
        input_dtype   =  hidden_states.dtype
        hidden_states =  hidden_states.to(torch.float32)
        mean          =  hidden_states.mean(-1,                keepdim=True)
        variance      = (hidden_states - mean).pow(2).mean(-1, keepdim=True)
        hidden_states = (hidden_states - mean) * torch.rsqrt(
            variance + self.variance_epsilon
        )
        hidden_states = self.weight.to(torch.float32) * hidden_states
        return hidden_states.to(input_dtype)

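# MultiHeadLayerNorm above normalizes in float32 by subtracting the mean and dividing by
# sqrt(variance + eps), then applies a learned weight with no additive bias (unlike standard
# LayerNorm it omits beta). The same arithmetic in pure Python:

```python
import math

def layernorm_no_bias(xs, weight, eps=1e-5):
    # mean-subtract, scale by the reciprocal sqrt of variance, then elementwise weight
    mean = sum(xs) / len(xs)
    var  = sum((x - mean) ** 2 for x in xs) / len(xs)
    inv  = 1.0 / math.sqrt(var + eps)
    return [w * (x - mean) * inv for x, w in zip(xs, weight)]

out = layernorm_no_bias([1.0, 2.0, 3.0], [1.0, 1.0, 1.0])   # symmetric around 0
```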
class ReSingleAttention(nn.Module):
    def __init__(self, dim, n_heads, mh_qknorm=False, dtype=None, device=None, operations=None):
        super().__init__()

        self.n_heads = n_heads
        self.head_dim = dim // n_heads

        # this is for cond
        self.w1q = operations.Linear(dim, dim, bias=False, dtype=dtype, device=device)
        self.w1k = operations.Linear(dim, dim, bias=False, dtype=dtype, device=device)
        self.w1v = operations.Linear(dim, dim, bias=False, dtype=dtype, device=device)
        self.w1o = operations.Linear(dim, dim, bias=False, dtype=dtype, device=device)

        self.q_norm1 = (
            MultiHeadLayerNorm((self.n_heads, self.head_dim), dtype=dtype, device=device)
            if mh_qknorm
            else operations.LayerNorm(self.head_dim, elementwise_affine=False, dtype=dtype, device=device)
        )
        self.k_norm1 = (
            MultiHeadLayerNorm((self.n_heads, self.head_dim), dtype=dtype, device=device)
            if mh_qknorm
            else operations.LayerNorm(self.head_dim, elementwise_affine=False, dtype=dtype, device=device)
        )

    #@torch.compile(mode="default", dynamic=False, fullgraph=False, backend="inductor")              # c = 1,4552,3072      #operations.Linear = torch.nn.Linear with recast
    def forward(self, c, mask=None):

        bsz, seqlen1, _ = c.shape

        q, k, v = self.w1q(c), self.w1k(c), self.w1v(c)
        q = q.view(bsz, seqlen1, self.n_heads, self.head_dim)
        k = k.view(bsz, seqlen1, self.n_heads, self.head_dim)
        v = v.view(bsz, seqlen1, self.n_heads, self.head_dim)
        q, k = self.q_norm1(q), self.k_norm1(k)

        output = attention_pytorch(q.permute(0, 2, 1, 3), k.permute(0, 2, 1, 3), v.permute(0, 2, 1, 3), self.n_heads, skip_reshape=True, mask=mask)
        c = self.w1o(output)
        return c



class ReDoubleAttention(nn.Module):
    def __init__(self, dim, n_heads, mh_qknorm=False, dtype=None, device=None, operations=None):
        super().__init__()

        self.n_heads  = n_heads
        self.head_dim = dim // n_heads

        # this is for cond (weight names use the digit 1, not a lowercase L)
        self.w1q = operations.Linear(dim, dim, bias=False, dtype=dtype, device=device)
        self.w1k = operations.Linear(dim, dim, bias=False, dtype=dtype, device=device)
        self.w1v = operations.Linear(dim, dim, bias=False, dtype=dtype, device=device)
        self.w1o = operations.Linear(dim, dim, bias=False, dtype=dtype, device=device)

        # this is for x
        self.w2q = operations.Linear(dim, dim, bias=False, dtype=dtype, device=device)
        self.w2k = operations.Linear(dim, dim, bias=False, dtype=dtype, device=device)
        self.w2v = operations.Linear(dim, dim, bias=False, dtype=dtype, device=device)
        self.w2o = operations.Linear(dim, dim, bias=False, dtype=dtype, device=device)

        self.q_norm1 = (
            MultiHeadLayerNorm((self.n_heads, self.head_dim), dtype=dtype, device=device)
            if mh_qknorm
            else operations.LayerNorm(self.head_dim, elementwise_affine=False, dtype=dtype, device=device)
        )
        self.k_norm1 = (
            MultiHeadLayerNorm((self.n_heads, self.head_dim), dtype=dtype, device=device)
            if mh_qknorm
            else operations.LayerNorm(self.head_dim, elementwise_affine=False, dtype=dtype, device=device)
        )

        self.q_norm2 = (
            MultiHeadLayerNorm((self.n_heads, self.head_dim), dtype=dtype, device=device)
            if mh_qknorm
            else operations.LayerNorm(self.head_dim, elementwise_affine=False, dtype=dtype, device=device)
        )
        self.k_norm2 = (
            MultiHeadLayerNorm((self.n_heads, self.head_dim), dtype=dtype, device=device)
            if mh_qknorm
            else operations.LayerNorm(self.head_dim, elementwise_affine=False, dtype=dtype, device=device)
        )


    #@torch.compile(mode="default", dynamic=False, fullgraph=False, backend="inductor")         # c.shape 1,264,3072    x.shape 1,4032,3072   
    def forward(self, c, x, mask=None):

        bsz, seqlen1, _ = c.shape
        bsz, seqlen2, _ = x.shape

        cq, ck, cv = self.w1q(c), self.w1k(c), self.w1v(c)
        cq         = cq.view(bsz, seqlen1, self.n_heads, self.head_dim)
        ck         = ck.view(bsz, seqlen1, self.n_heads, self.head_dim)
        cv         = cv.view(bsz, seqlen1, self.n_heads, self.head_dim)
        cq, ck     = self.q_norm1(cq), self.k_norm1(ck)

        xq, xk, xv = self.w2q(x), self.w2k(x), self.w2v(x)
        xq         = xq.view(bsz, seqlen2, self.n_heads, self.head_dim)
        xk         = xk.view(bsz, seqlen2, self.n_heads, self.head_dim)
        xv         = xv.view(bsz, seqlen2, self.n_heads, self.head_dim)
        xq, xk     = self.q_norm2(xq), self.k_norm2(xk)

        # concat all     q,k,v.shape 1,4299,12,256           cq 1,267,12,256   xq 1,4032,12,256      self.n_heads 12      
        q, k, v = (
            torch.cat([cq, xq], dim=1),
            torch.cat([ck, xk], dim=1),
            torch.cat([cv, xv], dim=1),
        )
        # the joint attn mask, if provided, spans the concatenated sequence (e.g. 4299x4299) and is passed straight through to attention_pytorch below
        
        output = attention_pytorch(q.permute(0, 2, 1, 3), k.permute(0, 2, 1, 3), v.permute(0, 2, 1, 3), self.n_heads, skip_reshape=True, mask=mask)

        c, x = output.split([seqlen1, seqlen2], dim=1)
        c    = self.w1o(c)
        x    = self.w2o(x)

        return c, x
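A minimal sketch of the dual-stream ("double") attention pattern used by ReDoubleAttention above: context and image tokens get separate QKV projections, attend jointly after concatenation along the token axis, then are split back into two streams. This uses `F.scaled_dot_product_attention` in place of the repo's `attention_pytorch` helper, and the sizes (`dim=64`, 4 heads) are illustrative, not the model's real dimensions.

```python
import torch
import torch.nn.functional as F

def joint_attention(cq, ck, cv, xq, xk, xv):
    # c*: (B, Tc, H, D), x*: (B, Tx, H, D) -- concatenate the two streams
    # along the token axis so both attend over the joint sequence
    q = torch.cat([cq, xq], dim=1).permute(0, 2, 1, 3)  # (B, H, Tc+Tx, D)
    k = torch.cat([ck, xk], dim=1).permute(0, 2, 1, 3)
    v = torch.cat([cv, xv], dim=1).permute(0, 2, 1, 3)
    out = F.scaled_dot_product_attention(q, k, v)       # joint attention
    out = out.permute(0, 2, 1, 3).flatten(2)            # (B, Tc+Tx, H*D)
    # split back into context and image streams, as the forward() above does
    return out.split([cq.shape[1], xq.shape[1]], dim=1)

B, Tc, Tx, H, D = 1, 8, 16, 4, 16
c = torch.randn(B, Tc, H, D)  # stand-in for normed context q/k/v
x = torch.randn(B, Tx, H, D)  # stand-in for normed image q/k/v
c_out, x_out = joint_attention(c, c, c, x, x, x)
```

After the split, each stream gets its own output projection (`w1o` for context, `w2o` for image), which is what keeps the two streams' parameterizations separate despite the shared attention.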


class ReMMDiTBlock(nn.Module):
    def __init__(self, dim, heads=8, global_conddim=1024, is_last=False, dtype=None, device=None, operations=None):
        super().__init__()

        self.normC1 = operations.LayerNorm(dim, elementwise_affine=False, dtype=dtype, device=device)
        self.normC2 = operations.LayerNorm(dim, elementwise_affine=False, dtype=dtype, device=device)
        if not is_last:
            self.mlpC = MLP(dim, hidden_dim=dim * 4, dtype=dtype, device=device, operations=operations)
            self.modC = nn.Sequential(
                nn.SiLU(),
                operations.Linear(global_conddim, 6 * dim, bias=False, dtype=dtype, device=device),
            )
        else:
            self.modC = nn.Sequential(
                nn.SiLU(),
                operations.Linear(global_conddim, 2 * dim, bias=False, dtype=dtype, device=device),
            )

        self.normX1 = operations.LayerNorm(dim, elementwise_affine=False, dtype=dtype, device=device)
        self.normX2 = operations.LayerNorm(dim, elementwise_affine=False, dtype=dtype, device=device)
        self.mlpX = MLP(dim, hidden_dim=dim * 4, dtype=dtype, device=device, operations=operations)
        self.modX = nn.Sequential(
            nn.SiLU(),
            operations.Linear(global_conddim, 6 * dim, bias=False, dtype=dtype, device=device),
        )

        self.attn = ReDoubleAttention(dim, heads, dtype=dtype, device=device, operations=operations)
        self.is_last = is_last

    #@torch.compile(mode="default", dynamic=False, fullgraph=False, backend="inductor")                    # MAIN BLOCK
    def forward(self, c, x, global_cond, mask=None, **kwargs):

        cres, xres = c, x

        cshift_msa, cscale_msa, cgate_msa, cshift_mlp, cscale_mlp, cgate_mlp = (
            self.modC(global_cond).chunk(6, dim=1)
        )

        c = modulate(self.normC1(c), cshift_msa, cscale_msa)

        # xpath
        xshift_msa, xscale_msa, xgate_msa, xshift_mlp, xscale_mlp, xgate_mlp = (
            self.modX(global_cond).chunk(6, dim=1)
        )

        x = modulate(self.normX1(x), xshift_msa, xscale_msa)

        # attention    c.shape 1,520,3072   x.shape 1,6144,3072
        c, x = self.attn(c, x, mask=mask)

        c = self.normC2(cres + cgate_msa.unsqueeze(1) * c)
        c = cgate_mlp.unsqueeze(1) * self.mlpC(modulate(c, cshift_mlp, cscale_mlp))
        c = cres + c

        x = self.normX2(xres + xgate_msa.unsqueeze(1) * x)
        x = xgate_mlp.unsqueeze(1) * self.mlpX(modulate(x, xshift_mlp, xscale_mlp))
        x = xres + x

        return c, x
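The block above follows the standard adaLN-Zero recipe: a SiLU + Linear on the global conditioning vector produces six `(B, dim)` tensors via `chunk(6)`, used as shift/scale before attention and the MLP and as gates on the residual adds. The sketch below assumes `modulate(x, shift, scale) = x * (1 + scale) + shift` (the common DiT convention; `modulate` itself is defined elsewhere in this file), with illustrative sizes.

```python
import torch
import torch.nn as nn

dim, B, T = 32, 2, 5
modC = nn.Sequential(nn.SiLU(), nn.Linear(dim, 6 * dim, bias=False))
global_cond = torch.randn(B, dim)

# six (B, dim) modulation tensors, as in self.modC(global_cond).chunk(6, dim=1)
shift_msa, scale_msa, gate_msa, shift_mlp, scale_mlp, gate_mlp = (
    modC(global_cond).chunk(6, dim=1)
)

x = torch.randn(B, T, dim)
# pre-attention modulation: assumed modulate() convention
h = x * (1 + scale_msa.unsqueeze(1)) + shift_msa.unsqueeze(1)
attn_out = h  # stand-in for the attention call
# gated residual add, matching `cres + cgate_msa.unsqueeze(1) * c` above
x = x + gate_msa.unsqueeze(1) * attn_out
```

The same shift/scale/gate triple is applied again around the MLP, which is why the conditioning projection emits `6 * dim` channels (or `2 * dim` for the final block, which only needs one shift/scale pair).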

class ReDiTBlock(nn.Module):
    # like MMDiTBlock, but it only has X
    def __init__(self, dim, heads=8, global_conddim=1024, dtype=None, device=None, operations=None):
        super().__init__()

        self.norm1 = operations.LayerNorm(dim, elementwise_affine=False, dtype=dtype, device=device)
        self.norm2 = operations.LayerNorm(dim, elementwise_affine=False, dtype=dtype, device=device)

        self.modCX = nn.Sequential(
            nn.SiLU(),
            operations.Linear(global_conddim, 6 * dim, bias=False, dtype=dtype, device=device),
        )

        self.attn = ReSingleAttention(dim, heads, dtype=dtype, device=device, operations=operations)
        self.mlp = MLP(dim, hidden_dim=dim * 4, dtype=dtype, device=device, operations=operations)

    #@torch.compile(mode="default", dynamic=False, fullgraph=False, backend="inductor")         # cx.shape 1,6664,3072   global_cond.shape 1,3072   mlpout.shape 1,6664,3072       float16
    def forward(self, cx, global_cond, mask=None, **kwargs):
        cxres = cx   
        shift_msa, scale_msa, gate_msa, shift_mlp, scale_mlp, gate_mlp = self.modCX(
            global_cond
        ).chunk(6, dim=1)
        cx = modulate(self.norm1(cx), shift_msa, scale_msa)
        cx = self.attn(cx, mask=mask)
        cx = self.norm2(cxres + gate_msa.unsqueeze(1) * cx)
        mlpout = self.mlp(modulate(cx, shift_mlp, scale_mlp))
        cx = gate_mlp.unsqueeze(1) * mlpout

        cx = cxres + cx    # residual connection

        return cx



class TimestepEmbedder(nn.Module):
    def __init__(self, hidden_size, frequency_embedding_size=256, dtype=None, device=None, operations=None):
        super().__init__()
        self.mlp = nn.Sequential(
            operations.Linear(frequency_embedding_size, hidden_size, dtype=dtype, device=device),
            nn.SiLU(),
            operations.Linear(hidden_size, hidden_size, dtype=dtype, device=device),
        )
        self.frequency_embedding_size = frequency_embedding_size

    @staticmethod
    def timestep_embedding(t, dim, max_period=10000):
        half = dim // 2
        freqs = 1000 * torch.exp(   # 1000x rescales flow-matching t in [0, 1] to the usual ~[0, 1000] range
            -math.log(max_period) * torch.arange(start=0, end=half) / half
        ).to(t.device)
        args = t[:, None] * freqs[None]
        embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)
        if dim % 2:
            embedding = torch.cat(
                [embedding, torch.zeros_like(embedding[:, :1])], dim=-1
            )
        return embedding

    #@torch.compile(mode="default", dynamic=False, fullgraph=False, backend="inductor")
    def forward(self, t, dtype):
        t_freq = self.timestep_embedding(t, self.frequency_embedding_size).to(dtype)
        t_emb = self.mlp(t_freq)
        return t_emb
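A standalone version of `TimestepEmbedder.timestep_embedding` above: classic sinusoidal frequencies, with the extra 1000x factor that rescales flow-matching timesteps in [0, 1] into the usual ~[0, 1000] range before taking cos/sin. At `t = 0` the cosine half is all ones and the sine half all zeros, which is a quick way to sanity-check the channel layout.

```python
import math
import torch

def timestep_embedding(t, dim, max_period=10000):
    half = dim // 2
    # geometric frequency ladder from 1 down to ~1/max_period, scaled by 1000
    freqs = 1000 * torch.exp(-math.log(max_period) * torch.arange(half) / half)
    args = t[:, None].float() * freqs[None]
    emb = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)
    if dim % 2:  # odd dim: zero-pad the last channel
        emb = torch.cat([emb, torch.zeros_like(emb[:, :1])], dim=-1)
    return emb

emb = timestep_embedding(torch.tensor([0.0, 0.5, 1.0]), 256)
```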


class ReMMDiT(nn.Module):
    def __init__(
        self,
        in_channels=4,
        out_channels=4,
        patch_size=2,
        dim=3072,
        n_layers=36,
        n_double_layers=4,
        n_heads=12,
        global_conddim=3072,
        cond_seq_dim=2048,
        max_seq=32 * 32,
        device=None,
        dtype=None,
        operations=None,
    ):
        super().__init__()
        self.dtype = dtype

        self.t_embedder = TimestepEmbedder(global_conddim, dtype=dtype, device=device, operations=operations)

        self.cond_seq_linear = operations.Linear(
            cond_seq_dim, dim, bias=False, dtype=dtype, device=device
        )  # linear for something like text sequence.
        self.init_x_linear = operations.Linear(
            patch_size * patch_size * in_channels, dim, dtype=dtype, device=device
        )  # init linear for patchified image.

        self.positional_encoding = nn.Parameter(torch.empty(1, max_seq, dim, dtype=dtype, device=device))
        self.register_tokens = nn.Parameter(torch.empty(1, 8, dim, dtype=dtype, device=device))

        self.double_layers = nn.ModuleList([])
        self.single_layers = nn.ModuleList([])


        for idx in range(n_double_layers):
            self.double_layers.append(
                ReMMDiTBlock(dim, n_heads, global_conddim, is_last=(idx == n_layers - 1), dtype=dtype, device=device, operations=operations)
            )

        for idx in range(n_double_layers, n_layers):
            self.single_layers.append(
                ReDiTBlock(dim, n_heads, global_conddim, dtype=dtype, device=device, operations=operations)
            )


        self.final_linear = operations.Linear(
            dim, patch_size * patch_size * out_channels, bias=False, dtype=dtype, device=device
        )

        self.modF = nn.Sequential(
            nn.SiLU(),
            operations.Linear(global_conddim, 2 * dim, bias=False, dtype=dtype, device=device),
        )

        self.out_channels = out_channels
        self.patch_size = patch_size
        self.n_double_layers = n_double_layers
        self.n_layers = n_layers

        self.h_max = round(max_seq**0.5)
        self.w_max = round(max_seq**0.5)

    @torch.no_grad()
    def extend_pe(self, init_dim=(16, 16), target_dim=(64, 64)):
        # extend pe
        pe_data = self.positional_encoding.data.squeeze(0)[: init_dim[0] * init_dim[1]]

        pe_as_2d = pe_data.view(init_dim[0], init_dim[1], -1).permute(2, 0, 1)

        # extend to target_dim via bilinear interpolation (F.interpolate)
        pe_as_2d = F.interpolate(
            pe_as_2d.unsqueeze(0), size=target_dim, mode="bilinear"
        )
        pe_new = pe_as_2d.squeeze(0).permute(1, 2, 0).flatten(0, 1)
        self.positional_encoding.data = pe_new.unsqueeze(0).contiguous()
        self.h_max, self.w_max = target_dim

    def pe_selection_index_based_on_dim(self, h, w):
        h_p, w_p            = h // self.patch_size, w // self.patch_size
        original_pe_indexes = torch.arange(self.positional_encoding.shape[1])
        original_pe_indexes = original_pe_indexes.view(self.h_max, self.w_max)
        starth              =  self.h_max // 2 - h_p // 2
        endh                = starth + h_p
        startw              = self.w_max // 2 - w_p // 2
        endw                = startw + w_p
        original_pe_indexes = original_pe_indexes[
            starth:endh, startw:endw
        ]
        return original_pe_indexes.flatten()

    def unpatchify(self, x, h, w):
        c = self.out_channels
        p = self.patch_size

        x = x.reshape(shape=(x.shape[0], h, w, p, p, c))
        x = torch.einsum("nhwpqc->nchpwq", x)
        imgs = x.reshape(shape=(x.shape[0], c, h * p, w * p))
        return imgs

    def patchify(self, x):
        B, C, H, W = x.size()
        x = comfy.ldm.common_dit.pad_to_patch_size(x, (self.patch_size, self.patch_size))
        x = x.view(
            B,
            C,
            (H + 1) // self.patch_size,
            self.patch_size,
            (W + 1) // self.patch_size,
            self.patch_size,
        )
        x = x.permute(0, 2, 4, 1, 3, 5).flatten(-3).flatten(1, 2)
        return x

    def apply_pos_embeds(self, x, h, w):
        h = (h + 1) // self.patch_size
        w = (w + 1) // self.patch_size
        max_dim = max(h, w)

        cur_dim = self.h_max
        pos_encoding = comfy.ops.cast_to_input(self.positional_encoding.reshape(1, cur_dim, cur_dim, -1), x)

        if max_dim > cur_dim:
            pos_encoding = F.interpolate(pos_encoding.movedim(-1, 1), (max_dim, max_dim), mode="bilinear").movedim(1, -1)
            cur_dim = max_dim

        from_h = (cur_dim - h) // 2
        from_w = (cur_dim - w) // 2
        pos_encoding = pos_encoding[:,from_h:from_h+h,from_w:from_w+w]
        return x + pos_encoding.reshape(1, -1, self.positional_encoding.shape[-1])

    def forward(self, x, timestep, context, transformer_options={}, **kwargs):
        
        x_orig       = x.clone()
        context_orig = context.clone()
        
        SIGMA = timestep[0].unsqueeze(0) #/ 1000
        EO = transformer_options.get("ExtraOptions", ExtraOptions(""))
        if EO is not None:
            EO.mute = True
            
        y0_style_pos        = transformer_options.get("y0_style_pos")
        y0_style_neg        = transformer_options.get("y0_style_neg")

        y0_style_pos_weight    = transformer_options.get("y0_style_pos_weight", 0.0)
        y0_style_pos_synweight = transformer_options.get("y0_style_pos_synweight", 0.0)
        y0_style_pos_synweight *= y0_style_pos_weight

        y0_style_neg_weight    = transformer_options.get("y0_style_neg_weight", 0.0)
        y0_style_neg_synweight = transformer_options.get("y0_style_neg_synweight", 0.0)
        y0_style_neg_synweight *= y0_style_neg_weight
                
        
        out_list = []
        for i in range(len(transformer_options['cond_or_uncond'])):
            UNCOND = transformer_options['cond_or_uncond'][i] == 1

            x       = x_orig[i][None,...].clone()
            context = context_orig.clone()

            patches_replace = transformer_options.get("patches_replace", {})
            # patchify x, add PE
            b, c, h, w = x.shape
            
            h_len = ((h + (self.patch_size // 2)) // self.patch_size) # h_len 96
            w_len = ((w + (self.patch_size // 2)) // self.patch_size) # w_len 96


            x = self.init_x_linear(self.patchify(x))  # B, T_x, D
            x = self.apply_pos_embeds(x, h, w)

            if UNCOND:

                transformer_options['reg_cond_weight'] = transformer_options.get("regional_conditioning_weight", 0.0) 
                transformer_options['reg_cond_floor']  = transformer_options.get("regional_conditioning_floor",  0.0) 
                transformer_options['reg_cond_mask_orig'] = transformer_options.get('regional_conditioning_mask_orig')
                
                AttnMask   = transformer_options.get('AttnMask',   None)                    
                RegContext = transformer_options.get('RegContext', None)
                
                if AttnMask is not None and transformer_options['reg_cond_weight'] > 0.0:
                    AttnMask.attn_mask_recast(x.dtype)
                    context_tmp = RegContext.get().to(context.dtype)
                    #context_tmp = 0 * context_tmp.clone()
                    
                    # If it's not a perfect factor, repeat and slice:
                    A = context[i][None,...].clone()
                    B = context_tmp
                    context_tmp = A.repeat(1, (B.shape[1] // A.shape[1]) + 1, 1)[:, :B.shape[1], :]

                else:
                    context_tmp = context[i][None,...].clone()
                    
            else:
                transformer_options['reg_cond_weight'] = transformer_options.get("regional_conditioning_weight", 0.0) 
                transformer_options['reg_cond_floor']  = transformer_options.get("regional_conditioning_floor", 0.0) 
                transformer_options['reg_cond_mask_orig'] = transformer_options.get('regional_conditioning_mask_orig')
                
                AttnMask   = transformer_options.get('AttnMask',   None)                    
                RegContext = transformer_options.get('RegContext', None)
                
                if AttnMask is not None and transformer_options['reg_cond_weight'] > 0.0:
                    AttnMask.attn_mask_recast(x.dtype)
                    context_tmp = RegContext.get().to(context.dtype)
                else:
                    context_tmp = context[i][None,...].clone()
            
            if context_tmp is None:
                context_tmp = context[i][None,...].clone()
                



            # process conditions for MMDiT Blocks
            #c_seq = context  # B, T_c, D_c
            c_seq = context_tmp  # B, T_c, D_c

            t = timestep

            c = self.cond_seq_linear(c_seq)  # B, T_c, D         # 1,256,2048 -> 
            c = torch.cat([comfy.ops.cast_to_input(self.register_tokens, c).repeat(c.size(0), 1, 1), c], dim=1)   #1,256,3072 -> 1,264,3072

            global_cond = self.t_embedder(t, x.dtype)  # B, D

            global_cond = global_cond[i][None]

            

            weight    = transformer_options.get('reg_cond_weight', 0.0)
            floor     = transformer_options.get('reg_cond_floor',  0.0)
            
            floor     = min(floor, weight)
            
            reg_cond_mask_expanded = transformer_options.get('reg_cond_mask_expanded')
            reg_cond_mask_expanded = reg_cond_mask_expanded.to(x.dtype).to(x.device) if reg_cond_mask_expanded is not None else None
            reg_cond_mask = None



            AttnMask = transformer_options.get('AttnMask')
            mask     = None
            if AttnMask is not None and weight > 0:
                mask                      = AttnMask.get(weight=weight) #mask_obj[0](transformer_options, weight.item())
                
                mask_type_bool = mask.dtype == torch.bool
                if mask_type_bool:
                    mask = F.pad(mask, (8, 0, 8, 0), value=True)   # pad for the 8 register tokens
                    #mask = F.pad(mask, (0, 8, 0, 8), value=True)
                else:
                    mask = mask.to(x.dtype)
                    mask = F.pad(mask, (8, 0, 8, 0), value=1.0)    # pad for the 8 register tokens
                
                text_len                  = context.shape[1] # mask_obj[0].text_len
                
                mask[text_len:,text_len:] = torch.clamp(mask[text_len:,text_len:], min=floor.to(mask.device))   #ORIGINAL SELF-ATTN REGION BLEED
                reg_cond_mask = reg_cond_mask_expanded.unsqueeze(0).clone() if reg_cond_mask_expanded is not None else None

            mask_type_bool = mask is not None and mask.dtype == torch.bool

            total_layers = len(self.double_layers) + len(self.single_layers)

            blocks_replace = patches_replace.get("dit", {})       # context 1,259,2048      x 1,4032,3072
            if len(self.double_layers) > 0:
                for i, layer in enumerate(self.double_layers):
                    if mask_type_bool and weight < (i / (total_layers-1)) and mask is not None:
                        mask = mask.to(x.dtype)
                        
                    if ("double_block", i) in blocks_replace:
                        def block_wrap(args):
                            out = {}
                            out["txt"], out["img"] = layer( args["txt"],
                                                            args["img"],
                                                            args["vec"])
                            return out
                        out = blocks_replace[("double_block", i)]({"img": x, "txt": c, "vec": global_cond}, {"original_block": block_wrap})
                        c = out["txt"]
                        x = out["img"]
                    else:
                        c, x = layer(c, x, global_cond, mask=mask, **kwargs)

            if len(self.single_layers) > 0:
                c_len = c.size(1)
                cx = torch.cat([c, x], dim=1)
                for i, layer in enumerate(self.single_layers):
                    if mask_type_bool and weight < ((len(self.double_layers) + i) / (total_layers-1)) and mask is not None:
                        mask = mask.to(x.dtype)
                    
                    if ("single_block", i) in blocks_replace:
                        def block_wrap(args):
                            out = {}
                            out["img"] = layer(args["img"], args["vec"])
                            return out

                        out = blocks_replace[("single_block", i)]({"img": cx, "vec": global_cond}, {"original_block": block_wrap})
                        cx = out["img"]
                    else:
                        cx = layer(cx, global_cond, mask=mask, **kwargs)

                x = cx[:, c_len:]

            fshift, fscale = self.modF(global_cond).chunk(2, dim=1)

            x = modulate(x, fshift, fscale)
            x = self.final_linear(x)
            x = self.unpatchify(x, (h + 1) // self.patch_size, (w + 1) // self.patch_size)[:,:,:h,:w]
            
            out_list.append(x)
            
        eps = torch.stack(out_list, dim=0).squeeze(dim=1)
        
        
        
        
        
        
        freqsep_lowpass_method = transformer_options.get("freqsep_lowpass_method")
        freqsep_sigma          = transformer_options.get("freqsep_sigma")
        freqsep_kernel_size    = transformer_options.get("freqsep_kernel_size")
        freqsep_inner_kernel_size    = transformer_options.get("freqsep_inner_kernel_size")
        freqsep_stride    = transformer_options.get("freqsep_stride")
        
        freqsep_lowpass_weight = transformer_options.get("freqsep_lowpass_weight")
        freqsep_highpass_weight= transformer_options.get("freqsep_highpass_weight")
        freqsep_mask           = transformer_options.get("freqsep_mask")
        
        dtype = eps.dtype if getattr(self, "style_dtype", None) is None else self.style_dtype
        
        if y0_style_pos is not None:
            y0_style_pos_weight    = transformer_options.get("y0_style_pos_weight")
            y0_style_pos_synweight = transformer_options.get("y0_style_pos_synweight")
            y0_style_pos_synweight *= y0_style_pos_weight
            y0_style_pos_mask = transformer_options.get("y0_style_pos_mask")
            y0_style_pos_mask_edge = transformer_options.get("y0_style_pos_mask_edge")

            y0_style_pos = y0_style_pos.to(dtype)
            x   = x_orig.clone().to(dtype)
            #x   = x.to(dtype)
            eps = eps.to(dtype)
            eps_orig = eps.clone()
            
            sigma = SIGMA #t_orig[0].to(torch.float32) / 1000
            denoised = x - sigma * eps
            
            denoised_embed = self.Retrojector.embed(denoised)
            y0_adain_embed = self.Retrojector.embed(y0_style_pos)
            
            if transformer_options['y0_style_method'] == "scattersort":
                tile_h, tile_w = transformer_options.get('y0_style_tile_height'), transformer_options.get('y0_style_tile_width')
                pad = transformer_options.get('y0_style_tile_padding')
                if pad is not None and tile_h is not None and tile_w is not None:
                    
                    denoised_spatial = rearrange(denoised_embed, "b (h w) c -> b c h w", h=h_len, w=w_len)
                    y0_adain_spatial = rearrange(y0_adain_embed, "b (h w) c -> b c h w", h=h_len, w=w_len)
                    
                    if EO("scattersort_median_LP"):
                        denoised_spatial_LP = median_blur_2d(denoised_spatial, kernel_size=EO("scattersort_median_LP",7))
                        y0_adain_spatial_LP = median_blur_2d(y0_adain_spatial, kernel_size=EO("scattersort_median_LP",7))
                        
                        denoised_spatial_HP = denoised_spatial - denoised_spatial_LP
                        y0_adain_spatial_HP = y0_adain_spatial - y0_adain_spatial_LP
                        
                        denoised_spatial_LP = apply_scattersort_tiled(denoised_spatial_LP, y0_adain_spatial_LP, tile_h, tile_w, pad)
                        
                        denoised_spatial = denoised_spatial_LP + denoised_spatial_HP
                    else:
                        denoised_spatial = apply_scattersort_tiled(denoised_spatial, y0_adain_spatial, tile_h, tile_w, pad)
                    
                    denoised_embed = rearrange(denoised_spatial, "b c h w -> b (h w) c")
                    
                else:
                    denoised_embed = apply_scattersort_masked(denoised_embed, y0_adain_embed, y0_style_pos_mask, y0_style_pos_mask_edge, h_len, w_len)



            elif transformer_options['y0_style_method'] == "AdaIN":
                if freqsep_mask is not None:
                    freqsep_mask = freqsep_mask.view(1, 1, *freqsep_mask.shape[-2:]).float()
                    freqsep_mask = F.interpolate(freqsep_mask.float(), size=(h_len, w_len), mode='nearest-exact')
                
                if hasattr(self, "adain_tile"):
                    tile_h, tile_w = self.adain_tile
                    
                    denoised_pretile = rearrange(denoised_embed, "b (h w) c -> b c h w", h=h_len, w=w_len)
                    y0_adain_pretile = rearrange(y0_adain_embed, "b (h w) c -> b c h w", h=h_len, w=w_len)
                    
                    if self.adain_flag:
                        h_off = tile_h // 2
                        w_off = tile_w // 2
                        denoised_pretile = denoised_pretile[:,:,h_off:-h_off, w_off:-w_off]
                        self.adain_flag = False
                    else:
                        h_off = 0
                        w_off = 0
                        self.adain_flag = True
                    
                    tiles,    orig_shape, grid, strides = tile_latent(denoised_pretile, tile_size=(tile_h,tile_w))
                    y0_tiles, orig_shape, grid, strides = tile_latent(y0_adain_pretile, tile_size=(tile_h,tile_w))
                    
                    tiles_out = []
                    for i in range(tiles.shape[0]):
                        tile = tiles[i].unsqueeze(0)
                        y0_tile = y0_tiles[i].unsqueeze(0)
                        
                        tile    = rearrange(tile,    "b c h w -> b (h w) c", h=tile_h, w=tile_w)
                        y0_tile = rearrange(y0_tile, "b c h w -> b (h w) c", h=tile_h, w=tile_w)
                        
                        tile = adain_seq_inplace(tile, y0_tile)
                        tiles_out.append(rearrange(tile, "b (h w) c -> b c h w", h=tile_h, w=tile_w))
                    
                    tiles_out_tensor = torch.cat(tiles_out, dim=0)
                    tiles_out_tensor = untile_latent(tiles_out_tensor, orig_shape, grid, strides)

                    if h_off == 0:
                        denoised_pretile = tiles_out_tensor
                    else:
                        denoised_pretile[:,:,h_off:-h_off, w_off:-w_off] = tiles_out_tensor
                    denoised_embed = rearrange(denoised_pretile, "b c h w -> b (h w) c", h=h_len, w=w_len)

                elif freqsep_lowpass_method is not None and freqsep_lowpass_method.endswith("pw"): #EO("adain_pw"):

                    denoised_spatial = rearrange(denoised_embed, "b (h w) c -> b c h w", h=h_len, w=w_len)
                    y0_adain_spatial = rearrange(y0_adain_embed, "b (h w) c -> b c h w", h=h_len, w=w_len)

                    if   freqsep_lowpass_method == "median_pw":
                        denoised_spatial_new = adain_patchwise_row_batch_med(denoised_spatial.clone(), y0_adain_spatial.clone().repeat(denoised_spatial.shape[0],1,1,1), sigma=freqsep_sigma, kernel_size=freqsep_kernel_size, use_median_blur=True, lowpass_weight=freqsep_lowpass_weight, highpass_weight=freqsep_highpass_weight)
                    elif freqsep_lowpass_method == "gaussian_pw": 
                        denoised_spatial_new = adain_patchwise_row_batch(denoised_spatial.clone(), y0_adain_spatial.clone().repeat(denoised_spatial.shape[0],1,1,1), sigma=freqsep_sigma, kernel_size=freqsep_kernel_size)
                    
                    denoised_embed = rearrange(denoised_spatial_new, "b c h w -> b (h w) c", h=h_len, w=w_len)

                elif freqsep_lowpass_method is not None: 
                    denoised_spatial = rearrange(denoised_embed, "b (h w) c -> b c h w", h=h_len, w=w_len)
                    y0_adain_spatial = rearrange(y0_adain_embed, "b (h w) c -> b c h w", h=h_len, w=w_len)
                    
                    if   freqsep_lowpass_method == "median":
                        denoised_spatial_LP = median_blur_2d(denoised_spatial, kernel_size=freqsep_kernel_size)
                        y0_adain_spatial_LP = median_blur_2d(y0_adain_spatial, kernel_size=freqsep_kernel_size)
                    elif freqsep_lowpass_method == "gaussian":
                        denoised_spatial_LP = gaussian_blur_2d(denoised_spatial, sigma=freqsep_sigma, kernel_size=freqsep_kernel_size)
                        y0_adain_spatial_LP = gaussian_blur_2d(y0_adain_spatial, sigma=freqsep_sigma, kernel_size=freqsep_kernel_size)
                    
                    denoised_spatial_HP = denoised_spatial - denoised_spatial_LP
                    
                    if EO("adain_fs_uhp"):
                        y0_adain_spatial_HP = y0_adain_spatial - y0_adain_spatial_LP
                        
                        denoised_spatial_ULP = gaussian_blur_2d(denoised_spatial, sigma=EO("adain_fs_uhp_sigma", 1.0), kernel_size=EO("adain_fs_uhp_kernel_size", 3))
                        y0_adain_spatial_ULP = gaussian_blur_2d(y0_adain_spatial, sigma=EO("adain_fs_uhp_sigma", 1.0), kernel_size=EO("adain_fs_uhp_kernel_size", 3))
                        
                        denoised_spatial_UHP = denoised_spatial_HP  - denoised_spatial_ULP
                        y0_adain_spatial_UHP = y0_adain_spatial_HP  - y0_adain_spatial_ULP
                        
                        #denoised_spatial_HP  = y0_adain_spatial_ULP + denoised_spatial_UHP
                        denoised_spatial_HP  = denoised_spatial_ULP + y0_adain_spatial_UHP
                    
                    denoised_spatial_new = freqsep_lowpass_weight * y0_adain_spatial_LP + freqsep_highpass_weight * denoised_spatial_HP
                    denoised_embed = rearrange(denoised_spatial_new, "b c h w -> b (h w) c", h=h_len, w=w_len)

                else:
                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)
                    
                for adain_iter in range(EO("style_iter", 0)):
                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)
                    denoised_embed = self.Retrojector.embed(self.Retrojector.unembed(denoised_embed))
                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)
            
            elif transformer_options['y0_style_method'] == "WCT":
                self.StyleWCT.set(y0_adain_embed)
                denoised_embed = self.StyleWCT.get(denoised_embed)
                
                if transformer_options.get('y0_standard_guide') is not None:
                    y0_standard_guide = transformer_options.get('y0_standard_guide')
                    
                    y0_standard_guide_embed = self.Retrojector.embed(y0_standard_guide)
                    f_cs = self.StyleWCT.get(y0_standard_guide_embed)
                    self.y0_standard_guide = self.Retrojector.unembed(f_cs)

                if transformer_options.get('y0_inv_standard_guide') is not None:
                    y0_inv_standard_guide = transformer_options.get('y0_inv_standard_guide')

                    y0_inv_standard_guide_embed = self.Retrojector.embed(y0_inv_standard_guide)
                    f_cs = self.StyleWCT.get(y0_inv_standard_guide_embed)
                    self.y0_inv_standard_guide = self.Retrojector.unembed(f_cs)

            denoised_approx = self.Retrojector.unembed(denoised_embed)
            
            eps = (x - denoised_approx) / sigma

            if not UNCOND:
                if eps.shape[0] == 2:
                    eps[1] = eps_orig[1] + y0_style_pos_weight * (eps[1] - eps_orig[1])
                    eps[0] = eps_orig[0] + y0_style_pos_synweight * (eps[0] - eps_orig[0])
                else:
                    eps[0] = eps_orig[0] + y0_style_pos_weight * (eps[0] - eps_orig[0])
            elif eps.shape[0] == 1 and UNCOND:
                eps[0] = eps_orig[0] + y0_style_pos_synweight * (eps[0] - eps_orig[0])
            
            eps = eps.float()
        
        if y0_style_neg is not None:
            y0_style_neg_weight    = transformer_options.get("y0_style_neg_weight")
            y0_style_neg_synweight = transformer_options.get("y0_style_neg_synweight")
            y0_style_neg_synweight *= y0_style_neg_weight
            y0_style_neg_mask = transformer_options.get("y0_style_neg_mask")
            y0_style_neg_mask_edge = transformer_options.get("y0_style_neg_mask_edge")
            
            y0_style_neg = y0_style_neg.to(dtype)
            x   = x.to(dtype)
            eps = eps.to(dtype)
            eps_orig = eps.clone()
            
            sigma = SIGMA #t_orig[0].to(torch.float32) / 1000
            denoised = x - sigma * eps

            denoised_embed = self.Retrojector.embed(denoised)
            y0_adain_embed = self.Retrojector.embed(y0_style_neg)
            
            if transformer_options['y0_style_method'] == "scattersort":
                tile_h, tile_w = transformer_options.get('y0_style_tile_height'), transformer_options.get('y0_style_tile_width')
                pad = transformer_options.get('y0_style_tile_padding')
                if pad is not None and tile_h is not None and tile_w is not None:
                    
                    denoised_spatial = rearrange(denoised_embed, "b (h w) c -> b c h w", h=h_len, w=w_len)
                    y0_adain_spatial = rearrange(y0_adain_embed, "b (h w) c -> b c h w", h=h_len, w=w_len)
                    
                    denoised_spatial = apply_scattersort_tiled(denoised_spatial, y0_adain_spatial, tile_h, tile_w, pad)
                    
                    denoised_embed = rearrange(denoised_spatial, "b c h w -> b (h w) c")

                else:
                    denoised_embed = apply_scattersort_masked(denoised_embed, y0_adain_embed, y0_style_neg_mask, y0_style_neg_mask_edge, h_len, w_len)
            
            
            elif transformer_options['y0_style_method'] == "AdaIN":
                denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)
                for adain_iter in range(EO("style_iter", 0)):
                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)
                    denoised_embed = self.Retrojector.embed(self.Retrojector.unembed(denoised_embed))
                    denoised_embed = adain_seq_inplace(denoised_embed, y0_adain_embed)
                    
            elif transformer_options['y0_style_method'] == "WCT":
                self.StyleWCT.set(y0_adain_embed)
                denoised_embed = self.StyleWCT.get(denoised_embed)

            denoised_approx = self.Retrojector.unembed(denoised_embed)

            if UNCOND:
                eps = (x - denoised_approx) / sigma
                eps[0] = eps_orig[0] + y0_style_neg_weight * (eps[0] - eps_orig[0])
                if eps.shape[0] == 2:
                    eps[1] = eps_orig[1] + y0_style_neg_synweight * (eps[1] - eps_orig[1])
            elif eps.shape[0] == 1 and not UNCOND:
                eps[0] = eps_orig[0] + y0_style_neg_synweight * (eps[0] - eps_orig[0])
            
            eps = eps.float()
            
        return eps




def unpatchify2(x: torch.Tensor, H: int, W: int, patch_size: int) -> torch.Tensor:
    """
    Invert patchify:
      x: (B, N, C*p*p)
      returns: (B, C, H, W), slicing off any padding
    """
    B, N, CPP = x.shape
    p = patch_size
    Hp = math.ceil(H / p)
    Wp = math.ceil(W / p)
    C = CPP // (p * p)
    assert N == Hp * Wp, f"Expected N={Hp*Wp} patches, got {N}"

    x = x.view(B, Hp, Wp, C, p, p)           # (B, Hp, Wp, C, p, p)
    x = x.permute(0, 3, 1, 4, 2, 5)          # (B, C, Hp, p, Wp, p)
    imgs = x.reshape(B, C, Hp * p, Wp * p)   # stitch patches back together
    return imgs[:, :, :H, :W]
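The view/permute/reshape chain above can be sanity-checked with a small single-image sketch (NumPy here for brevity; `patchify_np` and `unpatchify_np` are illustrative helpers, not part of this repo, and the batch dimension is dropped):

```python
import numpy as np

def patchify_np(img, p):
    # (C, H, W) -> (N, C*p*p); H and W assumed divisible by p
    C, H, W = img.shape
    x = img.reshape(C, H // p, p, W // p, p)
    x = x.transpose(1, 3, 0, 2, 4)              # (Hp, Wp, C, p, p)
    return x.reshape(-1, C * p * p)

def unpatchify_np(x, H, W, p):
    # mirrors unpatchify2 without the batch dimension
    N, CPP = x.shape
    Hp, Wp = H // p, W // p
    C = CPP // (p * p)
    x = x.reshape(Hp, Wp, C, p, p)
    x = x.transpose(2, 0, 3, 1, 4)              # (C, Hp, p, Wp, p)
    return x.reshape(C, Hp * p, Wp * p)

img = np.arange(2 * 4 * 6, dtype=np.float32).reshape(2, 4, 6)
restored = unpatchify_np(patchify_np(img, 2), 4, 6, 2)
assert np.array_equal(img, restored)  # patchify -> unpatchify round-trips
```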



================================================
FILE: beta/__init__.py
================================================

from . import rk_sampler_beta
from . import samplers
from . import samplers_extensions


def add_beta(NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS, extra_samplers):
    
    NODE_CLASS_MAPPINGS.update({
        #"SharkSampler"                    : samplers.SharkSampler,
        #"SharkSamplerAdvanced_Beta"       : samplers.SharkSampler, #SharkSamplerAdvanced_Beta,
        "SharkOptions_Beta"               : samplers_extensions.SharkOptions_Beta,
        "ClownOptions_SDE_Beta"           : samplers_extensions.ClownOptions_SDE_Beta,
        "ClownOptions_DetailBoost_Beta"   : samplers_extensions.ClownOptions_DetailBoost_Beta,
        "ClownGuide_Style_Beta"           : samplers_extensions.ClownGuide_Style_Beta,
        "ClownGuide_Style_EdgeWidth"      : samplers_extensions.ClownGuide_Style_EdgeWidth,
        "ClownGuide_Style_TileSize"       : samplers_extensions.ClownGuide_Style_TileSize,

        "ClownGuide_Beta"                 : samplers_extensions.ClownGuide_Beta,
        "ClownGuides_Beta"                : samplers_extensions.ClownGuides_Beta,
        "ClownGuidesAB_Beta"              : samplers_extensions.ClownGuidesAB_Beta,
        
        "ClownGuides_Sync"                : samplers_extensions.ClownGuides_Sync,
        "ClownGuides_Sync_Advanced"       : samplers_extensions.ClownGuides_Sync_Advanced,
        "ClownGuide_FrequencySeparation"  : samplers_extensions.ClownGuide_FrequencySeparation,

        
        "SharkOptions_GuiderInput"        : samplers_extensions.SharkOptions_GuiderInput,
        "ClownOptions_ImplicitSteps_Beta" : samplers_extensions.ClownOptions_ImplicitSteps_Beta,
        "ClownOptions_Cycles_Beta"        : samplers_extensions.ClownOptions_Cycles_Beta,

        "SharkOptions_GuideCond_Beta"     : samplers_extensions.SharkOptions_GuideCond_Beta,
        "SharkOptions_GuideConds_Beta"    : samplers_extensions.SharkOptions_GuideConds_Beta,
        
        "ClownOptions_Tile_Beta"          : samplers_extensions.ClownOptions_Tile_Beta,
        "ClownOptions_Tile_Advanced_Beta" : samplers_extensions.ClownOptions_Tile_Advanced_Beta,


        "ClownGuide_Mean_Beta"            : samplers_extensions.ClownGuide_Mean_Beta,
        "ClownGuide_AdaIN_MMDiT_Beta"     : samplers_extensions.ClownGuide_AdaIN_MMDiT_Beta,
        "ClownGuide_AttnInj_MMDiT_Beta"   : samplers_extensions.ClownGuide_AttnInj_MMDiT_Beta,
        "ClownGuide_StyleNorm_Advanced_HiDream" : samplers_extensions.ClownGuide_StyleNorm_Advanced_HiDream,

        "ClownOptions_SDE_Mask_Beta"      : samplers_extensions.ClownOptions_SDE_Mask_Beta,
        
        "ClownOptions_StepSize_Beta"      : samplers_extensions.ClownOptions_StepSize_Beta,
        "ClownOptions_SigmaScaling_Beta"  : samplers_extensions.ClownOptions_SigmaScaling_Beta,

        "ClownOptions_Momentum_Beta"      : samplers_extensions.ClownOptions_Momentum_Beta,
        "ClownOptions_SwapSampler_Beta"   : samplers_extensions.ClownOptions_SwapSampler_Beta,
        "ClownOptions_ExtraOptions_Beta"  : samplers_extensions.ClownOptions_ExtraOptions_Beta,
        "ClownOptions_Automation_Beta"    : samplers_extensions.ClownOptions_Automation_Beta,

        "SharkOptions_UltraCascade_Latent_Beta"  : samplers_extensions.SharkOptions_UltraCascade_Latent_Beta,
        "SharkOptions_StartStep_Beta"     : samplers_extensions.SharkOptions_StartStep_Beta,
        
        "ClownOptions_Combine"            : samplers_extensions.ClownOptions_Combine,
        "ClownOptions_Frameweights"       : samplers_extensions.ClownOptions_Frameweights,
        "ClownOptions_FlowGuide"          : samplers_extensions.ClownOptions_FlowGuide,
        
        "ClownStyle_Block_MMDiT"          : samplers_extensions.ClownStyle_Block_MMDiT,
        "ClownStyle_MMDiT"                : samplers_extensions.ClownStyle_MMDiT,
        "ClownStyle_Attn_MMDiT"           : samplers_extensions.ClownStyle_Attn_MMDiT,
        "ClownStyle_Boost"                : samplers_extensions.ClownStyle_Boost,

        "ClownStyle_UNet"                 : samplers_extensions.ClownStyle_UNet,
        "ClownStyle_Block_UNet"           : samplers_extensions.ClownStyle_Block_UNet,
        "ClownStyle_Attn_UNet"            : samplers_extensions.ClownStyle_Attn_UNet,
        "ClownStyle_ResBlock_UNet"        : samplers_extensions.ClownStyle_ResBlock_UNet,
        "ClownStyle_SpatialBlock_UNet"    : samplers_extensions.ClownStyle_SpatialBlock_UNet,
        "ClownStyle_TransformerBlock_UNet": samplers_extensions.ClownStyle_TransformerBlock_UNet,


        "ClownSamplerSelector_Beta"       : samplers_extensions.ClownSamplerSelector_Beta,

        "SharkSampler_Beta"               : samplers.SharkSampler_Beta,
        
        "SharkChainsampler_Beta"          : samplers.SharkChainsampler_Beta,

        "ClownsharKSampler_Beta"          : samplers.ClownsharKSampler_Beta,
        "ClownsharkChainsampler_Beta"     : samplers.ClownsharkChainsampler_Beta,
        
        "ClownSampler_Beta"               : samplers.ClownSampler_Beta,
        "ClownSamplerAdvanced_Beta"       : samplers.ClownSamplerAdvanced_Beta,
        
        "BongSampler"                     : samplers.BongSampler,

    })

    extra_samplers.update({
        "res_2m"     : sample_res_2m,
        "res_3m"     : sample_res_3m,
        "res_2s"     : sample_res_2s,
        "res_3s"     : sample_res_3s,
        "res_5s"     : sample_res_5s,
        "res_6s"     : sample_res_6s,
        "res_2m_ode" : sample_res_2m_ode,
        "res_3m_ode" : sample_res_3m_ode,
        "res_2s_ode" : sample_res_2s_ode,
        "res_3s_ode" : sample_res_3s_ode,
        "res_5s_ode" : sample_res_5s_ode,
        "res_6s_ode" : sample_res_6s_ode,

        "deis_2m"    : sample_deis_2m,
        "deis_3m"    : sample_deis_3m,
        "deis_2m_ode": sample_deis_2m_ode,
        "deis_3m_ode": sample_deis_3m_ode,
        "rk_beta": rk_sampler_beta.sample_rk_beta,
    })
    
    NODE_DISPLAY_NAME_MAPPINGS.update({
            #"SharkSampler"                          : "SharkSampler",
            #"SharkSamplerAdvanced_Beta"             : "SharkSamplerAdvanced",
            "SharkSampler_Beta"                     : "SharkSampler",
            "SharkChainsampler_Beta"                : "SharkChainsampler",
            "BongSampler"                           : "BongSampler",
            "ClownsharKSampler_Beta"                : "ClownsharKSampler",
            "ClownsharkChainsampler_Beta"           : "ClownsharkChainsampler",
            "ClownSampler_Beta"                     : "ClownSampler",
            "ClownSamplerAdvanced_Beta"             : "ClownSamplerAdvanced",
            "ClownGuide_Mean_Beta"                  : "ClownGuide Mean",
            "ClownGuide_AdaIN_MMDiT_Beta"           : "ClownGuide AdaIN (HiDream)",
            "ClownGuide_AttnInj_MMDiT_Beta"         : "ClownGuide AttnInj (HiDream)",
            "ClownGuide_StyleNorm_Advanced_HiDream" : "ClownGuide StyleNorm Advanced (HiDream)",
            "ClownGuide_Style_Beta"                 : "ClownGuide Style",
            "ClownGuide_Beta"                       : "ClownGuide",
            "ClownGuides_Beta"                      : "ClownGuides",
            "ClownGuides_Sync"                      : "ClownGuides Sync",
            "ClownGuides_Sync_Advanced"             : "ClownGuides Sync Advanced",


            "ClownGuidesAB_Beta"                    : "ClownGuidesAB",
            "ClownSamplerSelector_Beta"             : "ClownSamplerSelector",
            "ClownOptions_SDE_Mask_Beta"            : "ClownOptions SDE Mask",
            "ClownOptions_SDE_Beta"                 : "ClownOptions SDE",
            "ClownOptions_StepSize_Beta"            : "ClownOptions Step Size",
            "ClownOptions_DetailBoost_Beta"         : "ClownOptions Detail Boost",
            "ClownOptions_SigmaScaling_Beta"        : "ClownOptions Sigma Scaling",
            "ClownOptions_Momentum_Beta"            : "ClownOptions Momentum",
            "ClownOptions_ImplicitSteps_Beta"       : "ClownOptions Implicit Steps",
            "ClownOptions_Cycles_Beta"              : "ClownOptions Cycles",
            "ClownOptions_SwapSampler_Beta"         : "ClownOptions Swap Sampler",
            "ClownOptions_ExtraOptions_Beta"        : "ClownOptions Extra Options",
            "ClownOptions_Automation_Beta"          : "ClownOptions Automation",
            "SharkOptions_GuideCond_Beta"           : "SharkOptions Guide Cond",
            "SharkOptions_GuideConds_Beta"          : "SharkOptions Guide Conds",
            "SharkOptions_Beta"                     : "SharkOptions",
            "SharkOptions_StartStep_Beta"           : "SharkOptions Start Step",
            "SharkOptions_UltraCascade_Latent_Beta" : "SharkOptions UltraCascade Latent",
            "ClownOptions_Combine"                  : "ClownOptions Combine",
            "ClownOptions_Frameweights"             : "ClownOptions Frameweights",
            "SharkOptions_GuiderInput"              : "SharkOptions Guider Input",
            "ClownOptions_Tile_Beta"                : "ClownOptions Tile",
            "ClownOptions_Tile_Advanced_Beta"       : "ClownOptions Tile Advanced",

    })
    
    return NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS, extra_samplers



def sample_res_2m(model, x, sigmas, extra_args=None, callback=None, disable=None):
    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type="res_2m",)
def sample_res_3m(model, x, sigmas, extra_args=None, callback=None, disable=None):
    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type="res_3m",)
def sample_res_2s(model, x, sigmas, extra_args=None, callback=None, disable=None):
    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type="res_2s",)
def sample_res_3s(model, x, sigmas, extra_args=None, callback=None, disable=None):
    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type="res_3s",)
def sample_res_5s(model, x, sigmas, extra_args=None, callback=None, disable=None):
    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type="res_5s",)
def sample_res_6s(model, x, sigmas, extra_args=None, callback=None, disable=None):
    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type="res_6s",)

def sample_res_2m_ode(model, x, sigmas, extra_args=None, callback=None, disable=None):
    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type="res_2m", eta=0.0, eta_substep=0.0, )
def sample_res_3m_ode(model, x, sigmas, extra_args=None, callback=None, disable=None):
    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type="res_3m", eta=0.0, eta_substep=0.0, )
def sample_res_2s_ode(model, x, sigmas, extra_args=None, callback=None, disable=None):
    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type="res_2s", eta=0.0, eta_substep=0.0, )
def sample_res_3s_ode(model, x, sigmas, extra_args=None, callback=None, disable=None):
    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type="res_3s", eta=0.0, eta_substep=0.0, )
def sample_res_5s_ode(model, x, sigmas, extra_args=None, callback=None, disable=None):
    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type="res_5s", eta=0.0, eta_substep=0.0, )
def sample_res_6s_ode(model, x, sigmas, extra_args=None, callback=None, disable=None):
    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type="res_6s", eta=0.0, eta_substep=0.0, )

def sample_deis_2m(model, x, sigmas, extra_args=None, callback=None, disable=None):
    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type="deis_2m",)
def sample_deis_3m(model, x, sigmas, extra_args=None, callback=None, disable=None):
    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type="deis_3m",)

def sample_deis_2m_ode(model, x, sigmas, extra_args=None, callback=None, disable=None):
    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type="deis_2m", eta=0.0, eta_substep=0.0, )
def sample_deis_3m_ode(model, x, sigmas, extra_args=None, callback=None, disable=None):
    return rk_sampler_beta.sample_rk_beta(model, x, sigmas, None, extra_args, callback, disable, rk_type="deis_3m", eta=0.0, eta_substep=0.0, )



================================================
FILE: beta/constants.py
================================================
MAX_STEPS = 10000


IMPLICIT_TYPE_NAMES = [
    "rebound",
    "retro-eta",
    "bongmath",
    "predictor-corrector",
]




GUIDE_MODE_NAMES_BETA_SIMPLE = [
    "flow",
    "sync",
    "lure",
    "data",
    "epsilon",
    "inversion",
    "pseudoimplicit",
    "fully_pseudoimplicit",
    "none",
]

FRAME_WEIGHTS_CONFIG_NAMES = [
    "frame_weights",
    "frame_weights_inv",
    "frame_targets"
]

FRAME_WEIGHTS_DYNAMICS_NAMES = [
    "constant",
    "linear",
    "ease_out",
    "ease_in",
    "middle",
    "trough",
]


FRAME_WEIGHTS_SCHEDULE_NAMES = [
    "moderate_early",
    "moderate_late",
    "fast_early",
    "fast_late",
    "slow_early",
    "slow_late",
]



GUIDE_MODE_NAMES_PSEUDOIMPLICIT = [
    "pseudoimplicit",
    "pseudoimplicit_cw",
    "pseudoimplicit_projection",
    "pseudoimplicit_projection_cw",
    "fully_pseudoimplicit",
    "fully_pseudoimplicit_projection",
    "fully_pseudoimplicit_cw", 
    "fully_pseudoimplicit_projection_cw"
]


================================================
FILE: beta/deis_coefficients.py
================================================
# Adapted from: https://github.com/zju-pi/diff-sampler/blob/main/gits-main/solver_utils.py
# Fixed the "rhoab" coefficient calculations, which suffered from an off-by-one error, and made other minor corrections.

import torch
import numpy as np

# A pytorch reimplementation of DEIS (https://github.com/qsh-zh/deis).
#############################
### Utils for DEIS solver ###
#############################
#----------------------------------------------------------------------------
# Transfer from the input time (sigma) used in EDM to that (t) used in DEIS.

def edm2t(edm_steps, epsilon_s=1e-3, sigma_min=0.002, sigma_max=80):
    vp_sigma = lambda beta_d, beta_min: lambda t: (np.e ** (0.5 * beta_d * (t ** 2) + beta_min * t) - 1) ** 0.5
    vp_sigma_inv = lambda beta_d, beta_min: lambda sigma: ((beta_min ** 2 + 2 * beta_d * (sigma ** 2 + 1).log()).sqrt() - beta_min) / beta_d
    vp_beta_d = 2 * (np.log(torch.tensor(sigma_min).cpu() ** 2 + 1) / epsilon_s - np.log(torch.tensor(sigma_max).cpu() ** 2 + 1)) / (epsilon_s - 1)
    vp_beta_min = np.log(torch.tensor(sigma_max).cpu() ** 2 + 1) - 0.5 * vp_beta_d
    t_steps = vp_sigma_inv(vp_beta_d.clone().detach().cpu(), vp_beta_min.clone().detach().cpu())(edm_steps.clone().detach().cpu())
    return t_steps, vp_beta_min, vp_beta_d + vp_beta_min
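As a quick check that the two lambdas above really are inverses, the closed forms can be verified with illustrative schedule constants (the values of `beta_d` and `beta_min` below are assumptions for the sketch, not values computed by this function):

```python
import numpy as np

beta_d, beta_min = 19.9, 0.1  # illustrative VP schedule constants
vp_sigma     = lambda t: (np.exp(0.5 * beta_d * t**2 + beta_min * t) - 1) ** 0.5
vp_sigma_inv = lambda s: ((beta_min**2 + 2 * beta_d * np.log(s**2 + 1)) ** 0.5 - beta_min) / beta_d

t = np.linspace(0.01, 1.0, 8)
# sigma(t) followed by its inverse recovers t
assert np.allclose(vp_sigma_inv(vp_sigma(t)), t)
```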

#----------------------------------------------------------------------------

def cal_poly(prev_t, j, taus):
    poly = 1
    for k in range(prev_t.shape[0]):
        if k == j:
            continue
        poly *= (taus - prev_t[k]) / (prev_t[j] - prev_t[k])
    return poly
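`cal_poly` builds the j-th Lagrange basis polynomial over the previous time points; its defining property (value 1 at its own node, 0 at every other node) is easy to verify with a standalone mirror of the loop (`lagrange_basis` is an illustrative NumPy re-implementation, not repo code):

```python
import numpy as np

def lagrange_basis(nodes, j, taus):
    # mirrors cal_poly: product over k != j of (tau - t_k) / (t_j - t_k)
    poly = np.ones_like(taus)
    for k in range(len(nodes)):
        if k == j:
            continue
        poly *= (taus - nodes[k]) / (nodes[j] - nodes[k])
    return poly

nodes = np.array([0.0, 1.0, 3.0])
# basis property: ell_j(t_k) equals the Kronecker delta
for j in range(3):
    assert np.allclose(lagrange_basis(nodes, j, nodes), np.eye(3)[j])
```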

#----------------------------------------------------------------------------
# Transfer from t to alpha_t.

def t2alpha_fn(beta_0, beta_1, t):
    return torch.exp(-0.5 * t ** 2 * (beta_1 - beta_0) - t * beta_0)

#----------------------------------------------------------------------------

def cal_integrand(beta_0, beta_1, taus):
    with torch.inference_mode(mode=False):
        taus = taus.clone()
        beta_0 = beta_0.clone()
        beta_1 = beta_1.clone()
        with torch.enable_grad():
            taus.requires_grad_(True)
            alpha = t2alpha_fn(beta_0, beta_1, taus)
            log_alpha = alpha.log()
            log_alpha.sum().backward()
            d_log_alpha_dtau = taus.grad
    integrand = -0.5 * d_log_alpha_dtau / torch.sqrt(alpha * (1 - alpha))
    return integrand
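`cal_integrand` uses autograd to obtain d(log alpha)/dtau. For the quadratic log-alpha defined by `t2alpha_fn` the derivative also has a closed form, so the autograd trick can be checked directly (a standalone sketch, assuming torch is available):

```python
import torch

beta_0, beta_1 = torch.tensor(0.1), torch.tensor(1.0)
taus = torch.linspace(0.1, 0.9, 5, requires_grad=True)

# same log alpha as t2alpha_fn: log a(t) = -0.5 t^2 (b1 - b0) - t b0
log_alpha = -0.5 * taus**2 * (beta_1 - beta_0) - taus * beta_0
log_alpha.sum().backward()

# closed-form derivative: d log a / dt = -t (b1 - b0) - b0
analytic = -taus.detach() * (beta_1 - beta_0) - beta_0
assert torch.allclose(taus.grad, analytic)
```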

#----------------------------------------------------------------------------

def get_deis_coeff_list(t_steps, max_order, N=10000, deis_mode='tab'):
    """
    Get the coefficient list for DEIS sampling.

    Args:
        t_steps: A pytorch tensor. The time steps for sampling.
        max_order: An `int`. Maximum order of the solver, 1 <= max_order <= 4.
        N: An `int`. How many points to use for the numerical integration when deis_mode == 'tab'.
        deis_mode: A `str`. Select between 'tab' and 'rhoab'. Type of DEIS.
    Returns:
        A list with one entry per sampling step: an empty list for first-order steps, otherwise the DEIS coefficients for that step.
    """
    if deis_mode == 'tab':
        t_steps, beta_0, beta_1 = edm2t(t_steps)
        C = []
        for i, (t_cur, t_next) in enumerate(zip(t_steps[:-1], t_steps[1:])):
            order = min(i+1, max_order)
            if order == 1:
                C.append([])
            else:
                taus = torch.linspace(t_cur, t_next, N)   # split the interval for integral approximation
                dtau = (t_next - t_cur) / N
                prev_t = t_steps[[i - k for k in range(order)]]
                coeff_temp = []
                integrand = cal_integrand(beta_0, beta_1, taus)
                for j in range(order):
                    poly = cal_poly(prev_t, j, taus)
                    coeff_temp.append(torch.sum(integrand * poly) * dtau)
                C.append(coeff_temp)

    elif deis_mode == 'rhoab':
        # Analytical solution, second order
        def get_def_integral_2(a, b, start, end, c):
            coeff = (end**3 - start**3) / 3 - (end**2 - start**2) * (a + b) / 2 + (end - start) * a * b
            return coeff / ((c - a) * (c - b))

        # Analytical solution, third order
        def get_def_integral_3(a, b, c, start, end, d):
            coeff = (end**4 - start**4) / 4 - (end**3 - start**3) * (a + b + c) / 3 \
                    + (end**2 - start**2) * (a*b + a*c + b*c) / 2 - (end - start) * a * b * c
            return coeff / ((d - a) * (d - b) * (d - c))

        C = []
        for i, (t_cur, t_next) in enumerate(zip(t_steps[:-1], t_steps[1:])):
            order = min(i+1, max_order)  # fixed off-by-one in the order calculation
            if order == 1:
                C.append([])
            else:
                prev_t = t_steps[[i - k for k in range(order+1)]]
                if order == 2:
                    coeff_cur = ((t_next - prev_t[1])**2 - (t_cur - prev_t[1])**2) / (2 * (t_cur - prev_t[1]))
                    coeff_prev1 = (t_next - t_cur)**2 / (2 * (prev_t[1] - t_cur))
                    coeff_temp = [coeff_cur, coeff_prev1]
                elif order == 3:
                    coeff_cur = get_def_integral_2(prev_t[1], prev_t[2], t_cur, t_next, t_cur)
                    coeff_prev1 = get_def_integral_2(t_cur, prev_t[2], t_cur, t_next, prev_t[1])
                    coeff_prev2 = get_def_integral_2(t_cur, prev_t[1], t_cur, t_next, prev_t[2])
                    coeff_temp = [coeff_cur, coeff_prev1, coeff_prev2]
                elif order == 4:
                    coeff_cur = get_def_integral_3(prev_t[1], prev_t[2], prev_t[3], t_cur, t_next, t_cur)
                    coeff_prev1 = get_def_integral_3(t_cur, prev_t[2], prev_t[3], t_cur, t_next, prev_t[1])
                    coeff_prev2 = get_def_integral_3(t_cur, prev_t[1], prev_t[3], t_cur, t_next, prev_t[2])
                    coeff_prev3 = get_def_integral_3(t_cur, prev_t[1], prev_t[2], t_cur, t_next, prev_t[3])
                    coeff_temp = [coeff_cur, coeff_prev1, coeff_prev2, coeff_prev3]
                C.append(coeff_temp)
 
    return C



================================================
FILE: beta/noise_classes.py
================================================
import torch
import torch.nn.functional as F

from torch               import nn, Tensor, Generator, lerp
from torch.nn.functional import unfold
from torch.distributions import StudentT, Laplace

import numpy as np
import pywt
import functools

from typing import Callable, Tuple
from math   import pi

from comfy.k_diffusion.sampling import BrownianTreeNoiseSampler

from ..res4lyf import RESplain

# Set this to True if you have installed OpenSimplex. Installing it without dependencies (pip3 install opensimplex --no-deps) is recommended to avoid package conflicts.
OPENSIMPLEX_ENABLE = False

if OPENSIMPLEX_ENABLE:
    from opensimplex import OpenSimplex

class PrecisionTool:
    def __init__(self, cast_type='fp64'):
        self.cast_type = cast_type

    def cast_tensor(self, func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if self.cast_type not in ['fp64', 'fp32', 'fp16']:
                return func(*args, **kwargs)

            target_device = None
            for arg in args:
                if torch.is_tensor(arg):
                    target_device = arg.device
                    break
            if target_device is None:
                for v in kwargs.values():
                    if torch.is_tensor(v):
                        target_device = v.device
                        break
            
            # recursively cast tensors (including those in nested dictionaries) and move them to the target device
            def cast_and_move_to_device(data):
                if torch.is_tensor(data):
                    if self.cast_type == 'fp64':
                        return data.to(torch.float64).to(target_device)
                    elif self.cast_type == 'fp32':
                        return data.to(torch.float32).to(target_device)
                    elif self.cast_type == 'fp16':
                        return data.to(torch.float16).to(target_device)
                elif isinstance(data, dict):
                    return {k: cast_and_move_to_device(v) for k, v in data.items()}
                return data

            new_args = [cast_and_move_to_device(arg) for arg in args]
            new_kwargs = {k: cast_and_move_to_device(v) for k, v in kwargs.items()}
            
            return func(*new_args, **new_kwargs)
        return wrapper

    def set_cast_type(self, new_value):
        if new_value in ['fp64', 'fp32', 'fp16']:
            self.cast_type = new_value
        else:
            self.cast_type = 'fp64'

precision_tool = PrecisionTool(cast_type='fp64')
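The pattern above, a decorator that walks the call's arguments and upcasts any tensors before invoking the wrapped function, can be reduced to a minimal sketch (assuming torch; `cast_fp32` and `dtype_of` are illustrative names, and device handling is omitted):

```python
import functools
import torch

def cast_fp32(func):
    # simplified PrecisionTool.cast_tensor: upcast tensor args to a single dtype
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        args = [a.to(torch.float32) if torch.is_tensor(a) else a for a in args]
        kwargs = {k: v.to(torch.float32) if torch.is_tensor(v) else v
                  for k, v in kwargs.items()}
        return func(*args, **kwargs)
    return wrapper

@cast_fp32
def dtype_of(t):
    return t.dtype

assert dtype_of(torch.zeros(2, dtype=torch.float16)) == torch.float32
```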


def noise_generator_factory(cls, **fixed_params):
    def create_instance(**kwargs):
        params = {**fixed_params, **kwargs}
        return cls(**params)
    return create_instance
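`noise_generator_factory` is a plain closure-based partial constructor: because the call-time kwargs are merged after `fixed_params`, they win on key collisions. A self-contained sketch (the `_Demo` class is illustrative, not repo code):

```python
def noise_generator_factory(cls, **fixed_params):
    def create_instance(**kwargs):
        params = {**fixed_params, **kwargs}  # call-time kwargs win on collisions
        return cls(**params)
    return create_instance

class _Demo:
    def __init__(self, a, b=0):
        self.a, self.b = a, b

make = noise_generator_factory(_Demo, a=1)
d = make(b=2)
assert (d.a, d.b) == (1, 2)
assert make(a=5).a == 5  # call-time kwargs override the fixed params
```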

def like(x):
    return {'size': x.shape, 'dtype': x.dtype, 'layout': x.layout, 'device': x.device}

def scale_to_range(x, scaled_min = -1.73, scaled_max = 1.73): #1.73 is roughly the square root of 3
    return scaled_min + (x - x.min()) * (scaled_max - scaled_min) / (x.max() - x.min())

def normalize(x):
    return (x - x.mean())/ x.std()

class NoiseGenerator:
    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None):
        self.seed = seed

        if x is not None:
            self.x      = x
            self.size   = x.shape
            self.dtype  = x.dtype
            self.layout = x.layout
            self.device = x.device
        else:   
            self.x      = torch.zeros(size, dtype=dtype, layout=layout, device=device)

        # allow overriding parameters imported from latent 'x' if specified
        if size is not None:
            self.size   = size
        if dtype is not None:
            self.dtype  = dtype
        if layout is not None:
            self.layout = layout
        if device is not None:
            self.device = device

        self.sigma_max = sigma_max.to(device) if isinstance(sigma_max, torch.Tensor) else sigma_max
        self.sigma_min = sigma_min.to(device) if isinstance(sigma_min, torch.Tensor) else sigma_min

        self.last_seed = seed #- 1 #adapt for update being called during initialization, which increments last_seed
        
        if generator is None:
            self.generator = torch.Generator(device=self.device).manual_seed(seed)
        else:
            self.generator = generator

    def __call__(self):
        raise NotImplementedError("This method got clownsharked!")
    
    def update(self, **kwargs):
        
        #if not isinstance(self, BrownianNoiseGenerator):
        #    self.last_seed += 1
                    
        updated_values = []
        for attribute_name, value in kwargs.items():
            if value is not None:
                setattr(self, attribute_name, value)
            updated_values.append(getattr(self, attribute_name))
        return tuple(updated_values)



class BrownianNoiseGenerator(NoiseGenerator):
    def __call__(self, *, sigma=None, sigma_next=None, **kwargs):
        return BrownianTreeNoiseSampler(self.x, self.sigma_min, self.sigma_max, seed=self.seed, cpu = self.device.type=='cpu')(sigma, sigma_next)



class FractalNoiseGenerator(NoiseGenerator):
    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, 
                alpha=0.0, k=1.0, scale=0.1): 
        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)
        self.update(alpha=alpha, k=k, scale=scale)

    def __call__(self, *, alpha=None, k=None, scale=None, **kwargs):
        self.update(alpha=alpha, k=k, scale=scale)
        self.last_seed += 1
        
        if len(self.size) == 5:
            b, c, t, h, w = self.size
        else:
            b, c, h, w = self.size
        
        noise = torch.normal(mean=0.0, std=1.0, size=self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator)
        
        y_freq = torch.fft.fftfreq(h, 1/h, device=self.device)
        x_freq = torch.fft.fftfreq(w, 1/w, device=self.device)

        if len(self.size) == 5:
            t_freq = torch.fft.fftfreq(t, 1/t, device=self.device)
            freq = torch.sqrt(t_freq[:, None, None]**2 + y_freq[None, :, None]**2 + x_freq[None, None, :]**2).clamp(min=1e-10)
        else:
            freq = torch.sqrt(y_freq[:, None]**2 + x_freq[None, :]**2).clamp(min=1e-10)
        
        spectral_density = self.k / torch.pow(freq, self.alpha * self.scale)
        spectral_density[0, 0] = 0

        noise_fft = torch.fft.fftn(noise)
        modified_fft = noise_fft * spectral_density
        noise = torch.fft.ifftn(modified_fft).real

        return noise / torch.std(noise)
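The spectral shaping above amounts to multiplying white noise by a 1/f^(alpha*scale) density in frequency space and transforming back. A 2D NumPy sketch of the same steps (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
h = w = 32
noise = rng.standard_normal((h, w))

fy = np.fft.fftfreq(h, 1 / h)
fx = np.fft.fftfreq(w, 1 / w)
freq = np.sqrt(fy[:, None]**2 + fx[None, :]**2).clip(min=1e-10)

alpha, k, scale = 1.0, 1.0, 0.1
density = k / freq ** (alpha * scale)
density[0, 0] = 0.0  # zero the DC component so the result stays zero-mean

shaped = np.fft.ifftn(np.fft.fftn(noise) * density).real
shaped = shaped / shaped.std()  # renormalize to unit standard deviation
assert shaped.shape == (32, 32)
```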
    
    

class SimplexNoiseGenerator(NoiseGenerator):
    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, 
                scale=0.01):
        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)
        self.noise = OpenSimplex(seed=seed)
        self.scale = scale
        
    def __call__(self, *, scale=None, **kwargs):
        self.update(scale=scale)
        self.last_seed += 1
        
        if len(self.size) == 5:
            b, c, t, h, w = self.size
        else:
            b, c, h, w = self.size

        noise_array = self.noise.noise3array(np.arange(w),np.arange(h),np.arange(c))
        self.noise = OpenSimplex(seed=self.noise.get_seed()+1)
        
        noise_tensor = torch.from_numpy(noise_array).to(self.device)
        noise_tensor = torch.unsqueeze(noise_tensor, dim=0)
        if len(self.size) == 5:
            noise_tensor = torch.unsqueeze(noise_tensor, dim=0)
        
        return noise_tensor / noise_tensor.std()
        #return normalize(scale_to_range(noise_tensor))



class HiresPyramidNoiseGenerator(NoiseGenerator):
    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, 
                discount=0.7, mode='nearest-exact'):
        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)
        self.update(discount=discount, mode=mode)

    def __call__(self, *, discount=None, mode=None, **kwargs):
        self.update(discount=discount, mode=mode)
        self.last_seed += 1

        if len(self.size) == 5:
            b, c, t, h, w = self.size
            orig_h, orig_w, orig_t = h, w, t
            u = nn.Upsample(size=(orig_t, orig_h, orig_w), mode=self.mode).to(self.device)  # 5D input expects size=(T, H, W)
        else:
            b, c, h, w = self.size
            orig_h, orig_w = h, w
            orig_t = t = 1
            u = nn.Upsample(size=(orig_h, orig_w), mode=self.mode).to(self.device)

        noise = ((torch.rand(size=self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator) - 0.5) * 2 * 1.73)

        for i in range(4):
            r = torch.rand(1, device=self.device, generator=self.generator).item() * 2 + 2
            h, w = min(orig_h * 15, int(h * (r ** i))), min(orig_w * 15, int(w * (r ** i)))
            if len(self.size) == 5:
                t = min(orig_t * 15, int(t * (r ** i)))
                new_noise = torch.randn((b, c, t, h, w), dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator)
            else:
                new_noise = torch.randn((b, c, h, w), dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator)

            upsampled_noise = u(new_noise)
            noise += upsampled_noise * self.discount ** i
            
            if h >= orig_h * 15 or w >= orig_w * 15 or t >= orig_t * 15:
                break  # if resolution is too high
        
        return noise / noise.std()
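The loop above compounds the scale factor: each level resizes by `r ** i` (with `r` drawn from [2, 4)) and the working resolution is capped at 15x the original, at which point the loop breaks. A standalone sketch of that size schedule with a fixed `r` (hypothetical helper name, illustration only) shows how quickly the cap is reached:

```python
# Sketch of the HiresPyramidNoiseGenerator size schedule, with r fixed
# for determinism. Mirrors: h = min(orig_h * 15, int(h * (r ** i))).
def pyramid_size_schedule(orig_h, r=2.0, levels=4, cap=15):
    h, sizes = orig_h, []
    for i in range(levels):
        h = min(orig_h * cap, int(h * (r ** i)))  # compounding growth per level
        sizes.append(h)
        if h >= orig_h * cap:
            break  # resolution too high, stop adding levels
    return sizes

sizes = pyramid_size_schedule(8)  # 8 -> 16 -> 64 -> capped at 120
```

With `orig_h = 8` the schedule is [8, 16, 64, 120]: level 3 would reach 512 but is clamped to 8 * 15 = 120 and the loop stops.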



class PyramidNoiseGenerator(NoiseGenerator):
    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, 
                discount=0.8, mode='nearest-exact'):
        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)
        self.update(discount=discount, mode=mode)

    def __call__(self, *, discount=None, mode=None, **kwargs):
        self.update(discount=discount, mode=mode)
        self.last_seed += 1

        x = torch.zeros(self.size, dtype=self.dtype, layout=self.layout, device=self.device)

        if len(self.size) == 5:
            b, c, t, h, w = self.size
            orig_h, orig_w, orig_t = h, w, t
        else:
            b, c, h, w = self.size
            orig_h, orig_w = h, w

        r = 1
        for i in range(5):
            r *= 2

            if len(self.size) == 5:
                scaledSize = (b, c, t * r, h * r, w * r)
                origSize = (orig_t, orig_h, orig_w)  # 5D interpolate size order is (T, H, W)
            else:
                scaledSize = (b, c, h * r, w * r)
                origSize = (orig_h, orig_w)

            x += torch.nn.functional.interpolate(
                torch.normal(mean=0, std=0.5 ** i, size=scaledSize, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator),
                size=origSize, mode=self.mode
            ) * self.discount ** i
        return x / x.std()
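Each level `i` draws noise with std `0.5 ** i` and is additionally scaled by `discount ** i`, so the effective per-level weight is `(0.5 * discount) ** i`, and independent levels add in quadrature. A small sketch (hypothetical function name) of the total std that the final `x / x.std()` division compensates for:

```python
import math

# Effective std of a sum of independent pyramid levels, each weighted
# by (0.5 * discount) ** i: independent terms add in quadrature.
def pyramid_total_std(discount=0.8, levels=5):
    weights = [(0.5 * discount) ** i for i in range(levels)]
    return math.sqrt(sum(w * w for w in weights))

total = pyramid_total_std()  # ~1.091 for discount=0.8, 5 levels
```

The geometric decay means the base level dominates; later levels mostly contribute low-frequency structure rather than variance.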



class InterpolatedPyramidNoiseGenerator(NoiseGenerator):
    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, 
                discount=0.7, mode='nearest-exact'):
        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)
        self.update(discount=discount, mode=mode)

    def __call__(self, *, discount=None, mode=None, **kwargs):
        self.update(discount=discount, mode=mode)
        self.last_seed += 1

        if len(self.size) == 5:
            b, c, t, h, w = self.size
            orig_t, orig_h, orig_w = t, h, w
        else:
            b, c, h, w = self.size
            orig_h, orig_w = h, w
            t = orig_t = 1

        noise = ((torch.rand(size=self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator) - 0.5) * 2 * 1.73)
        multipliers = [1]

        for i in range(4):
            r = torch.rand(1, device=self.device, generator=self.generator).item() * 2 + 2
            h, w = min(orig_h * 15, int(h * (r ** i))), min(orig_w * 15, int(w * (r ** i)))
            
            if len(self.size) == 5:
                t = min(orig_t * 15, int(t * (r ** i)))
                new_noise = torch.randn((b, c, t, h, w), dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator)
                upsampled_noise = nn.functional.interpolate(new_noise, size=(orig_t, orig_h, orig_w), mode=self.mode)
            else:
                new_noise = torch.randn((b, c, h, w), dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator)
                upsampled_noise = nn.functional.interpolate(new_noise, size=(orig_h, orig_w), mode=self.mode)

            noise += upsampled_noise * self.discount ** i
            multipliers.append(        self.discount ** i)
            
            if h >= orig_h * 15 or w >= orig_w * 15 or (len(self.size) == 5 and t >= orig_t * 15):
                break  # if resolution is too high
        
        noise = noise / sum([m ** 2 for m in multipliers]) ** 0.5 
        return noise / noise.std()
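The division by `sum([m ** 2 for m in multipliers]) ** 0.5` relies on a standard fact: a sum of independent unit-variance noises scaled by multipliers m_i has std sqrt(sum m_i^2). A quick seeded empirical check with the stdlib `random` module (names here are illustrative, not from the source):

```python
import math
import random
import statistics

# Empirically verify: std(sum_i m_i * N(0, 1)) ~= sqrt(sum_i m_i ** 2).
random.seed(0)
multipliers = [1.0, 0.7, 0.49]
samples = [sum(m * random.gauss(0.0, 1.0) for m in multipliers)
           for _ in range(50_000)]
empirical = statistics.pstdev(samples)
expected = math.sqrt(sum(m * m for m in multipliers))  # ~1.315
```

Dividing the summed noise by `expected` therefore restores approximately unit variance before the final `noise / noise.std()` safeguard.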



class CascadeBPyramidNoiseGenerator(NoiseGenerator):
    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, 
                levels=10, mode='nearest', size_range=[1,16]):
        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)
        self.update(epsilon=x, levels=levels, mode=mode, size_range=size_range)

    def __call__(self, *, levels=10, mode='nearest', size_range=[1,16], **kwargs):
        self.update(levels=levels, mode=mode)
        if len(self.size) == 5:
            raise NotImplementedError("CascadeBPyramidNoiseGenerator is not implemented for 5D tensors (e.g. video).")
        self.last_seed += 1

        b, c, h, w = self.size

        epsilon = torch.randn(self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator)
        multipliers = [1]
        for i in range(1, levels):
            m = 0.75 ** i

            h, w = int(epsilon.size(-2) // (2 ** i)), int(epsilon.size(-1) // (2 ** i))
            if size_range is None or (size_range[0] <= h <= size_range[1] or size_range[0] <= w <= size_range[1]):
                offset = torch.randn(epsilon.size(0), epsilon.size(1), h, w, device=self.device, generator=self.generator)
                epsilon = epsilon + torch.nn.functional.interpolate(offset, size=epsilon.shape[-2:], mode=self.mode) * m
                multipliers.append(m)

            if h <= 1 or w <= 1:
                break
        epsilon = epsilon / sum([m ** 2 for m in multipliers]) ** 0.5  # normalize: independent levels add in quadrature, so divide by sqrt of the sum of squared multipliers

        return epsilon
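The level schedule above halves the resolution per level (`h // 2 ** i`), skips levels outside `size_range`, and stops once a dimension reaches 1. A standalone sketch (hypothetical function name) of which offset resolutions get generated for a 64x64 latent with the default `size_range=[1, 16]`:

```python
# Which offset resolutions a CascadeB-style pyramid generates: halve per
# level, gate on size_range, stop at 1 pixel. multipliers decay as 0.75 ** i.
def cascade_levels(h, levels=10, size_range=(1, 16)):
    sizes, multipliers = [], [1.0]
    for i in range(1, levels):
        hi = h // (2 ** i)
        if size_range[0] <= hi <= size_range[1]:
            sizes.append(hi)
            multipliers.append(0.75 ** i)
        if hi <= 1:
            break
    return sizes, multipliers

sizes, mults = cascade_levels(64)  # 32 is skipped (outside size_range)
```

For h = 64 the 32-pixel level falls outside the range and is skipped, so the generated offsets are 16, 8, 4, 2, and 1.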


class UniformNoiseGenerator(NoiseGenerator):
    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, 
                mean=0.0, scale=1.73):
        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)
        self.update(mean=mean, scale=scale)

    def __call__(self, *, mean=None, scale=None, **kwargs):
        self.update(mean=mean, scale=scale)
        self.last_seed += 1

        noise = torch.rand(self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator)

        return self.scale * 2 * (noise - 0.5) + self.mean
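The default `scale=1.73` is sqrt(3) to within rounding: a uniform variable on [-a, a] has variance a^2 / 3, so a = sqrt(3) gives approximately unit standard deviation. A seeded stdlib check (illustrative, not the torch path):

```python
import random
import statistics

# U(-sqrt(3), sqrt(3)) has variance 3 / 3 = 1, i.e. unit std.
random.seed(0)
scale = 1.73  # ~= sqrt(3)
samples = [scale * 2 * (random.random() - 0.5) for _ in range(100_000)]
std = statistics.pstdev(samples)  # ~0.999
```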

class GaussianNoiseGenerator(NoiseGenerator):
    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, 
                mean=0.0, std=1.0):
        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)
        self.update(mean=mean, std=std)

    def __call__(self, *, mean=None, std=None, **kwargs):
        self.update(mean=mean, std=std)
        self.last_seed += 1

        noise = torch.randn(self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator)

        return (noise - noise.mean()) / noise.std()

class GaussianBackwardsNoiseGenerator(NoiseGenerator):
    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, 
                mean=0.0, std=1.0):
        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)
        self.update(mean=mean, std=std)

    def __call__(self, *, mean=None, std=None, **kwargs):
        self.update(mean=mean, std=std)
        self.last_seed += 1
        RESplain("GaussianBackwards last seed:", self.generator.initial_seed())
        self.generator.manual_seed(self.generator.initial_seed() - 1)
        noise = torch.randn(self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator)

        return (noise - noise.mean()) / noise.std()

class LaplacianNoiseGenerator(NoiseGenerator):
    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, 
                loc=0, scale=1.0):
        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)
        self.update(loc=loc, scale=scale)

    def __call__(self, *, loc=None, scale=None, **kwargs):
        self.update(loc=loc, scale=scale)
        self.last_seed += 1

        # b, c, h, w = self.size
        # orig_h, orig_w = h, w

        noise = torch.randn(self.size, dtype=self.dtype, layout=self.layout, device=self.device, generator=self.generator) / 4.0

        rng_state = torch.random.get_rng_state()
        torch.manual_seed(self.generator.initial_seed())
        laplacian_noise = Laplace(loc=self.loc, scale=self.scale).rsample(self.size).to(self.device)
        self.generator.manual_seed(self.generator.initial_seed() + 1)
        torch.random.set_rng_state(rng_state)

        noise += laplacian_noise
        return noise / noise.std()
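The `get_rng_state` / `set_rng_state` bracket above lets the `Laplace` distribution sample from a deterministic seed without disturbing the global RNG stream. The same pattern with Python's `random` module (an illustration of the idea, not the torch API):

```python
import random

# Save the global RNG state, do a seeded side-draw, then restore the
# state so the main stream continues as if nothing happened.
random.seed(123)
before = random.random()      # advance the main stream once

state = random.getstate()     # bracket: save global state
random.seed(42)               # deterministic side-draw (like the Laplace sample)
side_draw = random.random()
random.setstate(state)        # restore: main stream is untouched

after = random.random()

# Control: replay the main stream without any side-draw.
random.seed(123)
_ = random.random()
control = random.random()     # identical to `after`
```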

class StudentTNoiseGenerator(NoiseGenerator):
    def __init__(self, x=None, size=None, dtype=None, layout=None, device=None, seed=42, generator=None, sigma_min=None, sigma_max=None, 
                loc=0, scale=0.2, df=1):
        super().__init__(x, size, dtype, layout, device, seed, generator, sigma_min, sigma_max)
        self.update(loc=loc, scale=scale, df=df)

    def __call__(self, *, loc=None, scale=None, df=None, **kwargs):
        self.update(loc=loc, scale=scale, df=df)
        self.last_seed += 1

        # b, c, h, w = self.size
        # orig_h, orig_w = h, w

        rng_state = torch.random.get_rng_state()
        torch.manual_seed(self.generator.initial_seed())

        noise = StudentT(loc=self.loc, scale=self.scale, df=self.df).rsample(self.size)
        if not isinstance(self, BrownianNoiseGenerator):
            self.last_seed += 1
                    
        s = torch.quantile(noise.flatten(start_dim=1).abs(), 0.75, dim=-1)
        
        if len(self.size) == 5:
            s = s.reshape(*s.shape, 1, 1, 1, 1)
        else:
            s = s.reshape(*s.shape, 1, 1, 1)

        noise = noise.clamp(-s, s)

        noise_latent = torch.copysign(torch.pow(torch.abs(noise), 0.5), noise).to(self.device)

        self.generator.manual_seed(self.generator.initial_seed() + 1)
        torch.random.set_rng_state(rng_state)  # restore global RNG state (mirrors the LaplacianNoiseGenerator bracket above)

        return noise_latent
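The `torch.copysign(torch.pow(torch.abs(noise), 0.5), noise)` step above takes the square root of each magnitude while preserving its sign, compressing the heavy Student-t tails toward zero. The scalar equivalent with `math.copysign` (hypothetical helper name):

```python
import math

# Signed square root: sqrt of the magnitude with the original sign
# reattached. Compresses large (heavy-tail) values toward zero.
def signed_sqrt(v: float) -> float:
    return math.copysign(math.sqrt(abs(v)), v)

examples = [signed_sqrt(v) for v in (-4.0, -1.0, 0.0, 0.25, 9.0)]
# -> [-2.0, -1.0, 0.0, 0.5, 3.0]
```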
SYMBOL INDEX (3031 symbols across 79 files)

FILE: __init__.py
  function add_samplers (line 40) | def add_samplers():

FILE: attention_masks.py
  function fp_not (line 26) | def fp_not(tensor):
  function fp_or (line 29) | def fp_or(tensor1, tensor2):
  function fp_and (line 32) | def fp_and(tensor1, tensor2):
  function fp_and2 (line 35) | def fp_and2(tensor1, tensor2):
  class CoreAttnMask (line 47) | class CoreAttnMask:
    method __init__ (line 48) | def __init__(self, mask, mask_type=None, start_sigma=None, end_sigma=N...
    method set_sigma_range (line 58) | def set_sigma_range(self, start_sigma, end_sigma):
    method set_block_range (line 62) | def set_block_range(self, start_block, end_block):
    method __call__ (line 66) | def __call__(self, weight=1.0, mask_type=None, transformer_options=Non...
  class BaseAttentionMask (line 92) | class BaseAttentionMask:
    method __init__ (line 93) | def __init__(self, mask_type="gradient", edge_width=0, edge_width_list...
    method set_latent (line 123) | def set_latent(self, latent):
    method add_region (line 136) | def add_region(self, context, mask):
    method add_region_sizes (line 145) | def add_region_sizes(self, context_size_list, mask):
    method add_regions (line 156) | def add_regions(self, contexts, masks):
    method clear_regions (line 160) | def clear_regions(self):
    method generate (line 167) | def generate(self):
    method get (line 170) | def get(self, **kwargs):
    method attn_mask_recast (line 173) | def attn_mask_recast(self, dtype):
  class FullAttentionMask (line 180) | class FullAttentionMask(BaseAttentionMask):
    method generate (line 181) | def generate(self, mask_type=None, dtype=None):
  class FullAttentionMaskHiDream (line 284) | class FullAttentionMaskHiDream(BaseAttentionMask):
    method generate (line 285) | def generate(self, mask_type=None, dtype=None):
    method gen_edge_mask (line 419) | def gen_edge_mask(self, block_idx):
  class RegionalContext (line 514) | class RegionalContext:
    method __init__ (line 515) | def __init__(self, idle_device='cpu', work_device='cuda'):
    method add_region (line 528) | def add_region(self, context, pooled_output=None, clip_fea=None):
    method add_region_clip_fea (line 547) | def add_region_clip_fea(self, clip_fea):
    method add_region_llama3 (line 554) | def add_region_llama3(self, llama3):
    method add_region_hidream (line 560) | def add_region_hidream(self, t5, llama3):
    method clear_regions (line 564) | def clear_regions(self):
    method get (line 580) | def get(self):
    method get_clip_fea (line 583) | def get_clip_fea(self):
    method get_llama3 (line 589) | def get_llama3(self):
  class CrossAttentionMask (line 596) | class CrossAttentionMask(BaseAttentionMask):
    method generate (line 597) | def generate(self, mask_type=None, dtype=None):
  class SplitAttentionMask (line 658) | class SplitAttentionMask(BaseAttentionMask):
    method generate (line 659) | def generate(self, mask_type=None, dtype=None):

FILE: aura/mmdit.py
  function modulate (line 22) | def modulate(x, shift, scale):
  function find_multiple (line 26) | def find_multiple(n: int, k: int) -> int:
  class MLP (line 32) | class MLP(nn.Module): # not executed directly with ReAura?
    method __init__ (line 33) | def __init__(self, dim, hidden_dim=None, dtype=None, device=None, oper...
    method forward (line 46) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class MultiHeadLayerNorm (line 52) | class MultiHeadLayerNorm(nn.Module):
    method __init__ (line 53) | def __init__(self, hidden_size=None, eps=1e-5, dtype=None, device=None):
    method forward (line 61) | def forward(self, hidden_states):
  class ReSingleAttention (line 72) | class ReSingleAttention(nn.Module):
    method __init__ (line 73) | def __init__(self, dim, n_heads, mh_qknorm=False, dtype=None, device=N...
    method forward (line 97) | def forward(self, c, mask=None):
  class ReDoubleAttention (line 113) | class ReDoubleAttention(nn.Module):
    method __init__ (line 114) | def __init__(self, dim, n_heads, mh_qknorm=False, dtype=None, device=N...
    method forward (line 156) | def forward(self, c, x, mask=None):
  class ReMMDiTBlock (line 192) | class ReMMDiTBlock(nn.Module):
    method __init__ (line 193) | def __init__(self, dim, heads=8, global_conddim=1024, is_last=False, d...
    method forward (line 222) | def forward(self, c, x, global_cond, mask=None, **kwargs):
  class ReDiTBlock (line 252) | class ReDiTBlock(nn.Module):
    method __init__ (line 254) | def __init__(self, dim, heads=8, global_conddim=1024, dtype=None, devi...
    method forward (line 269) | def forward(self, cx, global_cond, mask=None, **kwargs):
  class TimestepEmbedder (line 286) | class TimestepEmbedder(nn.Module):
    method __init__ (line 287) | def __init__(self, hidden_size, frequency_embedding_size=256, dtype=No...
    method timestep_embedding (line 297) | def timestep_embedding(t, dim, max_period=10000):
    method forward (line 311) | def forward(self, t, dtype):
  class ReMMDiT (line 317) | class ReMMDiT(nn.Module):
    method __init__ (line 318) | def __init__(
    method extend_pe (line 382) | def extend_pe(self, init_dim=(16, 16), target_dim=(64, 64)):
    method pe_selection_index_based_on_dim (line 397) | def pe_selection_index_based_on_dim(self, h, w):
    method unpatchify (line 410) | def unpatchify(self, x, h, w):
    method patchify (line 419) | def patchify(self, x):
    method apply_pos_embeds (line 433) | def apply_pos_embeds(self, x, h, w):
    method forward (line 450) | def forward(self, x, timestep, context, transformer_options={}, **kwar...
  function unpatchify2 (line 883) | def unpatchify2(x: torch.Tensor, H: int, W: int, patch_size: int) -> tor...

FILE: beta/__init__.py
  function add_beta (line 7) | def add_beta(NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS, extra_samp...
  function sample_res_2m (line 162) | def sample_res_2m(model, x, sigmas, extra_args=None, callback=None, disa...
  function sample_res_3m (line 164) | def sample_res_3m(model, x, sigmas, extra_args=None, callback=None, disa...
  function sample_res_2s (line 166) | def sample_res_2s(model, x, sigmas, extra_args=None, callback=None, disa...
  function sample_res_3s (line 168) | def sample_res_3s(model, x, sigmas, extra_args=None, callback=None, disa...
  function sample_res_5s (line 170) | def sample_res_5s(model, x, sigmas, extra_args=None, callback=None, disa...
  function sample_res_6s (line 172) | def sample_res_6s(model, x, sigmas, extra_args=None, callback=None, disa...
  function sample_res_2m_ode (line 175) | def sample_res_2m_ode(model, x, sigmas, extra_args=None, callback=None, ...
  function sample_res_3m_ode (line 177) | def sample_res_3m_ode(model, x, sigmas, extra_args=None, callback=None, ...
  function sample_res_2s_ode (line 179) | def sample_res_2s_ode(model, x, sigmas, extra_args=None, callback=None, ...
  function sample_res_3s_ode (line 181) | def sample_res_3s_ode(model, x, sigmas, extra_args=None, callback=None, ...
  function sample_res_5s_ode (line 183) | def sample_res_5s_ode(model, x, sigmas, extra_args=None, callback=None, ...
  function sample_res_6s_ode (line 185) | def sample_res_6s_ode(model, x, sigmas, extra_args=None, callback=None, ...
  function sample_deis_2m (line 188) | def sample_deis_2m(model, x, sigmas, extra_args=None, callback=None, dis...
  function sample_deis_3m (line 190) | def sample_deis_3m(model, x, sigmas, extra_args=None, callback=None, dis...
  function sample_deis_2m_ode (line 193) | def sample_deis_2m_ode(model, x, sigmas, extra_args=None, callback=None,...
  function sample_deis_3m_ode (line 195) | def sample_deis_3m_ode(model, x, sigmas, extra_args=None, callback=None,...

FILE: beta/deis_coefficients.py
  function edm2t (line 14) | def edm2t(edm_steps, epsilon_s=1e-3, sigma_min=0.002, sigma_max=80):
  function cal_poly (line 24) | def cal_poly(prev_t, j, taus):
  function t2alpha_fn (line 35) | def t2alpha_fn(beta_0, beta_1, t):
  function cal_integrand (line 40) | def cal_integrand(beta_0, beta_1, taus):
  function get_deis_coeff_list (line 56) | def get_deis_coeff_list(t_steps, max_order, N=10000, deis_mode='tab'):

FILE: beta/noise_classes.py
  class PrecisionTool (line 25) | class PrecisionTool:
    method __init__ (line 26) | def __init__(self, cast_type='fp64'):
    method cast_tensor (line 29) | def cast_tensor(self, func):
    method set_cast_type (line 65) | def set_cast_type(self, new_value):
  function noise_generator_factory (line 74) | def noise_generator_factory(cls, **fixed_params):
  function like (line 80) | def like(x):
  function scale_to_range (line 83) | def scale_to_range(x, scaled_min = -1.73, scaled_max = 1.73): #1.73 is r...
  function normalize (line 86) | def normalize(x):
  class NoiseGenerator (line 89) | class NoiseGenerator:
    method __init__ (line 90) | def __init__(self, x=None, size=None, dtype=None, layout=None, device=...
    method __call__ (line 122) | def __call__(self):
    method update (line 125) | def update(self, **kwargs):
  class BrownianNoiseGenerator (line 139) | class BrownianNoiseGenerator(NoiseGenerator):
    method __call__ (line 140) | def __call__(self, *, sigma=None, sigma_next=None, **kwargs):
  class FractalNoiseGenerator (line 145) | class FractalNoiseGenerator(NoiseGenerator):
    method __init__ (line 146) | def __init__(self, x=None, size=None, dtype=None, layout=None, device=...
    method __call__ (line 151) | def __call__(self, *, alpha=None, k=None, scale=None, **kwargs):
  class SimplexNoiseGenerator (line 182) | class SimplexNoiseGenerator(NoiseGenerator):
    method __init__ (line 183) | def __init__(self, x=None, size=None, dtype=None, layout=None, device=...
    method __call__ (line 189) | def __call__(self, *, scale=None, **kwargs):
  class HiresPyramidNoiseGenerator (line 211) | class HiresPyramidNoiseGenerator(NoiseGenerator):
    method __init__ (line 212) | def __init__(self, x=None, size=None, dtype=None, layout=None, device=...
    method __call__ (line 217) | def __call__(self, *, discount=None, mode=None, **kwargs):
  class PyramidNoiseGenerator (line 252) | class PyramidNoiseGenerator(NoiseGenerator):
    method __init__ (line 253) | def __init__(self, x=None, size=None, dtype=None, layout=None, device=...
    method __call__ (line 258) | def __call__(self, *, discount=None, mode=None, **kwargs):
  class InterpolatedPyramidNoiseGenerator (line 290) | class InterpolatedPyramidNoiseGenerator(NoiseGenerator):
    method __init__ (line 291) | def __init__(self, x=None, size=None, dtype=None, layout=None, device=...
    method __call__ (line 296) | def __call__(self, *, discount=None, mode=None, **kwargs):
  class CascadeBPyramidNoiseGenerator (line 334) | class CascadeBPyramidNoiseGenerator(NoiseGenerator):
    method __init__ (line 335) | def __init__(self, x=None, size=None, dtype=None, layout=None, device=...
    method __call__ (line 340) | def __call__(self, *, levels=10, mode='nearest', size_range=[1,16], **...
  class UniformNoiseGenerator (line 366) | class UniformNoiseGenerator(NoiseGenerator):
    method __init__ (line 367) | def __init__(self, x=None, size=None, dtype=None, layout=None, device=...
    method __call__ (line 372) | def __call__(self, *, mean=None, scale=None, **kwargs):
  class GaussianNoiseGenerator (line 380) | class GaussianNoiseGenerator(NoiseGenerator):
    method __init__ (line 381) | def __init__(self, x=None, size=None, dtype=None, layout=None, device=...
    method __call__ (line 386) | def __call__(self, *, mean=None, std=None, **kwargs):
  class GaussianBackwardsNoiseGenerator (line 394) | class GaussianBackwardsNoiseGenerator(NoiseGenerator):
    method __init__ (line 395) | def __init__(self, x=None, size=None, dtype=None, layout=None, device=...
    method __call__ (line 400) | def __call__(self, *, mean=None, std=None, **kwargs):
  class LaplacianNoiseGenerator (line 409) | class LaplacianNoiseGenerator(NoiseGenerator):
    method __init__ (line 410) | def __init__(self, x=None, size=None, dtype=None, layout=None, device=...
    method __call__ (line 415) | def __call__(self, *, loc=None, scale=None, **kwargs):
  class StudentTNoiseGenerator (line 433) | class StudentTNoiseGenerator(NoiseGenerator):
    method __init__ (line 434) | def __init__(self, x=None, size=None, dtype=None, layout=None, device=...
    method __call__ (line 439) | def __call__(self, *, loc=None, scale=None, df=None, **kwargs):
  class WaveletNoiseGenerator (line 468) | class WaveletNoiseGenerator(NoiseGenerator):
    method __init__ (line 469) | def __init__(self, x=None, size=None, dtype=None, layout=None, device=...
    method __call__ (line 474) | def __call__(self, *, wavelet=None, **kwargs):
  class PerlinNoiseGenerator (line 489) | class PerlinNoiseGenerator(NoiseGenerator):
    method __init__ (line 490) | def __init__(self, x=None, size=None, dtype=None, layout=None, device=...
    method get_positions (line 496) | def get_positions(block_shape: Tuple[int, int]) -> Tensor:
    method unfold_grid (line 508) | def unfold_grid(vectors: Tensor) -> Tensor:
    method smooth_step (line 518) | def smooth_step(t: Tensor) -> Tensor:
    method perlin_noise_tensor (line 522) | def perlin_noise_tensor(
    method perlin_noise (line 575) | def perlin_noise(
    method __call__ (line 602) | def __call__(self, *, detail=None, **kwargs):
  function prepare_noise (line 687) | def prepare_noise(latent_image, seed, noise_type, noise_inds=None, alpha...

FILE: beta/phi_functions.py
  function _phi (line 7) | def _phi(j, neg_h):
  function calculate_gamma (line 16) | def calculate_gamma(c2, c3):
  function _gamma (line 20) | def _gamma(n: int,) -> int:
  function _incomplete_gamma (line 28) | def _incomplete_gamma(s: int, x: float, gamma_s: Optional[int] = None) -...
  function phi (line 47) | def phi(j: int, neg_h: float, ):
  function phi_mpmath_series (line 79) | def phi_mpmath_series(j: int, neg_h: float) -> float:
  class Phi (line 94) | class Phi:
    method __init__ (line 95) | def __init__(self, h, c, analytic_solution=False):
    method __call__ (line 110) | def __call__(self, j, i=-1):
  function superphi (line 135) | def superphi(j: int, neg_h: float, ):

FILE: beta/rk_coefficients_beta.py
  function is_exponential (line 42) | def is_exponential(rk_type:str) -> bool:
  class DualFormatList (line 241) | class DualFormatList(list):
    method __contains__ (line 243) | def __contains__(self, item):
  function get_sampler_name_list (line 253) | def get_sampler_name_list(nameOnly = False) -> list:
  function get_default_sampler_name (line 266) | def get_default_sampler_name(nameOnly = False) -> str:
  function get_implicit_sampler_name_list (line 278) | def get_implicit_sampler_name_list(nameOnly = False) -> list:
  function get_default_implicit_sampler_name (line 291) | def get_default_implicit_sampler_name(nameOnly = False) -> str:
  function get_full_sampler_name (line 303) | def get_full_sampler_name(sampler_name_in: str) -> str:
  function process_sampler_name (line 312) | def process_sampler_name(sampler_name_in):
  function get_rk_methods_beta (line 1317) | def get_rk_methods_beta(rk_type       : str,
  function scale_all (line 3191) | def scale_all(data, scalar):
  function gen_first_col_exp (line 3202) | def gen_first_col_exp(a, b, c, φ):
  function gen_first_col_exp_uv (line 3209) | def gen_first_col_exp_uv(a, b, c, u, v, φ):
  function rho (line 3216) | def rho(j, ci, ck, cl):
  function mu (line 3226) | def mu(j, cd, ci, ck, cl):
  function mu_numerator (line 3237) | def mu_numerator(j, cd, ci, ck, cl):
  function theta_numerator (line 3250) | def theta_numerator(j, cd, ci, ck, cj, cl):
  function theta (line 3264) | def theta(j, cd, ci, ck, cj, cl):
  function prod_diff (line 3279) | def prod_diff(cj, ck, cl=None, cd=None):
  function denominator (line 3287) | def denominator(ci, *args):
  function check_condition_4_2 (line 3295) | def check_condition_4_2(nodes):

FILE: beta/rk_guide_func_beta.py
  class LatentGuide (line 31) | class LatentGuide:
    method __init__ (line 32) | def __init__(self,
    method init_guides (line 125) | def init_guides(self,
    method prepare_weighted_masks (line 884) | def prepare_weighted_masks(self, step:int, lgw_type="default") -> Tupl...
    method get_masks_for_step (line 937) | def get_masks_for_step(self, step:int, lgw_type="default") -> Tuple[Te...
    method get_cossim_adjusted_lgw_masks (line 955) | def get_cossim_adjusted_lgw_masks(self, data:Tensor, step:int) -> Tupl...
    method process_pseudoimplicit_guides_substep (line 998) | def process_pseudoimplicit_guides_substep(self,
    method prepare_fully_pseudoimplicit_guides_substep (line 1146) | def prepare_fully_pseudoimplicit_guides_substep(self,
    method process_guides_data_substep (line 1279) | def process_guides_data_substep(self,
    method get_data_substep (line 1325) | def get_data_substep(self,
    method swap_data (line 1401) | def swap_data(self,
    method process_guides_eps_substep (line 1415) | def process_guides_eps_substep(self,
    method get_eps_substep (line 1474) | def get_eps_substep(self,
    method process_guides_substep (line 1546) | def process_guides_substep(self,
    method process_channelwise (line 1710) | def process_channelwise(self,
    method normalize_inputs (line 1783) | def normalize_inputs(self, x:Tensor, y0:Tensor, y0_inv:Tensor):
  function apply_frame_weights (line 1815) | def apply_frame_weights(mask, frame_weights, normalize=False):
  function prepare_mask (line 1827) | def prepare_mask(x, mask, LGW_MASK_RESCALE_MIN) -> tuple[torch.Tensor, b...
  function apply_temporal_smoothing (line 1872) | def apply_temporal_smoothing(tensor, temporal_smoothing):
  function get_guide_epsilon_substep (line 1898) | def get_guide_epsilon_substep(x_0, x_, y0, y0_inv, s_, row, row_offset, ...
  function get_guide_epsilon (line 1917) | def get_guide_epsilon(x_0, x_, y0, sigma, rk_type, b=None, c=None):
  function noise_cossim_guide_tiled (line 1937) | def noise_cossim_guide_tiled(x_list, guide, cossim_mode="forward", tile_...
  function noise_cossim_eps_tiled (line 1991) | def noise_cossim_eps_tiled(x_list, eps, noise_list, cossim_mode="forward...
  function noise_cossim_guide_eps_tiled (line 2076) | def noise_cossim_guide_eps_tiled(x_0, x_list, y0, noise_list, cossim_mod...
  class NoiseStepHandlerOSDE (line 2170) | class NoiseStepHandlerOSDE:
    method __init__ (line 2171) | def __init__(self, x, eps=None, data=None, x_init=None, guide=None, gu...
    method check_cossim_source (line 2205) | def check_cossim_source(self, source):
    method get_ortho_noise (line 2208) | def get_ortho_noise(self, noise, prev_noises=None, max_iter=100, max_s...
  function handle_tiled_etc_noise_steps (line 2226) | def handle_tiled_etc_noise_steps(
  function get_masked_epsilon_projection (line 2340) | def get_masked_epsilon_projection(x_0, x_, eps_, y0, y0_inv, s_, row, ro...

FILE: beta/rk_method_beta.py
  function get_data_from_step (line 20) | def get_data_from_step   (x:Tensor, x_next:Tensor, sigma:Tensor, sigma_n...
  function get_epsilon_from_step (line 24) | def get_epsilon_from_step(x:Tensor, x_next:Tensor, sigma:Tensor, sigma_n...
  class RK_Method_Beta (line 30) | class RK_Method_Beta:
    method __init__ (line 31) | def __init__(self,
    method is_exponential (line 97) | def is_exponential(rk_type:str) -> bool:
    method create (line 111) | def create(model,
    method __call__ (line 127) | def __call__(self):
    method model_epsilon (line 130) | def model_epsilon(self, x:Tensor, sigma:Tensor, **extra_args) -> Tuple...
    method model_denoised (line 137) | def model_denoised(self, x:Tensor, sigma:Tensor, **extra_args) -> Tensor:
    method update_transformer_options (line 256) | def update_transformer_options(self,
    method set_coeff (line 263) | def set_coeff(self,
    method reorder_tableau (line 316) | def reorder_tableau(self, indices:list[int]) -> None:
    method update_substep (line 326) | def update_substep(self,
    method zum2 (line 380) | def zum2(self, row:int, k:Tensor, k_prev:Tensor=None, h_new:Tensor=Non...
    method a_k_einsum2 (line 387) | def a_k_einsum2(self, row:int, k:Tensor, h:Tensor, sigma:Tensor) -> Te...
    method b_k_einsum2 (line 390) | def b_k_einsum2(self, row:int, k:Tensor, h:Tensor, sigma:Tensor) -> Te...
    method a_k_einsum (line 394) | def a_k_einsum(self, row:int, k     :Tensor) -> Tensor:
    method b_k_einsum (line 397) | def b_k_einsum(self, row:int, k     :Tensor) -> Tensor:
    method u_k_einsum (line 400) | def u_k_einsum(self, row:int, k_prev:Tensor) -> Tensor:
    method v_k_einsum (line 403) | def v_k_einsum(self, row:int, k_prev:Tensor) -> Tensor:
    method zum (line 408) | def zum(self, row:int, k:Tensor, k_prev:Tensor=None,) -> Tensor:
    method zum_tableau (line 415) | def zum_tableau(self,  k:Tensor, k_prev:Tensor=None,) -> Tensor:
    method get_x (line 420) | def get_x(self, data:Tensor, noise:Tensor, sigma:Tensor):
    method init_cfg_channelwise (line 426) | def init_cfg_channelwise(self, x:Tensor, cfg_cw:float=1.0, **extra_arg...
    method calc_cfg_channelwise (line 438) | def calc_cfg_channelwise(self, denoised:Tensor) -> Tensor:
    method calculate_res_2m_step (line 454) | def calculate_res_2m_step(
    method calculate_res_3m_step (line 493) | def calculate_res_3m_step(
    method swap_rk_type_at_step_or_threshold (line 537) | def swap_rk_type_at_step_or_threshold(self,
    method bong_iter (line 607) | def bong_iter(self,
    method newton_iter (line 728) | def newton_iter(self,
  class RK_Method_Exponential (line 843) | class RK_Method_Exponential(RK_Method_Beta):
    method __init__ (line 844) | def __init__(self,
    method alpha_fn (line 869) | def alpha_fn(neg_h:Tensor) -> Tensor:
    method sigma_fn (line 873) | def sigma_fn(t:Tensor) -> Tensor:
    method t_fn (line 878) | def t_fn(sigma:Tensor) -> Tensor:
    method h_fn (line 883) | def h_fn(sigma_down:Tensor, sigma:Tensor) -> Tensor:
    method __call__ (line 887) | def __call__(self,
    method get_eps (line 919) | def get_eps(self, *args):
    method get_epsilon (line 937) | def get_epsilon(self,
    method get_epsilon_anchored (line 958) | def get_epsilon_anchored(self, x_0:Tensor, denoised:Tensor, sigma:Tens...
    method get_guide_epsilon (line 963) | def get_guide_epsilon(self,
  class RK_Method_Linear (line 993) | class RK_Method_Linear(RK_Method_Beta):
    method __init__ (line 994) | def __init__(self,
    method alpha_fn (line 1018) | def alpha_fn(neg_h:Tensor) -> Tensor:
    method sigma_fn (line 1022) | def sigma_fn(t:Tensor) -> Tensor:
    method t_fn (line 1026) | def t_fn(sigma:Tensor) -> Tensor:
    method h_fn (line 1030) | def h_fn(sigma_down:Tensor, sigma:Tensor) -> Tensor:
    method __call__ (line 1033) | def __call__(self,
    method get_eps (line 1056) | def get_eps(self, *args):
    method get_epsilon (line 1068) | def get_epsilon(self,
    method get_epsilon_anchored (line 1083) | def get_epsilon_anchored(self, x_0:Tensor, denoised:Tensor, sigma:Tens...
    method get_guide_epsilon (line 1088) | def get_guide_epsilon(self,

FILE: beta/rk_noise_sampler_beta.py
  function get_data_from_step (line 39) | def get_data_from_step(x, x_next, sigma, sigma_next): # assumes 100% lin...
  function get_epsilon_from_step (line 43) | def get_epsilon_from_step(x, x_next, sigma, sigma_next):
  class RK_NoiseSampler (line 49) | class RK_NoiseSampler:
    method __init__ (line 50) | def __init__(self,
    method init_noise_samplers (line 103) | def init_noise_samplers(self,
    method set_substep_list (line 167) | def set_substep_list(self, RK:Union["RK_Method_Exponential", "RK_Metho...
    method get_substep_list (line 175) | def get_substep_list(self, RK:Union["RK_Method_Exponential", "RK_Metho...
    method get_sde_coeff (line 180) | def get_sde_coeff(self, sigma_next:Tensor, sigma_down:Tensor=None, sig...
    method set_sde_step (line 216) | def set_sde_step(self, sigma:Tensor, sigma_next:Tensor, eta:float, ove...
    method set_sde_substep (line 238) | def set_sde_substep(self,
    method get_sde_substep (line 308) | def get_sde_substep(self,
    method get_sde_step (line 318) | def get_sde_step(self,
    method get_vpsde_step_RF (line 414) | def get_vpsde_step_RF(self, sigma:Tensor, sigma_next:Tensor, eta:float...
    method linear_noise_init (line 421) | def linear_noise_init(self, y:Tensor, sigma_curr:Tensor, x_base:Option...
    method linear_noise_step (line 435) | def linear_noise_step(self, y:Tensor, sigma_curr:Optional[Tensor]=None...
    method linear_noise_substep (line 466) | def linear_noise_substep(self, y:Tensor, sigma_curr:Optional[Tensor]=N...
    method swap_noise_step (line 497) | def swap_noise_step(self, x_0:Tensor, x_next:Tensor, brownian_sigma:Op...
    method swap_noise_substep (line 526) | def swap_noise_substep(self, x_0:Tensor, x_next:Tensor, brownian_sigma...
    method swap_noise_inv_substep (line 557) | def swap_noise_inv_substep(self, x_0:Tensor, x_next:Tensor, eta_subste...
    method swap_noise (line 592) | def swap_noise(self,
    method add_noise_pre (line 637) | def add_noise_pre(self,
    method add_noise_post (line 681) | def add_noise_post(self,
    method add_noise (line 723) | def add_noise(self,
    method sigma_from_to (line 760) | def sigma_from_to(self,
    method rebound_overshoot_step (line 772) | def rebound_overshoot_step(self, x_0:Tensor, x:Tensor) -> Tensor:
    method rebound_overshoot_substep (line 778) | def rebound_overshoot_substep(self, x_0:Tensor, x:Tensor) -> Tensor:
    method prepare_sigmas (line 785) | def prepare_sigmas(self,
  function extract_latent_swap_noise (line 841) | def extract_latent_swap_noise(self, x:Tensor, x_noise_swapped:Tensor, si...
  function update_latent_swap_noise (line 844) | def update_latent_swap_noise(self, x:Tensor, sigma:Tensor, old_noise:Ten...

FILE: beta/rk_sampler_beta.py
  function init_implicit_sampling (line 24) | def init_implicit_sampling(
  function sample_rk_beta (line 111) | def sample_rk_beta(
  function noise_fn (line 2181) | def noise_fn(x, sigma, sigma_next, noise_sampler, cossim_iter=1):
  function preview_callback (line 2197) | def preview_callback(

FILE: beta/samplers.py
  function copy_cond (line 34) | def copy_cond(conditioning):
  function generate_init_noise (line 61) | def generate_init_noise(x, seed, noise_type_init, noise_stdev, noise_mea...
  class SharkGuider (line 87) | class SharkGuider(CFGGuider):
    method __init__ (line 88) | def __init__(self, model_patcher):
    method set_conds (line 92) | def set_conds(self, **kwargs):
    method set_cfgs (line 95) | def set_cfgs(self, **kwargs):
    method predict_noise (line 99) | def predict_noise(self, x, timestep, model_options={}, seed=None):
  class SharkSampler (line 114) | class SharkSampler:
    method INPUT_TYPES (line 116) | def INPUT_TYPES(cls):
    method main (line 153) | def main(self,
  class SharkSampler_Beta (line 1201) | class SharkSampler_Beta:
    method INPUT_TYPES (line 1204) | def INPUT_TYPES(cls):
    method main (line 1237) | def main(self,
  class SharkChainsampler_Beta (line 1323) | class SharkChainsampler_Beta(SharkSampler_Beta):
    method INPUT_TYPES (line 1325) | def INPUT_TYPES(cls):
    method main (line 1343) | def main(self,
  class ClownSamplerAdvanced_Beta (line 1364) | class ClownSamplerAdvanced_Beta:
    method INPUT_TYPES (line 1366) | def INPUT_TYPES(cls):
    method main (line 1411) | def main(self,
  class ClownsharKSampler_Beta (line 1702) | class ClownsharKSampler_Beta:
    method INPUT_TYPES (line 1704) | def INPUT_TYPES(cls):
    method main (line 1745) | def main(self,
  class ClownsharkChainsampler_Beta (line 2114) | class ClownsharkChainsampler_Beta(ClownsharKSampler_Beta):
    method INPUT_TYPES (line 2116) | def INPUT_TYPES(cls):
    method main (line 2138) | def main(self,
  class ClownSampler_Beta (line 2163) | class ClownSampler_Beta:
    method INPUT_TYPES (line 2165) | def INPUT_TYPES(cls):
    method main (line 2189) | def main(self,
  class BongSampler (line 2448) | class BongSampler:
    method INPUT_TYPES (line 2450) | def INPUT_TYPES(cls):
    method main (line 2478) | def main(self,

FILE: beta/samplers_extensions.py
  class ClownSamplerSelector_Beta (line 21) | class ClownSamplerSelector_Beta:
    method INPUT_TYPES (line 23) | def INPUT_TYPES(cls):
    method main (line 38) | def main(self,
  class ClownOptions_SDE_Beta (line 50) | class ClownOptions_SDE_Beta:
    method INPUT_TYPES (line 52) | def INPUT_TYPES(cls):
    method main (line 76) | def main(self,
  class ClownOptions_StepSize_Beta (line 122) | class ClownOptions_StepSize_Beta:
    method INPUT_TYPES (line 124) | def INPUT_TYPES(cls):
    method main (line 143) | def main(self,
  class DetailBoostOptions (line 163) | class DetailBoostOptions:
  class ClownOptions_DetailBoost_Beta (line 181) | class ClownOptions_DetailBoost_Beta:
    method INPUT_TYPES (line 183) | def INPUT_TYPES(cls):
    method main (line 213) | def main(self,
  class ClownOptions_SigmaScaling_Beta (line 312) | class ClownOptions_SigmaScaling_Beta:
    method INPUT_TYPES (line 314) | def INPUT_TYPES(cls):
    method main (line 340) | def main(self,
  class ClownOptions_FlowGuide (line 378) | class ClownOptions_FlowGuide:
    method INPUT_TYPES (line 380) | def INPUT_TYPES(cls):
    method main (line 396) | def main(self,
  class ClownOptions_Momentum_Beta (line 409) | class ClownOptions_Momentum_Beta:
    method INPUT_TYPES (line 411) | def INPUT_TYPES(cls):
    method main (line 427) | def main(self,
  class ClownOptions_ImplicitSteps_Beta (line 440) | class ClownOptions_ImplicitSteps_Beta:
    method INPUT_TYPES (line 442) | def INPUT_TYPES(cls):
    method main (line 461) | def main(self,
  class ClownOptions_Cycles_Beta (line 480) | class ClownOptions_Cycles_Beta:
    method INPUT_TYPES (line 482) | def INPUT_TYPES(cls):
    method main (line 504) | def main(self,
  class SharkOptions_StartStep_Beta (line 530) | class SharkOptions_StartStep_Beta:
    method INPUT_TYPES (line 532) | def INPUT_TYPES(cls):
    method main (line 549) | def main(self,
  class ClownOptions_Tile_Beta (line 563) | class ClownOptions_Tile_Beta:
    method INPUT_TYPES (line 565) | def INPUT_TYPES(cls):
    method main (line 582) | def main(self,
  class ClownOptions_Tile_Advanced_Beta (line 598) | class ClownOptions_Tile_Advanced_Beta:
    method INPUT_TYPES (line 600) | def INPUT_TYPES(cls):
    method main (line 616) | def main(self,
  class ClownOptions_ExtraOptions_Beta (line 631) | class ClownOptions_ExtraOptions_Beta:
    method INPUT_TYPES (line 633) | def INPUT_TYPES(cls):
    method main (line 649) | def main(self,
  class ClownOptions_DenoisedSampling_Beta (line 666) | class ClownOptions_DenoisedSampling_Beta:
    method INPUT_TYPES (line 668) | def INPUT_TYPES(cls):
    method main (line 690) | def main(self,
  class ClownOptions_Automation_Beta (line 707) | class ClownOptions_Automation_Beta:
    method INPUT_TYPES (line 709) | def INPUT_TYPES(cls):
    method main (line 726) | def main(self,
  class SharkOptions_GuideCond_Beta (line 762) | class SharkOptions_GuideCond_Beta:
    method INPUT_TYPES (line 764) | def INPUT_TYPES(cls):
    method main (line 778) | def main(self,
  class SharkOptions_GuideConds_Beta (line 800) | class SharkOptions_GuideConds_Beta:
    method INPUT_TYPES (line 802) | def INPUT_TYPES(cls):
    method main (line 819) | def main(self,
  class SharkOptions_Beta (line 848) | class SharkOptions_Beta:
    method INPUT_TYPES (line 850) | def INPUT_TYPES(cls):
    method main (line 868) | def main(self,
  class SharkOptions_UltraCascade_Latent_Beta (line 888) | class SharkOptions_UltraCascade_Latent_Beta:
    method INPUT_TYPES (line 890) | def INPUT_TYPES(cls):
    method main (line 906) | def main(self,
  class ClownOptions_SwapSampler_Beta (line 922) | class ClownOptions_SwapSampler_Beta:
    method INPUT_TYPES (line 924) | def INPUT_TYPES(cls):
    method main (line 942) | def main(self,
  class ClownOptions_SDE_Mask_Beta (line 967) | class ClownOptions_SDE_Mask_Beta:
    method INPUT_TYPES (line 969) | def INPUT_TYPES(cls):
    method main (line 987) | def main(self,
  class ClownGuide_Mean_Beta (line 1009) | class ClownGuide_Mean_Beta:
    method INPUT_TYPES (line 1011) | def INPUT_TYPES(cls):
    method main (line 1035) | def main(self,
  class ClownGuide_FrequencySeparation (line 1087) | class ClownGuide_FrequencySeparation:
    method INPUT_TYPES (line 1089) | def INPUT_TYPES(cls):
    method main (line 1117) | def main(self,
  class ClownGuide_Style_Beta (line 1148) | class ClownGuide_Style_Beta:
    method INPUT_TYPES (line 1150) | def INPUT_TYPES(cls):
    method main (line 1179) | def main(self,
  class ClownGuide_Style_EdgeWidth (line 1267) | class ClownGuide_Style_EdgeWidth:
    method INPUT_TYPES (line 1269) | def INPUT_TYPES(cls):
    method main (line 1286) | def main(self,
  class ClownGuide_Style_TileSize (line 1305) | class ClownGuide_Style_TileSize:
    method INPUT_TYPES (line 1307) | def INPUT_TYPES(cls):
    method main (line 1326) | def main(self,
  class ClownGuides_Sync (line 1349) | class ClownGuides_Sync:
    method INPUT_TYPES (line 1351) | def INPUT_TYPES(cls):
    method main (line 1391) | def main(self,
  class ClownGuides_Sync_Advanced (line 1565) | class ClownGuides_Sync_Advanced:
    method INPUT_TYPES (line 1567) | def INPUT_TYPES(cls):
    method main (line 1671) | def main(self,
  class ClownGuide_Beta (line 2103) | class ClownGuide_Beta:
    method INPUT_TYPES (line 2105) | def INPUT_TYPES(cls):
    method main (line 2131) | def main(self,
  class ClownGuides_Beta (line 2209) | class ClownGuides_Beta:
    method INPUT_TYPES (line 2211) | def INPUT_TYPES(cls):
    method main (line 2244) | def main(self,
  class ClownGuidesAB_Beta (line 2362) | class ClownGuidesAB_Beta:
    method INPUT_TYPES (line 2364) | def INPUT_TYPES(cls):
    method main (line 2398) | def main(self,
  class ClownOptions_Combine (line 2508) | class ClownOptions_Combine:
    method INPUT_TYPES (line 2510) | def INPUT_TYPES(s):
    method main (line 2522) | def main(self, options, **kwargs):
  class ClownOptions_Frameweights (line 2528) | class ClownOptions_Frameweights:
    method INPUT_TYPES (line 2530) | def INPUT_TYPES(s):
    method main (line 2551) | def main(self,
  class SharkOptions_GuiderInput (line 2586) | class SharkOptions_GuiderInput:
    method INPUT_TYPES (line 2588) | def INPUT_TYPES(s):
    method main (line 2601) | def main(self, guider, options=None):
  class ClownGuide_AdaIN_MMDiT_Beta (line 2621) | class ClownGuide_AdaIN_MMDiT_Beta:
    method INPUT_TYPES (line 2623) | def INPUT_TYPES(cls):
    method main (line 2650) | def main(self,
  class ClownGuide_AttnInj_MMDiT_Beta (line 2757) | class ClownGuide_AttnInj_MMDiT_Beta:
    method INPUT_TYPES (line 2759) | def INPUT_TYPES(cls):
    method main (line 2803) | def main(self,
  class ClownGuide_StyleNorm_Advanced_HiDream (line 2944) | class ClownGuide_StyleNorm_Advanced_HiDream:
    method INPUT_TYPES (line 2947) | def INPUT_TYPES(cls):
    method main (line 3034) | def main(self,
  class ClownStyle_Boost (line 3269) | class ClownStyle_Boost:
    method INPUT_TYPES (line 3272) | def INPUT_TYPES(cls):
    method main (line 3297) | def main(self,
  class ClownStyle_MMDiT (line 3343) | class ClownStyle_MMDiT:
    method INPUT_TYPES (line 3346) | def INPUT_TYPES(cls):
    method main (line 3378) | def main(self,
  class ClownStyle_Block_MMDiT (line 3437) | class ClownStyle_Block_MMDiT:
    method INPUT_TYPES (line 3440) | def INPUT_TYPES(cls):
    method main (line 3478) | def main(self,
  class ClownStyle_Attn_MMDiT (line 3605) | class ClownStyle_Attn_MMDiT:
    method INPUT_TYPES (line 3608) | def INPUT_TYPES(cls):
    method main (line 3641) | def main(self,
  class ClownStyle_UNet (line 3768) | class ClownStyle_UNet:
    method INPUT_TYPES (line 3771) | def INPUT_TYPES(cls):
    method main (line 3803) | def main(self,
  class ClownStyle_Block_UNet (line 3873) | class ClownStyle_Block_UNet:
    method INPUT_TYPES (line 3876) | def INPUT_TYPES(cls):
    method main (line 3906) | def main(self,
  class ClownStyle_Attn_UNet (line 4006) | class ClownStyle_Attn_UNet:
    method INPUT_TYPES (line 4009) | def INPUT_TYPES(cls):
    method main (line 4041) | def main(self,
  class ClownStyle_ResBlock_UNet (line 4165) | class ClownStyle_ResBlock_UNet:
    method INPUT_TYPES (line 4168) | def INPUT_TYPES(cls):
    method main (line 4208) | def main(self,
  class ClownStyle_SpatialBlock_UNet (line 4331) | class ClownStyle_SpatialBlock_UNet:
    method INPUT_TYPES (line 4334) | def INPUT_TYPES(cls):
    method main (line 4367) | def main(self,
  class ClownStyle_TransformerBlock_UNet (line 4483) | class ClownStyle_TransformerBlock_UNet:
    method INPUT_TYPES (line 4486) | def INPUT_TYPES(cls):
    method main (line 4524) | def main(self,

FILE: chroma/layers.py
  class ChromaModulationOut (line 15) | class ChromaModulationOut(ModulationOut):
    method from_offset (line 17) | def from_offset(cls, tensor: torch.Tensor, offset: int = 0) -> Modulat...
  class Approximator (line 27) | class Approximator(nn.Module):
    method __init__ (line 28) | def __init__(self, in_dim: int, out_dim: int, hidden_dim: int, n_layer...
    method device (line 36) | def device(self):
    method forward (line 40) | def forward(self, x: Tensor) -> Tensor:
  class ReChromaDoubleStreamBlock (line 51) | class ReChromaDoubleStreamBlock(nn.Module):
    method __init__ (line 52) | def __init__(self, hidden_size: int, num_heads: int, mlp_ratio: float,...
    method forward (line 79) | def forward(self, img: Tensor, txt: Tensor, pe: Tensor, vec: Tensor, a...
  class ReChromaSingleStreamBlock (line 118) | class ReChromaSingleStreamBlock(nn.Module):
    method __init__ (line 124) | def __init__(
    method forward (line 153) | def forward(self, x: Tensor, pe: Tensor, vec: Tensor, attn_mask=None) ...
  class LastLayer (line 171) | class LastLayer(nn.Module):
    method __init__ (line 172) | def __init__(self, hidden_size: int, patch_size: int, out_channels: in...
    method forward (line 177) | def forward(self, x: Tensor, vec: Tensor) -> Tensor:

FILE: chroma/math.py
  function attention (line 8) | def attention(q: Tensor, k: Tensor, v: Tensor, pe: Tensor, mask=None) ->...
  function rope (line 21) | def rope(pos: Tensor, dim: int, theta: int) -> Tensor:
  function apply_rope (line 36) | def apply_rope(xq: Tensor, xk: Tensor, freqs_cis: Tensor):

FILE: chroma/model.py
  class ChromaParams (line 31) | class ChromaParams:
  class ReChroma (line 52) | class ReChroma(nn.Module):
    method __init__ (line 57) | def __init__(self, image_model=None, final_layer=True, dtype=None, dev...
    method get_modulations (line 118) | def get_modulations(self, tensor: torch.Tensor, block_type: str, *, id...
    method forward_blocks (line 146) | def forward_blocks(
    method forward_chroma_depr (line 355) | def forward_chroma_depr(self, x, timestep, context, guidance, control=...
    method _get_img_ids (line 374) | def _get_img_ids(self, x, bs, h_len, w_len, h_start, h_end, w_start, w...
    method forward (line 383) | def forward(self,
  function adain_seq (line 1072) | def adain_seq(content: torch.Tensor, style: torch.Tensor, eps: float = 1...
  function adain_seq_inplace (line 1077) | def adain_seq_inplace(content: torch.Tensor, style: torch.Tensor, eps: f...
  function gaussian_blur_2d (line 1094) | def gaussian_blur_2d(img: torch.Tensor, sigma: float, kernel_size: int =...
  function median_blur_2d (line 1120) | def median_blur_2d(img: torch.Tensor, kernel_size: int = 3) -> torch.Ten...
  function adain_patchwise (line 1135) | def adain_patchwise(content: torch.Tensor, style: torch.Tensor, sigma: f...
  function adain_patchwise_row_batch (line 1178) | def adain_patchwise_row_batch(content: torch.Tensor, style: torch.Tensor...
  function adain_patchwise_row_batch_medblur (line 1233) | def adain_patchwise_row_batch_medblur(content: torch.Tensor, style: torc...
  function adain_patchwise_row_batch_realmedblur (line 1320) | def adain_patchwise_row_batch_realmedblur(content: torch.Tensor, style: ...
  function patchwise_sort_transfer9 (line 1410) | def patchwise_sort_transfer9(src: torch.Tensor, ref: torch.Tensor) -> to...
  function masked_patchwise_sort_transfer9 (line 1421) | def masked_patchwise_sort_transfer9(
  function adain_patchwise_strict_sortmatch9 (line 1444) | def adain_patchwise_strict_sortmatch9(

FILE: conditioning.py
  function multiply_nested_tensors (line 29) | def multiply_nested_tensors(structure, scalar):
  function pad_to_same_tokens (line 41) | def pad_to_same_tokens(x1, x2, pad_value=0.0):
  class ConditioningOrthoCollin (line 52) | class ConditioningOrthoCollin:
    method INPUT_TYPES (line 54) | def INPUT_TYPES(cls):
    method combine (line 67) | def combine(self, conditioning_0, conditioning_1, t5_strength, clip_st...
  class CLIPTextEncodeFluxUnguided (line 96) | class CLIPTextEncodeFluxUnguided:
    method INPUT_TYPES (line 98) | def INPUT_TYPES(cls):
    method encode (line 111) | def encode(self, clip, clip_l, t5xxl):
  class StyleModelApplyStyle (line 134) | class StyleModelApplyStyle:
    method INPUT_TYPES (line 136) | def INPUT_TYPES(cls):
    method main (line 153) | def main(self, clip_vision_output, style_model, conditioning, strength...
  class ConditioningZeroAndTruncate (line 164) | class ConditioningZeroAndTruncate:
    method INPUT_TYPES (line 168) | def INPUT_TYPES(cls):
    method zero_out (line 177) | def zero_out(self, conditioning):
  class ConditioningTruncate (line 189) | class ConditioningTruncate:
    method INPUT_TYPES (line 192) | def INPUT_TYPES(cls):
    method zero_out (line 201) | def zero_out(self, conditioning):
  class ConditioningMultiply (line 213) | class ConditioningMultiply:
    method INPUT_TYPES (line 215) | def INPUT_TYPES(cls):
    method main (line 224) | def main(self, conditioning, multiplier):
  class ConditioningAdd (line 230) | class ConditioningAdd:
    method INPUT_TYPES (line 232) | def INPUT_TYPES(cls):
    method main (line 242) | def main(self, conditioning_1, conditioning_2, multiplier):
  class ConditioningCombine (line 252) | class ConditioningCombine:
    method INPUT_TYPES (line 254) | def INPUT_TYPES(cls):
    method combine (line 261) | def combine(self, conditioning_1, conditioning_2):
  class ConditioningAverage (line 266) | class ConditioningAverage :
    method INPUT_TYPES (line 268) | def INPUT_TYPES(cls):
    method addWeighted (line 281) | def addWeighted(self, conditioning_to, conditioning_from, conditioning...
  class ConditioningSetTimestepRange (line 308) | class ConditioningSetTimestepRange:
    method INPUT_TYPES (line 310) | def INPUT_TYPES(cls):
    method set_range (line 320) | def set_range(self, conditioning, start, end):
  class ConditioningAverageScheduler (line 325) | class ConditioningAverageScheduler: # don't think this is implemented co...
    method INPUT_TYPES (line 327) | def INPUT_TYPES(cls):
    method addWeighted (line 343) | def addWeighted(conditioning_to, conditioning_from, conditioning_to_st...
    method create_percent_array (line 371) | def create_percent_array(steps):
    method main (line 375) | def main(self, conditioning_0, conditioning_1, ratio):
  class StableCascade_StageB_Conditioning64 (line 389) | class StableCascade_StageB_Conditioning64:
    method INPUT_TYPES (line 391) | def INPUT_TYPES(cls):
    method set_prior (line 404) | def set_prior(self, conditioning, stage_c):
  class Conditioning_Recast64 (line 415) | class Conditioning_Recast64:
    method INPUT_TYPES (line 417) | def INPUT_TYPES(cls):
    method main (line 429) | def main(self, cond_0, cond_1 = None):
  class ConditioningToBase64 (line 442) | class ConditioningToBase64:
    method INPUT_TYPES (line 444) | def INPUT_TYPES(cls):
    method notify (line 462) | def notify(self, unique_id=None, extra_pnginfo=None, conditioning=None):
  class Base64ToConditioning (line 487) | class Base64ToConditioning:
    method INPUT_TYPES (line 489) | def INPUT_TYPES(cls):
    method main (line 500) | def main(self, data):
  class ConditioningDownsampleT5 (line 508) | class ConditioningDownsampleT5:
    method INPUT_TYPES (line 510) | def INPUT_TYPES(cls):
    method main (line 526) | def main(self, conditioning, token_limit):
  class ConditioningBatch4 (line 568) | class ConditioningBatch4:
    method INPUT_TYPES (line 570) | def INPUT_TYPES(cls):
    method main (line 587) | def main(self, conditioning_0, conditioning_1=None, conditioning_2=Non...
  class ConditioningBatch8 (line 604) | class ConditioningBatch8:
    method INPUT_TYPES (line 606) | def INPUT_TYPES(cls):
    method main (line 627) | def main(self, conditioning_0, conditioning_1=None, conditioning_2=Non...
  class EmptyConditioningGenerator (line 656) | class EmptyConditioningGenerator:
    method __init__ (line 657) | def __init__(self, model=None, conditioning=None, device=None, dtype=N...
    method get_empty_conditioning (line 725) | def get_empty_conditioning(self):
    method get_empty_conditionings (line 746) | def get_empty_conditionings(self, count):
    method zero_none_conditionings_ (line 749) | def zero_none_conditionings_(self, *conds):
  function zero_conditioning_from_list (line 770) | def zero_conditioning_from_list(conds):
  class TemporalMaskGenerator (line 790) | class TemporalMaskGenerator:
    method INPUT_TYPES (line 792) | def INPUT_TYPES(cls):
    method main (line 811) | def main(self,
  class TemporalSplitAttnMask_Midframe (line 832) | class TemporalSplitAttnMask_Midframe:
    method INPUT_TYPES (line 834) | def INPUT_TYPES(cls):
    method main (line 855) | def main(self,
  class TemporalSplitAttnMask (line 888) | class TemporalSplitAttnMask:
    method INPUT_TYPES (line 890) | def INPUT_TYPES(cls):
    method main (line 910) | def main(self,
  class TemporalCrossAttnMask (line 940) | class TemporalCrossAttnMask:
    method INPUT_TYPES (line 942) | def INPUT_TYPES(cls):
    method main (line 958) | def main(self,
  class RegionalParameters (line 979) | class RegionalParameters:
  class ClownRegionalConditioning_AB (line 1043) | class ClownRegionalConditioning_AB:
    method INPUT_TYPES (line 1045) | def INPUT_TYPES(cls):
    method create_callback (line 1073) | def create_callback(self, **kwargs):
    method main (line 1080) | def main(self,
    method prepare_regional_cond (line 1133) | def prepare_regional_cond(self,
  class ClownRegionalConditioning_ABC (line 1262) | class ClownRegionalConditioning_ABC:
    method INPUT_TYPES (line 1264) | def INPUT_TYPES(cls):
    method create_callback (line 1294) | def create_callback(self, **kwargs):
    method main (line 1301) | def main(self,
    method prepare_regional_cond (line 1356) | def prepare_regional_cond(self,
  class ClownRegionalConditioning2 (line 1492) | class ClownRegionalConditioning2(ClownRegionalConditioning_AB):
    method INPUT_TYPES (line 1494) | def INPUT_TYPES(cls):
    method main (line 1516) | def main(self, conditioning_masked, conditioning_unmasked, mask, **kwa...
  class ClownRegionalConditioning3 (line 1527) | class ClownRegionalConditioning3(ClownRegionalConditioning_ABC):
    method INPUT_TYPES (line 1529) | def INPUT_TYPES(cls):
    method main (line 1553) | def main(self, conditioning_unmasked, mask_A, mask_B, **kwargs):
  class ClownRegionalConditioning (line 1569) | class ClownRegionalConditioning:
    method INPUT_TYPES (line 1571) | def INPUT_TYPES(cls):
    method main (line 1590) | def main(self,
  class ClownRegionalConditionings (line 1625) | class ClownRegionalConditionings:
    method INPUT_TYPES (line 1627) | def INPUT_TYPES(cls):
    method create_callback (line 1651) | def create_callback(self, **kwargs):
    method main (line 1658) | def main(self,
    method prepare_regional_cond (line 1701) | def prepare_regional_cond(self,
  function merge_with_base (line 1808) | def merge_with_base(
  function best_hw (line 1869) | def best_hw(n): # get factor pair closest to a true square
  function downsample_tokens (line 1880) | def downsample_tokens(cond: torch.Tensor, target_tokens: int, mode="bicu...
  class CrossAttn_EraseReplace_HiDream (line 1908) | class CrossAttn_EraseReplace_HiDream:
    method INPUT_TYPES (line 1910) | def INPUT_TYPES(s):
    method encode (line 1929) | def encode(self, clip, t5xxl_erase, llama_erase, t5xxl_replace, llama_...
  class CrossAttn_EraseReplace_Flux (line 1965) | class CrossAttn_EraseReplace_Flux:
    method INPUT_TYPES (line 1967) | def INPUT_TYPES(s):
    method encode (line 1982) | def encode(self, clip, t5xxl_erase, llama_erase, t5xxl_replace, llama_...

FILE: flux/controlnet.py
  class MistolineCondDownsamplBlock (line 16) | class MistolineCondDownsamplBlock(nn.Module):
    method __init__ (line 17) | def __init__(self, dtype=None, device=None, operations=None):
    method forward (line 41) | def forward(self, x):
  class MistolineControlnetBlock (line 44) | class MistolineControlnetBlock(nn.Module):
    method __init__ (line 45) | def __init__(self, hidden_size, dtype=None, device=None, operations=No...
    method forward (line 50) | def forward(self, x):
  class ControlNetFlux (line 54) | class ControlNetFlux(Flux):
    method __init__ (line 55) | def __init__(self, latent_input=False, num_union_modes=0, mistoline=Fa...
    method forward_orig (line 111) | def forward_orig(
    method forward (line 179) | def forward(self, x, timesteps, context, y, guidance=None, hint=None, ...

FILE: flux/layers.py
  class EmbedND (line 18) | class EmbedND(nn.Module):
    method __init__ (line 19) | def __init__(self, dim: int, theta: int, axes_dim: list):
    method forward (line 25) | def forward(self, ids: Tensor) -> Tensor:
  function timestep_embedding (line 33) | def timestep_embedding(t: Tensor, dim, max_period=10000, time_factor: fl...
  class MLPEmbedder (line 54) | class MLPEmbedder(nn.Module):
    method __init__ (line 55) | def __init__(self, in_dim: int, hidden_dim: int, dtype=None, device=No...
    method forward (line 61) | def forward(self, x: Tensor) -> Tensor:
  class RMSNorm (line 65) | class RMSNorm(torch.nn.Module):
    method __init__ (line 66) | def __init__(self, dim: int, dtype=None, device=None, operations=None):
    method forward (line 70) | def forward(self, x: Tensor):
  class QKNorm (line 74) | class QKNorm(torch.nn.Module):
    method __init__ (line 75) | def __init__(self, dim: int, dtype=None, device=None, operations=None):
    method forward (line 80) | def forward(self, q: Tensor, k: Tensor, v: Tensor) -> tuple:
  class SelfAttention (line 86) | class SelfAttention(nn.Module):
    method __init__ (line 87) | def __init__(self, dim: int, num_heads: int = 8, qkv_bias: bool = Fals...
  class ModulationOut (line 98) | class ModulationOut:
  class Modulation (line 103) | class Modulation(nn.Module):
    method __init__ (line 104) | def __init__(self, dim: int, double: bool, dtype=None, device=None, op...
    method forward (line 110) | def forward(self, vec: Tensor) -> tuple:
  class DoubleStreamBlock (line 115) | class DoubleStreamBlock(nn.Module):
    method __init__ (line 116) | def __init__(self, hidden_size: int, num_heads: int, mlp_ratio: float,...
    method forward (line 149) | def forward(self, img: Tensor, txt: Tensor, vec: Tensor, pe: Tensor, m...
  class SingleStreamBlock (line 332) | class SingleStreamBlock(nn.Module):      #attn.shape = 1,4608,3072      ...
    method __init__ (line 337) | def __init__(self, hidden_size: int,  num_heads: int, mlp_ratio: float...
    method forward (line 361) | def forward(self, img: Tensor, vec: Tensor, pe: Tensor, mask=None, idx...
  class LastLayer (line 410) | class LastLayer(nn.Module):
    method __init__ (line 411) | def __init__(self, hidden_size: int, patch_size: int, out_channels: in...
    method forward (line 417) | def forward(self, x: Tensor, vec: Tensor) -> Tensor:
    method forward_scale_shift (line 424) | def forward_scale_shift(self, x: Tensor, vec: Tensor) -> Tensor:
    method forward_linear (line 430) | def forward_linear(self, x: Tensor, vec: Tensor) -> Tensor:

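The `timestep_embedding` entry above names the standard sinusoidal timestep embedding used throughout diffusion transformers. As a hedged sketch (the repo's exact `max_period`/`time_factor` handling and cos/sin ordering may differ), the common formulation is:

```python
import math
import torch

def timestep_embedding(t: torch.Tensor, dim: int, max_period: int = 10000) -> torch.Tensor:
    """Map scalar timesteps of shape (N,) to sinusoidal features of shape (N, dim)."""
    half = dim // 2
    # geometric frequency ladder from 1 down to 1/max_period
    freqs = torch.exp(-math.log(max_period) * torch.arange(half, dtype=torch.float32) / half)
    args = t[:, None].float() * freqs[None]
    emb = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)
    if dim % 2:  # pad odd dims with a zero column
        emb = torch.cat([emb, torch.zeros_like(emb[:, :1])], dim=-1)
    return emb
```

At `t = 0` every cosine term is 1 and every sine term is 0, which gives models an unambiguous "no noise" anchor.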
FILE: flux/math.py
  function attention (line 11) | def attention(q: Tensor, k: Tensor, v: Tensor, pe: Tensor, mask=None) ->...
  function rope (line 24) | def rope(pos: Tensor, dim: int, theta: int) -> Tensor:
  function apply_rope (line 39) | def apply_rope(xq: Tensor, xk: Tensor, freqs_cis: Tensor):

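The `rope`/`apply_rope` pair above refers to rotary position embedding. A minimal sketch in the common Flux/LLaMA-style convention (per-position 2x2 rotation matrices applied to channel pairs) follows; it is not copied from this repo, and shapes or dtypes there may differ:

```python
import torch

def rope(pos: torch.Tensor, dim: int, theta: int = 10000) -> torch.Tensor:
    """Build per-position 2x2 rotation matrices for `dim` channels (dim must be even)."""
    assert dim % 2 == 0
    scale = torch.arange(0, dim, 2, dtype=torch.float64) / dim
    omega = 1.0 / (theta ** scale)                    # (dim/2,) frequency ladder
    out = pos.unsqueeze(-1) * omega                   # (..., dim/2) rotation angles
    cos, sin = torch.cos(out), torch.sin(out)
    rot = torch.stack([cos, -sin, sin, cos], dim=-1)  # flattened [[cos,-sin],[sin,cos]]
    return rot.reshape(*out.shape, 2, 2)

def apply_rope(x: torch.Tensor, freqs: torch.Tensor) -> torch.Tensor:
    """Rotate consecutive channel pairs of x by the matrices from rope()."""
    x_ = x.float().reshape(*x.shape[:-1], -1, 1, 2)
    out = freqs[..., 0] * x_[..., 0] + freqs[..., 1] * x_[..., 1]
    return out.reshape(*x.shape).type_as(x)
```

Because each pair is rotated by a pure rotation, the per-token norm is preserved, and relative offsets between positions fall out of the attention dot product.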
FILE: flux/model.py
  class FluxParams (line 37) | class FluxParams:
  class ReFlux (line 53) | class ReFlux(Flux):
    method __init__ (line 54) | def __init__(self, image_model=None, final_layer=True, dtype=None, dev...
    method forward_blocks (line 92) | def forward_blocks(self,
    method process_img (line 264) | def process_img(self, x, index=0, h_offset=0, w_offset=0):
    method _get_img_ids (line 283) | def _get_img_ids(self, x, bs, h_len, w_len, h_start, h_end, w_start, w...
    method forward (line 290) | def forward(self,
    method expand_timesteps (line 947) | def expand_timesteps(self, t, batch_size, device):
  function clone_inputs (line 961) | def clone_inputs(*args, index: int=None):

FILE: flux/redux.py
  class ReReduxImageEncoder (line 8) | class ReReduxImageEncoder(torch.nn.Module):
    method __init__ (line 9) | def __init__(
    method forward (line 27) | def forward(self, sigclip_embeds) -> torch.Tensor:
    method feature_match (line 31) | def feature_match(self, cond, clip_vision_output, mode="WCT"):
  function adain_seq_inplace (line 113) | def adain_seq_inplace(content: torch.Tensor, style: torch.Tensor, eps: f...
  function adain_seq (line 123) | def adain_seq(content: torch.Tensor, style: torch.Tensor, eps: float = 1...

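`adain_seq` above names sequence-wise adaptive instance normalization, a standard style-transfer primitive. As a hedged sketch of the usual definition (the repo's exact reduction dims and `eps` placement may differ), the content features are re-statisticized to the style's per-feature mean and std over the token axis:

```python
import torch

def adain_seq(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Match content's per-feature mean/std (over tokens, dim=1) to style's.
    content, style: (batch, tokens, features)."""
    c_mean, c_std = content.mean(1, keepdim=True), content.std(1, keepdim=True)
    s_mean, s_std = style.mean(1, keepdim=True), style.std(1, keepdim=True)
    return (content - c_mean) / (c_std + eps) * (s_std + eps) + s_mean
```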
FILE: helper.py
  class ExtraOptions (line 18) | class ExtraOptions():
    method __init__ (line 19) | def __init__(self, extra_options):
    method __call__ (line 26) | def __call__(self, option, default=None, ret_type=None, match_all_flag...
  function extra_options_flag (line 89) | def extra_options_flag(flag, extra_options):
  function get_extra_options_kv (line 93) | def get_extra_options_kv(key, default, extra_options, ret_type=None):
  function get_extra_options_list (line 106) | def get_extra_options_list(key, default, extra_options, ret_type=None):
  class OptionsManager (line 129) | class OptionsManager:
    method __init__ (line 132) | def __init__(self, options, **kwargs):
    method add_option (line 143) | def add_option(self, option):
    method merged (line 150) | def merged(self):
    method update (line 201) | def update(self, key_or_dict, value=None, append=False):
    method get (line 238) | def get(self, key, default=None):
    method _deep_update (line 241) | def _deep_update(self, target_dict, source_dict):
    method __getitem__ (line 249) | def __getitem__(self, key):
    method __contains__ (line 253) | def __contains__(self, key):
    method as_dict (line 257) | def as_dict(self):
    method __bool__ (line 261) | def __bool__(self):
    method debug_print_options (line 265) | def debug_print_options(self):
  function has_nested_attr (line 279) | def has_nested_attr(obj, attr_path):
  function safe_get_nested (line 287) | def safe_get_nested(d, keys, default=None):
  class AlwaysTrueList (line 295) | class AlwaysTrueList:
    method __contains__ (line 296) | def __contains__(self, item):
    method __iter__ (line 299) | def __iter__(self):
  function parse_range_string (line 304) | def parse_range_string(s):
  function parse_range_string_int (line 317) | def parse_range_string_int(s):
  function parse_tile_sizes (line 330) | def parse_tile_sizes(tile_sizes: str):
  function is_video_model (line 345) | def is_video_model(model):
  function is_RF_model (line 356) | def is_RF_model(model):
  function get_res4lyf_scheduler_list (line 361) | def get_res4lyf_scheduler_list():
  function move_to_same_device (line 367) | def move_to_same_device(*tensors):
  function conditioning_set_values (line 373) | def conditioning_set_values(conditioning, values={}):
  function initialize_or_scale (line 388) | def initialize_or_scale(tensor, value, steps):
  function pad_tensor_list_to_max_len (line 395) | def pad_tensor_list_to_max_len(tensors: List[torch.Tensor], dim: int = -...
  class PrecisionTool (line 411) | class PrecisionTool:
    method __init__ (line 412) | def __init__(self, cast_type='fp64'):
    method cast_tensor (line 415) | def cast_tensor(self, func):
    method set_cast_type (line 451) | def set_cast_type(self, new_value):
  class FrameWeightsManager (line 462) | class FrameWeightsManager:
    method __init__ (line 463) | def __init__(self):
    method set_device_and_dtype (line 477) | def set_device_and_dtype(self, device=None, dtype=None):
    method set_custom_weights (line 485) | def set_custom_weights(self, config_name, weights):
    method add_weight_config (line 493) | def add_weight_config(self, name, **kwargs):
    method get_weight_config (line 504) | def get_weight_config(self, name):
    method get_frame_weights_by_name (line 509) | def get_frame_weights_by_name(self, name, num_frames, step=None):
    method _generate_custom_weights (line 539) | def _generate_custom_weights(self, num_frames, custom_string, step=None):
    method _generate_frame_weights (line 658) | def _generate_frame_weights(self, num_frames, dynamics, schedule, scal...
    method _generate_constant_schedule (line 745) | def _generate_constant_schedule(self, change_start, change_frames, low...
    method _generate_linear_schedule (line 749) | def _generate_linear_schedule(self, change_start, change_frames, low_v...
    method _generate_easeout_schedule (line 758) | def _generate_easeout_schedule(self, change_start, change_frames, low_...
    method _generate_easein_schedule (line 768) | def _generate_easein_schedule(self, change_start, change_frames, low_v...
    method _generate_middle_schedule (line 784) | def _generate_middle_schedule(self, change_start, change_frames, low_v...
    method _generate_trough_schedule (line 803) | def _generate_trough_schedule(self, change_start, change_frames, low_v...
  function check_projection_consistency (line 840) | def check_projection_consistency(x, W, b):
  function get_max_dtype (line 851) | def get_max_dtype(device='cpu'):

FILE: helper_sigma_preview_image_preproc.py
  class SaveImage (line 38) | class SaveImage:
    method __init__ (line 39) | def __init__(self):
    method INPUT_TYPES (line 46) | def INPUT_TYPES(cls):
    method save_images (line 65) | def save_images(self,
  class SigmasPreview (line 102) | class SigmasPreview(SaveImage):
    method __init__ (line 103) | def __init__(self):
    method INPUT_TYPES (line 110) | def INPUT_TYPES(self):
    method tensor_to_graph_image (line 126) | def tensor_to_graph_image(tensor, color='blue'):
    method sigmas_preview (line 142) | def sigmas_preview(self, sigmas, print_as_list, line_color):
  class VAEEncodeAdvanced (line 201) | class VAEEncodeAdvanced:
    method INPUT_TYPES (line 203) | def INPUT_TYPES(cls):
    method main (line 242) | def main(self,
  class VAEStyleTransferLatent (line 330) | class VAEStyleTransferLatent:
    method INPUT_TYPES (line 332) | def INPUT_TYPES(cls):
    method main (line 352) | def main(self,
  function apply_style_to_latent (line 428) | def apply_style_to_latent(denoised_embed, y0_embed, method="WCT"):
  function invert_conv2d (line 477) | def invert_conv2d(
  function adain_seq_inplace (line 585) | def adain_seq_inplace(content: torch.Tensor, style: torch.Tensor, eps: f...
  class LatentUpscaleWithVAE (line 625) | class LatentUpscaleWithVAE:
    method __init__ (line 626) | def __init__(self):
    method INPUT_TYPES (line 629) | def INPUT_TYPES(cls):
    method main (line 644) | def main(self,
  class SigmasSchedulePreview (line 700) | class SigmasSchedulePreview(SaveImage):
    method __init__ (line 701) | def __init__(self):
    method INPUT_TYPES (line 708) | def INPUT_TYPES(cls):
    method tensor_to_graph_image (line 733) | def tensor_to_graph_image(tensors, labels, colors, plot_min, plot_max,...
    method plot_schedule (line 774) | def plot_schedule(self, model, noise_mode, eta, s_noise, denoise, deno...

FILE: hidream/model.py
  class ModulationOut (line 29) | class ModulationOut:
  class BlockType (line 36) | class BlockType:
  class HDBlock (line 43) | class HDBlock(nn.Module):
    method __init__ (line 44) | def __init__(
    method forward (line 61) | def forward(
  class EmbedND (line 77) | class EmbedND(nn.Module):
    method __init__ (line 78) | def __init__(self, theta: int, axes_dim: List[int]):
    method forward (line 83) | def forward(self, ids: Tensor) -> Tensor:
  class PatchEmbed (line 88) | class PatchEmbed(nn.Module):
    method __init__ (line 89) | def __init__(
    method forward (line 101) | def forward(self, latent):
  class PooledEmbed (line 105) | class PooledEmbed(nn.Module):
    method __init__ (line 106) | def __init__(self, text_emb_dim, hidden_size, dtype=None, device=None,...
    method forward (line 110) | def forward(self, pooled_embed):
  class TimestepEmbed (line 113) | class TimestepEmbed(nn.Module):
    method __init__ (line 114) | def __init__(self, hidden_size, frequency_embedding_size=256, dtype=No...
    method forward (line 119) | def forward(self, t, wdtype):
  class TextProjection (line 124) | class TextProjection(nn.Module):
    method __init__ (line 125) | def __init__(self, in_features, hidden_size, dtype=None, device=None, ...
    method forward (line 129) | def forward(self, caption):
  class HDFeedForwardSwiGLU (line 138) | class HDFeedForwardSwiGLU(nn.Module):
    method __init__ (line 139) | def __init__(
    method forward (line 158) | def forward(self, x, style_block=None): # 1,4096,2560 ->
  class HDMoEGate (line 180) | class HDMoEGate(nn.Module):
    method __init__ (line 181) | def __init__(self, dim, num_routed_experts=4, num_activated_experts=2,...
    method forward (line 188) | def forward(self, x):
  class HDMOEFeedForwardSwiGLU (line 198) | class HDMOEFeedForwardSwiGLU(nn.Module):
    method __init__ (line 199) | def __init__(
    method forward (line 213) | def forward(self, x, style_block=None):
  function apply_passthrough (line 257) | def apply_passthrough(denoised_embed, *args, **kwargs):
  class AttentionBuffer (line 260) | class AttentionBuffer:
  function attention (line 264) | def attention(q: Tensor, k: Tensor, v: Tensor, rope: Tensor, mask: Optio...
  class HDAttention (line 284) | class HDAttention(nn.Module):
    method __init__ (line 285) | def __init__(
    method forward (line 320) | def forward(
  class HDBlockDouble (line 446) | class HDBlockDouble(nn.Module):
    method __init__ (line 449) | def __init__(
    method forward (line 475) | def forward(
  class HDBlockSingle (line 555) | class HDBlockSingle(nn.Module):
    method __init__ (line 558) | def __init__(
    method forward (line 579) | def forward(
  class HDModel (line 627) | class HDModel(nn.Module):
    method __init__ (line 631) | def __init__(
    method prepare_contexts (line 713) | def prepare_contexts(self, llama3, context, bsz, img_num_fea):
    method forward (line 732) | def forward(
    method expand_timesteps (line 1364) | def expand_timesteps(self, t, batch_size, device):
    method unpatchify (line 1379) | def unpatchify(self, x: Tensor, img_sizes: List[Tuple[int, int]]) -> L...
    method patchify (line 1391) | def patchify(self, x, max_seq, img_sizes=None):
  function clone_inputs (line 1417) | def clone_inputs(*args, index: int=None):
  function attention_rescale (line 1426) | def attention_rescale(
  class HDLastLayer (line 1446) | class HDLastLayer(nn.Module):
    method __init__ (line 1447) | def __init__(self, hidden_size: int, patch_size: int, out_channels: in...
    method forward (line 1453) | def forward(self, x: Tensor, vec: Tensor, modulation_dims=None) -> Ten...
  function apply_mod (line 1474) | def apply_mod(tensor, m_mult, m_add=None, modulation_dims=None):

FILE: images.py
  function tensor2pil (line 19) | def tensor2pil(image):
  function pil2tensor (line 23) | def pil2tensor(image):
  function freq_sep_fft (line 27) | def freq_sep_fft(img, cutoff=5, sigma=10):
  function color_dodge_blend (line 53) | def color_dodge_blend(base, blend):
  function color_scorch_blend (line 56) | def color_scorch_blend(base, blend):
  function divide_blend (line 59) | def divide_blend(base, blend):
  function color_burn_blend (line 62) | def color_burn_blend(base, blend):
  function hard_light_blend (line 65) | def hard_light_blend(base, blend):
  function hard_light_freq_sep (line 70) | def hard_light_freq_sep(original, low_pass):
  function linear_light_blend (line 74) | def linear_light_blend(base, blend):
  function linear_light_freq_sep (line 79) | def linear_light_freq_sep(base, blend):
  function scale_to_range (line 82) | def scale_to_range(value, min_old, max_old, min_new, max_new):
  function normalize_lab (line 86) | def normalize_lab(lab_image):
  function denormalize_lab (line 97) | def denormalize_lab(lab_normalized):
  function rgb_to_lab (line 108) | def rgb_to_lab(image):
  function lab_to_rgb (line 111) | def lab_to_rgb(image):
  function cv2_layer (line 116) | def cv2_layer(tensor, function):
  function image_resize (line 152) | def image_resize(image,
  class ImageRepeatTileToSize (line 258) | class ImageRepeatTileToSize:
    method __init__ (line 259) | def __init__(self):
    method INPUT_TYPES (line 263) | def INPUT_TYPES(cls):
    method main (line 278) | def main(self, image, width, height, crop,
  class Film_Grain (line 307) | class Film_Grain:
    method __init__ (line 308) | def __init__(self):
    method INPUT_TYPES (line 312) | def INPUT_TYPES(cls):
    method main (line 328) | def main(self, image, density, intensity, highlights, supersample_fact...
    method apply_film_grain (line 332) | def apply_film_grain(self, img, density=0.1, intensity=1.0, highlights...
  class Image_Grain_Add (line 375) | class Image_Grain_Add:
    method __init__ (line 376) | def __init__(self):
    method INPUT_TYPES (line 380) | def INPUT_TYPES(cls):
    method main (line 397) | def main(self, image, weight=0.5, density=1.0, intensity=1.0, highligh...
    method apply_film_grain (line 404) | def apply_film_grain(self, img, density=0.1, intensity=1.0, highlights...
  class Frequency_Separation_Hard_Light (line 448) | class Frequency_Separation_Hard_Light:
    method __init__ (line 449) | def __init__(self):
    method INPUT_TYPES (line 453) | def INPUT_TYPES(cls):
    method main (line 469) | def main(self, high_pass=None, original=None, low_pass=None):
  class Frequency_Separation_Hard_Light_LAB (line 480) | class Frequency_Separation_Hard_Light_LAB:
    method __init__ (line 481) | def __init__(self):
    method INPUT_TYPES (line 485) | def INPUT_TYPES(cls):
    method main (line 501) | def main(self, high_pass=None, original=None, low_pass=None):
  class Frame_Select (line 530) | class Frame_Select:
    method __init__ (line 531) | def __init__(self):
    method INPUT_TYPES (line 535) | def INPUT_TYPES(cls):
    method main (line 552) | def main(self, frames=None, select=0):
  class Frames_Slice (line 557) | class Frames_Slice:
    method __init__ (line 558) | def __init__(self):
    method INPUT_TYPES (line 562) | def INPUT_TYPES(cls):
    method main (line 579) | def main(self, frames=None, start=0, stop=1):
  class Frames_Concat (line 584) | class Frames_Concat:
    method __init__ (line 585) | def __init__(self):
    method INPUT_TYPES (line 589) | def INPUT_TYPES(cls):
    method main (line 605) | def main(self, frames_0, frames_1):
  class Image_Channels_LAB (line 611) | class Image_Channels_LAB:
    method __init__ (line 612) | def __init__(self):
    method INPUT_TYPES (line 616) | def INPUT_TYPES(cls):
    method main (line 633) | def main(self, RGB=None, L=None, A=None, B=None):
  class Frequency_Separation_Vivid_Light (line 646) | class Frequency_Separation_Vivid_Light:
    method __init__ (line 647) | def __init__(self):
    method INPUT_TYPES (line 651) | def INPUT_TYPES(cls):
    method main (line 666) | def main(self, high_pass=None, original=None, low_pass=None):
  class Frequency_Separation_Linear_Light (line 677) | class Frequency_Separation_Linear_Light:
    method __init__ (line 678) | def __init__(self):
    method INPUT_TYPES (line 682) | def INPUT_TYPES(cls):
    method main (line 698) | def main(self, high_pass=None, original=None, low_pass=None):
  class Frequency_Separation_FFT (line 709) | class Frequency_Separation_FFT:
    method __init__ (line 710) | def __init__(self):
    method INPUT_TYPES (line 714) | def INPUT_TYPES(cls):
    method main (line 732) | def main(self, high_pass=None, original=None, low_pass=None, cutoff=5....
  class ImageSharpenFS (line 745) | class ImageSharpenFS:
    method __init__ (line 746) | def __init__(self):
    method INPUT_TYPES (line 750) | def INPUT_TYPES(cls):
    method main (line 767) | def main(self, images, method, type, intensity):
  class ImageMedianBlur (line 792) | class ImageMedianBlur:
    method __init__ (line 793) | def __init__(self):
    method INPUT_TYPES (line 797) | def INPUT_TYPES(cls):
    method main (line 810) | def main(self, images, size):
  class ImageGaussianBlur (line 820) | class ImageGaussianBlur:
    method __init__ (line 821) | def __init__(self):
    method INPUT_TYPES (line 825) | def INPUT_TYPES(cls):
    method main (line 838) | def main(self, images, size):
  function fast_smudge_blur_comfyui (line 848) | def fast_smudge_blur_comfyui(img, kernel_size=51):
  class FastSmudgeBlur (line 869) | class FastSmudgeBlur:
    method __init__ (line 870) | def __init__(self):
    method INPUT_TYPES (line 874) | def INPUT_TYPES(cls):
    method main (line 887) | def main(self, images, kernel_size):
  class Image_Pair_Split (line 915) | class Image_Pair_Split:
    method INPUT_TYPES (line 917) | def INPUT_TYPES(s):
    method main (line 928) | def main(self, img_pair):
  class Image_Crop_Location_Exact (line 935) | class Image_Crop_Location_Exact:
    method __init__ (line 936) | def __init__(self):
    method INPUT_TYPES (line 940) | def INPUT_TYPES(cls):
    method main (line 957) | def main(self, image, x=0, y=0, width=256, height=256, edge="original"):
  class Masks_Unpack4 (line 989) | class Masks_Unpack4:
    method INPUT_TYPES (line 991) | def INPUT_TYPES(s):
    method main (line 1003) | def main(self, masks,):
  class Masks_Unpack8 (line 1006) | class Masks_Unpack8:
    method INPUT_TYPES (line 1008) | def INPUT_TYPES(s):
    method main (line 1020) | def main(self, masks,):
  class Masks_Unpack16 (line 1023) | class Masks_Unpack16:
    method INPUT_TYPES (line 1025) | def INPUT_TYPES(s):
    method main (line 1037) | def main(self, masks,):
  class Image_Get_Color_Swatches (line 1045) | class Image_Get_Color_Swatches:
    method INPUT_TYPES (line 1047) | def INPUT_TYPES(s):
    method main (line 1059) | def main(self, image_color_swatches):
  class Masks_From_Color_Swatches (line 1066) | class Masks_From_Color_Swatches:
    method INPUT_TYPES (line 1068) | def INPUT_TYPES(s):
    method main (line 1081) | def main(self, image_color_mask, color_swatches):
  class Masks_From_Colors (line 1090) | class Masks_From_Colors:
    method INPUT_TYPES (line 1092) | def INPUT_TYPES(s):
    method main (line 1105) | def main(self, image_color_swatches, image_color_mask, ):
  function read_swatch_colors (line 1131) | def read_swatch_colors(
  function build_masks_from_swatch (line 1181) | def build_masks_from_swatch(
  function _remove_small_components (line 1259) | def _remove_small_components(
  function cleanup_and_fill_masks (line 1305) | def cleanup_and_fill_masks(
  class MaskSketch (line 1348) | class MaskSketch:
    method INPUT_TYPES (line 1350) | def INPUT_TYPES(s):
    method load_image (line 1362) | def load_image(self, image):
    method load_image_orig (line 1373) | def load_image_orig(self, image):
    method IS_CHANGED (line 1418) | def IS_CHANGED(s, image):
    method VALIDATE_INPUTS (line 1426) | def VALIDATE_INPUTS(s, image):
  class MaskBoundingBoxAspectRatio (line 1442) | class MaskBoundingBoxAspectRatio:
    method INPUT_TYPES (line 1444) | def INPUT_TYPES(s):
    method execute (line 1463) | def execute(self, mask, padding, blur, aspect_ratio, transpose, image=...

FILE: latent_images.py
  function initialize_or_scale (line 14) | def initialize_or_scale(tensor, value, steps):
  function latent_normalize_channels (line 20) | def latent_normalize_channels(x):
  function latent_stdize_channels (line 25) | def latent_stdize_channels(x):
  function latent_meancenter_channels (line 29) | def latent_meancenter_channels(x):
  class latent_channelwise_match (line 34) | class latent_channelwise_match:
    method __init__ (line 35) | def __init__(self):
    method INPUT_TYPES (line 38) | def INPUT_TYPES(s):
    method main (line 58) | def main(self, model, latent_target, mask_target, latent_source, mask_...

FILE: latents.py
  function get_cosine_similarity_manual (line 9) | def get_cosine_similarity_manual(a, b):
  function get_cosine_similarity (line 12) | def get_cosine_similarity(a, b, mask=None, dim=0):
  function get_pearson_similarity (line 21) | def get_pearson_similarity(a, b, mask=None, dim=0, norm_dim=None):
  function get_collinear (line 41) | def get_collinear(x, y):
  function get_orthogonal (line 44) | def get_orthogonal(x, y):
  function get_collinear_flat (line 49) | def get_collinear_flat(x, y):
  function get_orthogonal_noise_from_channelwise (line 61) | def get_orthogonal_noise_from_channelwise(*refs, max_iter=500, max_score...
  function gram_schmidt_channels_optimized (line 87) | def gram_schmidt_channels_optimized(A, *refs):
  function attention_weights (line 110) | def attention_weights(
  function attention_weights_orig (line 132) | def attention_weights_orig(q, k):
  function get_slerp_weight_for_cossim (line 143) | def get_slerp_weight_for_cossim(cos_sim, target_cos):
  function get_slerp_ratio (line 173) | def get_slerp_ratio(cos_sim_A, cos_sim_B, target_cos):
  function find_slerp_ratio_grid (line 186) | def find_slerp_ratio_grid(A: torch.Tensor, B: torch.Tensor, D: torch.Ten...
  function compute_slerp_ratio_for_target (line 209) | def compute_slerp_ratio_for_target(A: torch.Tensor, B: torch.Tensor, D: ...
  function normalize_zscore (line 246) | def normalize_zscore(x, channelwise=False, inplace=False):
  function latent_normalize_channels (line 258) | def latent_normalize_channels(x):
  function latent_stdize_channels (line 263) | def latent_stdize_channels(x):
  function latent_meancenter_channels (line 267) | def latent_meancenter_channels(x):
  function lagrange_interpolation (line 275) | def lagrange_interpolation(x_values, y_values, x_new):
  function line_intersection (line 314) | def line_intersection(a: torch.Tensor, d1: torch.Tensor, b: torch.Tensor...
  function slerp_direction (line 351) | def slerp_direction(t: float, u0: torch.Tensor, u1: torch.Tensor, DOT_TH...
  function magnitude_aware_interpolation (line 363) | def magnitude_aware_interpolation(t: float, v0: torch.Tensor, v1: torch....
  function slerp_tensor (line 377) | def slerp_tensor(val: torch.Tensor, low: torch.Tensor, high: torch.Tenso...
  function slerp (line 439) | def slerp(v0: FloatTensor, v1: FloatTensor, t: float|FloatTensor, DOT_TH...
  function normalize_latent (line 500) | def normalize_latent(target, source=None, mean=True, std=True, set_mean=...
  function hard_light_blend (line 553) | def hard_light_blend(base_latent, blend_latent):
  function make_checkerboard (line 591) | def make_checkerboard(tile_size: int, num_tiles: int, dtype=torch.float1...
  function get_edge_mask_slug (line 599) | def get_edge_mask_slug(mask: torch.Tensor, dilation: int = 3) -> torch.T...
  function get_edge_mask (line 616) | def get_edge_mask(mask: torch.Tensor, dilation: int = 3) -> torch.Tensor:
  function checkerboard_variable (line 635) | def checkerboard_variable(widths, dtype=torch.float16, device='cpu'):
  function interpolate_spd (line 654) | def interpolate_spd(cov1, cov2, t, eps=1e-5):
  function tile_latent (line 699) | def tile_latent(latent: torch.Tensor,
  function untile_latent (line 749) | def untile_latent(tiles: torch.Tensor,
  function upscale_to_match_spatial (line 801) | def upscale_to_match_spatial(tensor_5d, ref_4d, mode='bicubic'):
  function gaussian_blur_2d (line 824) | def gaussian_blur_2d(img: torch.Tensor, sigma: float, kernel_size: int =...
  function median_blur_2d (line 850) | def median_blur_2d(img: torch.Tensor, kernel_size: int = 3) -> torch.Ten...
  function apply_to_state_info_tensors (line 864) | def apply_to_state_info_tensors(obj, ref_shape, modify_func, *args, **kw...

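Several entries above (`slerp`, `slerp_tensor`, `slerp_direction`) build on classic spherical linear interpolation with a linear fallback for near-parallel vectors. A minimal sketch of that standard technique (this repo's versions add flattening, dim handling, and thresholds not shown here):

```python
import torch

def slerp(v0: torch.Tensor, v1: torch.Tensor, t: float, dot_threshold: float = 0.9995) -> torch.Tensor:
    """Interpolate along the great-circle arc between v0 and v1 at fraction t."""
    u0 = v0 / v0.norm()
    u1 = v1 / v1.norm()
    dot = (u0 * u1).sum().clamp(-1.0, 1.0)
    if dot.abs() > dot_threshold:   # nearly colinear: lerp is numerically stable
        return (1.0 - t) * v0 + t * v1
    theta = torch.acos(dot)         # angle between the inputs
    s0 = torch.sin((1.0 - t) * theta) / torch.sin(theta)
    s1 = torch.sin(t * theta) / torch.sin(theta)
    return s0 * v0 + s1 * v1
```

Slerp is preferred over lerp for Gaussian latents because it roughly preserves the norm expected by the model, where lerp between unit-norm endpoints passes through shorter vectors.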
FILE: legacy/__init__.py
  function add_legacy (line 11) | def add_legacy(NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS, extra_sa...

FILE: legacy/conditioning.py
  function multiply_nested_tensors (line 22) | def multiply_nested_tensors(structure, scalar):
  class ConditioningOrthoCollin (line 35) | class ConditioningOrthoCollin:
    method INPUT_TYPES (line 37) | def INPUT_TYPES(s):
    method combine (line 49) | def combine(self, conditioning_0, conditioning_1, t5_strength, clip_st...
  class CLIPTextEncodeFluxUnguided (line 78) | class CLIPTextEncodeFluxUnguided:
    method INPUT_TYPES (line 80) | def INPUT_TYPES(s):
    method encode (line 92) | def encode(self, clip, clip_l, t5xxl):
  class StyleModelApplyAdvanced (line 115) | class StyleModelApplyAdvanced:
    method INPUT_TYPES (line 117) | def INPUT_TYPES(s):
    method main (line 128) | def main(self, clip_vision_output, style_model, conditioning, strength...
  class ConditioningZeroAndTruncate (line 138) | class ConditioningZeroAndTruncate:
    method INPUT_TYPES (line 142) | def INPUT_TYPES(s):
    method zero_out (line 151) | def zero_out(self, conditioning):
  class ConditioningTruncate (line 163) | class ConditioningTruncate:
    method INPUT_TYPES (line 166) | def INPUT_TYPES(s):
    method zero_out (line 174) | def zero_out(self, conditioning):
  class ConditioningMultiply (line 186) | class ConditioningMultiply:
    method INPUT_TYPES (line 188) | def INPUT_TYPES(s):
    method main (line 197) | def main(self, conditioning, multiplier):
  class ConditioningAdd (line 203) | class ConditioningAdd:
    method INPUT_TYPES (line 205) | def INPUT_TYPES(s):
    method main (line 215) | def main(self, conditioning_1, conditioning_2, multiplier):
  class ConditioningCombine (line 225) | class ConditioningCombine:
    method INPUT_TYPES (line 227) | def INPUT_TYPES(s):
    method combine (line 234) | def combine(self, conditioning_1, conditioning_2):
  class ConditioningAverage (line 239) | class ConditioningAverage :
    method INPUT_TYPES (line 241) | def INPUT_TYPES(s):
    method addWeighted (line 250) | def addWeighted(self, conditioning_to, conditioning_from, conditioning...
  class ConditioningSetTimestepRange (line 277) | class ConditioningSetTimestepRange:
    method INPUT_TYPES (line 279) | def INPUT_TYPES(s):
    method set_range (line 289) | def set_range(self, conditioning, start, end):
  class ConditioningAverageScheduler (line 294) | class ConditioningAverageScheduler: # don't think this is implemented co...
    method INPUT_TYPES (line 296) | def INPUT_TYPES(s):
    method addWeighted (line 311) | def addWeighted(conditioning_to, conditioning_from, conditioning_to_st...
    method create_percent_array (line 339) | def create_percent_array(steps):
    method main (line 343) | def main(self, conditioning_0, conditioning_1, ratio):
  class StableCascade_StageB_Conditioning64 (line 357) | class StableCascade_StageB_Conditioning64:
    method INPUT_TYPES (line 359) | def INPUT_TYPES(s):
    method set_prior (line 370) | def set_prior(self, conditioning, stage_c):
  class Conditioning_Recast64 (line 381) | class Conditioning_Recast64:
    method INPUT_TYPES (line 383) | def INPUT_TYPES(s):
    method main (line 396) | def main(self, cond_0, cond_1 = None):
  class ConditioningToBase64 (line 407) | class ConditioningToBase64:
    method INPUT_TYPES (line 409) | def INPUT_TYPES(s):
    method notify (line 427) | def notify(self, unique_id=None, extra_pnginfo=None, conditioning=None):
  class Base64ToConditioning (line 451) | class Base64ToConditioning:
    method INPUT_TYPES (line 453) | def INPUT_TYPES(s):
    method main (line 465) | def main(self, data):
  class RegionalMask (line 477) | class RegionalMask(torch.nn.Module):
    method __init__ (line 478) | def __init__(self, mask: torch.Tensor, conditioning: torch.Tensor, con...
    method __call__ (line 491) | def __call__(self, transformer_options, weight=0, dtype=torch.bfloat16...
  class RegionalConditioning (line 559) | class RegionalConditioning(torch.nn.Module):
    method __init__ (line 560) | def __init__(self, conditioning: torch.Tensor, region_cond: torch.Tens...
    method __call__ (line 568) | def __call__(self, transformer_options, dtype=torch.bfloat16, *args,  ...
    method concat_cond (line 574) | def concat_cond(self, context, transformer_options, dtype=torch.bfloat...
  class FluxRegionalPrompt (line 586) | class FluxRegionalPrompt:
    method INPUT_TYPES (line 588) | def INPUT_TYPES(s):
    method main (line 602) | def main(self, cond, mask, cond_regional=[]):
  function fp_not (line 608) | def fp_not(tensor):
  function fp_or (line 611) | def fp_or(tensor1, tensor2):
  function fp_and (line 614) | def fp_and(tensor1, tensor2):
  class RegionalGenerateConditioningsAndMasks (line 617) | class RegionalGenerateConditioningsAndMasks:
    method __init__ (line 618) | def __init__(self, conditioning, conditioning_regional, weight, start_...
    method __call__ (line 626) | def __call__(self, latent):
  class FluxRegionalConditioning (line 681) | class FluxRegionalConditioning:
    method INPUT_TYPES (line 683) | def INPUT_TYPES(s):
    method main (line 705) | def main(self, conditioning_regional, mask_weight=1.0, start_percent=0...

FILE: legacy/deis_coefficients.py
  function edm2t (line 14) | def edm2t(edm_steps, epsilon_s=1e-3, sigma_min=0.002, sigma_max=80):
  function cal_poly (line 24) | def cal_poly(prev_t, j, taus):
  function t2alpha_fn (line 35) | def t2alpha_fn(beta_0, beta_1, t):
  function cal_integrand (line 40) | def cal_integrand(beta_0, beta_1, taus):
  function get_deis_coeff_list (line 56) | def get_deis_coeff_list(t_steps, max_order, N=10000, deis_mode='tab'):
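
The `cal_poly` / `get_deis_coeff_list` entries above follow the standard DEIS recipe: in `'tab'` mode, each coefficient is the integral of a Lagrange basis polynomial (built over previous timesteps) across one solver step. A minimal sketch of that idea; the function names and the trapezoid-rule quadrature here are illustrative assumptions, not this file's actual implementation:

```python
def lagrange_basis(j, taus, t):
    # l_j(t) = prod_{m != j} (t - tau_m) / (tau_j - tau_m)
    val = 1.0
    for m, tau in enumerate(taus):
        if m != j:
            val *= (t - tau) / (taus[j] - tau)
    return val

def deis_tab_coeff(j, taus, t_lo, t_hi, n=10000):
    # Trapezoid-rule integral of the j-th basis polynomial over one step;
    # DEIS uses these integrals as multistep weights for past epsilons.
    h = (t_hi - t_lo) / n
    total = 0.5 * (lagrange_basis(j, taus, t_lo) + lagrange_basis(j, taus, t_hi))
    total += sum(lagrange_basis(j, taus, t_lo + i * h) for i in range(1, n))
    return total * h
```

The Lagrange basis forms a partition of unity, so the weights of a first-order step recover a plain Euler step.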

FILE: legacy/flux/controlnet.py
  class MistolineCondDownsamplBlock (line 16) | class MistolineCondDownsamplBlock(nn.Module):
    method __init__ (line 17) | def __init__(self, dtype=None, device=None, operations=None):
    method forward (line 41) | def forward(self, x):
  class MistolineControlnetBlock (line 44) | class MistolineControlnetBlock(nn.Module):
    method __init__ (line 45) | def __init__(self, hidden_size, dtype=None, device=None, operations=No...
    method forward (line 50) | def forward(self, x):
  class ControlNetFlux (line 54) | class ControlNetFlux(Flux):
    method __init__ (line 55) | def __init__(self, latent_input=False, num_union_modes=0, mistoline=Fa...
    method forward_orig (line 111) | def forward_orig(
    method forward (line 179) | def forward(self, x, timesteps, context, y, guidance=None, hint=None, ...

FILE: legacy/flux/layers.py
  class EmbedND (line 15) | class EmbedND(nn.Module):
    method __init__ (line 16) | def __init__(self, dim: int, theta: int, axes_dim: list):
    method forward (line 22) | def forward(self, ids: Tensor) -> Tensor:
  function attention_weights (line 31) | def attention_weights(q, k):
  function timestep_embedding (line 40) | def timestep_embedding(t: Tensor, dim, max_period=10000, time_factor: fl...
  class MLPEmbedder (line 61) | class MLPEmbedder(nn.Module):
    method __init__ (line 62) | def __init__(self, in_dim: int, hidden_dim: int, dtype=None, device=No...
    method forward (line 68) | def forward(self, x: Tensor) -> Tensor:
  class RMSNorm (line 72) | class RMSNorm(torch.nn.Module):
    method __init__ (line 73) | def __init__(self, dim: int, dtype=None, device=None, operations=None):
    method forward (line 77) | def forward(self, x: Tensor):
  class QKNorm (line 81) | class QKNorm(torch.nn.Module):
    method __init__ (line 82) | def __init__(self, dim: int, dtype=None, device=None, operations=None):
    method forward (line 87) | def forward(self, q: Tensor, k: Tensor, v: Tensor) -> tuple:
  class SelfAttention (line 93) | class SelfAttention(nn.Module):
    method __init__ (line 94) | def __init__(self, dim: int, num_heads: int = 8, qkv_bias: bool = Fals...
  class ModulationOut (line 105) | class ModulationOut:
  class Modulation (line 110) | class Modulation(nn.Module):
    method __init__ (line 111) | def __init__(self, dim: int, double: bool, dtype=None, device=None, op...
    method forward (line 117) | def forward(self, vec: Tensor) -> tuple:
  class DoubleStreamBlock (line 122) | class DoubleStreamBlock(nn.Module):
    method __init__ (line 123) | def __init__(self, hidden_size: int, num_heads: int, mlp_ratio: float,...
    method img_attn_preproc (line 155) | def img_attn_preproc(self, img, img_mod1):
    method txt_attn_preproc (line 163) | def txt_attn_preproc(self, txt, txt_mod1):
    method forward (line 171) | def forward(self, img: Tensor, txt: Tensor, vec: Tensor, pe: Tensor, t...
  class SingleStreamBlock (line 220) | class SingleStreamBlock(nn.Module):
    method __init__ (line 225) | def __init__(self, hidden_size: int,  num_heads: int, mlp_ratio: float...
    method img_attn (line 247) | def img_attn(self, img, mod, pe, mask, weight):
    method forward (line 265) | def forward(self, img: Tensor, vec: Tensor, pe: Tensor, timestep, tran...
  class LastLayer (line 275) | class LastLayer(nn.Module):
    method __init__ (line 276) | def __init__(self, hidden_size: int, patch_size: int, out_channels: in...
    method forward (line 282) | def forward(self, x: Tensor, vec: Tensor) -> Tensor:
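
The `timestep_embedding` entry above matches the sinusoidal embedding commonly used in diffusion transformers: a geometric frequency ladder applied to the (scaled) timestep, with cosine and sine halves concatenated. A scalar sketch under that assumption (the real function is a batched tensor op):

```python
import math

def timestep_embedding(t, dim, max_period=10000.0, time_factor=1000.0):
    # Geometric frequency ladder from 1 down to 1/max_period,
    # then [cos | sin] halves concatenated to length dim (dim even).
    half = dim // 2
    freqs = [math.exp(-math.log(max_period) * i / half) for i in range(half)]
    args = [t * time_factor * f for f in freqs]
    return [math.cos(a) for a in args] + [math.sin(a) for a in args]
```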

FILE: legacy/flux/math.py
  function attention (line 7) | def attention(q: Tensor, k: Tensor, v: Tensor, pe: Tensor, mask=None) ->...
  function rope (line 15) | def rope(pos: Tensor, dim: int, theta: int) -> Tensor:
  function apply_rope (line 30) | def apply_rope(xq: Tensor, xk: Tensor, freqs_cis: Tensor):
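
The `rope` / `apply_rope` pair above is Flux-style rotary position embedding: each (even, odd) coordinate pair of the query/key vectors is rotated by a position-dependent angle. A minimal scalar sketch of the rotation (the real functions operate on complex-paired tensors with multi-axis position ids):

```python
import math

def rope_freqs(pos, dim, theta=10000.0):
    # One rotation angle per coordinate pair: pos * theta**(-2i/dim).
    return [pos * theta ** (-2.0 * i / dim) for i in range(dim // 2)]

def apply_rope(vec, pos, theta=10000.0):
    # Rotate each (even, odd) pair of vec by its positional angle.
    out = []
    for i, ang in enumerate(rope_freqs(pos, len(vec), theta)):
        c, s = math.cos(ang), math.sin(ang)
        x, y = vec[2 * i], vec[2 * i + 1]
        out += [x * c - y * s, x * s + y * c]
    return out
```

Because each pair is rotated, the vector's norm is preserved and position 0 is the identity, which is what makes RoPE compatible with pre-trained attention weights.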

FILE: legacy/flux/model.py
  class FluxParams (line 24) | class FluxParams:
  class ReFlux (line 40) | class ReFlux(Flux):
    method __init__ (line 41) | def __init__(self, image_model=None, final_layer=True, dtype=None, dev...
    method forward_blocks (line 79) | def forward_blocks(self, img: Tensor, img_ids: Tensor, txt: Tensor, tx...
    method _get_img_ids (line 151) | def _get_img_ids(self, x, bs, h_len, w_len, h_start, h_end, w_start, w...
    method forward (line 160) | def forward(self, x, timestep, context, y, guidance, control=None, tra...

FILE: legacy/flux/redux.py
  class ReduxImageEncoder (line 6) | class ReduxImageEncoder(torch.nn.Module):
    method __init__ (line 7) | def __init__(
    method forward (line 23) | def forward(self, sigclip_embeds) -> torch.Tensor:

FILE: legacy/helper.py
  function get_extra_options_kv (line 8) | def get_extra_options_kv(key, default, extra_options):
  function get_extra_options_list (line 17) | def get_extra_options_list(key, default, extra_options):
  function extra_options_flag (line 26) | def extra_options_flag(flag, extra_options):
  function safe_get_nested (line 29) | def safe_get_nested(d, keys, default=None):
  function is_video_model (line 37) | def is_video_model(model):
  function is_RF_model (line 46) | def is_RF_model(model):
  function lagrange_interpolation (line 53) | def lagrange_interpolation(x_values, y_values, x_new):
  function get_cosine_similarity_manual (line 93) | def get_cosine_similarity_manual(a, b):
  function get_cosine_similarity (line 98) | def get_cosine_similarity(a, b):
  function get_pearson_similarity (line 104) | def get_pearson_similarity(a, b):
  function initialize_or_scale (line 113) | def initialize_or_scale(tensor, value, steps):
  function has_nested_attr (line 120) | def has_nested_attr(obj, attr_path):
  function get_res4lyf_scheduler_list (line 128) | def get_res4lyf_scheduler_list():
  function conditioning_set_values (line 134) | def conditioning_set_values(conditioning, values={}):
  function get_collinear_alt (line 145) | def get_collinear_alt(x, y):
  function get_collinear (line 156) | def get_collinear(x, y):
  function get_orthogonal (line 167) | def get_orthogonal(x, y):
  function slerp (line 199) | def slerp(v0: FloatTensor, v1: FloatTensor, t: float|FloatTensor, DOT_TH...
  class OptionsManager (line 260) | class OptionsManager:
    method __init__ (line 263) | def __init__(self, options_inputs=None):
    method add_option (line 267) | def add_option(self, option):
    method merged (line 274) | def merged(self):
    method get (line 307) | def get(self, key, default=None):
    method _deep_update (line 310) | def _deep_update(self, target_dict, source_dict):
    method __getitem__ (line 319) | def __getitem__(self, key):
    method __contains__ (line 323) | def __contains__(self, key):
    method as_dict (line 327) | def as_dict(self):
    method __bool__ (line 331) | def __bool__(self):
    method debug_print_options (line 335) | def debug_print_options(self):
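
The `slerp` entry above (note its `DOT_TH...` threshold parameter) is the usual spherical linear interpolation with a fallback to plain lerp when the inputs are nearly parallel, where `acos` is numerically unstable. A list-based sketch under that assumption:

```python
import math

def slerp(v0, v1, t, dot_threshold=0.9995):
    # Spherical interpolation between two vectors; falls back to lerp
    # when they are nearly parallel (acos is unstable near dot = +/-1).
    n0 = math.sqrt(sum(a * a for a in v0))
    n1 = math.sqrt(sum(b * b for b in v1))
    dot = sum(a * b for a, b in zip(v0, v1)) / (n0 * n1)
    if abs(dot) > dot_threshold:
        return [a + t * (b - a) for a, b in zip(v0, v1)]
    theta = math.acos(dot)
    w0 = math.sin((1.0 - t) * theta) / math.sin(theta)
    w1 = math.sin(t * theta) / math.sin(theta)
    return [w0 * a + w1 * b for a, b in zip(v0, v1)]
```

Unlike lerp, slerp keeps the interpolant on the sphere: the midpoint of two unit vectors is itself unit-norm, which matters when blending Gaussian noise latents.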

FILE: legacy/latents.py
  function initialize_or_scale (line 16) | def initialize_or_scale(tensor, value, steps):
  function latent_normalize_channels (line 22) | def latent_normalize_channels(x):
  function latent_stdize_channels (line 27) | def latent_stdize_channels(x):
  function latent_meancenter_channels (line 31) | def latent_meancenter_channels(x):
  function initialize_or_scale (line 36) | def initialize_or_scale(tensor, value, steps):
  function normalize_latent (line 43) | def normalize_latent(target, source=None, mean=True, std=True, set_mean=...
  class AdvancedNoise (line 97) | class AdvancedNoise:
    method INPUT_TYPES (line 99) | def INPUT_TYPES(s):
    method get_noise (line 113) | def get_noise(self, noise_seed, noise_type, alpha, k):
  class Noise_RandomNoise (line 116) | class Noise_RandomNoise:
    method __init__ (line 117) | def __init__(self, seed, noise_type, alpha, k):
    method generate_noise (line 123) | def generate_noise(self, input_latent):
  class LatentNoised (line 129) | class LatentNoised:
    method INPUT_TYPES (line 131) | def INPUT_TYPES(s):
    method main (line 158) | def main(self, add_noise, noise_is_latent, noise_type, noise_seed, alp...
  class MaskToggle (line 202) | class MaskToggle:
    method __init__ (line 203) | def __init__(self):
    method INPUT_TYPES (line 206) | def INPUT_TYPES(s):
    method main (line 220) | def main(self, enable=True, mask=None):
  class set_precision (line 227) | class set_precision:
    method __init__ (line 228) | def __init__(self):
    method INPUT_TYPES (line 231) | def INPUT_TYPES(s):
    method main (line 246) | def main(self, precision="32", latent_image=None, set_default=False):
  class set_precision_universal (line 263) | class set_precision_universal:
    method __init__ (line 264) | def __init__(self):
    method INPUT_TYPES (line 267) | def INPUT_TYPES(s):
    method main (line 287) | def main(self, precision="fp32", cond_pos=None, cond_neg=None, sigmas=...
  class set_precision_advanced (line 322) | class set_precision_advanced:
    method __init__ (line 323) | def __init__(self):
    method INPUT_TYPES (line 326) | def INPUT_TYPES(s):
    method main (line 341) | def main(self, global_precision="32", shark_precision="64", latent_ima...
  class latent_to_cuda (line 370) | class latent_to_cuda:
    method __init__ (line 371) | def __init__(self):
    method INPUT_TYPES (line 374) | def INPUT_TYPES(s):
    method main (line 388) | def main(self, latent, to_cuda):
  class latent_batch (line 396) | class latent_batch:
    method __init__ (line 397) | def __init__(self):
    method INPUT_TYPES (line 400) | def INPUT_TYPES(s):
    method main (line 414) | def main(self, latent, batch_size):
  class LatentPhaseMagnitude (line 422) | class LatentPhaseMagnitude:
    method INPUT_TYPES (line 424) | def INPUT_TYPES(s):
    method latent_repeat (line 474) | def latent_repeat(latent, batch_size):
    method mix_latent_phase_magnitude (line 482) | def mix_latent_phase_magnitude(latent_0, latent_1, power_phase, power_...
    method main (line 525) | def main(self, #batch_size, latent_1_repeat,
  class LatentPhaseMagnitudeMultiply (line 589) | class LatentPhaseMagnitudeMultiply:
    method INPUT_TYPES (line 591) | def INPUT_TYPES(s):
    method latent_repeat (line 627) | def latent_repeat(latent, batch_size):
    method mix_latent_phase_magnitude (line 635) | def mix_latent_phase_magnitude(latent_0,
    method main (line 665) | def main(self,
  class LatentPhaseMagnitudeOffset (line 705) | class LatentPhaseMagnitudeOffset:
    method INPUT_TYPES (line 707) | def INPUT_TYPES(s):
    method latent_repeat (line 743) | def latent_repeat(latent, batch_size):
    method mix_latent_phase_magnitude (line 751) | def mix_latent_phase_magnitude(latent_0,
    method main (line 781) | def main(self,
  class LatentPhaseMagnitudePower (line 821) | class LatentPhaseMagnitudePower:
    method INPUT_TYPES (line 823) | def INPUT_TYPES(s):
    method latent_repeat (line 859) | def latent_repeat(latent, batch_size):
    method mix_latent_phase_magnitude (line 867) | def mix_latent_phase_magnitude(latent_0,
    method main (line 897) | def main(self,
  class StableCascade_StageC_VAEEncode_Exact (line 937) | class StableCascade_StageC_VAEEncode_Exact:
    method __init__ (line 938) | def __init__(self, device="cpu"):
    method INPUT_TYPES (line 942) | def INPUT_TYPES(s):
    method generate (line 955) | def generate(self, image, vae, width, height):
  class StableCascade_StageC_VAEEncode_Exact_Tiled (line 968) | class StableCascade_StageC_VAEEncode_Exact_Tiled:
    method __init__ (line 969) | def __init__(self, device="cpu"):
    method INPUT_TYPES (line 973) | def INPUT_TYPES(s):
    method generate (line 986) | def generate(self, image, vae, tile_size, overlap):
  function tiled_scale_multidim (line 1009) | def tiled_scale_multidim(samples, function, tile=(64, 64), overlap=8, up...
  class EmptyLatentImageCustom (line 1048) | class EmptyLatentImageCustom:
    method __init__ (line 1049) | def __init__(self):
    method INPUT_TYPES (line 1053) | def INPUT_TYPES(s):
    method generate (line 1071) | def generate(self, width, height, batch_size, channels, mode, compress...
  class EmptyLatentImage64 (line 1097) | class EmptyLatentImage64:
    method __init__ (line 1098) | def __init__(self):
    method INPUT_TYPES (line 1102) | def INPUT_TYPES(s):
    method generate (line 1111) | def generate(self, width, height, batch_size=1):
  class LatentNoiseBatch_perlin (line 1133) | class LatentNoiseBatch_perlin:
    method __init__ (line 1134) | def __init__(self):
    method INPUT_TYPES (line 1138) | def INPUT_TYPES(s):
    method rand_perlin_2d (line 1156) | def rand_perlin_2d(self, shape, res, fade = lambda t: 6*t**5 - 15*t**4...
    method rand_perlin_2d_octaves (line 1174) | def rand_perlin_2d_octaves(self, shape, res, octaves=1, persistence=0.5):
    method scale_tensor (line 1186) | def scale_tensor(self, x):
    method create_noisy_latents_perlin (line 1192) | def create_noisy_latents_perlin(self, seed, width, height, batch_size,...
  class LatentNoiseBatch_gaussian_channels (line 1208) | class LatentNoiseBatch_gaussian_channels:
    method INPUT_TYPES (line 1210) | def INPUT_TYPES(s):
    method gaussian_noise_channels (line 1254) | def gaussian_noise_channels(x, mean_luminosity = -0.1, mean_cyan_red =...
    method main (line 1266) | def main(self, latent, steps, seed,
  class LatentNoiseBatch_gaussian (line 1294) | class LatentNoiseBatch_gaussian:
    method INPUT_TYPES (line 1296) | def INPUT_TYPES(s):
    method main (line 1317) | def main(self, latent, mean, std, steps, seed, means=None, stds=None, ...
  class LatentNoiseBatch_fractal (line 1335) | class LatentNoiseBatch_fractal:
    method INPUT_TYPES (line 1337) | def INPUT_TYPES(s):
    method main (line 1358) | def main(self, latent, alpha, k_flip, steps, seed=42, alphas=None, ks=...
  class LatentNoiseList (line 1377) | class LatentNoiseList:
    method INPUT_TYPES (line 1379) | def INPUT_TYPES(s):
    method main (line 1400) | def main(self, seed, latent, alpha, k_flip, steps, alphas=None, ks=None):
  class LatentBatch_channels (line 1421) | class LatentBatch_channels:
    method INPUT_TYPES (line 1423) | def INPUT_TYPES(s):
    method latent_channels_multiply (line 1447) | def latent_channels_multiply(x, luminosity = -0.1, cyan_red = 0.0, lim...
    method latent_channels_offset (line 1457) | def latent_channels_offset(x, luminosity = -0.1, cyan_red = 0.0, lime_...
    method latent_channels_power (line 1467) | def latent_channels_power(x, luminosity = -0.1, cyan_red = 0.0, lime_p...
    method main (line 1476) | def main(self, latent, mode,
  class LatentBatch_channels_16 (line 1502) | class LatentBatch_channels_16:
    method INPUT_TYPES (line 1504) | def INPUT_TYPES(s):
    method latent_channels_multiply (line 1553) | def latent_channels_multiply(x, chan_1 = 0.0, chan_2 = 0.0, chan_3 = 0...
    method latent_channels_offset (line 1575) | def latent_channels_offset(x, chan_1 = 0.0, chan_2 = 0.0, chan_3 = 0.0...
    method latent_channels_power (line 1597) | def latent_channels_power(x, chan_1 = 0.0, chan_2 = 0.0, chan_3 = 0.0,...
    method main (line 1618) | def main(self, latent, mode,
  class latent_normalize_channels (line 1654) | class latent_normalize_channels:
    method __init__ (line 1655) | def __init__(self):
    method INPUT_TYPES (line 1658) | def INPUT_TYPES(s):
    method main (line 1673) | def main(self, latent, mode, operation):
  function hard_light_blend (line 1704) | def hard_light_blend(base_latent, blend_latent):

FILE: legacy/legacy_sampler_rk.py
  function get_epsilon (line 19) | def get_epsilon(model, x, sigma, **extra_args):
  function get_denoised (line 25) | def get_denoised(model, x, sigma, **extra_args):
  function __phi (line 32) | def __phi(j, neg_h):
  function calculate_gamma (line 42) | def calculate_gamma(c2, c3):
  function _gamma (line 51) | def _gamma(n: int,) -> int:
  function _incomplete_gamma (line 59) | def _incomplete_gamma(s: int, x: float, gamma_s: Optional[int] = None) -...
  function phi (line 80) | def phi(j: int, neg_h: float, ):
  function get_rk_methods (line 379) | def get_rk_methods(rk_type, h, c1=0.0, c2=0.5, c3=1.0, h_prev=None, h_pr...
  function get_rk_methods_order (line 626) | def get_rk_methods_order(rk_type):
  function get_rk_methods_order_and_fn (line 630) | def get_rk_methods_order_and_fn(rk_type, h=None, c1=None, c2=None, c3=No...
  function get_rk_methods_coeff (line 637) | def get_rk_methods_coeff(rk_type, h, c1, c2, c3, h_prev=None, h_prev2=No...
  function legacy_sample_rk (line 651) | def legacy_sample_rk(model, x, sigmas, extra_args=None, callback=None, d...

FILE: legacy/legacy_samplers.py
  function initialize_or_scale (line 21) | def initialize_or_scale(tensor, value, steps):
  function move_to_same_device (line 27) | def move_to_same_device(*tensors):
  class Legacy_ClownsharKSampler (line 97) | class Legacy_ClownsharKSampler:
    method INPUT_TYPES (line 99) | def INPUT_TYPES(s):
    method main (line 151) | def main(self, model, cfg, truncate_conditioning, sampler_mode, schedu...
  class Legacy_SamplerRK (line 378) | class Legacy_SamplerRK:
    method INPUT_TYPES (line 380) | def INPUT_TYPES(s):
    method get_sampler (line 421) | def get_sampler(self, eta=0.25, eta_var=0.0, d_noise=1.0, s_noise=1.0,...
  class Legacy_ClownsharKSamplerGuides (line 444) | class Legacy_ClownsharKSamplerGuides:
    method INPUT_TYPES (line 446) | def INPUT_TYPES(s):
    method get_sampler (line 469) | def get_sampler(self, model=None, scheduler="constant", steps=30, deno...
  class Legacy_SharkSampler (line 494) | class Legacy_SharkSampler:
    method INPUT_TYPES (line 496) | def INPUT_TYPES(s):
    method main (line 538) | def main(self, model, add_noise, noise_stdev, noise_mean, noise_normal...

FILE: legacy/models.py
  class ReFluxPatcher (line 30) | class ReFluxPatcher:
    method INPUT_TYPES (line 32) | def INPUT_TYPES(s):
    method main (line 44) | def main(self, model, enable=True):
  class FluxOrthoCFGPatcher (line 74) | class FluxOrthoCFGPatcher:
    method INPUT_TYPES (line 76) | def INPUT_TYPES(s):
    method new_forward (line 95) | def new_forward(self, x, timestep, context, y, guidance, control=None,...
    method main (line 111) | def main(self, model, enable=True, ortho_T5=True, ortho_clip_L=True, z...
  class FluxGuidanceDisable (line 127) | class FluxGuidanceDisable:
    method INPUT_TYPES (line 129) | def INPUT_TYPES(s):
    method new_forward (line 146) | def new_forward(self, x, timestep, context, y, guidance, control=None,...
    method main (line 152) | def main(self, model, disable=True, zero_clip_L=True):
  function time_snr_shift_exponential (line 167) | def time_snr_shift_exponential(alpha, t):
  function time_snr_shift_linear (line 170) | def time_snr_shift_linear(alpha, t):
  class ModelSamplingAdvanced (line 175) | class ModelSamplingAdvanced:
    method INPUT_TYPES (line 178) | def INPUT_TYPES(s):
    method sigma_exponential (line 193) | def sigma_exponential(self, timestep):
    method sigma_linear (line 196) | def sigma_linear(self, timestep):
    method main (line 199) | def main(self, model, scaling, shift):
  class ModelSamplingAdvancedResolution (line 265) | class ModelSamplingAdvancedResolution:
    method INPUT_TYPES (line 268) | def INPUT_TYPES(s):
    method sigma_exponential (line 283) | def sigma_exponential(self, timestep):
    method sigma_linear (line 286) | def sigma_linear(self, timestep):
    method main (line 289) | def main(self, model, scaling, max_shift, base_shift, latent_image):
  class UNetSave (line 342) | class UNetSave:
    method __init__ (line 343) | def __init__(self):
    method INPUT_TYPES (line 347) | def INPUT_TYPES(s):
    method save (line 358) | def save(self, model, filename_prefix, prompt=None, extra_pnginfo=None):
  function save_checkpoint (line 363) | def save_checkpoint(model, clip=None, vae=None, clip_vision=None, filena...
  function sd_save_checkpoint (line 420) | def sd_save_checkpoint(output_path, model, clip=None, vae=None, clip_vis...
  class TorchCompileModelFluxAdvanced (line 446) | class TorchCompileModelFluxAdvanced: #adapted from https://github.com/ki...
    method __init__ (line 447) | def __init__(self):
    method INPUT_TYPES (line 451) | def INPUT_TYPES(s):
    method parse_blocks (line 467) | def parse_blocks(self, blocks_str):
    method patch (line 478) | def patch(self, model, backend, mode, fullgraph, single_blocks, double...

FILE: legacy/noise_classes.py
  class PrecisionTool (line 20) | class PrecisionTool:
    method __init__ (line 21) | def __init__(self, cast_type='fp64'):
    method cast_tensor (line 24) | def cast_tensor(self, func):
    method set_cast_type (line 60) | def set_cast_type(self, new_value):
  function noise_generator_factory (line 69) | def noise_generator_factory(cls, **fixed_params):
  function like (line 75) | def like(x):
  function scale_to_range (line 78) | def scale_to_range(x, scaled_min = -1.73, scaled_max = 1.73): #1.73 is r...
  function normalize (line 81) | def normalize(x):
  class NoiseGenerator (line 84) | class NoiseGenerator:
    method __init__ (line 85) | def __init__(self, x=None, size=None, dtype=None, layout=None, device=...
    method __call__ (line 117) | def __call__(self):
    method update (line 120) | def update(self, **kwargs):
  class BrownianNoiseGenerator (line 134) | class BrownianNoiseGenerator(NoiseGenerator):
    method __call__ (line 135) | def __call__(self, *, sigma=None, sigma_next=None, **kwargs):
  class FractalNoiseGenerator (line 140) | class FractalNoiseGenerator(NoiseGenerator):
    method __init__ (line 141) | def __init__(self, x=None, size=None, dtype=None, layout=None, device=...
    method __call__ (line 146) | def __call__(self, *, alpha=None, k=None, scale=None, **kwargs):
  class SimplexNoiseGenerator (line 175) | class SimplexNoiseGenerator(NoiseGenerator):
    method __init__ (line 176) | def __init__(self, x=None, size=None, dtype=None, layout=None, device=...
    method __call__ (line 182) | def __call__(self, *, scale=None, **kwargs):
  class HiresPyramidNoiseGenerator (line 203) | class HiresPyramidNoiseGenerator(NoiseGenerator):
    method __init__ (line 204) | def __init__(self, x=None, size=None, dtype=None, layout=None, device=...
    method __call__ (line 209) | def __call__(self, *, discount=None, mode=None, **kwargs):
  class PyramidNoiseGenerator (line 243) | class PyramidNoiseGenerator(NoiseGenerator):
    method __init__ (line 244) | def __init__(self, x=None, size=None, dtype=None, layout=None, device=...
    method __call__ (line 249) | def __call__(self, *, discount=None, mode=None, **kwargs):
  class InterpolatedPyramidNoiseGenerator (line 280) | class InterpolatedPyramidNoiseGenerator(NoiseGenerator):
    method __init__ (line 281) | def __init__(self, x=None, size=None, dtype=None, layout=None, device=...
    method __call__ (line 287) | def __call__(self, *, discount=None, mode=None, **kwargs):
  class CascadeBPyramidNoiseGenerator (line 324) | class CascadeBPyramidNoiseGenerator(NoiseGenerator):
    method __init__ (line 325) | def __init__(self, x=None, size=None, dtype=None, layout=None, device=...
    method __call__ (line 330) | def __call__(self, *, levels=10, mode='nearest', size_range=[1,16], **...
  class UniformNoiseGenerator (line 353) | class UniformNoiseGenerator(NoiseGenerator):
    method __init__ (line 354) | def __init__(self, x=None, size=None, dtype=None, layout=None, device=...
    method __call__ (line 359) | def __call__(self, *, mean=None, scale=None, **kwargs):
  class GaussianNoiseGenerator (line 366) | class GaussianNoiseGenerator(NoiseGenerator):
    method __init__ (line 367) | def __init__(self, x=None, size=None, dtype=None, layout=None, device=...
    method __call__ (line 372) | def __call__(self, *, mean=None, std=None, **kwargs):
  class GaussianBackwardsNoiseGenerator (line 379) | class GaussianBackwardsNoiseGenerator(NoiseGenerator):
    method __init__ (line 380) | def __init__(self, x=None, size=None, dtype=None, layout=None, device=...
    method __call__ (line 385) | def __call__(self, *, mean=None, std=None, **kwargs):
  class LaplacianNoiseGenerator (line 393) | class LaplacianNoiseGenerator(NoiseGenerator):
    method __init__ (line 394) | def __init__(self, x=None, size=None, dtype=None, layout=None, device=...
    method __call__ (line 399) | def __call__(self, *, loc=None, scale=None, **kwargs):
  class StudentTNoiseGenerator (line 416) | class StudentTNoiseGenerator(NoiseGenerator):
    method __init__ (line 417) | def __init__(self, x=None, size=None, dtype=None, layout=None, device=...
    method __call__ (line 422) | def __call__(self, *, loc=None, scale=None, df=None, **kwargs):
  class WaveletNoiseGenerator (line 447) | class WaveletNoiseGenerator(NoiseGenerator):
    method __init__ (line 448) | def __init__(self, x=None, size=None, dtype=None, layout=None, device=...
    method __call__ (line 453) | def __call__(self, *, wavelet=None, **kwargs):
  class PerlinNoiseGenerator (line 467) | class PerlinNoiseGenerator(NoiseGenerator):
    method __init__ (line 468) | def __init__(self, x=None, size=None, dtype=None, layout=None, device=...
    method get_positions (line 474) | def get_positions(block_shape: Tuple[int, int]) -> Tensor:
    method unfold_grid (line 486) | def unfold_grid(vectors: Tensor) -> Tensor:
    method smooth_step (line 496) | def smooth_step(t: Tensor) -> Tensor:
    method perlin_noise_tensor (line 500) | def perlin_noise_tensor(
    method perlin_noise (line 553) | def perlin_noise(
    method __call__ (line 580) | def __call__(self, *, detail=None, **kwargs):
  function prepare_noise (line 662) | def prepare_noise(latent_image, seed, noise_type, noise_inds=None, alpha...

FILE: legacy/noise_sigmas_timesteps_scaling.py
  function get_alpha_ratio_from_sigma_up (line 6) | def get_alpha_ratio_from_sigma_up(sigma_up, sigma_next, eta, sigma_max=1...
  function get_alpha_ratio_from_sigma_down (line 26) | def get_alpha_ratio_from_sigma_down(sigma_down, sigma_next, eta, sigma_m...
  function get_ancestral_step_RF_var (line 35) | def get_ancestral_step_RF_var(sigma, sigma_next, eta, sigma_max=1.0):
  function get_ancestral_step_RF_lorentzian (line 50) | def get_ancestral_step_RF_lorentzian(sigma, sigma_next, eta, sigma_max=1...
  function get_ancestral_step_EPS (line 57) | def get_ancestral_step_EPS(sigma, sigma_next, eta=1.):
  function get_ancestral_step_RF_sinusoidal (line 69) | def get_ancestral_step_RF_sinusoidal(sigma_next, eta, sigma_max=1.0):
  function get_ancestral_step_RF_softer (line 74) | def get_ancestral_step_RF_softer(sigma, sigma_next, eta, sigma_max=1.0):
  function get_ancestral_step_RF_soft (line 80) | def get_ancestral_step_RF_soft(sigma, sigma_next, eta, sigma_max=1.0):
  function get_ancestral_step_RF_soft_linear (line 88) | def get_ancestral_step_RF_soft_linear(sigma, sigma_next, eta, sigma_max=...
  function get_ancestral_step_RF_exp (line 96) | def get_ancestral_step_RF_exp(sigma, sigma_next, eta, sigma_max=1.0): # ...
  function get_ancestral_step_RF_sqrd (line 102) | def get_ancestral_step_RF_sqrd(sigma, sigma_next, eta, sigma_max=1.0):
  function get_ancestral_step_RF_hard (line 108) | def get_ancestral_step_RF_hard(sigma_next, eta, sigma_max=1.0):
  function get_vpsde_step_RF (line 113) | def get_vpsde_step_RF(sigma, sigma_next, eta, sigma_max=1.0):
  function get_fuckery_step_RF (line 120) | def get_fuckery_step_RF(sigma, sigma_next, eta, sigma_max=1.0):
  function get_res4lyf_step_with_model (line 127) | def get_res4lyf_step_with_model(model, sigma, sigma_next, eta=0.0, noise...
  function get_res4lyf_half_step3 (line 206) | def get_res4lyf_half_step3(sigma, sigma_next, c2=0.5, c3=1.0, t_fn=None,...
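
The `get_ancestral_step_EPS` entry above appears to follow the familiar k-diffusion split of an ancestral step: part of the move to `sigma_next` is taken deterministically (`sigma_down`) and the remainder is re-injected as fresh noise (`sigma_up`), scaled by `eta`. A sketch under that assumption (return order and names here are illustrative):

```python
import math

def get_ancestral_step_eps(sigma, sigma_next, eta=1.0):
    # Split the step so that sigma_down**2 + sigma_up**2 == sigma_next**2:
    # the deterministic part lands at sigma_down, fresh noise tops it back
    # up to sigma_next. eta=0 recovers a fully deterministic step.
    if sigma_next == 0.0:
        return 0.0, 0.0
    sigma_up = min(
        sigma_next,
        eta * math.sqrt(sigma_next**2 * (sigma**2 - sigma_next**2) / sigma**2),
    )
    sigma_down = math.sqrt(sigma_next**2 - sigma_up**2)
    return sigma_up, sigma_down
```

The `_RF_*` variants in this file adapt the same split to rectified-flow schedules, where sigma is bounded by `sigma_max=1.0`.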

FILE: legacy/phi_functions.py
  function _phi (line 7) | def _phi(j, neg_h):
  function calculate_gamma (line 16) | def calculate_gamma(c2, c3):
  function _gamma (line 20) | def _gamma(n: int,) -> int:
  function _incomplete_gamma (line 28) | def _incomplete_gamma(s: int, x: float, gamma_s: Optional[int] = None) -...
  function phi (line 47) | def phi(j: int, neg_h: float, ):
  class Phi (line 74) | class Phi:
    method __init__ (line 75) | def __init__(self, h, c, analytic_solution=False):
    method __call__ (line 84) | def __call__(self, j, i=-1):
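
The `phi` entries above are the φ functions of exponential Runge-Kutta integrators: φ₀(h) = eʰ and φⱼ(h) = (φⱼ₋₁(h) − 1/(j−1)!)/h. A minimal recurrence-based sketch (the file's version takes `neg_h` and handles the analytic/series cases; this signature is illustrative):

```python
import math

def phi(j, h):
    # phi_0(h) = exp(h); phi_j(h) = (phi_{j-1}(h) - 1/(j-1)!) / h.
    # These weights appear in exponential RK tableaus (e.g. RES methods).
    val = math.exp(h)
    for k in range(1, j + 1):
        val = (val - 1.0 / math.factorial(k - 1)) / h
    return val
```

Note the recurrence suffers catastrophic cancellation for small |h|, which is why production code (as here, with its `analytic_solution` flag) often switches to a Taylor series or higher-precision arithmetic in that regime.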

FILE: legacy/rk_coefficients.py
  function get_rk_methods (line 763) | def get_rk_methods(rk_type, h, c1=0.0, c2=0.5, c3=1.0, h_prev=None, h_pr...
  function gen_first_col_exp (line 1695) | def gen_first_col_exp(a, b, c, φ):
  function rho (line 1702) | def rho(j, ci, ck, cl):
  function mu (line 1712) | def mu(j, cd, ci, ck, cl):
  function mu_numerator (line 1723) | def mu_numerator(j, cd, ci, ck, cl):
  function theta_numerator (line 1736) | def theta_numerator(j, cd, ci, ck, cj, cl):
  function theta (line 1750) | def theta(j, cd, ci, ck, cj, cl):
  function prod_diff (line 1765) | def prod_diff(cj, ck, cl=None, cd=None, cblah=None):
  function denominator (line 1773) | def denominator(ci, *args):
  function check_condition_4_2 (line 1781) | def check_condition_4_2(nodes):

FILE: legacy/rk_guide_func.py
  function normalize_inputs (line 16) | def normalize_inputs(x, y0, y0_inv, guide_mode,  extra_options):
  class LatentGuide (line 39) | class LatentGuide:
    method __init__ (line 40) | def __init__(self, guides, x, model, sigmas, UNSAMPLE, LGW_MASK_RESCAL...
    method init_guides (line 103) | def init_guides(self, x, noise_sampler, latent_guide=None, latent_guid...
    method process_guides_substep (line 145) | def process_guides_substep(self, x_0, x_, eps_, data_, row, step, sigm...
    method process_guides_poststep (line 496) | def process_guides_poststep(self, x, denoised, eps, step, extra_options):
  function apply_frame_weights (line 642) | def apply_frame_weights(mask, frame_weights):
  function prepare_mask (line 650) | def prepare_mask(x, mask, LGW_MASK_RESCALE_MIN) -> Tuple[torch.Tensor, b...
  function prepare_weighted_masks (line 674) | def prepare_weighted_masks(mask, mask_inv, lgw_, lgw_inv_, latent_guide,...
  function apply_temporal_smoothing (line 692) | def apply_temporal_smoothing(tensor, temporal_smoothing):
  function get_guide_epsilon_substep (line 718) | def get_guide_epsilon_substep(x_0, x_, y0, y0_inv, s_, row, rk_type, b=N...
  function get_guide_epsilon (line 737) | def get_guide_epsilon(x_0, x_, y0, sigma, rk_type, b=None, c=None):
  function noise_cossim_guide_tiled (line 758) | def noise_cossim_guide_tiled(x_list, guide, cossim_mode="forward", tile_...
  function noise_cossim_eps_tiled (line 812) | def noise_cossim_eps_tiled(x_list, eps, noise_list, cossim_mode="forward...
  function noise_cossim_guide_eps_tiled (line 897) | def noise_cossim_guide_eps_tiled(x_0, x_list, y0, noise_list, cossim_mod...
  function get_collinear (line 990) | def get_collinear(x, y):
  function get_orthogonal (line 1001) | def get_orthogonal(x, y):
  function get_orthogonal_noise_from_channelwise (line 1015) | def get_orthogonal_noise_from_channelwise(*refs, max_iter=500, max_score...
  function gram_schmidt_channels_optimized (line 1041) | def gram_schmidt_channels_optimized(A, *refs):
  class NoiseStepHandlerOSDE (line 1063) | class NoiseStepHandlerOSDE:
    method __init__ (line 1064) | def __init__(self, x, eps=None, data=None, x_init=None, guide=None, gu...
    method check_cossim_source (line 1098) | def check_cossim_source(self, source):
    method get_ortho_noise (line 1101) | def get_ortho_noise(self, noise, prev_noises=None, max_iter=100, max_s...
  function handle_tiled_etc_noise_steps (line 1116) | def handle_tiled_etc_noise_steps(x_0, x, x_prenoise, x_init, eps, denois...
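  (The index above lists `get_collinear(x, y)` and `get_orthogonal(x, y)`; the sketch below shows what projection helpers with these names conventionally compute — the parallel and orthogonal components of `x` with respect to `y`. It is a dependency-free illustration on flat lists, not the repository's tensor implementation.)

```python
def get_collinear_sketch(x, y):
    # Component of x parallel to y: proj_y(x) = (<x, y> / <y, y>) * y
    dot_xy = sum(a * b for a, b in zip(x, y))
    dot_yy = sum(b * b for b in y)
    coef = dot_xy / dot_yy
    return [coef * b for b in y]

def get_orthogonal_sketch(x, y):
    # Residual of x after removing its projection onto y; orthogonal to y.
    proj = get_collinear_sketch(x, y)
    return [a - p for a, p in zip(x, proj)]
```

  (For example, splitting `[1, 1]` against `[1, 0]` yields a collinear part `[1, 0]` and an orthogonal part `[0, 1]`; the actual functions in `legacy/rk_guide_func.py` operate on `torch` tensors.)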

FILE: legacy/rk_method.py
  class RK_Method (line 19) | class RK_Method:
    method __init__ (line 20) | def __init__(self, model, name="", method="explicit", dynamic_method=F...
    method is_exponential (line 57) | def is_exponential(rk_type):
    method create (line 65) | def create(model, rk_type, device='cuda', dtype=torch.float64, name=""...
    method __call__ (line 71) | def __call__(self):
    method model_epsilon (line 74) | def model_epsilon(self, x, sigma, **extra_args):
    method model_denoised (line 83) | def model_denoised(self, x, sigma, **extra_args):
    method init_noise_sampler (line 91) | def init_noise_sampler(self, x, noise_seed, noise_sampler_type, alpha,...
    method add_noise_pre (line 101) | def add_noise_pre(self, x, sigma_up, sigma, sigma_next, alpha_ratio, s...
    method add_noise_post (line 107) | def add_noise_post(self, x, sigma_up, sigma, sigma_next, alpha_ratio, ...
    method add_noise (line 113) | def add_noise(self, x, sigma_up, sigma, sigma_next, alpha_ratio, s_noi...
    method set_coeff (line 128) | def set_coeff(self, rk_type, h, c1=0.0, c2=0.5, c3=1.0, stepcount=0, s...
    method a_k_sum (line 151) | def a_k_sum(self, k, row):
    method b_k_sum (line 165) | def b_k_sum(self, k, row):
    method init_cfg_channelwise (line 180) | def init_cfg_channelwise(self, x, cfg_cw=1.0, **extra_args):
    method calc_cfg_channelwise (line 192) | def calc_cfg_channelwise(self, denoised):
  class RK_Method_Exponential (line 208) | class RK_Method_Exponential(RK_Method):
    method __init__ (line 209) | def __init__(self, model, name="", method="explicit", device='cuda', d...
    method alpha_fn (line 215) | def alpha_fn(neg_h):
    method sigma_fn (line 219) | def sigma_fn(t):
    method t_fn (line 223) | def t_fn(sigma):
    method h_fn (line 227) | def h_fn(sigma_down, sigma):
    method __call__ (line 230) | def __call__(self, x_0, x, sigma, h, **extra_args):
    method data_to_vel (line 248) | def data_to_vel(self, x, data, sigma):
    method get_epsilon (line 251) | def get_epsilon(self, x_0, x, y, sigma, sigma_cur, sigma_down=None, un...
  class RK_Method_Linear (line 276) | class RK_Method_Linear(RK_Method):
    method __init__ (line 277) | def __init__(self, model, name="", method="explicit", device='cuda', d...
    method alpha_fn (line 283) | def alpha_fn(neg_h):
    method sigma_fn (line 287) | def sigma_fn(t):
    method t_fn (line 291) | def t_fn(sigma):
    method h_fn (line 295) | def h_fn(sigma_down, sigma):
    method __call__ (line 298) | def __call__(self, x_0, x, sigma, h, **extra_args):
    method data_to_vel (line 317) | def data_to_vel(self, x, data, sigma):
    method get_epsilon (line 320) | def get_epsilon(self, x_0, x, y, sigma, sigma_cur, sigma_down=None, un...

FILE: legacy/rk_sampler.py
  function prepare_sigmas (line 20) | def prepare_sigmas(model, sigmas):
  function prepare_step_to_sigma_zero (line 33) | def prepare_step_to_sigma_zero(rk, irk, rk_type, irk_type, model, x, ext...
  function sample_rk (line 58) | def sample_rk(model, x, sigmas, extra_args=None, callback=None, disable=...
  function get_explicit_rk_step (line 544) | def get_explicit_rk_step(rk, rk_type, x, LG, step, sigma, sigma_next, et...
  function preview_callback (line 612) | def preview_callback(x, eps, denoised, x_, eps_, data_, step, sigma, sig...
  function sample_res_2m (line 648) | def sample_res_2m(model, x, sigmas, extra_args=None, callback=None, disa...
  function sample_res_2s (line 650) | def sample_res_2s(model, x, sigmas, extra_args=None, callback=None, disa...
  function sample_res_3s (line 652) | def sample_res_3s(model, x, sigmas, extra_args=None, callback=None, disa...
  function sample_res_5s (line 654) | def sample_res_5s(model, x, sigmas, extra_args=None, callback=None, disa...
  function sample_res_6s (line 656) | def sample_res_6s(model, x, sigmas, extra_args=None, callback=None, disa...
  function sample_res_2m_sde (line 659) | def sample_res_2m_sde(model, x, sigmas, extra_args=None, callback=None, ...
  function sample_res_2s_sde (line 661) | def sample_res_2s_sde(model, x, sigmas, extra_args=None, callback=None, ...
  function sample_res_3s_sde (line 663) | def sample_res_3s_sde(model, x, sigmas, extra_args=None, callback=None, ...
  function sample_res_5s_sde (line 665) | def sample_res_5s_sde(model, x, sigmas, extra_args=None, callback=None, ...
  function sample_res_6s_sde (line 667) | def sample_res_6s_sde(model, x, sigmas, extra_args=None, callback=None, ...
  function sample_deis_2m (line 670) | def sample_deis_2m(model, x, sigmas, extra_args=None, callback=None, dis...
  function sample_deis_3m (line 672) | def sample_deis_3m(model, x, sigmas, extra_args=None, callback=None, dis...
  function sample_deis_4m (line 674) | def sample_deis_4m(model, x, sigmas, extra_args=None, callback=None, dis...
  function sample_deis_2m_sde (line 677) | def sample_deis_2m_sde(model, x, sigmas, extra_args=None, callback=None,...
  function sample_deis_3m_sde (line 679) | def sample_deis_3m_sde(model, x, sigmas, extra_args=None, callback=None,...
  function sample_deis_4m_sde (line 681) | def sample_deis_4m_sde(model, x, sigmas, extra_args=None, callback=None,...

FILE: legacy/samplers.py
  function move_to_same_device (line 35) | def move_to_same_device(*tensors):
  class ClownSamplerAdvanced (line 47) | class ClownSamplerAdvanced:
    method INPUT_TYPES (line 49) | def INPUT_TYPES(s):
    method main (line 82) | def main(self,
  class ClownSampler (line 162) | class ClownSampler:
    method INPUT_TYPES (line 164) | def INPUT_TYPES(s):
    method main (line 194) | def main(self,
  function process_sampler_name (line 220) | def process_sampler_name(selected_value):
  function copy_cond (line 233) | def copy_cond(positive):
  class SharkSamplerAlpha (line 247) | class SharkSamplerAlpha:
    method INPUT_TYPES (line 249) | def INPUT_TYPES(s):
    method main (line 282) | def main(self, model, cfg, scheduler, steps, sampler_mode="standard",d...
  class ClownsharKSampler (line 596) | class ClownsharKSampler:
    method INPUT_TYPES (line 598) | def INPUT_TYPES(s):
    method main (line 637) | def main(self, model, cfg, sampler_mode, scheduler, steps, denoise=1.0...
  class UltraSharkSampler (line 681) | class UltraSharkSampler:
    method INPUT_TYPES (line 684) | def INPUT_TYPES(s):
    method main (line 721) | def main(self, model, add_noise, normalize_noise, noise_type, noise_se...

FILE: legacy/samplers_extensions.py
  function move_to_same_device (line 21) | def move_to_same_device(*tensors):
  class SamplerOptions_TimestepScaling (line 33) | class SamplerOptions_TimestepScaling:
    method INPUT_TYPES (line 36) | def INPUT_TYPES(s):
    method set_sampler_extra_options (line 55) | def set_sampler_extra_options(self, sampler, t_fn_formula=None, sigma_...
  class SamplerOptions_GarbageCollection (line 66) | class SamplerOptions_GarbageCollection:
    method INPUT_TYPES (line 68) | def INPUT_TYPES(s):
    method set_sampler_extra_options (line 85) | def set_sampler_extra_options(self, sampler, garbage_collection):
  class ClownInpaint (line 115) | class ClownInpaint:

    method INPUT_TYPES (line 117) | def INPUT_TYPES(s):
    method main (line 146) | def main(self, guide_weight_scheduler="constant", guide_weight_schedul...
  class ClownInpaintSimple (line 193) | class ClownInpaintSimple:
    method INPUT_TYPES (line 195) | def INPUT_TYPES(s):
    method main (line 218) | def main(self, guide_weight_scheduler="constant", guide_weight_schedul...
  class ClownsharKSamplerGuide (line 268) | class ClownsharKSamplerGuide:
    method INPUT_TYPES (line 270) | def INPUT_TYPES(s):
    method main (line 299) | def main(self, guide_weight_scheduler="constant", guide_weight_schedul...
  class ClownsharKSamplerGuides (line 329) | class ClownsharKSamplerGuides:
    method INPUT_TYPES (line 331) | def INPUT_TYPES(s):
    method main (line 360) | def main(self, guide_weight_scheduler="constant", guide_weight_schedul...
  class ClownsharKSamplerAutomation (line 392) | class ClownsharKSamplerAutomation:
    method INPUT_TYPES (line 394) | def INPUT_TYPES(s):
    method main (line 413) | def main(self, etas=None, s_noises=None, unsample_resample_scales=None,):
  class ClownsharKSamplerAutomation_Advanced (line 420) | class ClownsharKSamplerAutomation_Advanced:
    method INPUT_TYPES (line 422) | def INPUT_TYPES(s):
    method main (line 444) | def main(self, automation=None, etas=None, etas_substep=None, s_noises...
  class ClownsharKSamplerOptions (line 461) | class ClownsharKSamplerOptions:
    method INPUT_TYPES (line 463) | def INPUT_TYPES(s):
    method main (line 498) | def main(self, noise_init_stdev, noise_init_mean, c1, c2, c3, eta, s_n...
  class ClownOptions_SDE_Noise (line 531) | class ClownOptions_SDE_Noise:
    method INPUT_TYPES (line 533) | def INPUT_TYPES(s):
    method main (line 550) | def main(self, sde_noise_steps, sde_noise, options=None,):
  class ClownOptions_FrameWeights (line 562) | class ClownOptions_FrameWeights:
    method INPUT_TYPES (line 564) | def INPUT_TYPES(s):
    method main (line 582) | def main(self, frame_weights, options=None,):

FILE: legacy/samplers_tiled.py
  function initialize_or_scale (line 30) | def initialize_or_scale(tensor, value, steps):
  function cv_cond (line 36) | def cv_cond(cv_out, conditioning, strength, noise_augmentation):
  function recursion_to_list (line 52) | def recursion_to_list(obj, attr):
  function copy_cond (line 62) | def copy_cond(cond):
  function slice_cond (line 65) | def slice_cond(tile_h, tile_h_len, tile_w, tile_w_len, cond, area):
  function slice_gligen (line 90) | def slice_gligen(tile_h, tile_h_len, tile_w, tile_w_len, cond, gligen):
  function slice_cnet (line 115) | def slice_cnet(h, h_len, w, w_len, model:comfy.controlnet.ControlBase, i...
  function slices_T2I (line 124) | def slices_T2I(h, h_len, w, w_len, model:comfy.controlnet.ControlBase, i...
  function cnets_and_cnet_imgs (line 133) | def cnets_and_cnet_imgs(positive, negative, shape):
  function T2Is_and_T2I_imgs (line 146) | def T2Is_and_T2I_imgs(positive, negative, shape):
  function spatial_conds_posneg (line 164) | def spatial_conds_posneg(positive, negative, shape, device): #cond area ...
  function gligen_posneg (line 177) | def gligen_posneg(positive, negative):
  function cascade_tiles (line 190) | def cascade_tiles(x, input_x, tile_h, tile_w, tile_h_len, tile_w_len):
  function sample_common (line 207) | def sample_common(model, x, noise, noise_mask, noise_seed, tile_width, t...
  class UltraSharkSampler_Tiled (line 488) | class UltraSharkSampler_Tiled: #this is for use with https://github.com/...
    method INPUT_TYPES (line 490) | def INPUT_TYPES(s):
    method sample (line 537) | def sample(self, model, noise_seed, add_noise, noise_is_latent, noise_...

FILE: legacy/sigmas.py
  function rescale_linear (line 14) | def rescale_linear(input, input_min, input_max, output_min, output_max):
  class set_precision_sigmas (line 18) | class set_precision_sigmas:
    method __init__ (line 19) | def __init__(self):
    method INPUT_TYPES (line 22) | def INPUT_TYPES(s):
    method main (line 37) | def main(self, precision="32", sigmas=None, set_default=False):
  class SimpleInterpolator (line 54) | class SimpleInterpolator(nn.Module):
    method __init__ (line 55) | def __init__(self):
    method forward (line 65) | def forward(self, x):
  function train_interpolator (line 68) | def train_interpolator(model, sigma_schedule, steps, epochs=5000, lr=0.01):
  function interpolate_sigma_schedule_model (line 92) | def interpolate_sigma_schedule_model(sigma_schedule, target_steps):
  class sigmas_interpolate (line 112) | class sigmas_interpolate:
    method __init__ (line 113) | def __init__(self):
    method INPUT_TYPES (line 117) | def INPUT_TYPES(s):
    method interpolate_sigma_schedule_poly (line 135) | def interpolate_sigma_schedule_poly(self, sigma_schedule, target_steps):
    method interpolate_sigma_schedule_constrained (line 154) | def interpolate_sigma_schedule_constrained(self, sigma_schedule, targe...
    method interpolate_sigma_schedule_exp (line 173) | def interpolate_sigma_schedule_exp(self, sigma_schedule, target_steps):
    method interpolate_sigma_schedule_power (line 194) | def interpolate_sigma_schedule_power(self, sigma_schedule, target_steps):
    method interpolate_sigma_schedule_linear (line 216) | def interpolate_sigma_schedule_linear(self, sigma_schedule, target_ste...
    method interpolate_sigma_schedule_nearest (line 219) | def interpolate_sigma_schedule_nearest(self, sigma_schedule, target_st...
    method interpolate_nearest_neighbor (line 222) | def interpolate_nearest_neighbor(self, sigma_schedule, target_steps):
    method main (line 236) | def main(self, sigmas_0, sigmas_1, mode, order):
  class sigmas_noise_inversion (line 257) | class sigmas_noise_inversion:
    method __init__ (line 260) | def __init__(self):
    method INPUT_TYPES (line 264) | def INPUT_TYPES(s):
    method main (line 277) | def main(self, sigmas):
  function compute_sigma_next_variance_floor (line 290) | def compute_sigma_next_variance_floor(sigma):
  class sigmas_variance_floor (line 293) | class sigmas_variance_floor:
    method __init__ (line 294) | def __init__(self):
    method INPUT_TYPES (line 298) | def INPUT_TYPES(s):
    method main (line 312) | def main(self, sigmas):
  class sigmas_from_text (line 324) | class sigmas_from_text:
    method __init__ (line 325) | def __init__(self):
    method INPUT_TYPES (line 329) | def INPUT_TYPES(s):
    method main (line 341) | def main(self, text):
  class sigmas_concatenate (line 351) | class sigmas_concatenate:
    method __init__ (line 352) | def __init__(self):
    method INPUT_TYPES (line 356) | def INPUT_TYPES(s):
    method main (line 368) | def main(self, sigmas_1, sigmas_2):
  class sigmas_truncate (line 371) | class sigmas_truncate:
    method __init__ (line 372) | def __init__(self):
    method INPUT_TYPES (line 376) | def INPUT_TYPES(s):
    method main (line 388) | def main(self, sigmas, sigmas_until):
  class sigmas_start (line 391) | class sigmas_start:
    method __init__ (line 392) | def __init__(self):
    method INPUT_TYPES (line 396) | def INPUT_TYPES(s):
    method main (line 408) | def main(self, sigmas, sigmas_until):
  class sigmas_split (line 411) | class sigmas_split:
    method __init__ (line 412) | def __init__(self):
    method INPUT_TYPES (line 416) | def INPUT_TYPES(s):
    method main (line 429) | def main(self, sigmas, sigmas_start, sigmas_end):
  class sigmas_pad (line 435) | class sigmas_pad:
    method __init__ (line 436) | def __init__(self):
    method INPUT_TYPES (line 440) | def INPUT_TYPES(s):
    method main (line 452) | def main(self, sigmas, value):
  class sigmas_unpad (line 455) | class sigmas_unpad:
    method __init__ (line 456) | def __init__(self):
    method INPUT_TYPES (line 460) | def INPUT_TYPES(s):
    method main (line 471) | def main(self, sigmas):
  class sigmas_set_floor (line 474) | class sigmas_set_floor:
    method __init__ (line 475) | def __init__(self):
    method INPUT_TYPES (line 479) | def INPUT_TYPES(s):
    method set_floor (line 493) | def set_floor(self, sigmas, floor, new_floor):
  class sigmas_delete_below_floor (line 497) | class sigmas_delete_below_floor:
    method __init__ (line 498) | def __init__(self):
    method INPUT_TYPES (line 502) | def INPUT_TYPES(s):
    method delete_below_floor (line 515) | def delete_below_floor(self, sigmas, floor):
  class sigmas_delete_value (line 518) | class sigmas_delete_value:
    method __init__ (line 519) | def __init__(self):
    method INPUT_TYPES (line 523) | def INPUT_TYPES(s):
    method delete_value (line 536) | def delete_value(self, sigmas, value):
  class sigmas_delete_consecutive_duplicates (line 539) | class sigmas_delete_consecutive_duplicates:
    method __init__ (line 540) | def __init__(self):
    method INPUT_TYPES (line 544) | def INPUT_TYPES(s):
    method delete_consecutive_duplicates (line 556) | def delete_consecutive_duplicates(self, sigmas_1):
  class sigmas_cleanup (line 561) | class sigmas_cleanup:
    method __init__ (line 562) | def __init__(self):
    method INPUT_TYPES (line 566) | def INPUT_TYPES(s):
    method cleanup (line 579) | def cleanup(self, sigmas, sigmin):
  class sigmas_mult (line 587) | class sigmas_mult:
    method __init__ (line 588) | def __init__(self):
    method INPUT_TYPES (line 592) | def INPUT_TYPES(s):
    method main (line 607) | def main(self, sigmas, multiplier, sigmas2=None):
  class sigmas_modulus (line 613) | class sigmas_modulus:
    method __init__ (line 614) | def __init__(self):
    method INPUT_TYPES (line 618) | def INPUT_TYPES(s):
    method main (line 630) | def main(self, sigmas, divisor):
  class sigmas_quotient (line 633) | class sigmas_quotient:
    method __init__ (line 634) | def __init__(self):
    method INPUT_TYPES (line 638) | def INPUT_TYPES(s):
    method main (line 650) | def main(self, sigmas, divisor):
  class sigmas_add (line 653) | class sigmas_add:
    method __init__ (line 654) | def __init__(self):
    method INPUT_TYPES (line 658) | def INPUT_TYPES(s):
    method main (line 670) | def main(self, sigmas, addend):
  class sigmas_power (line 673) | class sigmas_power:
    method __init__ (line 674) | def __init__(self):
    method INPUT_TYPES (line 678) | def INPUT_TYPES(s):
    method main (line 690) | def main(self, sigmas, power):
  class sigmas_abs (line 693) | class sigmas_abs:
    method __init__ (line 694) | def __init__(self):
    method INPUT_TYPES (line 698) | def INPUT_TYPES(s):
    method main (line 709) | def main(self, sigmas):
  class sigmas2_mult (line 712) | class sigmas2_mult:
    method __init__ (line 713) | def __init__(self):
    method INPUT_TYPES (line 717) | def INPUT_TYPES(s):
    method main (line 729) | def main(self, sigmas_1, sigmas_2):
  class sigmas2_add (line 732) | class sigmas2_add:
    method __init__ (line 733) | def __init__(self):
    method INPUT_TYPES (line 737) | def INPUT_TYPES(s):
    method main (line 749) | def main(self, sigmas_1, sigmas_2):
  class sigmas_rescale (line 752) | class sigmas_rescale:
    method __init__ (line 753) | def __init__(self):
    method INPUT_TYPES (line 756) | def INPUT_TYPES(s):
    method main (line 774) | def main(self, start=0, end=-1, sigmas=None):
  class sigmas_math1 (line 781) | class sigmas_math1:
    method __init__ (line 782) | def __init__(self):
    method INPUT_TYPES (line 785) | def INPUT_TYPES(s):
    method main (line 808) | def main(self, start=0, stop=0, trim=0, a=None, b=None, c=None, x=1.0,...
  class sigmas_math3 (line 842) | class sigmas_math3:
    method __init__ (line 843) | def __init__(self):
    method INPUT_TYPES (line 846) | def INPUT_TYPES(s):
    method main (line 877) | def main(self, start=0, stop=0, trim=0, a=None, b=None, c=None, x=1.0,...
  class sigmas_iteration_karras (line 917) | class sigmas_iteration_karras:
    method __init__ (line 918) | def __init__(self):
    method INPUT_TYPES (line 922) | def INPUT_TYPES(s):
    method main (line 944) | def main(self, steps_up, steps_down, rho_up, rho_down, s_min_start, s_...
  class sigmas_iteration_polyexp (line 965) | class sigmas_iteration_polyexp:
    method __init__ (line 966) | def __init__(self):
    method INPUT_TYPES (line 970) | def INPUT_TYPES(s):
    method main (line 992) | def main(self, steps_up, steps_down, rho_up, rho_down, s_min_start, s_...
  class tan_scheduler (line 1013) | class tan_scheduler:
    method __init__ (line 1014) | def __init__(self):
    method INPUT_TYPES (line 1018) | def INPUT_TYPES(s):
    method main (line 1035) | def main(self, steps, slope, offset, start, end, sgm, pad):
  class tan_scheduler_2stage (line 1055) | class tan_scheduler_2stage:
    method __init__ (line 1056) | def __init__(self):
    method INPUT_TYPES (line 1060) | def INPUT_TYPES(s):
    method get_tan_sigmas (line 1081) | def get_tan_sigmas(self, steps, slope, pivot, start, end):
    method main (line 1092) | def main(self, steps, midpoint, start, middle, end, pivot_1, pivot_2, ...
  class tan_scheduler_2stage_simple (line 1108) | class tan_scheduler_2stage_simple:
    method __init__ (line 1109) | def __init__(self):
    method INPUT_TYPES (line 1113) | def INPUT_TYPES(s):
    method get_tan_sigmas (line 1133) | def get_tan_sigmas(self, steps, slope, pivot, start, end):
    method main (line 1144) | def main(self, steps, start, middle, end, pivot_1, pivot_2, slope_1, s...
  class linear_quadratic_advanced (line 1168) | class linear_quadratic_advanced:
    method __init__ (line 1169) | def __init__(self):
    method INPUT_TYPES (line 1173) | def INPUT_TYPES(s):
    method main (line 1190) | def main(self, steps, denoise, inflection_percent, model=None):
  class constant_scheduler (line 1196) | class constant_scheduler:
    method __init__ (line 1197) | def __init__(self):
    method INPUT_TYPES (line 1201) | def INPUT_TYPES(s):
    method main (line 1216) | def main(self, steps, value_start, value_end, cutoff_percent):
  function get_sigmas_simple_exponential (line 1224) | def get_sigmas_simple_exponential(model, steps):
  function get_sigmas (line 1242) | def get_sigmas(model, scheduler, steps, denoise, lq_inflection_percent=0...
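  (`legacy/sigmas.py` opens with `rescale_linear(input, input_min, input_max, output_min, output_max)`, the primitive the sigma-manipulation nodes build on. The body below is an assumed implementation matching that signature — a standard linear range remap — not the repository's verbatim code.)

```python
def rescale_linear(value, input_min, input_max, output_min, output_max):
    # Map value from [input_min, input_max] onto [output_min, output_max]
    # by normalizing to [0, 1] and re-scaling.
    t = (value - input_min) / (input_max - input_min)
    return output_min + t * (output_max - output_min)
```

  (e.g. `rescale_linear(5.0, 0.0, 10.0, 0.0, 1.0)` gives `0.5`.)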

FILE: legacy/tiling.py
  function grouper (line 8) | def grouper(n, iterable):
  function create_batches (line 16) | def create_batches(n, iterable):
  function get_slice (line 23) | def get_slice(tensor, h, h_len, w, w_len):
  function set_slice (line 28) | def set_slice(tensor1,tensor2,  h, h_len, w, w_len, mask=None):
  function get_tiles_and_masks_simple (line 34) | def get_tiles_and_masks_simple(steps, latent_shape, tile_height, tile_wi...
  function get_tiles_and_masks_padded (line 55) | def get_tiles_and_masks_padded(steps, latent_shape, tile_height, tile_wi...
  function mask_at_boundary (line 123) | def mask_at_boundary(h, h_len, w, w_len, tile_size_h, tile_size_w, laten...
  function get_tiles_and_masks_rgrid (line 135) | def get_tiles_and_masks_rgrid(steps, latent_shape, tile_height, tile_wid...

FILE: lightricks/model.py
  function get_timestep_embedding (line 15) | def get_timestep_embedding(
  class TimestepEmbedding (line 69) | class TimestepEmbedding(nn.Module):
    method __init__ (line 70) | def __init__(
    method forward (line 103) | def forward(self, sample, condition=None):
  class Timesteps (line 118) | class Timesteps(nn.Module):
    method __init__ (line 119) | def __init__(self, num_channels: int, flip_sin_to_cos: bool, downscale...
    method forward (line 126) | def forward(self, timesteps):
  class PixArtAlphaCombinedTimestepSizeEmbeddings (line 137) | class PixArtAlphaCombinedTimestepSizeEmbeddings(nn.Module):
    method __init__ (line 145) | def __init__(self, embedding_dim, size_emb_dim, use_additional_conditi...
    method forward (line 152) | def forward(self, timestep, resolution, aspect_ratio, batch_size, hidd...
  class AdaLayerNormSingle (line 158) | class AdaLayerNormSingle(nn.Module):
    method __init__ (line 169) | def __init__(self, embedding_dim: int, use_additional_conditions: bool...
    method forward (line 179) | def forward(
  class PixArtAlphaTextProjection (line 191) | class PixArtAlphaTextProjection(nn.Module):
    method __init__ (line 198) | def __init__(self, in_features, hidden_size, out_features=None, act_fn...
    method forward (line 211) | def forward(self, caption):
  class GELU_approx (line 218) | class GELU_approx(nn.Module):
    method __init__ (line 219) | def __init__(self, dim_in, dim_out, dtype=None, device=None, operation...
    method forward (line 223) | def forward(self, x):
  class FeedForward (line 227) | class FeedForward(nn.Module):
    method __init__ (line 228) | def __init__(self, dim, dim_out, mult=4, glu=False, dropout=0., dtype=...
    method forward (line 239) | def forward(self, x):
  function apply_rotary_emb (line 243) | def apply_rotary_emb(input_tensor, freqs_cis): #TODO: remove duplicate f...
  class CrossAttention (line 257) | class CrossAttention(nn.Module):
    method __init__ (line 258) | def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, ...
    method forward (line 276) | def forward(self, x, context=None, mask=None, pe=None):
  class BasicTransformerBlock (line 296) | class BasicTransformerBlock(nn.Module):
    method __init__ (line 297) | def __init__(self, dim, n_heads, d_head, context_dim=None, attn_precis...
    method forward (line 308) | def forward(self, x, context=None, attention_mask=None, timestep=None,...
  function get_fractional_positions (line 320) | def get_fractional_positions(indices_grid, max_pos):
  function precompute_freqs_cis (line 331) | def precompute_freqs_cis(indices_grid, dim, out_dtype, theta=10000.0, ma...
  class ReLTXVModel (line 369) | class ReLTXVModel(torch.nn.Module):
    method __init__ (line 370) | def __init__(self,
    method forward (line 425) | def forward(self, x, timestep, context, attention_mask, frame_rate=25,...
  function adain_seq_inplace (line 720) | def adain_seq_inplace(content: torch.Tensor, style: torch.Tensor, eps: f...
  function adain_seq (line 730) | def adain_seq(content: torch.Tensor, style: torch.Tensor, eps: float = 1...
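  (`lightricks/model.py` ends with `adain_seq` / `adain_seq_inplace`, which by name apply adaptive instance normalization: re-statistic a content sequence to a style sequence's mean and standard deviation. The helper below, `adain_1d`, is a hypothetical scalar-list illustration of that technique, not the repository's tensor code.)

```python
def _mean_std(xs, eps):
    # Population mean and (eps-stabilized) standard deviation of a sequence.
    m = sum(xs) / len(xs)
    var = sum((v - m) ** 2 for v in xs) / len(xs)
    return m, (var + eps) ** 0.5

def adain_1d(content, style, eps=1e-7):
    # AdaIN: normalize content to zero mean / unit std, then apply
    # the style sequence's mean and std.
    c_mean, c_std = _mean_std(content, eps)
    s_mean, s_std = _mean_std(style, eps)
    return [(v - c_mean) / c_std * s_std + s_mean for v in content]
```

  (The real functions take `torch.Tensor` content/style and normalize per channel over the sequence dimension, as their signatures suggest.)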

FILE: lightricks/symmetric_patchifier.py
  function latent_to_pixel_coords (line 9) | def latent_to_pixel_coords(
  class Patchifier (line 34) | class Patchifier(ABC):
    method __init__ (line 35) | def __init__(self, patch_size: int):
    method patchify (line 40) | def patchify(
    method unpatchify (line 46) | def unpatchify(
    method patch_size (line 57) | def patch_size(self):
    method get_latent_coords (line 60) | def get_latent_coords(
  class SymmetricPatchifier (line 82) | class SymmetricPatchifier(Patchifier):
    method patchify (line 83) | def patchify(
    method unpatchify (line 98) | def unpatchify(

FILE: lightricks/vae/causal_conv3d.py
  class CausalConv3d (line 9) | class CausalConv3d(nn.Module):
    method __init__ (line 10) | def __init__(
    method forward (line 46) | def forward(self, x, causal: bool = True):
    method weight (line 64) | def weight(self):

FILE: lightricks/vae/causal_video_autoencoder.py
  class Encoder (line 15) | class Encoder(nn.Module):
    method __init__ (line 40) | def __init__(
    method forward (line 208) | def forward(self, sample: torch.FloatTensor) -> torch.FloatTensor:
  class Decoder (line 258) | class Decoder(nn.Module):
    method __init__ (line 283) | def __init__(
    method forward (line 432) | def forward(
  class UNetMidBlock3D (line 497) | class UNetMidBlock3D(nn.Module):
    method __init__ (line 521) | def __init__(
    method forward (line 564) | def forward(
  class SpaceToDepthDownsample (line 593) | class SpaceToDepthDownsample(nn.Module):
    method __init__ (line 594) | def __init__(self, dims, in_channels, out_channels, stride, spatial_pa...
    method forward (line 608) | def forward(self, x, causal: bool = True):
  class DepthToSpaceUpsample (line 640) | class DepthToSpaceUpsample(nn.Module):
    method __init__ (line 641) | def __init__(
    method forward (line 667) | def forward(self, x, causal: bool = True, timestep: Optional[torch.Ten...
  class LayerNorm (line 695) | class LayerNorm(nn.Module):
    method __init__ (line 696) | def __init__(self, dim, eps, elementwise_affine=True) -> None:
    method forward (line 700) | def forward(self, x):
  class ResnetBlock3D (line 707) | class ResnetBlock3D(nn.Module):
    method __init__ (line 720) | def __init__(
    method _feed_spatial_noise (line 810) | def _feed_spatial_noise(
    method forward (line 824) | def forward(
  function patchify (line 888) | def patchify(x, patch_size_hw, patch_size_t=1):
  function unpatchify (line 909) | def unpatchify(x, patch_size_hw, patch_size_t=1):
  class processor (line 928) | class processor(nn.Module):
    method __init__ (line 929) | def __init__(self):
    method un_normalize (line 937) | def un_normalize(self, x):
    method normalize (line 940) | def normalize(self, x):
  class VideoVAE (line 943) | class VideoVAE(nn.Module):
    method __init__ (line 944) | def __init__(self, version=0, config=None):
    method guess_config (line 981) | def guess_config(self, version):
    method encode (line 1081) | def encode(self, x):
    method decode (line 1088) | def decode(self, x, timestep=0.05, noise_scale=0.025):

FILE: lightricks/vae/conv_nd_factory.py
  function make_conv_nd (line 9) | def make_conv_nd(
  function make_linear_nd (line 75) | def make_linear_nd(

FILE: lightricks/vae/dual_conv3d.py
  class DualConv3d (line 10) | class DualConv3d(nn.Module):
    method __init__ (line 11) | def __init__(
    method reset_parameters (line 86) | def reset_parameters(self):
    method forward (line 97) | def forward(self, x, use_conv3d=False, skip_time_conv=False):
    method forward_with_3d (line 103) | def forward_with_3d(self, x, skip_time_conv):
    method forward_with_2d (line 133) | def forward_with_2d(self, x, skip_time_conv):
    method weight (line 185) | def weight(self):
  function test_dual_conv3d_consistency (line 189) | def test_dual_conv3d_consistency():

FILE: lightricks/vae/pixel_norm.py
  class PixelNorm (line 5) | class PixelNorm(nn.Module):
    method __init__ (line 6) | def __init__(self, dim=1, eps=1e-8):
    method forward (line 11) | def forward(self, x):
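  (`PixelNorm(dim=1, eps=1e-8)` is a small module; the sketch below shows the standard pixel-norm computation such a layer performs — dividing by the RMS of the entries along the chosen dimension. The function name and flat-list form are illustrative assumptions.)

```python
def pixel_norm_sketch(x, eps=1e-8):
    # Normalize a vector by the root-mean-square of its entries:
    # x / sqrt(mean(x^2) + eps), giving the output unit RMS.
    ms = sum(v * v for v in x) / len(x)
    scale = (ms + eps) ** -0.5
    return [v * scale for v in x]
```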

FILE: loaders.py
  class BaseModelLoader (line 20) | class BaseModelLoader:
    method load_taesd (line 22) | def load_taesd(name):
    method guess_clip_type (line 53) | def guess_clip_type(model):
    method get_model_files (line 101) | def get_model_files():
    method get_weight_options (line 107) | def get_weight_options():
    method get_clip_options (line 111) | def get_clip_options():
    method vae_list (line 115) | def vae_list():
    method process_weight_dtype (line 155) | def process_weight_dtype(self, weight_dtype):
    method load_checkpoint (line 166) | def load_checkpoint(self, model_name, output_vae, output_clip, model_o...
    method load_vae (line 203) | def load_vae(self, vae_name, ckpt_out):
  function load_clipvision (line 219) | def load_clipvision(ckpt_path):
  class FluxLoader (line 224) | class FluxLoader(BaseModelLoader):
    method INPUT_TYPES (line 226) | def INPUT_TYPES(s):
    method main (line 242) | def main(self, model_name, weight_dtype, clip_name1, clip_name2_opt, v...
  class SD35Loader (line 277) | class SD35Loader(BaseModelLoader):
    method INPUT_TYPES (line 279) | def INPUT_TYPES(s):
    method main (line 294) | def main(self, model_name, weight_dtype, clip_name1, clip_name2_opt, c...
  class RES4LYFModelLoader (line 324) | class RES4LYFModelLoader(BaseModelLoader):
    method INPUT_TYPES (line 326) | def INPUT_TYPES(s):
    method main (line 343) | def main(self, model_name, weight_dtype, clip_name1_opt, clip_name2_op...
  class LayerPatcher (line 382) | class LayerPatcher:
    method INPUT_TYPES (line 384) | def INPUT_TYPES(s):
    method get_model_patches (line 400) | def get_model_patches():
    method main (line 403) | def main(self, model, embedder, gates, last_layer, retrojector=None, d...
  function set_nested_attr (line 447) | def set_nested_attr(model, key, value, dtype):

FILE: misc_scripts/replace_metadata.py
  function extract_metadata (line 7) | def extract_metadata(image_path):
  function replace_metadata (line 12) | def replace_metadata(source_image_path, target_image_path, output_image_...
  function main (line 23) | def main():

FILE: models.py
  class PRED (line 71) | class PRED:
    method get_type (line 80) | def get_type(cls, model_sampling):
  function time_snr_shift_exponential (line 85) | def time_snr_shift_exponential(alpha, t):
  function time_snr_shift_linear (line 88) | def time_snr_shift_linear(alpha, t):
  class TorchCompileModels (line 97) | class TorchCompileModels:
    method __init__ (line 98) | def __init__(self):
    method INPUT_TYPES (line 102) | def INPUT_TYPES(s):
    method main (line 118) | def main(self,
  class ReWanPatcherAdvanced (line 199) | class ReWanPatcherAdvanced:
    method __init__ (line 200) | def __init__(self):
    method INPUT_TYPES (line 205) | def INPUT_TYPES(s):
    method main (line 222) | def main(self, model, self_attn_blocks, cross_attn_blocks, sliding_win...
  class ReWanPatcher (line 292) | class ReWanPatcher(ReWanPatcherAdvanced):
    method INPUT_TYPES (line 294) | def INPUT_TYPES(cls):
    method main (line 302) | def main(self, model, enable=True, force=False):
  class ReDoubleStreamBlockNoMask (line 311) | class ReDoubleStreamBlockNoMask(ReDoubleStreamBlock):
    method forward (line 312) | def forward(self, c, mask=None):
  class ReSingleStreamBlockNoMask (line 315) | class ReSingleStreamBlockNoMask(ReSingleStreamBlock):
    method forward (line 316) | def forward(self, c, mask=None):
  class ReFluxPatcherAdvanced (line 319) | class ReFluxPatcherAdvanced:
    method INPUT_TYPES (line 321) | def INPUT_TYPES(s):
    method main (line 336) | def main(self, model, doublestream_blocks, singlestream_blocks, style_...
  class ReFluxPatcher (line 391) | class ReFluxPatcher(ReFluxPatcherAdvanced):
    method INPUT_TYPES (line 393) | def INPUT_TYPES(cls):
    method main (line 402) | def main(self, model, style_dtype="float32", enable=True, force=False):
  class ReReduxPatcher (line 417) | class ReReduxPatcher:
    method INPUT_TYPES (line 419) | def INPUT_TYPES(s):
    method main (line 433) | def main(self, style_model, style_dtype, enable=True, force=False):
  class ReChromaDoubleStreamBlockNoMask (line 457) | class ReChromaDoubleStreamBlockNoMask(ReChromaDoubleStreamBlock):
    method forward (line 458) | def forward(self, c, mask=None):
  class ReChromaSingleStreamBlockNoMask (line 461) | class ReChromaSingleStreamBlockNoMask(ReChromaSingleStreamBlock):
    method forward (line 462) | def forward(self, c, mask=None):
  class ReChromaPatcherAdvanced (line 465) | class ReChromaPatcherAdvanced:
    method INPUT_TYPES (line 467) | def INPUT_TYPES(s):
    method main (line 482) | def main(self, model, doublestream_blocks, singlestream_blocks, style_...
  class ReChromaPatcher (line 535) | class ReChromaPatcher(ReChromaPatcherAdvanced):
    method INPUT_TYPES (line 537) | def INPUT_TYPES(cls):
    method main (line 546) | def main(self, model, style_dtype="float32", enable=True, force=False):
  class ReLTXVPatcherAdvanced (line 570) | class ReLTXVPatcherAdvanced:
    method INPUT_TYPES (line 572) | def INPUT_TYPES(s):
    method main (line 587) | def main(self, model, doublestream_blocks, singlestream_blocks, style_...
  class ReLTXVPatcher (line 640) | class ReLTXVPatcher(ReLTXVPatcherAdvanced):
    method INPUT_TYPES (line 642) | def INPUT_TYPES(cls):
    method main (line 651) | def main(self, model, style_dtype="float32", enable=True, force=False):
  class ReSDPatcherAdvanced (line 665) | class ReSDPatcherAdvanced:
    method INPUT_TYPES (line 667) | def INPUT_TYPES(s):
    method main (line 683) | def main(self, model, doublestream_blocks, singlestream_blocks, style_...
  class ReSDPatcher (line 779) | class ReSDPatcher(ReSDPatcherAdvanced):
    method INPUT_TYPES (line 781) | def INPUT_TYPES(cls):
    method main (line 790) | def main(self, model, style_dtype="float32", enable=True, force=False):
  class HDBlockDoubleNoMask (line 807) | class HDBlockDoubleNoMask(HDBlockDouble):
    method forward (line 808) | def forward(self, c, mask=None):
  class HDBlockSingleNoMask (line 811) | class HDBlockSingleNoMask(HDBlockSingle):
    method forward (line 812) | def forward(self, c, mask=None):
  class ReHiDreamPatcherAdvanced (line 816) | class ReHiDreamPatcherAdvanced:
    method INPUT_TYPES (line 818) | def INPUT_TYPES(s):
    method main (line 833) | def main(self, model, double_stream_blocks, single_stream_blocks, styl...
  class ReHiDreamPatcher (line 940) | class ReHiDreamPatcher(ReHiDreamPatcherAdvanced):
    method INPUT_TYPES (line 942) | def INPUT_TYPES(cls):
    method main (line 951) | def main(self, model, style_dtype="default", enable=True, force=False):
  class ReJointBlockNoMask (line 963) | class ReJointBlockNoMask(ReJointBlock):
    method forward (line 964) | def forward(self, c, mask=None):
  class ReSD35PatcherAdvanced (line 967) | class ReSD35PatcherAdvanced:
    method INPUT_TYPES (line 969) | def INPUT_TYPES(s):
    method main (line 983) | def main(self, model, joint_blocks, style_dtype, enable=True, force=Fa...
  class ReSD35Patcher (line 1021) | class ReSD35Patcher(ReSD35PatcherAdvanced):
    method INPUT_TYPES (line 1023) | def INPUT_TYPES(cls):
    method main (line 1032) | def main(self, model, style_dtype="float32", enable=True, force=False):
  class ReDoubleAttentionNoMask (line 1041) | class ReDoubleAttentionNoMask(ReDoubleAttention):
    method forward (line 1042) | def forward(self, c, mask=None):
  class ReSingleAttentionNoMask (line 1045) | class ReSingleAttentionNoMask(ReSingleAttention):
    method forward (line 1046) | def forward(self, c, mask=None):
  class ReAuraPatcherAdvanced (line 1049) | class ReAuraPatcherAdvanced:
    method INPUT_TYPES (line 1051) | def INPUT_TYPES(s):
    method main (line 1066) | def main(self, model, doublelayer_blocks, singlelayer_blocks, style_dt...
  class ReAuraPatcher (line 1120) | class ReAuraPatcher(ReAuraPatcherAdvanced):
    method INPUT_TYPES (line 1122) | def INPUT_TYPES(cls):
    method main (line 1131) | def main(self, model, style_dtype="float32", enable=True, force=False):
  class FluxOrthoCFGPatcher (line 1142) | class FluxOrthoCFGPatcher:
    method INPUT_TYPES (line 1144) | def INPUT_TYPES(s):
    method new_forward (line 1163) | def new_forward(self, x, timestep, context, y, guidance, control=None,...
    method main (line 1179) | def main(self, model, enable=True, ortho_T5=True, ortho_clip_L=True, z...
  class FluxGuidanceDisable (line 1195) | class FluxGuidanceDisable:
    method INPUT_TYPES (line 1197) | def INPUT_TYPES(s):
    method new_forward (line 1214) | def new_forward(self, x, timestep, context, y, guidance, control=None,...
    method main (line 1220) | def main(self, model, disable=True, zero_clip_L=True):
  class ModelSamplingAdvanced (line 1235) | class ModelSamplingAdvanced:
    method INPUT_TYPES (line 1238) | def INPUT_TYPES(s):
    method sigma_exponential (line 1251) | def sigma_exponential(self, timestep):
    method sigma_linear (line 1254) | def sigma_linear(self, timestep):
    method main (line 1257) | def main(self, model, scaling, shift):
  class ModelSamplingAdvancedResolution (line 1337) | class ModelSamplingAdvancedResolution:
    method INPUT_TYPES (line 1340) | def INPUT_TYPES(s):
    method sigma_exponential (line 1355) | def sigma_exponential(self, timestep):
    method sigma_linear (line 1358) | def sigma_linear(self, timestep):
    method main (line 1361) | def main(self, model, scaling, max_shift, base_shift, latent_image):
  class UNetSave (line 1445) | class UNetSave:
    method __init__ (line 1446) | def __init__(self):
    method INPUT_TYPES (line 1450) | def INPUT_TYPES(s):
    method save (line 1468) | def save(self, model, filename_prefix, prompt=None, extra_pnginfo=None):
  function save_checkpoint (line 1482) | def save_checkpoint(
  function sd_save_checkpoint (line 1549) | def sd_save_checkpoint(output_path, model, clip=None, vae=None, clip_vis...
  class TorchCompileModelFluxAdvanced (line 1573) | class TorchCompileModelFluxAdvanced:
    method __init__ (line 1574) | def __init__(self):
    method INPUT_TYPES (line 1578) | def INPUT_TYPES(s):
    method parse_blocks (line 1594) | def parse_blocks(self, blocks_str):
    method main (line 1605) | def main(self,
  class TorchCompileModelAura (line 1651) | class TorchCompileModelAura:
    method __init__ (line 1652) | def __init__(self):
    method INPUT_TYPES (line 1656) | def INPUT_TYPES(s):
    method main (line 1671) | def main(self,
  class TorchCompileModelSD35 (line 1703) | class TorchCompileModelSD35:
    method __init__ (line 1704) | def __init__(self):
    method INPUT_TYPES (line 1708) | def INPUT_TYPES(s):
    method main (line 1723) | def main(self,
  class ClownpileModelWanVideo (line 1754) | class ClownpileModelWanVideo:
    method __init__ (line 1755) | def __init__(self):
    method INPUT_TYPES (line 1759) | def INPUT_TYPES(s):
    method patch (line 1780) | def patch(self, model, backend, fullgraph, mode, dynamic, dynamo_cache...
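
The `time_snr_shift_exponential` / `time_snr_shift_linear` helpers at the top of models.py share their names with the sigma "shift" curves used by flow-matching models in the wider ComfyUI ecosystem. The parameterizations below are one common form and are an assumption about this repo's exact math; both map `t ∈ (0, 1]` to `(0, 1]`, and `alpha = 1` (linear) or `alpha = 0` (exponential) reduces to the identity.

```python
import math

# Hedged sketch of common flow-model sigma shift curves. Larger alpha pushes
# the schedule toward high noise levels (sigma near 1), which matters for
# high-resolution sampling.
def time_snr_shift_linear(alpha, t):
    return alpha * t / (1 + (alpha - 1) * t)

def time_snr_shift_exponential(alpha, t):
    # Shifts log-SNR by alpha; note t = 0 is outside the domain (1/t).
    return math.exp(alpha) / (math.exp(alpha) + (1 / t - 1))
```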

FILE: nodes_latents.py
  function fp_or (line 22) | def fp_or(tensor1, tensor2):
  function fp_and (line 25) | def fp_and(tensor1, tensor2):
  class AdvancedNoise (line 29) | class AdvancedNoise:
    method INPUT_TYPES (line 31) | def INPUT_TYPES(cls):
    method get_noise (line 45) | def get_noise(self, noise_seed, noise_type, alpha, k):
  class Noise_RandomNoise (line 50) | class Noise_RandomNoise:
    method __init__ (line 51) | def __init__(self, seed, noise_type, alpha, k):
    method generate_noise (line 57) | def generate_noise(self, input_latent):
  class LatentNoised (line 64) | class LatentNoised:
    method INPUT_TYPES (line 66) | def INPUT_TYPES(cls):
    method main (line 91) | def main(self,
  class LatentNoiseList (line 161) | class LatentNoiseList:
    method INPUT_TYPES (line 163) | def INPUT_TYPES(cls):
    method main (line 184) | def main(self,
  class MaskToggle (line 216) | class MaskToggle:
    method __init__ (line 217) | def __init__(self):
    method INPUT_TYPES (line 220) | def INPUT_TYPES(cls):
    method main (line 234) | def main(self, enable=True, mask=None):
  class latent_to_raw_x (line 241) | class latent_to_raw_x:
    method __init__ (line 242) | def __init__(self):
    method INPUT_TYPES (line 245) | def INPUT_TYPES(cls):
    method main (line 257) | def main(self, latent,):
  class TrimVideoLatent_state_info (line 266) | class TrimVideoLatent_state_info:
    method INPUT_TYPES (line 268) | def INPUT_TYPES(s):
    method _trim_tensor (line 279) | def _trim_tensor(tensor, trim_amount):
    method op (line 285) | def op(self, samples, trim_amount):
  class LatentUpscaleBy_state_info (line 291) | class LatentUpscaleBy_state_info:
    method INPUT_TYPES (line 295) | def INPUT_TYPES(s):
    method _upscale_tensor (line 303) | def _upscale_tensor(tensor, upscale_method, scale_by):
    method op (line 309) | def op(self, samples, upscale_method, scale_by):
  class latent_clear_state_info (line 314) | class latent_clear_state_info:
    method __init__ (line 315) | def __init__(self):
    method INPUT_TYPES (line 318) | def INPUT_TYPES(cls):
    method main (line 330) | def main(self, latent,):
  class latent_replace_state_info (line 337) | class latent_replace_state_info:
    method __init__ (line 338) | def __init__(self):
    method INPUT_TYPES (line 341) | def INPUT_TYPES(cls):
    method main (line 355) | def main(self, latent, clear_raw_x, replace_end_step):
  class latent_display_state_info (line 366) | class latent_display_state_info:
    method __init__ (line 367) | def __init__(self):
    method INPUT_TYPES (line 370) | def INPUT_TYPES(cls):
    method execute (line 382) | def execute(self, latent):
  class latent_transfer_state_info (line 422) | class latent_transfer_state_info:
    method __init__ (line 423) | def __init__(self):
    method INPUT_TYPES (line 426) | def INPUT_TYPES(cls):
    method main (line 439) | def main(self, latent_to, latent_from):
  class latent_mean_channels_from_to (line 449) | class latent_mean_channels_from_to:
    method __init__ (line 450) | def __init__(self):
    method INPUT_TYPES (line 453) | def INPUT_TYPES(cls):
    method main (line 466) | def main(self, latent_to, latent_from):
  class latent_get_channel_means (line 472) | class latent_get_channel_means:
    method __init__ (line 473) | def __init__(self):
    method INPUT_TYPES (line 476) | def INPUT_TYPES(cls):
    method main (line 488) | def main(self, latent):
  class latent_to_cuda (line 497) | class latent_to_cuda:
    method __init__ (line 498) | def __init__(self):
    method INPUT_TYPES (line 501) | def INPUT_TYPES(cls):
    method main (line 514) | def main(self, latent, to_cuda):
  class latent_batch (line 524) | class latent_batch:
    method __init__ (line 525) | def __init__(self):
    method INPUT_TYPES (line 528) | def INPUT_TYPES(cls):
    method main (line 541) | def main(self, latent, batch_size):
  class MaskFloatToBoolean (line 551) | class MaskFloatToBoolean:
    method __init__ (line 552) | def __init__(self):
    method INPUT_TYPES (line 556) | def INPUT_TYPES(cls):
    method main (line 571) | def main(self, mask=None,):
  class MaskEdge (line 578) | class MaskEdge:
    method __init__ (line 579) | def __init__(self):
    method INPUT_TYPES (line 583) | def INPUT_TYPES(cls):
    method main (line 603) | def main(self, dilation=20, mode="percent", internal=1.0, external=1.0...
  class Frame_Select_Latent_Raw (line 632) | class Frame_Select_Latent_Raw:
    method __init__ (line 633) | def __init__(self):
    method INPUT_TYPES (line 637) | def INPUT_TYPES(cls):
    method main (line 654) | def main(self, frames=None, select=0):
  class Frames_Slice_Latent_Raw (line 659) | class Frames_Slice_Latent_Raw:
    method __init__ (line 660) | def __init__(self):
    method INPUT_TYPES (line 664) | def INPUT_TYPES(cls):
    method main (line 681) | def main(self, frames=None, start=0, stop=1):
  class Frames_Concat_Latent_Raw (line 686) | class Frames_Concat_Latent_Raw:
    method __init__ (line 687) | def __init__(self):
    method INPUT_TYPES (line 691) | def INPUT_TYPES(cls):
    method main (line 707) | def main(self, frames_0, frames_1):
  class Frame_Select_Latent (line 713) | class Frame_Select_Latent:
    method __init__ (line 714) | def __init__(self):
    method INPUT_TYPES (line 718) | def INPUT_TYPES(cls):
    method main (line 735) | def main(self, frames=None, select=0):
  class Frames_Slice_Latent (line 740) | class Frames_Slice_Latent:
    method __init__ (line 741) | def __init__(self):
    method INPUT_TYPES (line 745) | def INPUT_TYPES(cls):
    method main (line 762) | def main(self, frames=None, start=0, stop=1):
  class Frames_Concat_Latent (line 767) | class Frames_Concat_Latent:
    method __init__ (line 768) | def __init__(self):
    method INPUT_TYPES (line 772) | def INPUT_TYPES(cls):
    method main (line 788) | def main(self, frames_0, frames_1):
  class Frames_Concat_Masks (line 796) | class Frames_Concat_Masks:
    method __init__ (line 797) | def __init__(self):
    method INPUT_TYPES (line 801) | def INPUT_TYPES(cls):
    method main (line 826) | def main(self, frames_0, frames_1, frames_2=None, frames_3=None, frame...
  class Frames_Masks_Uninterpolate (line 847) | class Frames_Masks_Uninterpolate:
    method __init__ (line 848) | def __init__(self):
    method INPUT_TYPES (line 852) | def INPUT_TYPES(cls):
    method main (line 868) | def main(self, raw_temporal_mask, frame_chunk_size):
  class Frames_Masks_ZeroOut (line 881) | class Frames_Masks_ZeroOut:
    method __init__ (line 882) | def __init__(self):
    method INPUT_TYPES (line 886) | def INPUT_TYPES(cls):
    method main (line 902) | def main(self, temporal_mask, zero_out_frame):
  class Frames_Latent_ReverseOrder (line 907) | class Frames_Latent_ReverseOrder:
    method __init__ (line 908) | def __init__(self):
    method INPUT_TYPES (line 912) | def INPUT_TYPES(cls):
    method main (line 927) | def main(self, frames,):
  class LatentPhaseMagnitude (line 941) | class LatentPhaseMagnitude:
    method INPUT_TYPES (line 943) | def INPUT_TYPES(cls):
    method latent_repeat (line 994) | def latent_repeat(latent, batch_size):
    method mix_latent_phase_magnitude (line 1002) | def mix_latent_phase_magnitude(latent_0,
    method main (line 1055) | def main(self,
  class LatentPhaseMagnitudeMultiply (line 1161) | class LatentPhaseMagnitudeMultiply:
    method INPUT_TYPES (line 1163) | def INPUT_TYPES(cls):
    method latent_repeat (line 1199) | def latent_repeat(latent, batch_size):
    method mix_latent_phase_magnitude (line 1207) | def mix_latent_phase_magnitude(latent_0,
    method main (line 1245) | def main(self,
  class LatentPhaseMagnitudeOffset (line 1293) | class LatentPhaseMagnitudeOffset:
    method INPUT_TYPES (line 1295) | def INPUT_TYPES(cls):
    method latent_repeat (line 1331) | def latent_repeat(latent, batch_size):
    method mix_latent_phase_magnitude (line 1339) | def mix_latent_phase_magnitude(latent_0,
    method main (line 1377) | def main(self,
  class LatentPhaseMagnitudePower (line 1425) | class LatentPhaseMagnitudePower:
    method INPUT_TYPES (line 1427) | def INPUT_TYPES(cls):
    method latent_repeat (line 1463) | def latent_repeat(latent, batch_size):
    method mix_latent_phase_magnitude (line 1471) | def mix_latent_phase_magnitude(latent_0,
    method main (line 1508) | def main(self,
  class StableCascade_StageC_VAEEncode_Exact (line 1556) | class StableCascade_StageC_VAEEncode_Exact:
    method __init__ (line 1557) | def __init__(self, device="cpu"):
    method INPUT_TYPES (line 1561) | def INPUT_TYPES(cls):
    method generate (line 1576) | def generate(self, image, vae, width, height):
  class StableCascade_StageC_VAEEncode_Exact_Tiled (line 1589) | class StableCascade_StageC_VAEEncode_Exact_Tiled:
    method __init__ (line 1590) | def __init__(self, device="cpu"):
    method INPUT_TYPES (line 1594) | def INPUT_TYPES(cls):
    method generate (line 1609) | def generate(self, image, vae, tile_size, overlap):
  function tiled_scale_multidim (line 1629) | def tiled_scale_multidim(samples,
  class EmptyLatentImageCustom (line 1677) | class EmptyLatentImageCustom:
    method __init__ (line 1678) | def __init__(self):
    method INPUT_TYPES (line 1682) | def INPUT_TYPES(cls):
    method generate (line 1700) | def generate(self,
  class EmptyLatentImage64 (line 1741) | class EmptyLatentImage64:
    method __init__ (line 1742) | def __init__(self):
    method INPUT_TYPES (line 1746) | def INPUT_TYPES(cls):
    method generate (line 1760) | def generate(self, width, height, batch_size=1):
  class LatentNoiseBatch_perlin (line 1767) | class LatentNoiseBatch_perlin:
    method __init__ (line 1768) | def __init__(self):
    method INPUT_TYPES (line 1772) | def INPUT_TYPES(cls):
    method rand_perlin_2d (line 1791) | def rand_perlin_2d(self, shape, res, fade = lambda t: 6*t**5 - 15*t**4...
    method rand_perlin_2d_octaves (line 1809) | def rand_perlin_2d_octaves(self, shape, res, octaves=1, persistence=0.5):
    method scale_tensor (line 1821) | def scale_tensor(self, x):
    method create_noisy_latents_perlin (line 1827) | def create_noisy_latents_perlin(self, seed, width, height, batch_size,...
  class LatentNoiseBatch_gaussian_channels (line 1843) | class LatentNoiseBatch_gaussian_channels:
    method INPUT_TYPES (line 1845) | def INPUT_TYPES(cls):
    method gaussian_noise_channels (line 1874) | def gaussian_noise_channels(x, mean_luminosity = -0.1, mean_cyan_red =...
    method main (line 1886) | def main(self, latent, steps, seed,
  class LatentNoiseBatch_gaussian (line 1914) | class LatentNoiseBatch_gaussian:
    method INPUT_TYPES (line 1916) | def INPUT_TYPES(cls):
    method main (line 1937) | def main(self, latent, mean, std, steps, seed, means=None, stds=None, ...
  class LatentNoiseBatch_fractal (line 1955) | class LatentNoiseBatch_fractal:
    method INPUT_TYPES (line 1957) | def INPUT_TYPES(cls):
    method main (line 1978) | def main(self,
  class LatentBatch_channels (line 2009) | class LatentBatch_channels:
    method INPUT_TYPES (line 2011) | def INPUT_TYPES(cls):
    method latent_channels_multiply (line 2035) | def latent_channels_multiply(x, luminosity = -0.1, cyan_red = 0.0, lim...
    method latent_channels_offset (line 2045) | def latent_channels_offset(x, luminosity = -0.1, cyan_red = 0.0, lime_...
    method latent_channels_power (line 2055) | def latent_channels_power(x, luminosity = -0.1, cyan_red = 0.0, lime_p...
    method main (line 2064) | def main(self,
  class LatentBatch_channels_16 (line 2099) | class LatentBatch_channels_16:
    method INPUT_TYPES (line 2101) | def INPUT_TYPES(cls):
    method latent_channels_multiply (line 2150) | def latent_channels_multiply(x, chan_1 = 0.0, chan_2 = 0.0, chan_3 = 0...
    method latent_channels_offset (line 2172) | def latent_channels_offset(x, chan_1 = 0.0, chan_2 = 0.0, chan_3 = 0.0...
    method latent_channels_power (line 2194) | def latent_channels_power(x, chan_1 = 0.0, chan_2 = 0.0, chan_3 = 0.0,...
    method main (line 2215) | def main(self, latent, mode,
  class latent_normalize_channels (line 2251) | class latent_normalize_channels:
    method __init__ (line 2252) | def __init__(self):
    method INPUT_TYPES (line 2255) | def INPUT_TYPES(cls):
    method main (line 2269) | def main(self, latent, mode, operation):
  class latent_channelwise_match (line 2301) | class latent_channelwise_match:
    method __init__ (line 2302) | def __init__(self):
    method INPUT_TYPES (line 2305) | def INPUT_TYPES(cls):
    method main (line 2324) | def main(self,
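
The signature of `LatentNoiseBatch_perlin.rand_perlin_2d` shows its interpolation kernel inline: `fade = lambda t: 6*t**5 - 15*t**4 + 10*t**3`. That is Perlin's quintic smoothstep, and its defining properties can be checked directly:

```python
# Perlin's quintic smoothstep, copied from the rand_perlin_2d default argument.
# It fixes 0 -> 0 and 1 -> 1 with zero first derivative at both endpoints, so
# gradient-noise lattice cells blend without visible seams.
fade = lambda t: 6 * t**5 - 15 * t**4 + 10 * t**3

def fade_prime(t):
    # Analytic derivative, used here only to verify the flat endpoints.
    return 30 * t**4 - 60 * t**3 + 30 * t**2
```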

FILE: nodes_misc.py
  class SetImageSize (line 8) | class SetImageSize:
    method INPUT_TYPES (line 10) | def INPUT_TYPES(cls):
    method main (line 27) | def main(self, width, height):
  class SetImageSizeWithScale (line 31) | class SetImageSizeWithScale:
    method INPUT_TYPES (line 33) | def INPUT_TYPES(cls):
    method main (line 51) | def main(self, width, height, scale_by):
  class TextBox1 (line 55) | class TextBox1:
    method INPUT_TYPES (line 57) | def INPUT_TYPES(cls):
    method main (line 73) | def main(self, text1):
  class TextBox2 (line 77) | class TextBox2:
    method INPUT_TYPES (line 79) | def INPUT_TYPES(cls):
    method main (line 96) | def main(self, text1, text2,):
  class TextBox3 (line 100) | class TextBox3:
    method INPUT_TYPES (line 102) | def INPUT_TYPES(cls):
    method main (line 120) | def main(self, text1, text2, text3 ):
  class TextLoadFile (line 126) | class TextLoadFile:
    method INPUT_TYPES (line 128) | def INPUT_TYPES(cls):
    method main (line 144) | def main(self, text_file):
  class TextShuffle (line 156) | class TextShuffle:
    method INPUT_TYPES (line 158) | def INPUT_TYPES(cls):
    method main (line 174) | def main(self, text, separator, seed, ):
  function truncate_tokens (line 185) | def truncate_tokens(text, truncate_to, clip, clip_type, stop_token):
  class TextShuffleAndTruncate (line 226) | class TextShuffleAndTruncate:
    method INPUT_TYPES (line 228) | def INPUT_TYPES(cls):
    method main (line 247) | def main(self, text, separator, truncate_words_to, truncate_tokens_to,...
  class TextTruncateTokens (line 271) | class TextTruncateTokens:
    method INPUT_TYPES (line 273) | def INPUT_TYPES(cls):
    method main (line 292) | def main(self, text, truncate_words_to, truncate_clip_l_to, truncate_c...
  class TextConcatenate (line 316) | class TextConcatenate:
    method INPUT_TYPES (line 319) | def INPUT_TYPES(cls):
    method main (line 335) | def main(self, text_1="", text_2="", separator=""):
  class TextBoxConcatenate (line 341) | class TextBoxConcatenate:
    method INPUT_TYPES (line 344) | def INPUT_TYPES(cls):
    method main (line 363) | def main(self, text="", text_external="", separator="", mode="append_e...
  class SeedGenerator (line 373) | class SeedGenerator:
    method INPUT_TYPES (line 375) | def INPUT_TYPES(cls):
    method main (line 389) | def main(self, seed,):
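
Nearly every class in this index follows the standard ComfyUI custom-node contract: a classmethod `INPUT_TYPES` describing widgets, plus `RETURN_TYPES` / `FUNCTION` / `CATEGORY` class attributes and an entry-point method returning a tuple. A minimal sketch shaped like `TextConcatenate` above (the widget defaults and category string are assumptions):

```python
# Hedged sketch of the ComfyUI node contract used throughout this repo.
class TextConcatenateSketch:
    @classmethod
    def INPUT_TYPES(cls):
        # ComfyUI reads this schema to build the node's UI widgets.
        return {
            "required": {
                "text_1":    ("STRING", {"default": "", "multiline": True}),
                "text_2":    ("STRING", {"default": "", "multiline": True}),
                "separator": ("STRING", {"default": ""}),
            }
        }

    RETURN_TYPES = ("STRING",)
    FUNCTION = "main"          # name of the method ComfyUI invokes
    CATEGORY = "RES4LYF/text"  # menu placement (assumed)

    def main(self, text_1="", text_2="", separator=""):
        return (text_1 + separator + text_2,)  # outputs are always tuples
```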

FILE: nodes_precision.py
  class set_precision (line 5) | class set_precision:
    method __init__ (line 6) | def __init__(self):
    method INPUT_TYPES (line 9) | def INPUT_TYPES(cls):
    method main (line 24) | def main(self,
  class set_precision_universal (line 47) | class set_precision_universal:
    method __init__ (line 48) | def __init__(self):
    method INPUT_TYPES (line 51) | def INPUT_TYPES(cls):
    method main (line 79) | def main(self,
  class set_precision_advanced (line 123) | class set_precision_advanced:
    method __init__ (line 124) | def __init__(self):
    method INPUT_TYPES (line 127) | def INPUT_TYPES(cls):
    method main (line 149) | def main(self,

FILE: res4lyf.py
  function time_snr_shift_RES4LYF (line 24) | def time_snr_shift_RES4LYF(alpha, t):
  function get_display_sampler_category (line 33) | def get_display_sampler_category():
  function update_settings (line 38) | async def update_settings(request):
  function log_message (line 68) | async def log_message(request):
  function calculate_sigmas_RES4LYF (line 83) | def calculate_sigmas_RES4LYF(model_sampling, scheduler_name, steps):
  function init (line 90) | def init(check_imports=None):
  function save_config_value (line 116) | def save_config_value(key, value):
  function get_config_value (line 131) | def get_config_value(key, default=None, throw=False):
  function is_debug_logging_enabled (line 145) | def is_debug_logging_enabled():
  function RESplain (line 149) | def RESplain(*args, debug='info'):
  function get_ext_dir (line 170) | def get_ext_dir(subpath=None, mkdir=False):
  function merge_default_config (line 181) | def merge_default_config(config, default_config):
  function get_extension_config (line 189) | def get_extension_config(reload=False):
  function get_comfy_dir (line 217) | def get_comfy_dir(subpath=None, mkdir=False):
  function get_web_ext_dir (line 229) | def get_web_ext_dir():
  function link_js (line 239) | def link_js(src, dst):
  function is_junction (line 258) | def is_junction(path):
  function install_js (line 267) | def install_js():
  function should_install_js (line 302) | def should_install_js():
  function get_async_loop (line 305) | def get_async_loop():
  function get_http_session (line 315) | def get_http_session():
  function download (line 320) | async def download(url, stream, update_callback=None, session=None):
  function download_to_file (line 347) | async def download_to_file(url, destination, update_callback=None, is_ex...
  function wait_for_async (line 354) | def wait_for_async(async_fn, loop=None):
  function update_node_status (line 373) | def update_node_status(client_id, node, text, progress=None):
  function update_node_status_async (line 387) | async def update_node_status_async(client_id, node, text, progress=None):
  function get_config_value (line 401) | def get_config_value(key, default=None, throw=False):
  function is_inside_dir (line 415) | def is_inside_dir(root_dir, check_path):
  function get_child_dir (line 422) | def get_child_dir(root_dir, child_path, throw_if_outside=True):
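
Among the extension plumbing in res4lyf.py, `merge_default_config(config, default_config)` has a conventional job: back-fill missing keys from the bundled defaults without clobbering user values. The recursive body below is a sketch under that assumption, not the repo's verified implementation.

```python
# Hedged sketch: fill in any keys missing from `config` using
# `default_config`, recursing into nested dicts; existing user values win.
def merge_default_config(config, default_config):
    for key, default in default_config.items():
        if key not in config:
            config[key] = default
        elif isinstance(config[key], dict) and isinstance(default, dict):
            merge_default_config(config[key], default)
    return config
```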

FILE: rk_method_beta.py
  function get_data_from_step (line 20) | def get_data_from_step   (x:Tensor, x_next:Tensor, sigma:Tensor, sigma_n...
  function get_epsilon_from_step (line 24) | def get_epsilon_from_step(x:Tensor, x_next:Tensor, sigma:Tensor, sigma_n...
  class RK_Method_Beta (line 30) | class RK_Method_Beta:
    method __init__ (line 31) | def __init__(self,
    method is_exponential (line 95) | def is_exponential(rk_type:str) -> bool:
    method create (line 109) | def create(model,
    method __call__ (line 124) | def __call__(self):
    method model_epsilon (line 127) | def model_epsilon(self, x:Tensor, sigma:Tensor, **extra_args) -> Tuple...
    method model_denoised (line 134) | def model_denoised(self, x:Tensor, sigma:Tensor, **extra_args) -> Tensor:
    method update_transformer_options (line 253) | def update_transformer_options(self,
    method set_coeff (line 260) | def set_coeff(self,
    method reorder_tableau (line 313) | def reorder_tableau(self, indices:list[int]) -> None:
    method update_substep (line 323) | def update_substep(self,
    method a_k_einsum (line 371) | def a_k_einsum(self, row:int, k     :Tensor) -> Tensor:
    method b_k_einsum (line 374) | def b_k_einsum(self, row:int, k     :Tensor) -> Tensor:
    method u_k_einsum (line 377) | def u_k_einsum(self, row:int, k_prev:Tensor) -> Tensor:
    method v_k_einsum (line 380) | def v_k_einsum(self, row:int, k_prev:Tensor) -> Tensor:
    method zum (line 385) | def zum(self, row:int, k:Tensor, k_prev:Tensor=None,) -> Tensor:
    method zum_tableau (line 392) | def zum_tableau(self,  k:Tensor, k_prev:Tensor=None,) -> Tensor:
    method init_cfg_channelwise (line 399) | def init_cfg_channelwise(self, x:Tensor, cfg_cw:float=1.0, **extra_arg...
    method calc_cfg_channelwise (line 411) | def calc_cfg_channelwise(self, denoised:Tensor) -> Tensor:
    method calculate_res_2m_step (line 427) | def calculate_res_2m_step(
    method calculate_res_3m_step (line 466) | def calculate_res_3m_step(
    method swap_rk_type_at_step_or_threshold (line 510) | def swap_rk_type_at_step_or_threshold(self,
    method bong_iter (line 580) | def bong_iter(self,
    method newton_iter (line 651) | def newton_iter(self,
  class RK_Method_Exponential (line 764) | class RK_Method_Exponential(RK_Method_Beta):
    method __init__ (line 765) | def __init__(self,
    method alpha_fn (line 788) | def alpha_fn(neg_h:Tensor) -> Tensor:
    method sigma_fn (line 792) | def sigma_fn(t:Tensor) -> Tensor:
    method t_fn (line 796) | def t_fn(sigma:Tensor) -> Tensor:
    method h_fn (line 800) | def h_fn(sigma_down:Tensor, sigma:Tensor) -> Tensor:
    method __call__ (line 803) | def __call__(self,
    method get_epsilon (line 832) | def get_epsilon(self,
    method get_epsilon_anchored (line 851) | def get_epsilon_anchored(self, x_0:Tensor, denoised:Tensor, sigma:Tens...
    method get_guide_epsilon (line 856) | def get_guide_epsilon(self,
  class RK_Method_Linear (line 887) | class RK_Method_Linear(RK_Method_Beta):
    method __init__ (line 888) | def __init__(self,
    method alpha_fn (line 910) | def alpha_fn(neg_h:Tensor) -> Tensor:
    method sigma_fn (line 914) | def sigma_fn(t:Tensor) -> Tensor:
    method t_fn (line 918) | def t_fn(sigma:Tensor) -> Tensor:
    method h_fn (line 922) | def h_fn(sigma_down:Tensor, sigma:Tensor) -> Tensor:
    method __call__ (line 925) | def __call__(self,
    method get_epsilon (line 950) | def get_epsilon(self,
    method get_epsilon_anchored (line 965) | def get_epsilon_anchored(self, x_0:Tensor, denoised:Tensor, sigma:Tens...
    method get_guide_epsilon (line 970) | def get_guide_epsilon(self,
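
`get_data_from_step` and `get_epsilon_from_step` at the top of rk_method_beta.py both take `(x, x_next, sigma, sigma_next)`. Under the usual EPS parameterization `x = denoised + sigma * eps`, one step determines both unknowns, since the two endpoints share a `(denoised, eps)` pair. The algebra below is a hedged reconstruction of what these helpers plausibly compute:

```python
# If x = denoised + sigma * eps and x_next = denoised + sigma_next * eps,
# subtracting the two equations isolates eps, and back-substitution gives
# denoised ("data"). Works elementwise on tensors; shown here with floats.
def get_epsilon_from_step(x, x_next, sigma, sigma_next):
    return (x_next - x) / (sigma_next - sigma)

def get_data_from_step(x, x_next, sigma, sigma_next):
    return (sigma_next * x - sigma * x_next) / (sigma_next - sigma)
```

A round-trip check: build a step from a known `(denoised, eps)` pair and recover both from the endpoints.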

FILE: samplers_extensions.py
  class ClownSamplerSelector_Beta (line 20) | class ClownSamplerSelector_Beta:
    method INPUT_TYPES (line 22) | def INPUT_TYPES(cls):
    method main (line 37) | def main(self,
  class ClownOptions_SDE_Beta (line 49) | class ClownOptions_SDE_Beta:
    method INPUT_TYPES (line 51) | def INPUT_TYPES(cls):
    method main (line 75) | def main(self,
  class ClownOptions_StepSize_Beta (line 121) | class ClownOptions_StepSize_Beta:
    method INPUT_TYPES (line 123) | def INPUT_TYPES(cls):
    method main (line 142) | def main(self,
  class DetailBoostOptions (line 162) | class DetailBoostOptions:
  class ClownOptions_DetailBoost_Beta (line 180) | class ClownOptions_DetailBoost_Beta:
    method INPUT_TYPES (line 182) | def INPUT_TYPES(cls):
    method main (line 212) | def main(self,
  class ClownOptions_SigmaScaling_Beta (line 311) | class ClownOptions_SigmaScaling_Beta:
    method INPUT_TYPES (line 313) | def INPUT_TYPES(cls):
    method main (line 339) | def main(self,
  class ClownOptions_Momentum_Beta (line 378) | class ClownOptions_Momentum_Beta:
    method INPUT_TYPES (line 380) | def INPUT_TYPES(cls):
    method main (line 396) | def main(self,
  class ClownOptions_ImplicitSteps_Beta (line 409) | class ClownOptions_ImplicitSteps_Beta:
    method INPUT_TYPES (line 411) | def INPUT_TYPES(cls):
    method main (line 430) | def main(self,
  class ClownOptions_Cycles_Beta (line 449) | class ClownOptions_Cycles_Beta:
    method INPUT_TYPES (line 451) | def INPUT_TYPES(cls):
    method main (line 471) | def main(self,
  class SharkOptions_StartStep_Beta (line 493) | class SharkOptions_StartStep_Beta:
    method INPUT_TYPES (line 495) | def INPUT_TYPES(cls):
    method main (line 512) | def main(self,
  class ClownOptions_Tile_Beta (line 526) | class ClownOptions_Tile_Beta:
    method INPUT_TYPES (line 528) | def INPUT_TYPES(cls):
    method main (line 545) | def main(self,
  class ClownOptions_Tile_Advanced_Beta (line 561) | class ClownOptions_Tile_Advanced_Beta:
    method INPUT_TYPES (line 563) | def INPUT_TYPES(cls):
    method main (line 579) | def main(self,
  class ClownOptions_ExtraOptions_Beta (line 594) | class ClownOptions_ExtraOptions_Beta:
    method INPUT_TYPES (line 596) | def INPUT_TYPES(cls):
    method main (line 612) | def main(self,
  class ClownOptions_Automation_Beta (line 625) | class ClownOptions_Automation_Beta:
    method INPUT_TYPES (line 627) | def INPUT_TYPES(cls):
    method main (line 644) | def main(self,
  class SharkOptions_GuideCond_Beta (line 675) | class SharkOptions_GuideCond_Beta:
    method INPUT_TYPES (line 677) | def INPUT_TYPES(cls):
    method main (line 691) | def main(self,
  class SharkOptions_GuideConds_Beta (line 713) | class SharkOptions_GuideConds_Beta:
    method INPUT_TYPES (line 715) | def INPUT_TYPES(cls):
    method main (line 732) | def main(self,
  class SharkOptions_Beta (line 761) | class SharkOptions_Beta:
    method INPUT_TYPES (line 763) | def INPUT_TYPES(cls):
    method main (line 781) | def main(self,
  class SharkOptions_UltraCascade_Latent_Beta (line 801) | class SharkOptions_UltraCascade_Latent_Beta:
    method INPUT_TYPES (line 803) | def INPUT_TYPES(cls):
    method main (line 819) | def main(self,
  class ClownOptions_SwapSampler_Beta (line 835) | class ClownOptions_SwapSampler_Beta:
    method INPUT_TYPES (line 837) | def INPUT_TYPES(cls):
    method main (line 855) | def main(self,
  class ClownOptions_SDE_Mask_Beta (line 880) | class ClownOptions_SDE_Mask_Beta:
    method INPUT_TYPES (line 882) | def INPUT_TYPES(cls):
    method main (line 900) | def main(self,
  class ClownGuide_Mean_Beta (line 922) | class ClownGuide_Mean_Beta:
    method INPUT_TYPES (line 924) | def INPUT_TYPES(cls):
    method main (line 948) | def main(self,
  class ClownGuide_Style_Beta (line 1003) | class ClownGuide_Style_Beta:
    method INPUT_TYPES (line 1005) | def INPUT_TYPES(cls):
    method main (line 1034) | def main(self,
  class ClownGuide_AdaIN_MMDiT_Beta (line 1110) | class ClownGuide_AdaIN_MMDiT_Beta:
    method INPUT_TYPES (line 1112) | def INPUT_TYPES(cls):
    method main (line 1139) | def main(self,
  class ClownGuide_AttnInj_MMDiT_Beta (line 1248) | class ClownGuide_AttnInj_MMDiT_Beta:
    method INPUT_TYPES (line 1250) | def INPUT_TYPES(cls):
    method main (line 1294) | def main(self,
  class ClownGuide_Beta (line 1437) | class ClownGuide_Beta:
    method INPUT_TYPES (line 1439) | def INPUT_TYPES(cls):
    method main (line 1465) | def main(self,
  class ClownGuides_Beta (line 1543) | class ClownGuides_Beta:
    method INPUT_TYPES (line 1545) | def INPUT_TYPES(cls):
    method main (line 1578) | def main(self,
  class ClownGuidesAB_Beta (line 1692) | class ClownGuidesAB_Beta:
    method INPUT_TYPES (line 1694) | def INPUT_TYPES(cls):
    method main (line 1728) | def main(self,
  class ClownOptions_Combine (line 1834) | class ClownOptions_Combine:
    method INPUT_TYPES (line 1836) | def INPUT_TYPES(s):
    method main (line 1848) | def main(self, options, **kwargs):
  class ClownOptions_Frameweights (line 1854) | class ClownOptions_Frameweights:
    method INPUT_TYPES (line 1856) | def INPUT_TYPES(s):
    method main (line 1877) | def main(self,
  class SharkOptions_GuiderInput (line 1912) | class SharkOptions_GuiderInput:
    method INPUT_TYPES (line 1914) | def INPUT_TYPES(s):
    method main (line 1927) | def main(self, guider, options=None):
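Every class in samplers_extensions.py follows the same shape: an INPUT_TYPES classmethod plus a main entry point. That is the standard ComfyUI custom-node pattern, sketched minimally below. The class name, socket names, and the "OPTIONS" type here are illustrative placeholders, not identifiers from the repo.

```python
# Minimal sketch of the ComfyUI custom-node pattern these classes follow:
# INPUT_TYPES() describes sockets/widgets, class attributes declare outputs,
# and the node's work happens in main(). Names below are hypothetical.

class ExampleOptionsNode:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "eta": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}),
            },
            "optional": {
                "options": ("OPTIONS",),
            },
        }

    RETURN_TYPES = ("OPTIONS",)
    FUNCTION     = "main"
    CATEGORY     = "RES4LYF/examples"

    def main(self, eta, options=None):
        # Option nodes of this style typically merge their settings into a
        # dict and pass it downstream as a single-element output tuple.
        options = dict(options or {})
        options["eta"] = eta
        return (options,)
```

Chaining several such nodes accumulates settings in one dict, which is why the sampler can accept a single "options" input.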

FILE: sd/attention.py
  function get_attn_precision (line 42) | def get_attn_precision(attn_precision, current_dtype):
  function exists (line 50) | def exists(val):
  function default (line 54) | def default(val, d):
  class GEGLU (line 61) | class GEGLU(nn.Module):
    method __init__ (line 62) | def __init__(self, dim_in, dim_out, dtype=None, device=None, operation...
    method forward (line 66) | def forward(self, x):
  class FeedForward (line 71) | class FeedForward(nn.Module):
    method __init__ (line 72) | def __init__(self, dim, dim_out=None, mult=4, glu=False, dropout=0., d...
    method forward (line 87) | def forward(self, x):
  function Normalize (line 90) | def Normalize(in_channels, dtype=None, device=None):
  function attention_basic (line 93) | def attention_basic(q, k, v, heads, mask=None, attn_precision=None, skip...
  function attention_sub_quad (line 162) | def attention_sub_quad(query, key, value, heads, mask=None, attn_precisi...
  function attention_split (line 232) | def attention_split(q, k, v, heads, mask=None, attn_precision=None, skip...
  function attention_xformers (line 361) | def attention_xformers(q, k, v, heads, mask=None, attn_precision=None, s...
  function attention_pytorch (line 430) | def attention_pytorch(q, k, v, heads, mask=None, attn_precision=None, sk...
  function attention_sage (line 473) | def attention_sage(q, k, v, heads, mask=None, attn_precision=None, skip_...
  function flash_attn_wrapper (line 520) | def flash_attn_wrapper(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
  function flash_attn_fake (line 526) | def flash_attn_fake(q, k, v, dropout_p=0.0, causal=False):
  function flash_attn_wrapper (line 532) | def flash_attn_wrapper(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
  function attention_flash (line 537) | def attention_flash(q, k, v, heads, mask=None, attn_precision=None, skip...
  function optimized_attention_for_device (line 599) | def optimized_attention_for_device(device, mask=False, small_input=False):
  class ReCrossAttention (line 615) | class ReCrossAttention(nn.Module):
    method __init__ (line 616) | def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, ...
    method forward (line 631) | def forward(self, x, context=None, value=None, mask=None, style_block=...
  class ReBasicTransformerBlock (line 658) | class ReBasicTransformerBlock(nn.Module):
    method __init__ (line 659) | def __init__(self, dim, n_heads, d_head, dropout=0., context_dim=None,...
    method forward (line 699) | def forward(self, x, context=None, transformer_options={}, style_block...
  class ReSpatialTransformer (line 851) | class ReSpatialTransformer(nn.Module):
    method __init__ (line 860) | def __init__(self, in_channels, n_heads, d_head,
    method forward (line 893) | def forward(self, x, context=None, style_block=None, transformer_optio...
  class SpatialVideoTransformer (line 925) | class SpatialVideoTransformer(ReSpatialTransformer):
    method __init__ (line 926) | def __init__(
    method forward (line 1012) | def forward(
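The attention_* backends listed above (basic, sub_quad, split, xformers, pytorch, sage, flash) are interchangeable implementations of the same scaled dot-product attention; they differ in memory layout and kernel choice, not in the math. A single-head, framework-free sketch of that shared computation:

```python
import math

def scaled_dot_product_attention(q, k, v):
    # q, k, v: [seq, dim] lists of floats. Computes softmax(q k^T / sqrt(d)) v,
    # the quantity all the attention_* backends above produce.
    dim = len(q[0])
    scale = 1.0 / math.sqrt(dim)
    out = []
    for qi in q:
        scores = [sum(a * b for a, b in zip(qi, kj)) * scale for kj in k]
        m = max(scores)                      # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append([sum(w * vj[d] for w, vj in zip(weights, v))
                    for d in range(len(v[0]))])
    return out
```

With near-orthogonal, high-magnitude queries and keys the softmax saturates and each output row approaches the matching value row.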

FILE: sd/openaimodel.py
  function forward_timestep_embed (line 31) | def forward_timestep_embed(ts, x, emb, context=None, transformer_options...
  class ReResBlock (line 69) | class ReResBlock(TimestepBlock):
    method __init__ (line 85) | def __init__(
    method forward (line 165) | def forward(self, x, emb, style_block=None):
    method _forward (line 177) | def _forward(self, x, emb, style_block=None):
  class Timestep (line 241) | class Timestep(nn.Module):
    method __init__ (line 242) | def __init__(self, dim):
    method forward (line 246) | def forward(self, t):
  function apply_control (line 249) | def apply_control(h, control, name):
  class ReUNetModel (line 260) | class ReUNetModel(nn.Module):
    method __init__ (line 286) | def __init__(
    method forward (line 699) | def forward(self, x, timesteps=None, context=None, y=None, control=Non...
    method _forward (line 706) | def _forward(self, x, timesteps=None, context=None, y=None, control=No...
  function clone_inputs_unsafe (line 1443) | def clone_inputs_unsafe(*args, index: int=None):
  function clone_inputs (line 1451) | def clone_inputs(*args, index: int = None):

FILE: sd35/mmdit.py
  function default (line 29) | def default(x, y):
  class Mlp (line 34) | class Mlp(nn.Module):
    method __init__ (line 37) | def __init__(
    method forward (line 64) | def forward(self, x):
  class PatchEmbed (line 73) | class PatchEmbed(nn.Module):
    method __init__ (line 78) | def __init__(
    method forward (line 116) | def forward(self, x):
  function modulate (line 125) | def modulate(x, shift, scale):
  function get_2d_sincos_pos_embed (line 136) | def get_2d_sincos_pos_embed(
  function get_2d_sincos_pos_embed_from_grid (line 167) | def get_2d_sincos_pos_embed_from_grid(embed_dim, grid):
  function get_1d_sincos_pos_embed_from_grid (line 178) | def get_1d_sincos_pos_embed_from_grid(embed_dim, pos):
  function get_1d_sincos_pos_embed_from_grid_torch (line 198) | def get_1d_sincos_pos_embed_from_grid_torch(embed_dim, pos, device=None,...
  function get_2d_sincos_pos_embed_torch (line 209) | def get_2d_sincos_pos_embed_torch(embed_dim, w, h, val_center=7.5, val_m...
  class TimestepEmbedder (line 225) | class TimestepEmbedder(nn.Module):
    method __init__ (line 230) | def __init__(self, hidden_size, frequency_embedding_size=256, dtype=No...
    method forward (line 239) | def forward(self, t, dtype, **kwargs):
  class VectorEmbedder (line 245) | class VectorEmbedder(nn.Module):
    method __init__ (line 250) | def __init__(self, input_dim: int, hidden_size: int, dtype=None, devic...
    method forward (line 258) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  function split_qkv (line 268) | def split_qkv(qkv, head_dim):
  class SelfAttention (line 273) | class SelfAttention(nn.Module):
    method __init__ (line 276) | def __init__(
    method pre_attention (line 315) | def pre_attention(self, x: torch.Tensor) -> torch.Tensor:
    method post_attention (line 323) | def post_attention(self, x: torch.Tensor) -> torch.Tensor:
    method forward (line 329) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class RMSNorm (line 338) | class RMSNorm(torch.nn.Module):
    method __init__ (line 339) | def __init__(
    method forward (line 359) | def forward(self, x):
  class SwiGLUFeedForward (line 364) | class SwiGLUFeedForward(nn.Module):
    method __init__ (line 365) | def __init__(
    method forward (line 398) | def forward(self, x):
  class DismantledBlock (line 402) | class DismantledBlock(nn.Module):
    method __init__ (line 409) | def __init__(
    method pre_attention (line 502) | def pre_attention(self, x: torch.Tensor, c: torch.Tensor) -> torch.Ten...
    method post_attention (line 546) | def post_attention(self, attn, x, gate_msa, shift_mlp, scale_mlp, gate...
    method pre_attention_x (line 554) | def pre_attention_x(self, x: torch.Tensor, c: torch.Tensor) -> torch.T...
    method post_attention_x (line 579) | def post_attention_x(self, attn, attn2, x, gate_msa, shift_mlp, scale_...
    method forward (line 592) | def forward(self, x: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
  function block_mixing (line 614) | def block_mixing(*args, use_checkpoint=True, **kwargs):
  function _block_mixing (line 623) | def _block_mixing(context, x, context_block, x_block, c, mask=None):
  class ReJointBlock (line 670) | class ReJointBlock(nn.Module):
    method __init__ (line 673) | def __init__(
    method forward (line 685) | def forward(self, *args, **kwargs):  # context_block, x_block are Dism...
  class FinalLayer (line 691) | class FinalLayer(nn.Module):
    method __init__ (line 696) | def __init__(
    method forward (line 717) | def forward(self, x: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
  class SelfAttentionContext (line 723) | class SelfAttentionContext(nn.Module):
    method __init__ (line 724) | def __init__(self, dim, heads=8, dim_head=64, dtype=None, device=None,...
    method forward (line 736) | def forward(self, x):
  class ContextProcessorBlock (line 742) | class ContextProcessorBlock(nn.Module):
    method __init__ (line 743) | def __init__(self, context_size, dtype=None, device=None, operations=N...
    method forward (line 750) | def forward(self, x):
  class ContextProcessor (line 755) | class ContextProcessor(nn.Module):
    method __init__ (line 756) | def __init__(self, context_size, num_layers, dtype=None, device=None, ...
    method forward (line 761) | def forward(self, x):
  class MMDiT (line 766) | class MMDiT(nn.Module):
    method __init__ (line 771) | def __init__(
    method cropped_pos_embed (line 906) | def cropped_pos_embed(self, hw, device=None):
    method unpatchify (line 934) | def unpatchify(self, x, hw=None):
    method forward_core_with_concat (line 957) | def forward_core_with_concat(
    method forward (line 1040) | def forward(
  class ReOpenAISignatureMMDITWrapper (line 1708) | class ReOpenAISignatureMMDITWrapper(MMDiT):
    method forward (line 1709) | def forward(
  function adain_seq_inplace (line 1723) | def adain_seq_inplace(content: torch.Tensor, style: torch.Tensor, eps: f...
  function adain_seq (line 1734) | def adain_seq(content: torch.Tensor, style: torch.Tensor, eps: float = 1...
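The adain_seq / adain_seq_inplace functions at the end of this file suggest adaptive instance normalization applied along a token sequence: re-standardize each content channel, then rescale it to the style channel's mean and standard deviation. A framework-free sketch under that assumption (the repo versions operate on torch tensors and may normalize over different axes):

```python
import math

def adain_seq(content, style, eps=1e-6):
    # content, style: [seq, dim] lists of per-token features.
    # Classic AdaIN recipe, written without torch for illustration.
    dim = len(content[0])
    out = [row[:] for row in content]
    for d in range(dim):
        c_col = [row[d] for row in content]
        s_col = [row[d] for row in style]
        c_mu = sum(c_col) / len(c_col)
        s_mu = sum(s_col) / len(s_col)
        c_sd = math.sqrt(sum((v - c_mu) ** 2 for v in c_col) / len(c_col) + eps)
        s_sd = math.sqrt(sum((v - s_mu) ** 2 for v in s_col) / len(s_col) + eps)
        for i, row in enumerate(out):
            # Whiten against content stats, recolor with style stats.
            row[d] = (content[i][d] - c_mu) / c_sd * s_sd + s_mu
    return out
```

After the transform, each channel of the content carries the style's first- and second-order statistics while keeping its own relative structure.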

FILE: sigmas.py
  function rescale_linear (line 23) | def rescale_linear(input, input_min, input_max, output_min, output_max):
  class set_precision_sigmas (line 27) | class set_precision_sigmas:
    method __init__ (line 28) | def __init__(self):
    method INPUT_TYPES (line 31) | def INPUT_TYPES(s):
    method main (line 46) | def main(self, precision="32", sigmas=None, set_default=False):
  class SimpleInterpolator (line 63) | class SimpleInterpolator(nn.Module):
    method __init__ (line 64) | def __init__(self):
    method forward (line 74) | def forward(self, x):
  function train_interpolator (line 77) | def train_interpolator(model, sigma_schedule, steps, epochs=5000, lr=0.01):
  function interpolate_sigma_schedule_model (line 101) | def interpolate_sigma_schedule_model(sigma_schedule, target_steps):
  class sigmas_interpolate (line 121) | class sigmas_interpolate:
    method __init__ (line 122) | def __init__(self):
    method INPUT_TYPES (line 126) | def INPUT_TYPES(s):
    method interpolate_sigma_schedule_poly (line 143) | def interpolate_sigma_schedule_poly(self, sigma_schedule, target_steps):
    method interpolate_sigma_schedule_constrained (line 162) | def interpolate_sigma_schedule_constrained(self, sigma_schedule, targe...
    method interpolate_sigma_schedule_exp (line 181) | def interpolate_sigma_schedule_exp(self, sigma_schedule, target_steps):
    method interpolate_sigma_schedule_power (line 202) | def interpolate_sigma_schedule_power(self, sigma_schedule, target_steps):
    method interpolate_sigma_schedule_linear (line 224) | def interpolate_sigma_schedule_linear(self, sigma_schedule, target_ste...
    method interpolate_sigma_schedule_nearest (line 227) | def interpolate_sigma_schedule_nearest(self, sigma_schedule, target_st...
    method interpolate_nearest_neighbor (line 230) | def interpolate_nearest_neighbor(self, sigma_schedule, target_steps):
    method main (line 244) | def main(self, sigmas_in, output_length, mode, order, rescale_after=Tr...
  class sigmas_noise_inversion (line 271) | class sigmas_noise_inversion:
    method __init__ (line 274) | def __init__(self):
    method INPUT_TYPES (line 278) | def INPUT_TYPES(s):
    method main (line 291) | def main(self, sigmas):
  function compute_sigma_next_variance_floor (line 304) | def compute_sigma_next_variance_floor(sigma):
  class sigmas_variance_floor (line 307) | class sigmas_variance_floor:
    method __init__ (line 308) | def __init__(self):
    method INPUT_TYPES (line 312) | def INPUT_TYPES(s):
    method main (line 326) | def main(self, sigmas):
  class sigmas_from_text (line 338) | class sigmas_from_text:
    method __init__ (line 339) | def __init__(self):
    method INPUT_TYPES (line 343) | def INPUT_TYPES(s):
    method main (line 355) | def main(self, text):
  class sigmas_concatenate (line 365) | class sigmas_concatenate:
    method __init__ (line 366) | def __init__(self):
    method INPUT_TYPES (line 370) | def INPUT_TYPES(s):
    method main (line 382) | def main(self, sigmas_1, sigmas_2):
  class sigmas_truncate (line 385) | class sigmas_truncate:
    method __init__ (line 386) | def __init__(self):
    method INPUT_TYPES (line 390) | def INPUT_TYPES(s):
    method main (line 402) | def main(self, sigmas, sigmas_until):
  class sigmas_start (line 406) | class sigmas_start:
    method __init__ (line 407) | def __init__(self):
    method INPUT_TYPES (line 411) | def INPUT_TYPES(s):
    method main (line 423) | def main(self, sigmas, sigmas_until):
  class sigmas_split (line 427) | class sigmas_split:
    method __init__ (line 428) | def __init__(self):
    method INPUT_TYPES (line 432) | def INPUT_TYPES(s):
    method main (line 445) | def main(self, sigmas, sigmas_start, sigmas_end):
  class sigmas_pad (line 452) | class sigmas_pad:
    method __init__ (line 453) | def __init__(self):
    method INPUT_TYPES (line 457) | def INPUT_TYPES(s):
    method main (line 469) | def main(self, sigmas, value):
  class sigmas_unpad (line 473) | class sigmas_unpad:
    method __init__ (line 474) | def __init__(self):
    method INPUT_TYPES (line 478) | def INPUT_TYPES(s):
    method main (line 489) | def main(self, sigmas):
  class sigmas_set_floor (line 493) | class sigmas_set_floor:
    method __init__ (line 494) | def __init__(self):
    method INPUT_TYPES (line 498) | def INPUT_TYPES(s):
    method set_floor (line 512) | def set_floor(self, sigmas, floor, new_floor):
  class sigmas_delete_below_floor (line 517) | class sigmas_delete_below_floor:
    method __init__ (line 518) | def __init__(self):
    method INPUT_TYPES (line 522) | def INPUT_TYPES(s):
    method delete_below_floor (line 535) | def delete_below_floor(self, sigmas, floor):
  class sigmas_delete_value (line 539) | class sigmas_delete_value:
    method __init__ (line 540) | def __init__(self):
    method INPUT_TYPES (line 544) | def INPUT_TYPES(s):
    method delete_value (line 557) | def delete_value(self, sigmas, value):
  class sigmas_delete_consecutive_duplicates (line 560) | class sigmas_delete_consecutive_duplicates:
    method __init__ (line 561) | def __init__(self):
    method INPUT_TYPES (line 565) | def INPUT_TYPES(s):
    method delete_consecutive_duplicates (line 577) | def delete_consecutive_duplicates(self, sigmas_1):
  class sigmas_cleanup (line 582) | class sigmas_cleanup:
    method __init__ (line 583) | def __init__(self):
    method INPUT_TYPES (line 587) | def INPUT_TYPES(s):
    method cleanup (line 600) | def cleanup(self, sigmas, sigmin):
  class sigmas_mult (line 608) | class sigmas_mult:
    method __init__ (line 609) | def __init__(self):
    method INPUT_TYPES (line 613) | def INPUT_TYPES(s):
    method main (line 628) | def main(self, sigmas, multiplier, sigmas2=None):
  class sigmas_modulus (line 634) | class sigmas_modulus:
    method __init__ (line 635) | def __init__(self):
    method INPUT_TYPES (line 639) | def INPUT_TYPES(s):
    method main (line 651) | def main(self, sigmas, divisor):
  class sigmas_quotient (line 654) | class sigmas_quotient:
    method __init__ (line 655) | def __init__(self):
    method INPUT_TYPES (line 659) | def INPUT_TYPES(s):
    method main (line 671) | def main(self, sigmas, divisor):
  class sigmas_add (line 674) | class sigmas_add:
    method __init__ (line 675) | def __init__(self):
    method INPUT_TYPES (line 679) | def INPUT_TYPES(s):
    method main (line 691) | def main(self, sigmas, addend):
  class sigmas_power (line 694) | class sigmas_power:
    method __init__ (line 695) | def __init__(self):
    method INPUT_TYPES (line 699) | def INPUT_TYPES(s):
    method main (line 711) | def main(self, sigmas, power):
  class sigmas_abs (line 714) | class sigmas_abs:
    method __init__ (line 715) | def __init__(self):
    method INPUT_TYPES (line 719) | def INPUT_TYPES(s):
    method main (line 730) | def main(self, sigmas):
  class sigmas2_mult (line 733) | class sigmas2_mult:
    method __init__ (line 734) | def __init__(self):
    method INPUT_TYPES (line 738) | def INPUT_TYPES(s):
    method main (line 750) | def main(self, sigmas_1, sigmas_2):
  class sigmas2_add (line 753) | class sigmas2_add:
    method __init__ (line 754) | def __init__(self):
    method INPUT_TYPES (line 758) | def INPUT_TYPES(s):
    method main (line 770) | def main(self, sigmas_1, sigmas_2):
  class sigmas_rescale (line 773) | class sigmas_rescale:
    method __init__ (line 774) | def __init__(self):
    method INPUT_TYPES (line 777) | def INPUT_TYPES(s):
    method main (line 795) | def main(self, start=0, end=-1, sigmas=None):
  class sigmas_count (line 802) | class sigmas_count:
    method __init__ (line 803) | def __init__(self):
    method INPUT_TYPES (line 806) | def INPUT_TYPES(s):
    method main (line 817) | def main(self, sigmas=None):
  class sigmas_math1 (line 821) | class sigmas_math1:
    method __init__ (line 822) | def __init__(self):
    method INPUT_TYPES (line 825) | def INPUT_TYPES(s):
    method main (line 848) | def main(self, start=0, stop=0, trim=0, a=None, b=None, c=None, x=1.0,...
  class sigmas_math3 (line 882) | class sigmas_math3:
    method __init__ (line 883) | def __init__(self):
    method INPUT_TYPES (line 886) | def INPUT_TYPES(s):
    method main (line 917) | def main(self, start=0, stop=0, trim=0, a=None, b=None, c=None, x=1.0,...
  class sigmas_iteration_karras (line 957) | class sigmas_iteration_karras:
    method __init__ (line 958) | def __init__(self):
    method INPUT_TYPES (line 962) | def INPUT_TYPES(s):
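The sigmas_interpolate class above exposes several resampling modes (poly, constrained, exp, power, linear, nearest). The simplest of these, linear resampling of a schedule to a new step count, can be sketched without torch as follows; the repo's interpolate_sigma_schedule_linear operates on tensors, so treat this only as an illustration of the idea.

```python
def resample_schedule_linear(sigmas, target_steps):
    # Linearly resample a 1-D sigma schedule to target_steps values by
    # interpolating between neighboring entries at fractional positions.
    n = len(sigmas)
    if target_steps == 1:
        return [sigmas[0]]
    out = []
    for i in range(target_steps):
        pos = i * (n - 1) / (target_steps - 1)   # position in the source schedule
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        frac = pos - lo
        out.append(sigmas[lo] * (1 - frac) + sigmas[hi] * frac)
    return out
```

For example, resampling the 3-point schedule [1.0, 0.5, 0.0] to 5 steps yields evenly spaced values with the endpoints preserved.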
  
Condensed preview — 134 files, each showing path, character count, and a content snippet (full structured content: 4,416K chars).
[
  {
    "path": ".gitignore",
    "chars": 69,
    "preview": "__pycache__/\n.idea/\n.vscode/\n.tmp\n.cache\ntests/\n/*.json\n*.config.json"
  },
  {
    "path": "LICENSE",
    "chars": 34971,
    "preview": "The use of this software or any derivative work for the purpose of \nproviding a commercial service, such as (but not lim"
  },
  {
    "path": "README.md",
    "chars": 19405,
    "preview": "# SUPERIOR SAMPLING WITH RES4LYF: THE POWER OF BONGMATH\n\nRES_3M vs. Uni-PC (WAN). Typically only 20 steps are needed wit"
  },
  {
    "path": "__init__.py",
    "chars": 22701,
    "preview": "import importlib\r\nimport os\r\n\r\nfrom . import loaders\r\nfrom . import sigmas\r\nfrom . import conditioning\r\nfrom . import im"
  },
  {
    "path": "attention_masks.py",
    "chars": 38974,
    "preview": "import torch\r\nimport torch.nn.functional as F\r\n\r\nfrom torch  import Tensor\r\nfrom typing import Optional, Callable, Tuple"
  },
  {
    "path": "aura/mmdit.py",
    "chars": 42631,
    "preview": "#AuraFlow MMDiT\n#Originally written by the AuraFlow Authors\n\nimport math\n\nimport torch\nimport torch.nn as nn\nimport torc"
  },
  {
    "path": "beta/__init__.py",
    "chars": 12902,
    "preview": "\r\nfrom . import rk_sampler_beta\r\nfrom . import samplers\r\nfrom . import samplers_extensions\r\n\r\n\r\ndef add_beta(NODE_CLASS_"
  },
  {
    "path": "beta/constants.py",
    "chars": 974,
    "preview": "MAX_STEPS = 10000\n\n\nIMPLICIT_TYPE_NAMES = [\n    \"rebound\",\n    \"retro-eta\",\n    \"bongmath\",\n    \"predictor-corrector\",\n]"
  },
  {
    "path": "beta/deis_coefficients.py",
    "chars": 6013,
    "preview": "# Adapted from: https://github.com/zju-pi/diff-sampler/blob/main/gits-main/solver_utils.py\n# fixed the calcs for \"rhoab\""
  },
  {
    "path": "beta/noise_classes.py",
    "chars": 32046,
    "preview": "import torch\r\nimport torch.nn.functional as F\r\n\r\nfrom torch               import nn, Tensor, Generator, lerp\r\nfrom torch"
  },
  {
    "path": "beta/phi_functions.py",
    "chars": 4196,
    "preview": "import torch\r\nimport math\r\nfrom typing import Optional\r\n\r\n\r\n# Remainder solution\r\ndef _phi(j, neg_h):\r\n    remainder = t"
  },
  {
    "path": "beta/rk_coefficients_beta.py",
    "chars": 126775,
    "preview": "import torch\r\nfrom torch import Tensor\r\n\r\nimport copy\r\nimport math\r\nfrom mpmath import mp, mpf, factorial, exp\r\nmp.dps ="
  },
  {
    "path": "beta/rk_guide_func_beta.py",
    "chars": 134081,
    "preview": "import torch\r\nimport torch.nn.functional as F\r\nfrom torch import Tensor\r\n\r\nimport itertools\r\nimport copy\r\n\r\nfrom typing "
  },
  {
    "path": "beta/rk_method_beta.py",
    "chars": 52006,
    "preview": "import torch\r\nfrom torch import Tensor\r\nfrom typing import Optional, Callable, Tuple, List, Dict, Any, Union\r\n\r\nimport c"
  },
  {
    "path": "beta/rk_noise_sampler_beta.py",
    "chars": 41963,
    "preview": "import torch\r\n\r\nfrom torch  import Tensor\r\nfrom typing import Optional, Callable, Tuple, Dict, Any, Union, TYPE_CHECKING"
  },
  {
    "path": "beta/rk_sampler_beta.py",
    "chars": 149739,
    "preview": "import torch\r\nfrom torch import Tensor\r\nimport torch.nn.functional as F\r\nfrom tqdm.auto import trange\r\nimport gc\r\nfrom t"
  },
  {
    "path": "beta/samplers.py",
    "chars": 153352,
    "preview": "import torch\nimport torch.nn.functional as F\nfrom torch import Tensor\n\nfrom typing import Optional, Callable, Tuple, Dic"
  },
  {
    "path": "beta/samplers_extensions.py",
    "chars": 217278,
    "preview": "import torch\r\nfrom torch import Tensor\r\nimport torch.nn.functional as F\r\n\r\nfrom dataclasses import dataclass, asdict\r\nfr"
  },
  {
    "path": "chroma/layers.py",
    "chars": 7991,
    "preview": "import torch\nfrom torch import Tensor, nn\n\n#from comfy.ldm.flux.math import attention\nfrom comfy.ldm.flux.layers import "
  },
  {
    "path": "chroma/math.py",
    "chars": 1802,
    "preview": "import torch\nfrom einops import rearrange\nfrom torch import Tensor\nfrom comfy.ldm.modules.attention import attention_pyt"
  },
  {
    "path": "chroma/model.py",
    "chars": 75230,
    "preview": "#Original code can be found on: https://github.com/black-forest-labs/flux\n\nfrom dataclasses import dataclass\n\nimport tor"
  },
  {
    "path": "conditioning.py",
    "chars": 87642,
    "preview": "import torch\r\nimport torch.nn.functional as F\r\nimport math\r\n\r\nfrom torch  import Tensor\r\nfrom typing import Optional, Ca"
  },
  {
    "path": "example_workflows/chroma regional antiblur.json",
    "chars": 12116,
    "preview": "{\"last_node_id\":726,\"last_link_id\":2104,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[1280,-650],\"size\":[75,26],\"flags\":{},\""
  },
  {
    "path": "example_workflows/chroma txt2img.json",
    "chars": 4665,
    "preview": "{\"last_node_id\":727,\"last_link_id\":2113,\"nodes\":[{\"id\":398,\"type\":\"SaveImage\",\"pos\":[1379.9996337890625,-267.28359985351"
  },
  {
    "path": "example_workflows/comparison ksampler vs csksampler chain workflows.json",
    "chars": 30292,
    "preview": "{\"last_node_id\":1423,\"last_link_id\":3992,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[17750,830],\"size\":[75,26],\"flags\":{},"
  },
  {
    "path": "example_workflows/flux faceswap sync pulid.json",
    "chars": 52395,
    "preview": "{\"last_node_id\":1741,\"last_link_id\":6622,\"nodes\":[{\"id\":490,\"type\":\"Reroute\",\"pos\":[-1346.8087158203125,-823.32696533203"
  },
  {
    "path": "example_workflows/flux faceswap sync.json",
    "chars": 45745,
    "preview": "{\"last_node_id\":1698,\"last_link_id\":6519,\"nodes\":[{\"id\":490,\"type\":\"Reroute\",\"pos\":[-669.7835083007812,-822.269104003906"
  },
  {
    "path": "example_workflows/flux faceswap.json",
    "chars": 27515,
    "preview": "{\"last_node_id\":1153,\"last_link_id\":4163,\"nodes\":[{\"id\":758,\"type\":\"ImageResize+\",\"pos\":[1987.2191162109375,-351.3092041"
  },
  {
    "path": "example_workflows/flux inpaint area.json",
    "chars": 14695,
    "preview": "{\"last_node_id\":698,\"last_link_id\":1968,\"nodes\":[{\"id\":670,\"type\":\"SaveImage\",\"pos\":[5481.20751953125,763.7216186523438]"
  },
  {
    "path": "example_workflows/flux inpaint bongmath.json",
    "chars": 21160,
    "preview": "{\"last_node_id\":1057,\"last_link_id\":3666,\"nodes\":[{\"id\":758,\"type\":\"ImageResize+\",\"pos\":[1304.9573974609375,-352.7953796"
  },
  {
    "path": "example_workflows/flux inpainting.json",
    "chars": 9339,
    "preview": "{\"last_node_id\":637,\"last_link_id\":1778,\"nodes\":[{\"id\":617,\"type\":\"ClownsharKSampler_Beta\",\"pos\":[4647.0654296875,1012.7"
  },
  {
    "path": "example_workflows/flux regional antiblur.json",
    "chars": 11628,
    "preview": "{\"last_node_id\":723,\"last_link_id\":2096,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[1280,-650],\"size\":[75,26],\"flags\":{},\""
  },
  {
    "path": "example_workflows/flux regional redux (2 zone).json",
    "chars": 17068,
    "preview": "{\"last_node_id\":704,\"last_link_id\":2042,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[1300,-790],\"size\":[75,26],\"flags\":{},\""
  },
  {
    "path": "example_workflows/flux regional redux (3 zone, nested).json",
    "chars": 21455,
    "preview": "{\"last_node_id\":720,\"last_link_id\":2082,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[1300,-790],\"size\":[75,26],\"flags\":{},\""
  },
  {
    "path": "example_workflows/flux regional redux (3 zone, overlapping).json",
    "chars": 20775,
    "preview": "{\"last_node_id\":715,\"last_link_id\":2063,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[1300,-790],\"size\":[75,26],\"flags\":{},\""
  },
  {
    "path": "example_workflows/flux regional redux (3 zones).json",
    "chars": 20421,
    "preview": "{\"last_node_id\":714,\"last_link_id\":2062,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[1300,-790],\"size\":[75,26],\"flags\":{},\""
  },
  {
    "path": "example_workflows/flux style antiblur.json",
    "chars": 8712,
    "preview": "{\"last_node_id\":739,\"last_link_id\":2113,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[1280,-650],\"size\":[75,26],\"flags\":{},\""
  },
  {
    "path": "example_workflows/flux style transfer gguf.json",
    "chars": 20867,
    "preview": "{\"last_node_id\":1392,\"last_link_id\":3739,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[13508.9013671875,-109.2831802368164],"
  },
  {
    "path": "example_workflows/flux upscale thumbnail large multistage.json",
    "chars": 35316,
    "preview": "{\"last_node_id\":431,\"last_link_id\":1176,\"nodes\":[{\"id\":361,\"type\":\"CLIPVisionEncode\",\"pos\":[860,820],\"size\":[253.6000061"
  },
  {
    "path": "example_workflows/flux upscale thumbnail large.json",
    "chars": 21383,
    "preview": "{\"last_node_id\":408,\"last_link_id\":1127,\"nodes\":[{\"id\":369,\"type\":\"ClownGuide_Style_Beta\",\"pos\":[1138.06640625,1574.3288"
  },
  {
    "path": "example_workflows/flux upscale thumbnail widescreen.json",
    "chars": 21732,
    "preview": "{\"last_node_id\":411,\"last_link_id\":1130,\"nodes\":[{\"id\":369,\"type\":\"ClownGuide_Style_Beta\",\"pos\":[1138.06640625,1574.3288"
  },
  {
    "path": "example_workflows/hidream guide data projection.json",
    "chars": 7436,
    "preview": "{\"last_node_id\":641,\"last_link_id\":2035,\"nodes\":[{\"id\":628,\"type\":\"LoadImage\",\"pos\":[599.166015625,156.38429260253906],\""
  },
  {
    "path": "example_workflows/hidream guide epsilon projection.json",
    "chars": 7439,
    "preview": "{\"last_node_id\":641,\"last_link_id\":2035,\"nodes\":[{\"id\":628,\"type\":\"LoadImage\",\"pos\":[599.166015625,156.38429260253906],\""
  },
  {
    "path": "example_workflows/hidream guide flow.json",
    "chars": 9615,
    "preview": "{\"last_node_id\":640,\"last_link_id\":2035,\"nodes\":[{\"id\":628,\"type\":\"LoadImage\",\"pos\":[599.166015625,156.38429260253906],\""
  },
  {
    "path": "example_workflows/hidream guide fully_pseudoimplicit.json",
    "chars": 7932,
    "preview": "{\"last_node_id\":643,\"last_link_id\":2036,\"nodes\":[{\"id\":628,\"type\":\"LoadImage\",\"pos\":[599.166015625,156.38429260253906],\""
  },
  {
    "path": "example_workflows/hidream guide lure.json",
    "chars": 9623,
    "preview": "{\"last_node_id\":640,\"last_link_id\":2035,\"nodes\":[{\"id\":628,\"type\":\"LoadImage\",\"pos\":[599.166015625,156.38429260253906],\""
  },
  {
    "path": "example_workflows/hidream guide pseudoimplicit.json",
    "chars": 7449,
    "preview": "{\"last_node_id\":641,\"last_link_id\":2035,\"nodes\":[{\"id\":628,\"type\":\"LoadImage\",\"pos\":[599.166015625,156.38429260253906],\""
  },
  {
    "path": "example_workflows/hidream hires fix.json",
    "chars": 17233,
    "preview": "{\"last_node_id\":1358,\"last_link_id\":3624,\"nodes\":[{\"id\":490,\"type\":\"Reroute\",\"pos\":[13130,-70],\"size\":[75,26],\"flags\":{}"
  },
  {
    "path": "example_workflows/hidream regional 3 zones.json",
    "chars": 12742,
    "preview": "{\"last_node_id\":612,\"last_link_id\":1834,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[580,-180],\"size\":[75,26],\"flags\":{},\"o"
  },
  {
    "path": "example_workflows/hidream regional antiblur.json",
    "chars": 12239,
    "preview": "{\"last_node_id\":727,\"last_link_id\":2103,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[1280,-650],\"size\":[75,26],\"flags\":{},\""
  },
  {
    "path": "example_workflows/hidream style antiblur.json",
    "chars": 9276,
    "preview": "{\"last_node_id\":742,\"last_link_id\":2119,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[1280,-650],\"size\":[75,26],\"flags\":{},\""
  },
  {
    "path": "example_workflows/hidream style transfer txt2img.json",
    "chars": 20327,
    "preview": "{\"last_node_id\":1385,\"last_link_id\":3733,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[13508.9013671875,-109.2831802368164],"
  },
  {
    "path": "example_workflows/hidream style transfer v2.json",
    "chars": 20326,
    "preview": "{\"last_node_id\":1385,\"last_link_id\":3733,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[13508.9013671875,-109.2831802368164],"
  },
  {
    "path": "example_workflows/hidream style transfer.json",
    "chars": 12817,
    "preview": "{\"last_node_id\":1317,\"last_link_id\":3533,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[13140,110],\"size\":[75,26],\"flags\":{},"
  },
  {
    "path": "example_workflows/hidream txt2img.json",
    "chars": 7296,
    "preview": "{\"last_node_id\":1321,\"last_link_id\":3548,\"nodes\":[{\"id\":490,\"type\":\"Reroute\",\"pos\":[13130,-70],\"size\":[75,26],\"flags\":{}"
  },
  {
    "path": "example_workflows/hidream unsampling data WF.json",
    "chars": 9269,
    "preview": "{\"last_node_id\":637,\"last_link_id\":2029,\"nodes\":[{\"id\":628,\"type\":\"LoadImage\",\"pos\":[599.166015625,156.38429260253906],\""
  },
  {
    "path": "example_workflows/hidream unsampling data.json",
    "chars": 9269,
    "preview": "{\"last_node_id\":637,\"last_link_id\":2029,\"nodes\":[{\"id\":628,\"type\":\"LoadImage\",\"pos\":[599.166015625,156.38429260253906],\""
  },
  {
    "path": "example_workflows/hidream unsampling pseudoimplicit.json",
    "chars": 9290,
    "preview": "{\"last_node_id\":637,\"last_link_id\":2029,\"nodes\":[{\"id\":628,\"type\":\"LoadImage\",\"pos\":[599.166015625,156.38429260253906],\""
  },
  {
    "path": "example_workflows/hidream unsampling.json",
    "chars": 9283,
    "preview": "{\"last_node_id\":637,\"last_link_id\":2029,\"nodes\":[{\"id\":628,\"type\":\"LoadImage\",\"pos\":[599.166015625,156.38429260253906],\""
  },
  {
    "path": "example_workflows/intro to clownsampling.json",
    "chars": 147641,
    "preview": "{\"last_node_id\":876,\"last_link_id\":2046,\"nodes\":[{\"id\":453,\"type\":\"VAEDecode\",\"pos\":[-303.0476379394531,3073.681640625],"
  },
  {
    "path": "example_workflows/sd35 medium unsampling data.json",
    "chars": 8618,
    "preview": "{\"last_node_id\":635,\"last_link_id\":2023,\"nodes\":[{\"id\":627,\"type\":\"SD35Loader\",\"pos\":[602.6103515625,-123.47957611083984"
  },
  {
    "path": "example_workflows/sd35 medium unsampling.json",
    "chars": 8619,
    "preview": "{\"last_node_id\":635,\"last_link_id\":2023,\"nodes\":[{\"id\":627,\"type\":\"SD35Loader\",\"pos\":[602.6103515625,-123.47957611083984"
  },
  {
    "path": "example_workflows/sdxl regional antiblur.json",
    "chars": 12076,
    "preview": "{\"last_node_id\":730,\"last_link_id\":2113,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[1280,-650],\"size\":[75,26],\"flags\":{},\""
  },
  {
    "path": "example_workflows/sdxl style transfer.json",
    "chars": 19730,
    "preview": "{\"last_node_id\":1394,\"last_link_id\":3744,\"nodes\":[{\"id\":13,\"type\":\"Reroute\",\"pos\":[13508.9013671875,-109.2831802368164],"
  },
  {
    "path": "example_workflows/style transfer.json",
    "chars": 23534,
    "preview": "{\"last_node_id\":1408,\"last_link_id\":3768,\"nodes\":[{\"id\":14,\"type\":\"Reroute\",\"pos\":[13508.9013671875,-29.283178329467773]"
  },
  {
    "path": "example_workflows/ultracascade txt2img style transfer.json",
    "chars": 22627,
    "preview": "{\"last_node_id\":43,\"last_link_id\":52,\"nodes\":[{\"id\":1,\"type\":\"VAEDecode\",\"pos\":[2240,3610],\"size\":[210,46],\"flags\":{\"col"
  },
  {
    "path": "example_workflows/ultracascade txt2img.json",
    "chars": 18731,
    "preview": "{\"last_node_id\":33,\"last_link_id\":23,\"nodes\":[{\"id\":1,\"type\":\"VAEDecode\",\"pos\":[1867.32421875,3610.962158203125],\"size\":"
  },
  {
    "path": "example_workflows/wan img2vid 720p (fp8 fast).json",
    "chars": 8436,
    "preview": "{\"last_node_id\":67,\"last_link_id\":138,\"nodes\":[{\"id\":56,\"type\":\"PreviewImage\",\"pos\":[480,600],\"size\":[210,246],\"flags\":{"
  },
  {
    "path": "example_workflows/wan txt2img (fp8 fast).json",
    "chars": 6003,
    "preview": "{\"last_node_id\":698,\"last_link_id\":1748,\"nodes\":[{\"id\":676,\"type\":\"CLIPTextEncode\",\"pos\":[2651.457763671875,139.27738952"
  },
  {
    "path": "example_workflows/wan vid2vid.json",
    "chars": 15041,
    "preview": "{\"last_node_id\":406,\"last_link_id\":1039,\"nodes\":[{\"id\":7,\"type\":\"CLIPTextEncode\",\"pos\":[971.2105712890625,537.63671875],"
  },
  {
    "path": "flux/controlnet.py",
    "chars": 9172,
    "preview": "#Original code can be found on: https://github.com/XLabs-AI/x-flux/blob/main/src/flux/controlnet.py\n#modified to support"
  },
  {
    "path": "flux/layers.py",
    "chars": 19134,
    "preview": "# Adapted from: https://github.com/black-forest-labs/flux\n\nimport math\nimport torch\nfrom torch import Tensor, nn\n\nfrom t"
  },
  {
    "path": "flux/math.py",
    "chars": 1561,
    "preview": "import torch\nfrom einops import rearrange\nfrom torch import Tensor\nfrom comfy.ldm.modules.attention import attention_pyt"
  },
  {
    "path": "flux/model.py",
    "chars": 52701,
    "preview": "# Adapted from: https://github.com/black-forest-labs/flux\n\nimport torch\nimport torch.nn.functional as F\nfrom torch impor"
  },
  {
    "path": "flux/redux.py",
    "chars": 5424,
    "preview": "import torch\nimport comfy.ops\nimport torch.nn\nimport torch.nn.functional as F\n\nops = comfy.ops.manual_cast\n\nclass ReRedu"
  },
  {
    "path": "helper.py",
    "chars": 34079,
    "preview": "import torch\nimport torch.nn.functional as F\nfrom typing import Optional, Callable, Tuple, Dict, Any, Union, TYPE_CHECKI"
  },
  {
    "path": "helper_sigma_preview_image_preproc.py",
    "chars": 33449,
    "preview": "import torch\nimport torch.nn.functional as F\n\nfrom typing import Optional, Callable, Tuple, Dict, Any, Union\n\nimport num"
  },
  {
    "path": "hidream/model.py",
    "chars": 73189,
    "preview": "import torch\nimport torch.nn.functional as F\nimport math\n\nimport torch.nn as nn\nfrom torch import Tensor, FloatTensor\nfr"
  },
  {
    "path": "images.py",
    "chars": 52030,
    "preview": "import torch\nimport torch.nn.functional as F\nimport math\n\nfrom torchvision import transforms\n\nfrom torch  import Tensor\n"
  },
  {
    "path": "latent_images.py",
    "chars": 5778,
    "preview": "import comfy.samplers\r\nimport comfy.sample\r\nimport comfy.sampler_helpers\r\nimport comfy.utils\r\n    \r\nimport itertools\r\n\r\n"
  },
  {
    "path": "latents.py",
    "chars": 32067,
    "preview": "import torch\r\nimport torch.nn.functional as F\r\nfrom typing import Tuple, List, Union\r\nimport math\r\n\r\n\r\n# TENSOR PROJECTI"
  },
  {
    "path": "legacy/__init__.py",
    "chars": 5153,
    "preview": "from . import legacy_samplers\r\nfrom . import legacy_sampler_rk\r\n\r\nfrom . import rk_sampler\r\nfrom . import samplers\r\nfrom"
  },
  {
    "path": "legacy/conditioning.py",
    "chars": 34731,
    "preview": "import torch\r\nimport base64\r\nimport pickle # used strictly for serializing conditioning in the ConditioningToBase64 and "
  },
  {
    "path": "legacy/constants.py",
    "chars": 123,
    "preview": "MAX_STEPS = 10000\n\n\nIMPLICIT_TYPE_NAMES = [\n    \"predictor-corrector\",\n    \"rebound\",\n    \"retro-eta\",\n    \"bongmath\",\n]"
  },
  {
    "path": "legacy/deis_coefficients.py",
    "chars": 6013,
    "preview": "# Adapted from: https://github.com/zju-pi/diff-sampler/blob/main/gits-main/solver_utils.py\n# fixed the calcs for \"rhoab\""
  },
  {
    "path": "legacy/flux/controlnet.py",
    "chars": 9172,
    "preview": "#Original code can be found on: https://github.com/XLabs-AI/x-flux/blob/main/src/flux/controlnet.py\n#modified to support"
  },
  {
    "path": "legacy/flux/layers.py",
    "chars": 13499,
    "preview": "# Adapted from: https://github.com/black-forest-labs/flux\n\nimport math\nimport torch\nfrom torch import Tensor, nn\n\nimport"
  },
  {
    "path": "legacy/flux/math.py",
    "chars": 1495,
    "preview": "import torch\nfrom einops import rearrange\nfrom torch import Tensor\nfrom comfy.ldm.modules.attention import optimized_att"
  },
  {
    "path": "legacy/flux/model.py",
    "chars": 10913,
    "preview": "# Adapted from: https://github.com/black-forest-labs/flux\n\nimport torch\nfrom torch import Tensor, nn\nfrom dataclasses im"
  },
  {
    "path": "legacy/flux/redux.py",
    "chars": 724,
    "preview": "import torch\nimport comfy.ops\n\nops = comfy.ops.manual_cast\n\nclass ReduxImageEncoder(torch.nn.Module):\n    def __init__(\n"
  },
  {
    "path": "legacy/helper.py",
    "chars": 11285,
    "preview": "import re\nimport torch\nfrom comfy.samplers import SCHEDULER_NAMES\nimport torch.nn.functional as F\nfrom ..res4lyf import "
  },
  {
    "path": "legacy/latents.py",
    "chars": 80976,
    "preview": "\r\nimport torch\r\nimport torch.nn.functional as F\r\nimport math\r\nimport itertools\r\n\r\nimport comfy.samplers\r\nimport comfy.sa"
  },
  {
    "path": "legacy/legacy_sampler_rk.py",
    "chars": 39368,
    "preview": "import torch\r\nimport torch.nn.functional as F\r\n\r\n\r\nfrom tqdm.auto import trange\r\nimport math\r\nimport copy\r\nimport gc\r\n\r\n"
  },
  {
    "path": "legacy/legacy_samplers.py",
    "chars": 37342,
    "preview": "import torch\r\nimport torch.nn.functional as F\r\n\r\nimport comfy.samplers\r\nimport comfy.sample\r\nimport comfy.sampler_helper"
  },
  {
    "path": "legacy/models.py",
    "chars": 21311,
    "preview": "# Code adapted from https://github.com/comfyanonymous/ComfyUI/\r\n\r\nimport comfy.samplers\r\nimport comfy.sample\r\nimport com"
  },
  {
    "path": "legacy/noise_classes.py",
    "chars": 29922,
    "preview": "import torch\r\nfrom torch import nn, Tensor, Generator, lerp\r\nfrom torch.nn.functional import unfold\r\nimport torch.nn.fun"
  },
  {
    "path": "legacy/noise_sigmas_timesteps_scaling.py",
    "chars": 10957,
    "preview": "import torch\r\n#from..noise_classes import *\r\nimport comfy.model_patcher\r\nfrom .helper import has_nested_attr\r\n\r\ndef get_"
  },
  {
    "path": "legacy/phi_functions.py",
    "chars": 3110,
    "preview": "import torch\r\nimport math\r\nfrom typing import Optional\r\n\r\n\r\n# Remainder solution\r\ndef _phi(j, neg_h):\r\n    remainder = t"
  },
  {
    "path": "legacy/rk_coefficients.py",
    "chars": 61949,
    "preview": "import torch\r\nimport copy\r\nimport math\r\n\r\nfrom .deis_coefficients import get_deis_coeff_list\r\nfrom .phi_functions import"
  },
  {
    "path": "legacy/rk_guide_func.py",
    "chars": 61506,
    "preview": "import torch\r\nimport torch.nn.functional as F\r\nfrom typing import Tuple\r\n\r\nfrom einops import rearrange\r\n\r\nfrom .sigmas "
  },
  {
    "path": "legacy/rk_method.py",
    "chars": 12561,
    "preview": "import torch\r\nimport re\r\n\r\nimport torch.nn.functional as F\r\nimport torchvision.transforms as T\r\n\r\nfrom .noise_classes im"
  },
  {
    "path": "legacy/rk_sampler.py",
    "chars": 45437,
    "preview": "import torch\r\nimport torch.nn.functional as F\r\n\r\nfrom tqdm.auto import trange\r\n\r\n\r\nfrom .noise_classes import *\r\nfrom .n"
  },
  {
    "path": "legacy/samplers.py",
    "chars": 45853,
    "preview": "from .noise_classes import prepare_noise, NOISE_GENERATOR_CLASSES_SIMPLE, NOISE_GENERATOR_NAMES_SIMPLE, NOISE_GENERATOR_"
  },
  {
    "path": "legacy/samplers_extensions.py",
    "chars": 29886,
    "preview": "from .noise_classes import NOISE_GENERATOR_CLASSES, NOISE_GENERATOR_CLASSES_SIMPLE, NOISE_GENERATOR_NAMES, NOISE_GENERAT"
  },
  {
    "path": "legacy/samplers_tiled.py",
    "chars": 27405,
    "preview": "# tiled sampler code adapted from https://github.com/BlenderNeko/ComfyUI_TiledKSampler \n# and heavily modified for use w"
  },
  {
    "path": "legacy/sigmas.py",
    "chars": 44146,
    "preview": "import torch\r\n\r\nimport numpy as np\r\nfrom math import *\r\nimport builtins\r\nfrom scipy.interpolate import CubicSpline\r\nimpo"
  },
  {
    "path": "legacy/tiling.py",
    "chars": 7580,
    "preview": "import torch\nimport itertools\nimport numpy as np\n\n# tiled sampler code adapted from https://github.com/BlenderNeko/Comfy"
  },
  {
    "path": "lightricks/model.py",
    "chars": 31067,
    "preview": "import torch\nfrom torch import nn\nimport torch.nn.functional as F\n\nimport comfy.ldm.modules.attention\nimport comfy.ldm.c"
  },
  {
    "path": "lightricks/symmetric_patchifier.py",
    "chars": 3950,
    "preview": "from abc import ABC, abstractmethod\nfrom typing import Tuple\n\nimport torch\nfrom einops import rearrange\nfrom torch impor"
  },
  {
    "path": "lightricks/vae/causal_conv3d.py",
    "chars": 1813,
    "preview": "from typing import Tuple, Union\n\nimport torch\nimport torch.nn as nn\nimport comfy.ops\nops = comfy.ops.disable_weight_init"
  },
  {
    "path": "lightricks/vae/causal_video_autoencoder.py",
    "chars": 41446,
    "preview": "from __future__ import annotations\nimport torch\nfrom torch import nn\nfrom functools import partial\nimport math\nfrom eino"
  },
  {
    "path": "lightricks/vae/conv_nd_factory.py",
    "chars": 2570,
    "preview": "from typing import Tuple, Union\n\n\nfrom .dual_conv3d import DualConv3d\nfrom .causal_conv3d import CausalConv3d\nimport com"
  },
  {
    "path": "lightricks/vae/dual_conv3d.py",
    "chars": 6885,
    "preview": "import math\nfrom typing import Tuple, Union\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom ein"
  },
  {
    "path": "lightricks/vae/pixel_norm.py",
    "chars": 307,
    "preview": "import torch\nfrom torch import nn\n\n\nclass PixelNorm(nn.Module):\n    def __init__(self, dim=1, eps=1e-8):\n        super(P"
  },
  {
    "path": "loaders.py",
    "chars": 19840,
    "preview": "import folder_paths\r\nimport torch\r\nimport comfy.samplers\r\nimport comfy.sample\r\nimport comfy.sampler_helpers\r\nimport comf"
  },
  {
    "path": "misc_scripts/replace_metadata.py",
    "chars": 1209,
    "preview": "#!/usr/bin/env python3\n\nimport argparse\nfrom PIL import Image\nfrom PIL.PngImagePlugin import PngInfo\n\ndef extract_metada"
  },
  {
    "path": "models.py",
    "chars": 86279,
    "preview": "import torch\r\nimport types\r\nfrom typing import Optional, Callable, Tuple, Dict, Any, Union, TYPE_CHECKING, TypeVar\r\nimpo"
  },
  {
    "path": "nodes_latents.py",
    "chars": 100301,
    "preview": "import torch.nn.functional as F\n\nimport copy\n\nimport comfy.samplers\nimport comfy.sample\nimport comfy.sampler_helpers\nimp"
  },
  {
    "path": "nodes_misc.py",
    "chars": 12593,
    "preview": "\r\nimport folder_paths\r\n\r\nimport os\r\nimport random\r\n\r\n\r\nclass SetImageSize:\r\n    @classmethod\r\n    def INPUT_TYPES(cls):\r"
  },
  {
    "path": "nodes_precision.py",
    "chars": 5940,
    "preview": "import torch\r\nfrom .helper import precision_tool\r\n\r\n\r\nclass set_precision:\r\n    def __init__(self):\r\n        pass\r\n    @"
  },
  {
    "path": "requirements.txt",
    "chars": 54,
    "preview": "opencv-python\r\nmatplotlib\r\npywavelets\r\nnumpy>=1.26.4\r\n"
  },
  {
    "path": "res4lyf.py",
    "chars": 12976,
    "preview": "# Code adapted from https://github.com/pythongosssss/ComfyUI-Custom-Scripts\n\nimport asyncio\nimport os\nimport json\nimport"
  },
  {
    "path": "rk_method_beta.py",
    "chars": 43185,
    "preview": "import torch\r\nfrom torch import Tensor\r\nfrom typing import Optional, Callable, Tuple, List, Dict, Any, Union\r\n\r\nimport c"
  },
  {
    "path": "samplers_extensions.py",
    "chars": 85580,
    "preview": "import torch\r\nfrom torch import Tensor\r\nimport torch.nn.functional as F\r\n\r\nfrom dataclasses import dataclass, asdict\r\nfr"
  },
  {
    "path": "sd/attention.py",
    "chars": 40933,
    "preview": "import math\nimport sys\n\nimport torch\nimport torch.nn.functional as F\nfrom torch import nn, einsum\nfrom einops import rea"
  },
  {
    "path": "sd/openaimodel.py",
    "chars": 76466,
    "preview": "from abc import abstractmethod\nimport torch\nimport torch as th\nimport torch.nn as nn\nimport torch.nn.functional as F\nfro"
  },
  {
    "path": "sd35/mmdit.py",
    "chars": 77122,
    "preview": "from functools import partial\nfrom typing import Dict, Optional, List\n\nimport numpy as np\nimport torch\nimport torch.nn a"
  },
  {
    "path": "sigmas.py",
    "chars": 157457,
    "preview": "import torch\r\nimport numpy as np\r\nfrom math import *\r\nimport builtins\r\nfrom scipy.interpolate import CubicSpline\r\nfrom s"
  },
  {
    "path": "style_transfer.py",
    "chars": 78513,
    "preview": "\nimport torch\nimport torch.nn.functional as F\nimport torch.nn as nn\nfrom torch import Tensor, FloatTensor\nfrom typing im"
  },
  {
    "path": "wan/model.py",
    "chars": 58941,
    "preview": "# original version: https://github.com/Wan-Video/Wan2.1/blob/main/wan/modules/model.py\n# Copyright 2024-2025 The Alibaba"
  },
  {
    "path": "wan/vae.py",
    "chars": 20257,
    "preview": "# original version: https://github.com/Wan-Video/Wan2.1/blob/main/wan/modules/vae.py\n# Copyright 2024-2025 The Alibaba W"
  },
  {
    "path": "web/js/RES4LYF_dynamicWidgets.js",
    "chars": 15729,
    "preview": "import { app } from \"../../scripts/app.js\";\nimport { ComfyWidgets } from \"../../scripts/widgets.js\";\n\nlet RESDEBUG = fal"
  },
  {
    "path": "web/js/conditioningToBase64.js",
    "chars": 1654,
    "preview": "import { app } from \"../../../scripts/app.js\";\nimport { ComfyWidgets } from \"../../../scripts/widgets.js\";\n\n// Displays "
  },
  {
    "path": "web/js/res4lyf.default.json",
    "chars": 109,
    "preview": "{\n    \"name\": \"RES4LYF\",\n    \"topClownDog\": true,\n    \"enableDebugLogs\": false,\n    \"displayCategory\": true\n}"
  }
]

About this extraction

This page contains the full source code of the ClownsharkBatwing/RES4LYF GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 134 files (3.9 MB), approximately 1.0M tokens, and a symbol index with 3031 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.
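The file index above is a JSON array of objects with `path`, `chars` (file size in characters), and a truncated `preview` string. A minimal sketch for consuming such an index, using a small inline sample (the variable names and the two sample entries are illustrative, not part of the extraction format itself):

```python
import json

# Two entries copied from the index above, inlined for a self-contained demo.
sample = '''[
  {"path": "sigmas.py", "chars": 157457, "preview": "import torch"},
  {"path": "requirements.txt", "chars": 54, "preview": "opencv-python"}
]'''

def summarize_index(entries):
    """Return (file_count, total_chars, path_of_largest_file)."""
    total = sum(e["chars"] for e in entries)
    largest = max(entries, key=lambda e: e["chars"])
    return len(entries), total, largest["path"]

entries = json.loads(sample)
count, total, largest = summarize_index(entries)
print(count, total, largest)  # 2 157511 sigmas.py
```

The same loop works unchanged on the full 134-entry array if it is saved to a file and read with `json.load`.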

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
