[
  {
    "path": "A用户协议.txt",
    "content": "This package is intended solely for learning AIGC technology. It is built on Stable Diffusion Webui, an open-source project on GitHub, and provides a runtime environment for the algorithm. This package does not include any models used for image generation.\r\nBy using this package, you confirm that you have read and agree to the following user agreement:\r\n\r\n[✓] You must not engage in any of the following conduct, nor facilitate any conduct that violates laws and regulations, including but not limited to:\r\n\t\tOpposing the basic principles established by the Constitution.\r\n\t\tEndangering national security, leaking state secrets, subverting state power, or undermining national unity.\r\n\t\tHarming national honor and interests.\r\n\t\tInciting ethnic hatred or ethnic discrimination, or undermining ethnic unity.\r\n\t\tUndermining the state's religious policy, or promoting cults and feudal superstition.\r\n\t\tSpreading rumors, disturbing social order, or undermining social stability.\r\n\t\tSpreading obscenity, pornography, gambling, violence, murder, or terror, or abetting crime.\r\n\t\tInsulting or defaming others, or infringing upon the lawful rights and interests of others.\r\n\t\tAny conduct that violates the \"Seven Bottom Lines\".\r\n\t\tAny other content prohibited by laws and administrative regulations.\r\n\r\n[✓] You bear sole responsibility for all consequences and liability arising from any violation of laws or regulations in the generation, collection, processing, or use of your data.\r\n\r\nIf you have read and agree to the terms above, please type the phrase shown in the brackets below and save this file: 【我已阅读并同意用户协议】 (\"I have read and agree to the user agreement\").\r\n\r\nType it here after the colon: 我已阅读并同意用户协议"
  },
  {
    "path": "B使用教程+常见问题.txt",
    "content": "AI image-generation knowledge base (tutorials): https://guide.novelai.dev/\r\nTag supermarket (tag analysis and combinations): https://tags.novelai.dev/\r\nExtract tags from an existing image: https://spell.novelai.dev/\r\n\r\nBasic parameter settings for beginners: https://guide.novelai.dev/guide/configuration/param-basic\r\nCommon installation problems: https://guide.novelai.dev/s/troubleshooting/install\r\nCommon generation problems: https://guide.novelai.dev/s/troubleshooting/generate\r\nHow to write prompts: https://guide.novelai.dev/advanced/prompt-engineering/\r\nHow to train models: https://guide.novelai.dev/advanced/finetuning/\r\nLatest news: https://guide.novelai.dev/newsfeed\r\n\r\nQuick error lookup:\r\n- CUDA out of memory: ran out of VRAM; try different launch parameters, or use a GPU with more memory\r\n- DefaultCPUAllocator: ran out of system RAM; increase virtual memory, or add more RAM\r\n- CUDA driver initialization failed: install the CUDA driver\r\n\r\n- Training models with lowvram not possible: you want to train with that little VRAM?\r\n- WinError 5: try reinstalling the system, or wait for the next package release\r\n\r\nTraining hardware requirements:\r\nTraining an embedding or hypernetwork needs 6 GB of VRAM at 384 resolution; with 8 GB or more you can use 512 resolution\r\nTraining with DreamBooth needs at least 12 GB of VRAM"
  },
  {
    "path": "CODEOWNERS",
    "content": "*       @AUTOMATIC1111\r\n\r\n# if you were managing a localization and were removed from this file, this is because\r\n# the intended way to do localizations now is via extensions. See:\r\n# https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Developing-extensions\r\n# Make a repo with your localization, and since you are still listed as a collaborator,\r\n# you can add it to the wiki page yourself. This change was made because some people complained\r\n# that the git commit log was cluttered with changes irrelevant to almost everyone, and\r\n# because I believe it is best overall for the project to handle localizations almost\r\n# entirely without my oversight.\r\n\r\n\r\n"
  },
  {
    "path": "LICENSE.txt",
    "content": "                    GNU AFFERO GENERAL PUBLIC LICENSE\r\n                       Version 3, 19 November 2007\r\n\r\n                    Copyright (c) 2023 AUTOMATIC1111\r\n\r\n Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>\r\n Everyone is permitted to copy and distribute verbatim copies\r\n of this license document, but changing it is not allowed.\r\n\r\n                            Preamble\r\n\r\n  The GNU Affero General Public License is a free, copyleft license for\r\nsoftware and other kinds of works, specifically designed to ensure\r\ncooperation with the community in the case of network server software.\r\n\r\n  The licenses for most software and other practical works are designed\r\nto take away your freedom to share and change the works.  By contrast,\r\nour General Public Licenses are intended to guarantee your freedom to\r\nshare and change all versions of a program--to make sure it remains free\r\nsoftware for all its users.\r\n\r\n  When we speak of free software, we are referring to freedom, not\r\nprice.  Our General Public Licenses are designed to make sure that you\r\nhave the freedom to distribute copies of free software (and charge for\r\nthem if you wish), that you receive source code or can get it if you\r\nwant it, that you can change the software or use pieces of it in new\r\nfree programs, and that you know you can do these things.\r\n\r\n  Developers that use our General Public Licenses protect your rights\r\nwith two steps: (1) assert copyright on the software, and (2) offer\r\nyou this License which gives you legal permission to copy, distribute\r\nand/or modify the software.\r\n\r\n  A secondary benefit of defending all users' freedom is that\r\nimprovements made in alternate versions of the program, if they\r\nreceive widespread use, become available for other developers to\r\nincorporate.  Many developers of free software are heartened and\r\nencouraged by the resulting cooperation.  
However, in the case of\r\nsoftware used on network servers, this result may fail to come about.\r\nThe GNU General Public License permits making a modified version and\r\nletting the public access it on a server without ever releasing its\r\nsource code to the public.\r\n\r\n  The GNU Affero General Public License is designed specifically to\r\nensure that, in such cases, the modified source code becomes available\r\nto the community.  It requires the operator of a network server to\r\nprovide the source code of the modified version running there to the\r\nusers of that server.  Therefore, public use of a modified version, on\r\na publicly accessible server, gives the public access to the source\r\ncode of the modified version.\r\n\r\n  An older license, called the Affero General Public License and\r\npublished by Affero, was designed to accomplish similar goals.  This is\r\na different license, not a version of the Affero GPL, but Affero has\r\nreleased a new version of the Affero GPL which permits relicensing under\r\nthis license.\r\n\r\n  The precise terms and conditions for copying, distribution and\r\nmodification follow.\r\n\r\n                       TERMS AND CONDITIONS\r\n\r\n  0. Definitions.\r\n\r\n  \"This License\" refers to version 3 of the GNU Affero General Public License.\r\n\r\n  \"Copyright\" also means copyright-like laws that apply to other kinds of\r\nworks, such as semiconductor masks.\r\n\r\n  \"The Program\" refers to any copyrightable work licensed under this\r\nLicense.  Each licensee is addressed as \"you\".  \"Licensees\" and\r\n\"recipients\" may be individuals or organizations.\r\n\r\n  To \"modify\" a work means to copy from or adapt all or part of the work\r\nin a fashion requiring copyright permission, other than the making of an\r\nexact copy.  
The resulting work is called a \"modified version\" of the\r\nearlier work or a work \"based on\" the earlier work.\r\n\r\n  A \"covered work\" means either the unmodified Program or a work based\r\non the Program.\r\n\r\n  To \"propagate\" a work means to do anything with it that, without\r\npermission, would make you directly or secondarily liable for\r\ninfringement under applicable copyright law, except executing it on a\r\ncomputer or modifying a private copy.  Propagation includes copying,\r\ndistribution (with or without modification), making available to the\r\npublic, and in some countries other activities as well.\r\n\r\n  To \"convey\" a work means any kind of propagation that enables other\r\nparties to make or receive copies.  Mere interaction with a user through\r\na computer network, with no transfer of a copy, is not conveying.\r\n\r\n  An interactive user interface displays \"Appropriate Legal Notices\"\r\nto the extent that it includes a convenient and prominently visible\r\nfeature that (1) displays an appropriate copyright notice, and (2)\r\ntells the user that there is no warranty for the work (except to the\r\nextent that warranties are provided), that licensees may convey the\r\nwork under this License, and how to view a copy of this License.  If\r\nthe interface presents a list of user commands or options, such as a\r\nmenu, a prominent item in the list meets this criterion.\r\n\r\n  1. Source Code.\r\n\r\n  The \"source code\" for a work means the preferred form of the work\r\nfor making modifications to it.  
\"Object code\" means any non-source\r\nform of a work.\r\n\r\n  A \"Standard Interface\" means an interface that either is an official\r\nstandard defined by a recognized standards body, or, in the case of\r\ninterfaces specified for a particular programming language, one that\r\nis widely used among developers working in that language.\r\n\r\n  The \"System Libraries\" of an executable work include anything, other\r\nthan the work as a whole, that (a) is included in the normal form of\r\npackaging a Major Component, but which is not part of that Major\r\nComponent, and (b) serves only to enable use of the work with that\r\nMajor Component, or to implement a Standard Interface for which an\r\nimplementation is available to the public in source code form.  A\r\n\"Major Component\", in this context, means a major essential component\r\n(kernel, window system, and so on) of the specific operating system\r\n(if any) on which the executable work runs, or a compiler used to\r\nproduce the work, or an object code interpreter used to run it.\r\n\r\n  The \"Corresponding Source\" for a work in object code form means all\r\nthe source code needed to generate, install, and (for an executable\r\nwork) run the object code and to modify the work, including scripts to\r\ncontrol those activities.  However, it does not include the work's\r\nSystem Libraries, or general-purpose tools or generally available free\r\nprograms which are used unmodified in performing those activities but\r\nwhich are not part of the work.  
For example, Corresponding Source\r\nincludes interface definition files associated with source files for\r\nthe work, and the source code for shared libraries and dynamically\r\nlinked subprograms that the work is specifically designed to require,\r\nsuch as by intimate data communication or control flow between those\r\nsubprograms and other parts of the work.\r\n\r\n  The Corresponding Source need not include anything that users\r\ncan regenerate automatically from other parts of the Corresponding\r\nSource.\r\n\r\n  The Corresponding Source for a work in source code form is that\r\nsame work.\r\n\r\n  2. Basic Permissions.\r\n\r\n  All rights granted under this License are granted for the term of\r\ncopyright on the Program, and are irrevocable provided the stated\r\nconditions are met.  This License explicitly affirms your unlimited\r\npermission to run the unmodified Program.  The output from running a\r\ncovered work is covered by this License only if the output, given its\r\ncontent, constitutes a covered work.  This License acknowledges your\r\nrights of fair use or other equivalent, as provided by copyright law.\r\n\r\n  You may make, run and propagate covered works that you do not\r\nconvey, without conditions so long as your license otherwise remains\r\nin force.  You may convey covered works to others for the sole purpose\r\nof having them make modifications exclusively for you, or provide you\r\nwith facilities for running those works, provided that you comply with\r\nthe terms of this License in conveying all material for which you do\r\nnot control copyright.  Those thus making or running the covered works\r\nfor you must do so exclusively on your behalf, under your direction\r\nand control, on terms that prohibit them from making any copies of\r\nyour copyrighted material outside their relationship with you.\r\n\r\n  Conveying under any other circumstances is permitted solely under\r\nthe conditions stated below.  
Sublicensing is not allowed; section 10\r\nmakes it unnecessary.\r\n\r\n  3. Protecting Users' Legal Rights From Anti-Circumvention Law.\r\n\r\n  No covered work shall be deemed part of an effective technological\r\nmeasure under any applicable law fulfilling obligations under article\r\n11 of the WIPO copyright treaty adopted on 20 December 1996, or\r\nsimilar laws prohibiting or restricting circumvention of such\r\nmeasures.\r\n\r\n  When you convey a covered work, you waive any legal power to forbid\r\ncircumvention of technological measures to the extent such circumvention\r\nis effected by exercising rights under this License with respect to\r\nthe covered work, and you disclaim any intention to limit operation or\r\nmodification of the work as a means of enforcing, against the work's\r\nusers, your or third parties' legal rights to forbid circumvention of\r\ntechnological measures.\r\n\r\n  4. Conveying Verbatim Copies.\r\n\r\n  You may convey verbatim copies of the Program's source code as you\r\nreceive it, in any medium, provided that you conspicuously and\r\nappropriately publish on each copy an appropriate copyright notice;\r\nkeep intact all notices stating that this License and any\r\nnon-permissive terms added in accord with section 7 apply to the code;\r\nkeep intact all notices of the absence of any warranty; and give all\r\nrecipients a copy of this License along with the Program.\r\n\r\n  You may charge any price or no price for each copy that you convey,\r\nand you may offer support or warranty protection for a fee.\r\n\r\n  5. 
Conveying Modified Source Versions.\r\n\r\n  You may convey a work based on the Program, or the modifications to\r\nproduce it from the Program, in the form of source code under the\r\nterms of section 4, provided that you also meet all of these conditions:\r\n\r\n    a) The work must carry prominent notices stating that you modified\r\n    it, and giving a relevant date.\r\n\r\n    b) The work must carry prominent notices stating that it is\r\n    released under this License and any conditions added under section\r\n    7.  This requirement modifies the requirement in section 4 to\r\n    \"keep intact all notices\".\r\n\r\n    c) You must license the entire work, as a whole, under this\r\n    License to anyone who comes into possession of a copy.  This\r\n    License will therefore apply, along with any applicable section 7\r\n    additional terms, to the whole of the work, and all its parts,\r\n    regardless of how they are packaged.  This License gives no\r\n    permission to license the work in any other way, but it does not\r\n    invalidate such permission if you have separately received it.\r\n\r\n    d) If the work has interactive user interfaces, each must display\r\n    Appropriate Legal Notices; however, if the Program has interactive\r\n    interfaces that do not display Appropriate Legal Notices, your\r\n    work need not make them do so.\r\n\r\n  A compilation of a covered work with other separate and independent\r\nworks, which are not by their nature extensions of the covered work,\r\nand which are not combined with it such as to form a larger program,\r\nin or on a volume of a storage or distribution medium, is called an\r\n\"aggregate\" if the compilation and its resulting copyright are not\r\nused to limit the access or legal rights of the compilation's users\r\nbeyond what the individual works permit.  Inclusion of a covered work\r\nin an aggregate does not cause this License to apply to the other\r\nparts of the aggregate.\r\n\r\n  6. 
Conveying Non-Source Forms.\r\n\r\n  You may convey a covered work in object code form under the terms\r\nof sections 4 and 5, provided that you also convey the\r\nmachine-readable Corresponding Source under the terms of this License,\r\nin one of these ways:\r\n\r\n    a) Convey the object code in, or embodied in, a physical product\r\n    (including a physical distribution medium), accompanied by the\r\n    Corresponding Source fixed on a durable physical medium\r\n    customarily used for software interchange.\r\n\r\n    b) Convey the object code in, or embodied in, a physical product\r\n    (including a physical distribution medium), accompanied by a\r\n    written offer, valid for at least three years and valid for as\r\n    long as you offer spare parts or customer support for that product\r\n    model, to give anyone who possesses the object code either (1) a\r\n    copy of the Corresponding Source for all the software in the\r\n    product that is covered by this License, on a durable physical\r\n    medium customarily used for software interchange, for a price no\r\n    more than your reasonable cost of physically performing this\r\n    conveying of source, or (2) access to copy the\r\n    Corresponding Source from a network server at no charge.\r\n\r\n    c) Convey individual copies of the object code with a copy of the\r\n    written offer to provide the Corresponding Source.  This\r\n    alternative is allowed only occasionally and noncommercially, and\r\n    only if you received the object code with such an offer, in accord\r\n    with subsection 6b.\r\n\r\n    d) Convey the object code by offering access from a designated\r\n    place (gratis or for a charge), and offer equivalent access to the\r\n    Corresponding Source in the same way through the same place at no\r\n    further charge.  You need not require recipients to copy the\r\n    Corresponding Source along with the object code.  
If the place to\r\n    copy the object code is a network server, the Corresponding Source\r\n    may be on a different server (operated by you or a third party)\r\n    that supports equivalent copying facilities, provided you maintain\r\n    clear directions next to the object code saying where to find the\r\n    Corresponding Source.  Regardless of what server hosts the\r\n    Corresponding Source, you remain obligated to ensure that it is\r\n    available for as long as needed to satisfy these requirements.\r\n\r\n    e) Convey the object code using peer-to-peer transmission, provided\r\n    you inform other peers where the object code and Corresponding\r\n    Source of the work are being offered to the general public at no\r\n    charge under subsection 6d.\r\n\r\n  A separable portion of the object code, whose source code is excluded\r\nfrom the Corresponding Source as a System Library, need not be\r\nincluded in conveying the object code work.\r\n\r\n  A \"User Product\" is either (1) a \"consumer product\", which means any\r\ntangible personal property which is normally used for personal, family,\r\nor household purposes, or (2) anything designed or sold for incorporation\r\ninto a dwelling.  In determining whether a product is a consumer product,\r\ndoubtful cases shall be resolved in favor of coverage.  For a particular\r\nproduct received by a particular user, \"normally used\" refers to a\r\ntypical or common use of that class of product, regardless of the status\r\nof the particular user or of the way in which the particular user\r\nactually uses, or expects or is expected to use, the product.  
A product\r\nis a consumer product regardless of whether the product has substantial\r\ncommercial, industrial or non-consumer uses, unless such uses represent\r\nthe only significant mode of use of the product.\r\n\r\n  \"Installation Information\" for a User Product means any methods,\r\nprocedures, authorization keys, or other information required to install\r\nand execute modified versions of a covered work in that User Product from\r\na modified version of its Corresponding Source.  The information must\r\nsuffice to ensure that the continued functioning of the modified object\r\ncode is in no case prevented or interfered with solely because\r\nmodification has been made.\r\n\r\n  If you convey an object code work under this section in, or with, or\r\nspecifically for use in, a User Product, and the conveying occurs as\r\npart of a transaction in which the right of possession and use of the\r\nUser Product is transferred to the recipient in perpetuity or for a\r\nfixed term (regardless of how the transaction is characterized), the\r\nCorresponding Source conveyed under this section must be accompanied\r\nby the Installation Information.  But this requirement does not apply\r\nif neither you nor any third party retains the ability to install\r\nmodified object code on the User Product (for example, the work has\r\nbeen installed in ROM).\r\n\r\n  The requirement to provide Installation Information does not include a\r\nrequirement to continue to provide support service, warranty, or updates\r\nfor a work that has been modified or installed by the recipient, or for\r\nthe User Product in which it has been modified or installed.  
Access to a\r\nnetwork may be denied when the modification itself materially and\r\nadversely affects the operation of the network or violates the rules and\r\nprotocols for communication across the network.\r\n\r\n  Corresponding Source conveyed, and Installation Information provided,\r\nin accord with this section must be in a format that is publicly\r\ndocumented (and with an implementation available to the public in\r\nsource code form), and must require no special password or key for\r\nunpacking, reading or copying.\r\n\r\n  7. Additional Terms.\r\n\r\n  \"Additional permissions\" are terms that supplement the terms of this\r\nLicense by making exceptions from one or more of its conditions.\r\nAdditional permissions that are applicable to the entire Program shall\r\nbe treated as though they were included in this License, to the extent\r\nthat they are valid under applicable law.  If additional permissions\r\napply only to part of the Program, that part may be used separately\r\nunder those permissions, but the entire Program remains governed by\r\nthis License without regard to the additional permissions.\r\n\r\n  When you convey a copy of a covered work, you may at your option\r\nremove any additional permissions from that copy, or from any part of\r\nit.  (Additional permissions may be written to require their own\r\nremoval in certain cases when you modify the work.)  
You may place\r\nadditional permissions on material, added by you to a covered work,\r\nfor which you have or can give appropriate copyright permission.\r\n\r\n  Notwithstanding any other provision of this License, for material you\r\nadd to a covered work, you may (if authorized by the copyright holders of\r\nthat material) supplement the terms of this License with terms:\r\n\r\n    a) Disclaiming warranty or limiting liability differently from the\r\n    terms of sections 15 and 16 of this License; or\r\n\r\n    b) Requiring preservation of specified reasonable legal notices or\r\n    author attributions in that material or in the Appropriate Legal\r\n    Notices displayed by works containing it; or\r\n\r\n    c) Prohibiting misrepresentation of the origin of that material, or\r\n    requiring that modified versions of such material be marked in\r\n    reasonable ways as different from the original version; or\r\n\r\n    d) Limiting the use for publicity purposes of names of licensors or\r\n    authors of the material; or\r\n\r\n    e) Declining to grant rights under trademark law for use of some\r\n    trade names, trademarks, or service marks; or\r\n\r\n    f) Requiring indemnification of licensors and authors of that\r\n    material by anyone who conveys the material (or modified versions of\r\n    it) with contractual assumptions of liability to the recipient, for\r\n    any liability that these contractual assumptions directly impose on\r\n    those licensors and authors.\r\n\r\n  All other non-permissive additional terms are considered \"further\r\nrestrictions\" within the meaning of section 10.  If the Program as you\r\nreceived it, or any part of it, contains a notice stating that it is\r\ngoverned by this License along with a term that is a further\r\nrestriction, you may remove that term.  
If a license document contains\r\na further restriction but permits relicensing or conveying under this\r\nLicense, you may add to a covered work material governed by the terms\r\nof that license document, provided that the further restriction does\r\nnot survive such relicensing or conveying.\r\n\r\n  If you add terms to a covered work in accord with this section, you\r\nmust place, in the relevant source files, a statement of the\r\nadditional terms that apply to those files, or a notice indicating\r\nwhere to find the applicable terms.\r\n\r\n  Additional terms, permissive or non-permissive, may be stated in the\r\nform of a separately written license, or stated as exceptions;\r\nthe above requirements apply either way.\r\n\r\n  8. Termination.\r\n\r\n  You may not propagate or modify a covered work except as expressly\r\nprovided under this License.  Any attempt otherwise to propagate or\r\nmodify it is void, and will automatically terminate your rights under\r\nthis License (including any patent licenses granted under the third\r\nparagraph of section 11).\r\n\r\n  However, if you cease all violation of this License, then your\r\nlicense from a particular copyright holder is reinstated (a)\r\nprovisionally, unless and until the copyright holder explicitly and\r\nfinally terminates your license, and (b) permanently, if the copyright\r\nholder fails to notify you of the violation by some reasonable means\r\nprior to 60 days after the cessation.\r\n\r\n  Moreover, your license from a particular copyright holder is\r\nreinstated permanently if the copyright holder notifies you of the\r\nviolation by some reasonable means, this is the first time you have\r\nreceived notice of violation of this License (for any work) from that\r\ncopyright holder, and you cure the violation prior to 30 days after\r\nyour receipt of the notice.\r\n\r\n  Termination of your rights under this section does not terminate the\r\nlicenses of parties who have received copies or rights from 
you under\r\nthis License.  If your rights have been terminated and not permanently\r\nreinstated, you do not qualify to receive new licenses for the same\r\nmaterial under section 10.\r\n\r\n  9. Acceptance Not Required for Having Copies.\r\n\r\n  You are not required to accept this License in order to receive or\r\nrun a copy of the Program.  Ancillary propagation of a covered work\r\noccurring solely as a consequence of using peer-to-peer transmission\r\nto receive a copy likewise does not require acceptance.  However,\r\nnothing other than this License grants you permission to propagate or\r\nmodify any covered work.  These actions infringe copyright if you do\r\nnot accept this License.  Therefore, by modifying or propagating a\r\ncovered work, you indicate your acceptance of this License to do so.\r\n\r\n  10. Automatic Licensing of Downstream Recipients.\r\n\r\n  Each time you convey a covered work, the recipient automatically\r\nreceives a license from the original licensors, to run, modify and\r\npropagate that work, subject to this License.  You are not responsible\r\nfor enforcing compliance by third parties with this License.\r\n\r\n  An \"entity transaction\" is a transaction transferring control of an\r\norganization, or substantially all assets of one, or subdividing an\r\norganization, or merging organizations.  If propagation of a covered\r\nwork results from an entity transaction, each party to that\r\ntransaction who receives a copy of the work also receives whatever\r\nlicenses to the work the party's predecessor in interest had or could\r\ngive under the previous paragraph, plus a right to possession of the\r\nCorresponding Source of the work from the predecessor in interest, if\r\nthe predecessor has it or can get it with reasonable efforts.\r\n\r\n  You may not impose any further restrictions on the exercise of the\r\nrights granted or affirmed under this License.  
For example, you may\r\nnot impose a license fee, royalty, or other charge for exercise of\r\nrights granted under this License, and you may not initiate litigation\r\n(including a cross-claim or counterclaim in a lawsuit) alleging that\r\nany patent claim is infringed by making, using, selling, offering for\r\nsale, or importing the Program or any portion of it.\r\n\r\n  11. Patents.\r\n\r\n  A \"contributor\" is a copyright holder who authorizes use under this\r\nLicense of the Program or a work on which the Program is based.  The\r\nwork thus licensed is called the contributor's \"contributor version\".\r\n\r\n  A contributor's \"essential patent claims\" are all patent claims\r\nowned or controlled by the contributor, whether already acquired or\r\nhereafter acquired, that would be infringed by some manner, permitted\r\nby this License, of making, using, or selling its contributor version,\r\nbut do not include claims that would be infringed only as a\r\nconsequence of further modification of the contributor version.  For\r\npurposes of this definition, \"control\" includes the right to grant\r\npatent sublicenses in a manner consistent with the requirements of\r\nthis License.\r\n\r\n  Each contributor grants you a non-exclusive, worldwide, royalty-free\r\npatent license under the contributor's essential patent claims, to\r\nmake, use, sell, offer for sale, import and otherwise run, modify and\r\npropagate the contents of its contributor version.\r\n\r\n  In the following three paragraphs, a \"patent license\" is any express\r\nagreement or commitment, however denominated, not to enforce a patent\r\n(such as an express permission to practice a patent or covenant not to\r\nsue for patent infringement).  
To \"grant\" such a patent license to a\r\nparty means to make such an agreement or commitment not to enforce a\r\npatent against the party.\r\n\r\n  If you convey a covered work, knowingly relying on a patent license,\r\nand the Corresponding Source of the work is not available for anyone\r\nto copy, free of charge and under the terms of this License, through a\r\npublicly available network server or other readily accessible means,\r\nthen you must either (1) cause the Corresponding Source to be so\r\navailable, or (2) arrange to deprive yourself of the benefit of the\r\npatent license for this particular work, or (3) arrange, in a manner\r\nconsistent with the requirements of this License, to extend the patent\r\nlicense to downstream recipients.  \"Knowingly relying\" means you have\r\nactual knowledge that, but for the patent license, your conveying the\r\ncovered work in a country, or your recipient's use of the covered work\r\nin a country, would infringe one or more identifiable patents in that\r\ncountry that you have reason to believe are valid.\r\n\r\n  If, pursuant to or in connection with a single transaction or\r\narrangement, you convey, or propagate by procuring conveyance of, a\r\ncovered work, and grant a patent license to some of the parties\r\nreceiving the covered work authorizing them to use, propagate, modify\r\nor convey a specific copy of the covered work, then the patent license\r\nyou grant is automatically extended to all recipients of the covered\r\nwork and works based on it.\r\n\r\n  A patent license is \"discriminatory\" if it does not include within\r\nthe scope of its coverage, prohibits the exercise of, or is\r\nconditioned on the non-exercise of one or more of the rights that are\r\nspecifically granted under this License.  
You may not convey a covered\r\nwork if you are a party to an arrangement with a third party that is\r\nin the business of distributing software, under which you make payment\r\nto the third party based on the extent of your activity of conveying\r\nthe work, and under which the third party grants, to any of the\r\nparties who would receive the covered work from you, a discriminatory\r\npatent license (a) in connection with copies of the covered work\r\nconveyed by you (or copies made from those copies), or (b) primarily\r\nfor and in connection with specific products or compilations that\r\ncontain the covered work, unless you entered into that arrangement,\r\nor that patent license was granted, prior to 28 March 2007.\r\n\r\n  Nothing in this License shall be construed as excluding or limiting\r\nany implied license or other defenses to infringement that may\r\notherwise be available to you under applicable patent law.\r\n\r\n  12. No Surrender of Others' Freedom.\r\n\r\n  If conditions are imposed on you (whether by court order, agreement or\r\notherwise) that contradict the conditions of this License, they do not\r\nexcuse you from the conditions of this License.  If you cannot convey a\r\ncovered work so as to satisfy simultaneously your obligations under this\r\nLicense and any other pertinent obligations, then as a consequence you may\r\nnot convey it at all.  For example, if you agree to terms that obligate you\r\nto collect a royalty for further conveying from those to whom you convey\r\nthe Program, the only way you could satisfy both those terms and this\r\nLicense would be to refrain entirely from conveying the Program.\r\n\r\n  13. 
Remote Network Interaction; Use with the GNU General Public License.\r\n\r\n  Notwithstanding any other provision of this License, if you modify the\r\nProgram, your modified version must prominently offer all users\r\ninteracting with it remotely through a computer network (if your version\r\nsupports such interaction) an opportunity to receive the Corresponding\r\nSource of your version by providing access to the Corresponding Source\r\nfrom a network server at no charge, through some standard or customary\r\nmeans of facilitating copying of software.  This Corresponding Source\r\nshall include the Corresponding Source for any work covered by version 3\r\nof the GNU General Public License that is incorporated pursuant to the\r\nfollowing paragraph.\r\n\r\n  Notwithstanding any other provision of this License, you have\r\npermission to link or combine any covered work with a work licensed\r\nunder version 3 of the GNU General Public License into a single\r\ncombined work, and to convey the resulting work.  The terms of this\r\nLicense will continue to apply to the part which is the covered work,\r\nbut the work with which it is combined will remain governed by version\r\n3 of the GNU General Public License.\r\n\r\n  14. Revised Versions of this License.\r\n\r\n  The Free Software Foundation may publish revised and/or new versions of\r\nthe GNU Affero General Public License from time to time.  Such new versions\r\nwill be similar in spirit to the present version, but may differ in detail to\r\naddress new problems or concerns.\r\n\r\n  Each version is given a distinguishing version number.  If the\r\nProgram specifies that a certain numbered version of the GNU Affero General\r\nPublic License \"or any later version\" applies to it, you have the\r\noption of following the terms and conditions either of that numbered\r\nversion or of any later version published by the Free Software\r\nFoundation.  
If the Program does not specify a version number of the\r\nGNU Affero General Public License, you may choose any version ever published\r\nby the Free Software Foundation.\r\n\r\n  If the Program specifies that a proxy can decide which future\r\nversions of the GNU Affero General Public License can be used, that proxy's\r\npublic statement of acceptance of a version permanently authorizes you\r\nto choose that version for the Program.\r\n\r\n  Later license versions may give you additional or different\r\npermissions.  However, no additional obligations are imposed on any\r\nauthor or copyright holder as a result of your choosing to follow a\r\nlater version.\r\n\r\n  15. Disclaimer of Warranty.\r\n\r\n  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY\r\nAPPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT\r\nHOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM \"AS IS\" WITHOUT WARRANTY\r\nOF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,\r\nTHE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR\r\nPURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM\r\nIS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF\r\nALL NECESSARY SERVICING, REPAIR OR CORRECTION.\r\n\r\n  16. Limitation of Liability.\r\n\r\n  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING\r\nWILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS\r\nTHE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY\r\nGENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE\r\nUSE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF\r\nDATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD\r\nPARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),\r\nEVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF\r\nSUCH DAMAGES.\r\n\r\n  17. 
Interpretation of Sections 15 and 16.\r\n\r\n  If the disclaimer of warranty and limitation of liability provided\r\nabove cannot be given local legal effect according to their terms,\r\nreviewing courts shall apply local law that most closely approximates\r\nan absolute waiver of all civil liability in connection with the\r\nProgram, unless a warranty or assumption of liability accompanies a\r\ncopy of the Program in return for a fee.\r\n\r\n                     END OF TERMS AND CONDITIONS\r\n\r\n            How to Apply These Terms to Your New Programs\r\n\r\n  If you develop a new program, and you want it to be of the greatest\r\npossible use to the public, the best way to achieve this is to make it\r\nfree software which everyone can redistribute and change under these terms.\r\n\r\n  To do so, attach the following notices to the program.  It is safest\r\nto attach them to the start of each source file to most effectively\r\nstate the exclusion of warranty; and each file should have at least\r\nthe \"copyright\" line and a pointer to where the full notice is found.\r\n\r\n    <one line to give the program's name and a brief idea of what it does.>\r\n    Copyright (C) <year>  <name of author>\r\n\r\n    This program is free software: you can redistribute it and/or modify\r\n    it under the terms of the GNU Affero General Public License as published by\r\n    the Free Software Foundation, either version 3 of the License, or\r\n    (at your option) any later version.\r\n\r\n    This program is distributed in the hope that it will be useful,\r\n    but WITHOUT ANY WARRANTY; without even the implied warranty of\r\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\r\n    GNU Affero General Public License for more details.\r\n\r\n    You should have received a copy of the GNU Affero General Public License\r\n    along with this program.  
If not, see <https://www.gnu.org/licenses/>.\r\n\r\nAlso add information on how to contact you by electronic and paper mail.\r\n\r\n  If your software can interact with users remotely through a computer\r\nnetwork, you should also make sure that it provides a way for users to\r\nget its source.  For example, if your program is a web application, its\r\ninterface could display a \"Source\" link that leads users to an archive\r\nof the code.  There are many ways you could offer source, and different\r\nsolutions will be better for different programs; see section 13 for the\r\nspecific requirements.\r\n\r\n  You should also get your employer (if you work as a programmer) or school,\r\nif any, to sign a \"copyright disclaimer\" for the program, if necessary.\r\nFor more information on this, and how to apply and follow the GNU AGPL, see\r\n<https://www.gnu.org/licenses/>.\r\n"
  },
  {
    "path": "README.md",
    "content": "# Stable Diffusion web UI\r\nA browser interface based on Gradio library for Stable Diffusion.\r\n\r\n![](screenshot.png)\r\n\r\n## Features\r\n[Detailed feature showcase with images](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features):\r\n- Original txt2img and img2img modes\r\n- One click install and run script (but you still must install python and git)\r\n- Outpainting\r\n- Inpainting\r\n- Color Sketch\r\n- Prompt Matrix\r\n- Stable Diffusion Upscale\r\n- Attention, specify parts of text that the model should pay more attention to\r\n    - a man in a ((tuxedo)) - will pay more attention to tuxedo\r\n    - a man in a (tuxedo:1.21) - alternative syntax\r\n    - select text and press ctrl+up or ctrl+down to automatically adjust attention to selected text (code contributed by anonymous user)\r\n- Loopback, run img2img processing multiple times\r\n- X/Y/Z plot, a way to draw a 3 dimensional plot of images with different parameters\r\n- Textual Inversion\r\n    - have as many embeddings as you want and use any names you like for them\r\n    - use multiple embeddings with different numbers of vectors per token\r\n    - works with half precision floating point numbers\r\n    - train embeddings on 8GB (also reports of 6GB working)\r\n- Extras tab with:\r\n    - GFPGAN, neural network that fixes faces\r\n    - CodeFormer, face restoration tool as an alternative to GFPGAN\r\n    - RealESRGAN, neural network upscaler\r\n    - ESRGAN, neural network upscaler with a lot of third party models\r\n    - SwinIR and Swin2SR([see here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers\r\n    - LDSR, Latent diffusion super resolution upscaling\r\n- Resizing aspect ratio options\r\n- Sampling method selection\r\n    - Adjust sampler eta values (noise multiplier)\r\n    - More advanced noise setting options\r\n- Interrupt processing at any time\r\n- 4GB video card support (also reports of 2GB working)\r\n- 
Correct seeds for batches\r\n- Live prompt token length validation\r\n- Generation parameters\r\n     - parameters you used to generate images are saved with that image\r\n     - in PNG chunks for PNG, in EXIF for JPEG\r\n     - can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI\r\n     - can be disabled in settings\r\n     - drag and drop an image/text-parameters to promptbox\r\n- Read Generation Parameters Button, loads parameters in promptbox to UI\r\n- Settings page\r\n- Running arbitrary python code from UI (must run with --allow-code to enable)\r\n- Mouseover hints for most UI elements\r\n- Possible to change defaults/min/max/step values for UI elements via text config\r\n- Tiling support, a checkbox to create images that can be tiled like textures\r\n- Progress bar and live image generation preview\r\n    - Can use a separate neural network to produce previews with almost no VRAM or compute requirement\r\n- Negative prompt, an extra text field that allows you to list what you don't want to see in generated image\r\n- Styles, a way to save part of prompt and easily apply them via dropdown later\r\n- Variations, a way to generate same image but with tiny differences\r\n- Seed resizing, a way to generate same image but at slightly different resolution\r\n- CLIP interrogator, a button that tries to guess prompt from an image\r\n- Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway\r\n- Batch Processing, process a group of files using img2img\r\n- Img2img Alternative, reverse Euler method of cross attention control\r\n- Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions\r\n- Reloading checkpoints on the fly\r\n- Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one\r\n- [Custom 
scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) with many extensions from the community\r\n- [Composable-Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/), a way to use multiple prompts at once\r\n     - separate prompts using uppercase `AND`\r\n     - also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2`\r\n- No token limit for prompts (original stable diffusion lets you use up to 75 tokens)\r\n- DeepDanbooru integration, creates danbooru style tags for anime prompts\r\n- [xformers](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards: (add --xformers to commandline args)\r\n- via extension: [History tab](https://github.com/yfszzx/stable-diffusion-webui-images-browser): view, direct and delete images conveniently within the UI\r\n- Generate forever option\r\n- Training tab\r\n     - hypernetworks and embeddings options\r\n     - Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)\r\n- Clip skip\r\n- Hypernetworks\r\n- Loras (same as Hypernetworks but more pretty)\r\n- A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt. 
\r\n- Can select to load a different VAE from settings screen\r\n- Estimated completion time in progress bar\r\n- API\r\n- Support for dedicated [inpainting model](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML.\r\n- via extension: [Aesthetic Gradients](https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients), a way to generate images with a specific aesthetic by using CLIP image embeds (implementation of [https://github.com/vicgalle/stable-diffusion-aesthetic-gradients](https://github.com/vicgalle/stable-diffusion-aesthetic-gradients))\r\n- [Stable Diffusion 2.0](https://github.com/Stability-AI/stablediffusion) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20) for instructions\r\n- [Alt-Diffusion](https://arxiv.org/abs/2211.06679) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#alt-diffusion) for instructions\r\n- Now without any bad letters!\r\n- Load checkpoints in safetensors format\r\n- Eased resolution restriction: generated image's dimension must be a multiple of 8 rather than 64\r\n- Now with a license!\r\n- Reorder elements in the UI from settings screen\r\n\r\n## Installation and Running\r\nMake sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for both [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended) and [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs.\r\n\r\nAlternatively, use online services (like Google Colab):\r\n\r\n- [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services)\r\n\r\n### Automatic Installation on Windows\r\n1. 
Install [Python 3.10.6](https://www.python.org/downloads/windows/), checking \"Add Python to PATH\"\r\n2. Install [git](https://git-scm.com/download/win).\r\n3. Download the stable-diffusion-webui repository, for example by running `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`.\r\n4. Run `webui-user.bat` from Windows Explorer as normal, non-administrator, user.\r\n\r\n### Automatic Installation on Linux\r\n1. Install the dependencies:\r\n```bash\r\n# Debian-based:\r\nsudo apt install wget git python3 python3-venv\r\n# Red Hat-based:\r\nsudo dnf install wget git python3\r\n# Arch-based:\r\nsudo pacman -S wget git python3\r\n```\r\n2. To install in `/home/$(whoami)/stable-diffusion-webui/`, run:\r\n```bash\r\nbash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh)\r\n```\r\n3. Run `webui.sh`.\r\n### Installation on Apple Silicon\r\n\r\nFind the instructions [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon).\r\n\r\n## Contributing\r\nHere's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)\r\n\r\n## Documentation\r\nThe documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).\r\n\r\n## Credits\r\nLicenses for borrowed code can be found in `Settings -> Licenses` screen, and also in `html/licenses.html` file.\r\n\r\n- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers\r\n- k-diffusion - https://github.com/crowsonkb/k-diffusion.git\r\n- GFPGAN - https://github.com/TencentARC/GFPGAN.git\r\n- CodeFormer - https://github.com/sczhou/CodeFormer\r\n- ESRGAN - https://github.com/xinntao/ESRGAN\r\n- SwinIR - https://github.com/JingyunLiang/SwinIR\r\n- Swin2SR - https://github.com/mv-lab/swin2sr\r\n- LDSR - https://github.com/Hafiidz/latent-diffusion\r\n- 
MiDaS - https://github.com/isl-org/MiDaS\r\n- Ideas for optimizations - https://github.com/basujindal/stable-diffusion\r\n- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.\r\n- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)\r\n- Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention)\r\n- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).\r\n- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd\r\n- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot\r\n- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator\r\n- Idea for Composable Diffusion - https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch\r\n- xformers - https://github.com/facebookresearch/xformers\r\n- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru\r\n- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6)\r\n- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix\r\n- Security advice - RyotaK\r\n- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.\r\n- (You)\r\n"
  },
  {
    "path": "cache.json",
    "content": "{\r\n    \"hashes\": {\r\n        \"checkpoint/final-prune.ckpt\": {\r\n            \"mtime\": 1665122544.3176749,\r\n            \"sha256\": \"89d59c3dde4c56c6d5c41da34cc55ce479d93b4007046980934b14db71bdb2a8\"\r\n        },\r\n        \"checkpoint/Anything-V3.0-no-ema-full.ckpt\": {\r\n            \"mtime\": 1672839247.3145154,\r\n            \"sha256\": \"7b66ce8d3a639e13f14761a29de61e252cf5aa489ae967228d96166fc3f55b40\"\r\n        }\r\n    }\r\n}"
  },
  {
    "path": "config.json",
    "content": "{\r\n    \"samples_save\": true,\r\n    \"samples_format\": \"png\",\r\n    \"samples_filename_pattern\": \"\",\r\n    \"save_images_add_number\": true,\r\n    \"grid_save\": true,\r\n    \"grid_format\": \"png\",\r\n    \"grid_extended_filename\": false,\r\n    \"grid_only_if_multiple\": true,\r\n    \"grid_prevent_empty_spots\": false,\r\n    \"n_rows\": -1,\r\n    \"enable_pnginfo\": true,\r\n    \"save_txt\": false,\r\n    \"save_images_before_face_restoration\": false,\r\n    \"save_images_before_highres_fix\": false,\r\n    \"save_images_before_color_correction\": false,\r\n    \"jpeg_quality\": 80,\r\n    \"export_for_4chan\": true,\r\n    \"use_original_name_batch\": true,\r\n    \"use_upscaler_name_as_suffix\": false,\r\n    \"save_selected_only\": true,\r\n    \"do_not_add_watermark\": false,\r\n    \"temp_dir\": \"\",\r\n    \"clean_temp_dir_at_start\": false,\r\n    \"outdir_samples\": \"\",\r\n    \"outdir_txt2img_samples\": \"outputs/txt2img-images\",\r\n    \"outdir_img2img_samples\": \"outputs/img2img-images\",\r\n    \"outdir_extras_samples\": \"outputs/extras-images\",\r\n    \"outdir_grids\": \"\",\r\n    \"outdir_txt2img_grids\": \"outputs/txt2img-grids\",\r\n    \"outdir_img2img_grids\": \"outputs/img2img-grids\",\r\n    \"outdir_save\": \"log/images\",\r\n    \"save_to_dirs\": false,\r\n    \"grid_save_to_dirs\": false,\r\n    \"use_save_to_dirs_for_ui\": false,\r\n    \"directories_filename_pattern\": \"[date]\",\r\n    \"directories_max_prompt_words\": 8,\r\n    \"ESRGAN_tile\": 192,\r\n    \"ESRGAN_tile_overlap\": 8,\r\n    \"realesrgan_enabled_models\": [\r\n        \"R-ESRGAN 4x+\",\r\n        \"R-ESRGAN 4x+ Anime6B\"\r\n    ],\r\n    \"upscaler_for_img2img\": null,\r\n    \"face_restoration_model\": null,\r\n    \"code_former_weight\": 0.5,\r\n    \"face_restoration_unload\": false,\r\n    \"show_warnings\": false,\r\n    \"memmon_poll_rate\": 8,\r\n    \"samples_log_stdout\": false,\r\n    \"multiple_tqdm\": true,\r\n    
\"print_hypernet_extra\": false,\r\n    \"unload_models_when_training\": true,\r\n    \"pin_memory\": false,\r\n    \"save_optimizer_state\": false,\r\n    \"save_training_settings_to_txt\": true,\r\n    \"dataset_filename_word_regex\": \"\",\r\n    \"dataset_filename_join_string\": \" \",\r\n    \"training_image_repeats_per_epoch\": 1,\r\n    \"training_write_csv_every\": 500,\r\n    \"training_xattention_optimizations\": false,\r\n    \"training_enable_tensorboard\": false,\r\n    \"training_tensorboard_save_images\": false,\r\n    \"training_tensorboard_flush_every\": 120,\r\n    \"sd_model_checkpoint\": \"\",\r\n    \"sd_checkpoint_cache\": 0,\r\n    \"sd_vae_checkpoint_cache\": 0,\r\n    \"sd_vae\": \"animevae.pt\",\r\n    \"sd_vae_as_default\": false,\r\n    \"inpainting_mask_weight\": 1.0,\r\n    \"initial_noise_multiplier\": 1.0,\r\n    \"img2img_color_correction\": false,\r\n    \"img2img_fix_steps\": false,\r\n    \"img2img_background_color\": \"#ffffff\",\r\n    \"enable_quantization\": false,\r\n    \"enable_emphasis\": true,\r\n    \"enable_batch_seeds\": true,\r\n    \"comma_padding_backtrack\": 20,\r\n    \"CLIP_stop_at_last_layers\": 2,\r\n    \"upcast_attn\": false,\r\n    \"use_old_emphasis_implementation\": false,\r\n    \"use_old_karras_scheduler_sigmas\": false,\r\n    \"use_old_hires_fix_width_height\": false,\r\n    \"interrogate_keep_models_in_memory\": false,\r\n    \"interrogate_return_ranks\": false,\r\n    \"interrogate_clip_num_beams\": 1,\r\n    \"interrogate_clip_min_length\": 24,\r\n    \"interrogate_clip_max_length\": 48,\r\n    \"interrogate_clip_dict_limit\": 1500,\r\n    \"interrogate_clip_skip_categories\": [],\r\n    \"interrogate_deepbooru_score_threshold\": 0.7,\r\n    \"deepbooru_sort_alpha\": false,\r\n    \"deepbooru_use_spaces\": false,\r\n    \"deepbooru_escape\": true,\r\n    \"deepbooru_filter_tags\": \"\",\r\n    \"extra_networks_default_view\": \"thumbs\",\r\n    \"extra_networks_default_multiplier\": 1.0,\r\n    
\"sd_hypernetwork\": \"None\",\r\n    \"return_grid\": true,\r\n    \"do_not_show_images\": false,\r\n    \"add_model_hash_to_info\": true,\r\n    \"add_model_name_to_info\": true,\r\n    \"disable_weights_auto_swap\": true,\r\n    \"send_seed\": true,\r\n    \"send_size\": true,\r\n    \"font\": \"\",\r\n    \"js_modal_lightbox\": true,\r\n    \"js_modal_lightbox_initially_zoomed\": true,\r\n    \"show_progress_in_title\": true,\r\n    \"samplers_in_dropdown\": false,\r\n    \"dimensions_and_batch_together\": true,\r\n    \"keyedit_precision_attention\": 0.1,\r\n    \"keyedit_precision_extra\": 0.05,\r\n    \"quicksettings\": \"sd_model_checkpoint, sd_vae, CLIP_stop_at_last_layers\",\r\n    \"ui_reorder\": \"inpaint, sampler, checkboxes, hires_fix, dimensions, cfg, seed, batch, override_settings, scripts\",\r\n    \"ui_extra_networks_tab_reorder\": \"\",\r\n    \"localization\": \"zh_CN\",\r\n    \"show_progressbar\": true,\r\n    \"live_previews_enable\": true,\r\n    \"show_progress_grid\": true,\r\n    \"show_progress_every_n_steps\": 20,\r\n    \"show_progress_type\": \"Approx NN\",\r\n    \"live_preview_content\": \"Prompt\",\r\n    \"live_preview_refresh_period\": 1000,\r\n    \"hide_samplers\": [],\r\n    \"eta_ddim\": 0.0,\r\n    \"eta_ancestral\": 1.0,\r\n    \"ddim_discretize\": \"uniform\",\r\n    \"s_churn\": 0.0,\r\n    \"s_tmin\": 0.0,\r\n    \"s_noise\": 1.0,\r\n    \"eta_noise_seed_delta\": 31337,\r\n    \"always_discard_next_to_last_sigma\": false,\r\n    \"postprocessing_enable_in_main_ui\": [],\r\n    \"postprocessing_operation_order\": [],\r\n    \"upscaling_max_images_in_cache\": 5,\r\n    \"disabled_extensions\": [],\r\n    \"sd_checkpoint_hash\": \"7af57400eb7303877ec35e5b9e03fc29802c44066828165dc3a20b973c439428\",\r\n    \"ldsr_steps\": 100,\r\n    \"ldsr_cached\": false,\r\n    \"SWIN_tile\": 192,\r\n    \"SWIN_tile_overlap\": 8,\r\n    \"sd_lora\": \"None\",\r\n    \"lora_apply_to_outputs\": false,\r\n    \"tac_tagFile\": 
\"danbooru.csv\",\r\n    \"tac_active\": true,\r\n    \"tac_activeIn.txt2img\": true,\r\n    \"tac_activeIn.img2img\": true,\r\n    \"tac_activeIn.negativePrompts\": true,\r\n    \"tac_activeIn.thirdParty\": true,\r\n    \"tac_activeIn.modelList\": \"\",\r\n    \"tac_activeIn.modelListMode\": \"Blacklist\",\r\n    \"tac_maxResults\": 15.0,\r\n    \"tac_showAllResults\": false,\r\n    \"tac_resultStepLength\": 100.0,\r\n    \"tac_delayTime\": 100.0,\r\n    \"tac_useWildcards\": true,\r\n    \"tac_useEmbeddings\": true,\r\n    \"tac_useHypernetworks\": true,\r\n    \"tac_useLoras\": true,\r\n    \"tac_showWikiLinks\": false,\r\n    \"tac_replaceUnderscores\": true,\r\n    \"tac_escapeParentheses\": true,\r\n    \"tac_appendComma\": true,\r\n    \"tac_alias.searchByAlias\": true,\r\n    \"tac_alias.onlyShowAlias\": false,\r\n    \"tac_translation.translationFile\": \"None\",\r\n    \"tac_translation.oldFormat\": false,\r\n    \"tac_translation.searchByTranslation\": true,\r\n    \"tac_extra.extraFile\": \"extra-quality-tags.csv\",\r\n    \"tac_extra.addMode\": \"Insert before\",\r\n    \"additional_networks_extra_lora_path\": \"\",\r\n    \"additional_networks_sort_models_by\": \"name\",\r\n    \"additional_networks_reverse_sort_order\": false,\r\n    \"additional_networks_model_name_filter\": \"\",\r\n    \"additional_networks_xy_grid_model_metadata\": \"\",\r\n    \"additional_networks_hash_thread_count\": 1.0,\r\n    \"additional_networks_back_up_model_when_saving\": true,\r\n    \"additional_networks_show_only_safetensors\": false,\r\n    \"additional_networks_show_only_models_with_metadata\": \"disabled\",\r\n    \"additional_networks_max_top_tags\": 20.0,\r\n    \"additional_networks_max_dataset_folders\": 20.0,\r\n    \"images_history_preload\": false,\r\n    \"images_record_paths\": true,\r\n    \"images_delete_message\": true,\r\n    \"images_history_page_columns\": 6.0,\r\n    \"images_history_page_rows\": 6.0,\r\n    \"images_history_pages_perload\": 
20.0,\r\n    \"img_downscale_threshold\": 4.0,\r\n    \"target_side_length\": 4000.0,\r\n    \"no_dpmpp_sde_batch_determinism\": false\r\n}"
  },
  {
    "path": "configs/alt-diffusion-inference.yaml",
    "content": "model:\r\n  base_learning_rate: 1.0e-04\r\n  target: ldm.models.diffusion.ddpm.LatentDiffusion\r\n  params:\r\n    linear_start: 0.00085\r\n    linear_end: 0.0120\r\n    num_timesteps_cond: 1\r\n    log_every_t: 200\r\n    timesteps: 1000\r\n    first_stage_key: \"jpg\"\r\n    cond_stage_key: \"txt\"\r\n    image_size: 64\r\n    channels: 4\r\n    cond_stage_trainable: false   # Note: different from the one we trained before\r\n    conditioning_key: crossattn\r\n    monitor: val/loss_simple_ema\r\n    scale_factor: 0.18215\r\n    use_ema: False\r\n\r\n    scheduler_config: # 10000 warmup steps\r\n      target: ldm.lr_scheduler.LambdaLinearScheduler\r\n      params:\r\n        warm_up_steps: [ 10000 ]\r\n        cycle_lengths: [ 10000000000000 ] # incredibly large number to prevent corner cases\r\n        f_start: [ 1.e-6 ]\r\n        f_max: [ 1. ]\r\n        f_min: [ 1. ]\r\n\r\n    unet_config:\r\n      target: ldm.modules.diffusionmodules.openaimodel.UNetModel\r\n      params:\r\n        image_size: 32 # unused\r\n        in_channels: 4\r\n        out_channels: 4\r\n        model_channels: 320\r\n        attention_resolutions: [ 4, 2, 1 ]\r\n        num_res_blocks: 2\r\n        channel_mult: [ 1, 2, 4, 4 ]\r\n        num_heads: 8\r\n        use_spatial_transformer: True\r\n        transformer_depth: 1\r\n        context_dim: 768\r\n        use_checkpoint: True\r\n        legacy: False\r\n\r\n    first_stage_config:\r\n      target: ldm.models.autoencoder.AutoencoderKL\r\n      params:\r\n        embed_dim: 4\r\n        monitor: val/rec_loss\r\n        ddconfig:\r\n          double_z: true\r\n          z_channels: 4\r\n          resolution: 256\r\n          in_channels: 3\r\n          out_ch: 3\r\n          ch: 128\r\n          ch_mult:\r\n          - 1\r\n          - 2\r\n          - 4\r\n          - 4\r\n          num_res_blocks: 2\r\n          attn_resolutions: []\r\n          dropout: 0.0\r\n        lossconfig:\r\n          target: 
torch.nn.Identity\r\n\r\n    cond_stage_config:\r\n      target: modules.xlmr.BertSeriesModelWithTransformation\r\n      params:\r\n        name: \"XLMR-Large\""
  },
  {
    "path": "configs/instruct-pix2pix.yaml",
    "content": "# File modified by authors of InstructPix2Pix from original (https://github.com/CompVis/stable-diffusion).\r\n# See more details in LICENSE.\r\n\r\nmodel:\r\n  base_learning_rate: 1.0e-04\r\n  target: modules.models.diffusion.ddpm_edit.LatentDiffusion\r\n  params:\r\n    linear_start: 0.00085\r\n    linear_end: 0.0120\r\n    num_timesteps_cond: 1\r\n    log_every_t: 200\r\n    timesteps: 1000\r\n    first_stage_key: edited\r\n    cond_stage_key: edit\r\n    # image_size: 64\r\n    # image_size: 32\r\n    image_size: 16\r\n    channels: 4\r\n    cond_stage_trainable: false   # Note: different from the one we trained before\r\n    conditioning_key: hybrid\r\n    monitor: val/loss_simple_ema\r\n    scale_factor: 0.18215\r\n    use_ema: false\r\n\r\n    scheduler_config: # 10000 warmup steps\r\n      target: ldm.lr_scheduler.LambdaLinearScheduler\r\n      params:\r\n        warm_up_steps: [ 0 ]\r\n        cycle_lengths: [ 10000000000000 ] # incredibly large number to prevent corner cases\r\n        f_start: [ 1.e-6 ]\r\n        f_max: [ 1. ]\r\n        f_min: [ 1. 
]\r\n\r\n    unet_config:\r\n      target: ldm.modules.diffusionmodules.openaimodel.UNetModel\r\n      params:\r\n        image_size: 32 # unused\r\n        in_channels: 8\r\n        out_channels: 4\r\n        model_channels: 320\r\n        attention_resolutions: [ 4, 2, 1 ]\r\n        num_res_blocks: 2\r\n        channel_mult: [ 1, 2, 4, 4 ]\r\n        num_heads: 8\r\n        use_spatial_transformer: True\r\n        transformer_depth: 1\r\n        context_dim: 768\r\n        use_checkpoint: True\r\n        legacy: False\r\n\r\n    first_stage_config:\r\n      target: ldm.models.autoencoder.AutoencoderKL\r\n      params:\r\n        embed_dim: 4\r\n        monitor: val/rec_loss\r\n        ddconfig:\r\n          double_z: true\r\n          z_channels: 4\r\n          resolution: 256\r\n          in_channels: 3\r\n          out_ch: 3\r\n          ch: 128\r\n          ch_mult:\r\n          - 1\r\n          - 2\r\n          - 4\r\n          - 4\r\n          num_res_blocks: 2\r\n          attn_resolutions: []\r\n          dropout: 0.0\r\n        lossconfig:\r\n          target: torch.nn.Identity\r\n\r\n    cond_stage_config:\r\n      target: ldm.modules.encoders.modules.FrozenCLIPEmbedder\r\n\r\ndata:\r\n  target: main.DataModuleFromConfig\r\n  params:\r\n    batch_size: 128\r\n    num_workers: 1\r\n    wrap: false\r\n    validation:\r\n      target: edit_dataset.EditDataset\r\n      params:\r\n        path: data/clip-filtered-dataset\r\n        cache_dir:  data/\r\n        cache_name: data_10k\r\n        split: val\r\n        min_text_sim: 0.2\r\n        min_image_sim: 0.75\r\n        min_direction_sim: 0.2\r\n        max_samples_per_prompt: 1\r\n        min_resize_res: 512\r\n        max_resize_res: 512\r\n        crop_res: 512\r\n        output_as_edit: False\r\n        real_input: True\r\n"
  },
  {
    "path": "configs/v1-inference.yaml",
    "content": "model:\r\n  base_learning_rate: 1.0e-04\r\n  target: ldm.models.diffusion.ddpm.LatentDiffusion\r\n  params:\r\n    linear_start: 0.00085\r\n    linear_end: 0.0120\r\n    num_timesteps_cond: 1\r\n    log_every_t: 200\r\n    timesteps: 1000\r\n    first_stage_key: \"jpg\"\r\n    cond_stage_key: \"txt\"\r\n    image_size: 64\r\n    channels: 4\r\n    cond_stage_trainable: false   # Note: different from the one we trained before\r\n    conditioning_key: crossattn\r\n    monitor: val/loss_simple_ema\r\n    scale_factor: 0.18215\r\n    use_ema: False\r\n\r\n    scheduler_config: # 10000 warmup steps\r\n      target: ldm.lr_scheduler.LambdaLinearScheduler\r\n      params:\r\n        warm_up_steps: [ 10000 ]\r\n        cycle_lengths: [ 10000000000000 ] # incredibly large number to prevent corner cases\r\n        f_start: [ 1.e-6 ]\r\n        f_max: [ 1. ]\r\n        f_min: [ 1. ]\r\n\r\n    unet_config:\r\n      target: ldm.modules.diffusionmodules.openaimodel.UNetModel\r\n      params:\r\n        image_size: 32 # unused\r\n        in_channels: 4\r\n        out_channels: 4\r\n        model_channels: 320\r\n        attention_resolutions: [ 4, 2, 1 ]\r\n        num_res_blocks: 2\r\n        channel_mult: [ 1, 2, 4, 4 ]\r\n        num_heads: 8\r\n        use_spatial_transformer: True\r\n        transformer_depth: 1\r\n        context_dim: 768\r\n        use_checkpoint: True\r\n        legacy: False\r\n\r\n    first_stage_config:\r\n      target: ldm.models.autoencoder.AutoencoderKL\r\n      params:\r\n        embed_dim: 4\r\n        monitor: val/rec_loss\r\n        ddconfig:\r\n          double_z: true\r\n          z_channels: 4\r\n          resolution: 256\r\n          in_channels: 3\r\n          out_ch: 3\r\n          ch: 128\r\n          ch_mult:\r\n          - 1\r\n          - 2\r\n          - 4\r\n          - 4\r\n          num_res_blocks: 2\r\n          attn_resolutions: []\r\n          dropout: 0.0\r\n        lossconfig:\r\n          target: 
torch.nn.Identity\r\n\r\n    cond_stage_config:\r\n      target: ldm.modules.encoders.modules.FrozenCLIPEmbedder\r\n"
  },
  {
    "path": "configs/v1-inpainting-inference.yaml",
    "content": "model:\r\n  base_learning_rate: 7.5e-05\r\n  target: ldm.models.diffusion.ddpm.LatentInpaintDiffusion\r\n  params:\r\n    linear_start: 0.00085\r\n    linear_end: 0.0120\r\n    num_timesteps_cond: 1\r\n    log_every_t: 200\r\n    timesteps: 1000\r\n    first_stage_key: \"jpg\"\r\n    cond_stage_key: \"txt\"\r\n    image_size: 64\r\n    channels: 4\r\n    cond_stage_trainable: false   # Note: different from the one we trained before\r\n    conditioning_key: hybrid   # important\r\n    monitor: val/loss_simple_ema\r\n    scale_factor: 0.18215\r\n    finetune_keys: null\r\n\r\n    scheduler_config: # 10000 warmup steps\r\n      target: ldm.lr_scheduler.LambdaLinearScheduler\r\n      params:\r\n        warm_up_steps: [ 2500 ] # NOTE for resuming. use 10000 if starting from scratch\r\n        cycle_lengths: [ 10000000000000 ] # incredibly large number to prevent corner cases\r\n        f_start: [ 1.e-6 ]\r\n        f_max: [ 1. ]\r\n        f_min: [ 1. ]\r\n\r\n    unet_config:\r\n      target: ldm.modules.diffusionmodules.openaimodel.UNetModel\r\n      params:\r\n        image_size: 32 # unused\r\n        in_channels: 9  # 4 data + 4 downscaled image + 1 mask\r\n        out_channels: 4\r\n        model_channels: 320\r\n        attention_resolutions: [ 4, 2, 1 ]\r\n        num_res_blocks: 2\r\n        channel_mult: [ 1, 2, 4, 4 ]\r\n        num_heads: 8\r\n        use_spatial_transformer: True\r\n        transformer_depth: 1\r\n        context_dim: 768\r\n        use_checkpoint: True\r\n        legacy: False\r\n\r\n    first_stage_config:\r\n      target: ldm.models.autoencoder.AutoencoderKL\r\n      params:\r\n        embed_dim: 4\r\n        monitor: val/rec_loss\r\n        ddconfig:\r\n          double_z: true\r\n          z_channels: 4\r\n          resolution: 256\r\n          in_channels: 3\r\n          out_ch: 3\r\n          ch: 128\r\n          ch_mult:\r\n          - 1\r\n          - 2\r\n          - 4\r\n          - 4\r\n          num_res_blocks: 
2\r\n          attn_resolutions: []\r\n          dropout: 0.0\r\n        lossconfig:\r\n          target: torch.nn.Identity\r\n\r\n    cond_stage_config:\r\n      target: ldm.modules.encoders.modules.FrozenCLIPEmbedder\r\n"
  },
  {
    "path": "environment-wsl2.yaml",
    "content": "name: automatic\r\nchannels:\r\n  - pytorch\r\n  - defaults\r\ndependencies:\r\n  - python=3.10\r\n  - pip=22.2.2\r\n  - cudatoolkit=11.3\r\n  - pytorch=1.12.1\r\n  - torchvision=0.13.1\r\n  - numpy=1.23.1"
  },
  {
    "path": "launch.py",
    "content": "# this scripts installs necessary requirements and launches main program in webui.py\r\nimport subprocess\r\nimport os\r\nimport sys\r\nimport importlib.util\r\nimport shlex\r\nimport platform\r\nimport argparse\r\nimport json\r\n\r\ndir_repos = \"repositories\"\r\ndir_extensions = \"extensions\"\r\npython = sys.executable\r\ngit = os.environ.get('GIT', \"git\")\r\nindex_url = os.environ.get('INDEX_URL', \"\")\r\nstored_commit_hash = None\r\nskip_install = False\r\n\r\n\r\ndef check_python_version():\r\n    is_windows = platform.system() == \"Windows\"\r\n    major = sys.version_info.major\r\n    minor = sys.version_info.minor\r\n    micro = sys.version_info.micro\r\n\r\n    if is_windows:\r\n        supported_minors = [10]\r\n    else:\r\n        supported_minors = [7, 8, 9, 10, 11]\r\n\r\n    if not (major == 3 and minor in supported_minors):\r\n        import modules.errors\r\n\r\n        modules.errors.print_error_explanation(f\"\"\"\r\nINCOMPATIBLE PYTHON VERSION\r\n\r\nThis program is tested with 3.10.6 Python, but you have {major}.{minor}.{micro}.\r\nIf you encounter an error with \"RuntimeError: Couldn't install torch.\" message,\r\nor any other error regarding unsuccessful package (library) installation,\r\nplease downgrade (or upgrade) to the latest version of 3.10 Python\r\nand delete current Python and \"venv\" folder in WebUI's directory.\r\n\r\nYou can download 3.10 Python from here: https://www.python.org/downloads/release/python-3109/\r\n\r\n{\"Alternatively, use a binary release of WebUI: https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases\" if is_windows else \"\"}\r\n\r\nUse --skip-python-version-check to suppress this warning.\r\n\"\"\")\r\n\r\n\r\ndef commit_hash():\r\n    global stored_commit_hash\r\n\r\n    if stored_commit_hash is not None:\r\n        return stored_commit_hash\r\n\r\n    try:\r\n        stored_commit_hash = run(f\"{git} rev-parse HEAD\").strip()\r\n    except Exception:\r\n        
stored_commit_hash = \"<none>\"\r\n\r\n    return stored_commit_hash\r\n\r\n\r\ndef extract_arg(args, name):\r\n    return [x for x in args if x != name], name in args\r\n\r\n\r\ndef extract_opt(args, name):\r\n    opt = None\r\n    is_present = False\r\n    if name in args:\r\n        is_present = True\r\n        idx = args.index(name)\r\n        del args[idx]\r\n        if idx < len(args) and args[idx][0] != \"-\":\r\n            opt = args[idx]\r\n            del args[idx]\r\n    return args, is_present, opt\r\n\r\n\r\ndef run(command, desc=None, errdesc=None, custom_env=None, live=False):\r\n    if desc is not None:\r\n        print(desc)\r\n\r\n    if live:\r\n        result = subprocess.run(command, shell=True, env=os.environ if custom_env is None else custom_env)\r\n        if result.returncode != 0:\r\n            raise RuntimeError(f\"\"\"{errdesc or 'Error running command'}.\r\nCommand: {command}\r\nError code: {result.returncode}\"\"\")\r\n\r\n        return \"\"\r\n\r\n    result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True, env=os.environ if custom_env is None else custom_env)\r\n\r\n    if result.returncode != 0:\r\n\r\n        message = f\"\"\"{errdesc or 'Error running command'}.\r\nCommand: {command}\r\nError code: {result.returncode}\r\nstdout: {result.stdout.decode(encoding=\"utf8\", errors=\"ignore\") if len(result.stdout)>0 else '<empty>'}\r\nstderr: {result.stderr.decode(encoding=\"utf8\", errors=\"ignore\") if len(result.stderr)>0 else '<empty>'}\r\n\"\"\"\r\n        raise RuntimeError(message)\r\n\r\n    return result.stdout.decode(encoding=\"utf8\", errors=\"ignore\")\r\n\r\n\r\ndef check_run(command):\r\n    result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)\r\n    return result.returncode == 0\r\n\r\n\r\ndef is_installed(package):\r\n    try:\r\n        spec = importlib.util.find_spec(package)\r\n    except ModuleNotFoundError:\r\n        return 
False\r\n\r\n    return spec is not None\r\n\r\n\r\ndef repo_dir(name):\r\n    return os.path.join(dir_repos, name)\r\n\r\n\r\ndef run_python(code, desc=None, errdesc=None):\r\n    return run(f'\"{python}\" -c \"{code}\"', desc, errdesc)\r\n\r\n\r\ndef run_pip(args, desc=None):\r\n    if skip_install:\r\n        return\r\n\r\n    index_url_line = f' --index-url {index_url}' if index_url != '' else ''\r\n    return run(f'\"{python}\" -m pip {args} --prefer-binary{index_url_line}', desc=f\"Installing {desc}\", errdesc=f\"Couldn't install {desc}\")\r\n\r\n\r\ndef check_run_python(code):\r\n    return check_run(f'\"{python}\" -c \"{code}\"')\r\n\r\n\r\ndef git_clone(url, dir, name, commithash=None):\r\n    # TODO clone into temporary dir and move if successful\r\n\r\n    if os.path.exists(dir):\r\n        if commithash is None:\r\n            return\r\n\r\n        current_hash = run(f'\"{git}\" -C \"{dir}\" rev-parse HEAD', None, f\"Couldn't determine {name}'s hash: {commithash}\").strip()\r\n        if current_hash == commithash:\r\n            return\r\n\r\n        run(f'\"{git}\" -C \"{dir}\" fetch', f\"Fetching updates for {name}...\", f\"Couldn't fetch {name}\")\r\n        run(f'\"{git}\" -C \"{dir}\" checkout {commithash}', f\"Checking out commit for {name} with hash: {commithash}...\", f\"Couldn't checkout commit {commithash} for {name}\")\r\n        return\r\n\r\n    run(f'\"{git}\" clone \"{url}\" \"{dir}\"', f\"Cloning {name} into {dir}...\", f\"Couldn't clone {name}\")\r\n\r\n    if commithash is not None:\r\n        run(f'\"{git}\" -C \"{dir}\" checkout {commithash}', None, f\"Couldn't checkout {name}'s hash: {commithash}\")\r\n\r\n\r\ndef version_check(commit):\r\n    try:\r\n        import requests\r\n        commits = requests.get('https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/branches/master').json()\r\n        if commit != \"<none>\" and commits['commit']['sha'] != commit:\r\n            
print(\"--------------------------------------------------------\")\r\n            print(\"| You are not up to date with the most recent release. |\")\r\n            print(\"| Consider running `git pull` to update.               |\")\r\n            print(\"--------------------------------------------------------\")\r\n        elif commits['commit']['sha'] == commit:\r\n            print(\"You are up to date with the most recent release.\")\r\n        else:\r\n            print(\"Not a git clone, can't perform version check.\")\r\n    except Exception as e:\r\n        print(\"version check failed\", e)\r\n\r\n\r\ndef run_extension_installer(extension_dir):\r\n    path_installer = os.path.join(extension_dir, \"install.py\")\r\n    if not os.path.isfile(path_installer):\r\n        return\r\n\r\n    try:\r\n        env = os.environ.copy()\r\n        env['PYTHONPATH'] = os.path.abspath(\".\")\r\n\r\n        print(run(f'\"{python}\" \"{path_installer}\"', errdesc=f\"Error running install.py for extension {extension_dir}\", custom_env=env))\r\n    except Exception as e:\r\n        print(e, file=sys.stderr)\r\n\r\n\r\ndef list_extensions(settings_file):\r\n    settings = {}\r\n\r\n    try:\r\n        if os.path.isfile(settings_file):\r\n            with open(settings_file, \"r\", encoding=\"utf8\") as file:\r\n                settings = json.load(file)\r\n    except Exception as e:\r\n        print(e, file=sys.stderr)\r\n\r\n    disabled_extensions = set(settings.get('disabled_extensions', []))\r\n\r\n    return [x for x in os.listdir(dir_extensions) if x not in disabled_extensions]\r\n\r\n\r\ndef run_extensions_installers(settings_file):\r\n    if not os.path.isdir(dir_extensions):\r\n        return\r\n\r\n    for dirname_extension in list_extensions(settings_file):\r\n        run_extension_installer(os.path.join(dir_extensions, dirname_extension))\r\n\r\n\r\ndef prepare_environment():\r\n    global skip_install\r\n\r\n    torch_command = os.environ.get('TORCH_COMMAND', 
\"pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117\")\r\n    requirements_file = os.environ.get('REQS_FILE', \"requirements_versions.txt\")\r\n    commandline_args = os.environ.get('COMMANDLINE_ARGS', \"\")\r\n\r\n    xformers_package = os.environ.get('XFORMERS_PACKAGE', 'xformers==0.0.16rc425')\r\n    gfpgan_package = os.environ.get('GFPGAN_PACKAGE', \"git+https://github.com/TencentARC/GFPGAN.git@8d2447a2d918f8eba5a4a01463fd48e45126a379\")\r\n    clip_package = os.environ.get('CLIP_PACKAGE', \"git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1\")\r\n    openclip_package = os.environ.get('OPENCLIP_PACKAGE', \"git+https://github.com/mlfoundations/open_clip.git@bb6e834e9c70d9c27d0dc3ecedeebeaeb1ffad6b\")\r\n\r\n    stable_diffusion_repo = os.environ.get('STABLE_DIFFUSION_REPO', \"https://github.com/Stability-AI/stablediffusion.git\")\r\n    taming_transformers_repo = os.environ.get('TAMING_TRANSFORMERS_REPO', \"https://github.com/CompVis/taming-transformers.git\")\r\n    k_diffusion_repo = os.environ.get('K_DIFFUSION_REPO', 'https://github.com/crowsonkb/k-diffusion.git')\r\n    codeformer_repo = os.environ.get('CODEFORMER_REPO', 'https://github.com/sczhou/CodeFormer.git')\r\n    blip_repo = os.environ.get('BLIP_REPO', 'https://github.com/salesforce/BLIP.git')\r\n\r\n    stable_diffusion_commit_hash = os.environ.get('STABLE_DIFFUSION_COMMIT_HASH', \"47b6b607fdd31875c9279cd2f4f16b92e4ea958e\")\r\n    taming_transformers_commit_hash = os.environ.get('TAMING_TRANSFORMERS_COMMIT_HASH', \"24268930bf1dce879235a7fddd0b2355b84d7ea6\")\r\n    k_diffusion_commit_hash = os.environ.get('K_DIFFUSION_COMMIT_HASH', \"5b3af030dd83e0297272d861c19477735d0317ec\")\r\n    codeformer_commit_hash = os.environ.get('CODEFORMER_COMMIT_HASH', \"c5b4593074ba6214284d6acd5f1719b6c5d739af\")\r\n    blip_commit_hash = os.environ.get('BLIP_COMMIT_HASH', \"48211a1594f1321b00f14c9f7a5b4813144b2fb9\")\r\n\r\n 
   sys.argv += shlex.split(commandline_args)\r\n\r\n    parser = argparse.ArgumentParser(add_help=False)\r\n    parser.add_argument(\"--ui-settings-file\", type=str, help=\"filename to use for ui settings\", default='config.json')\r\n    args, _ = parser.parse_known_args(sys.argv)\r\n\r\n    sys.argv, _ = extract_arg(sys.argv, '-f')\r\n    sys.argv, skip_torch_cuda_test = extract_arg(sys.argv, '--skip-torch-cuda-test')\r\n    sys.argv, skip_python_version_check = extract_arg(sys.argv, '--skip-python-version-check')\r\n    sys.argv, reinstall_xformers = extract_arg(sys.argv, '--reinstall-xformers')\r\n    sys.argv, reinstall_torch = extract_arg(sys.argv, '--reinstall-torch')\r\n    sys.argv, update_check = extract_arg(sys.argv, '--update-check')\r\n    sys.argv, run_tests, test_dir = extract_opt(sys.argv, '--tests')\r\n    sys.argv, skip_install = extract_arg(sys.argv, '--skip-install')\r\n    xformers = '--xformers' in sys.argv\r\n    ngrok = '--ngrok' in sys.argv\r\n\r\n    if not skip_python_version_check:\r\n        check_python_version()\r\n\r\n    commit = commit_hash()\r\n\r\n    print(f\"Python {sys.version}\")\r\n    print(f\"Commit hash: {commit}\")\r\n\r\n    if reinstall_torch or not is_installed(\"torch\") or not is_installed(\"torchvision\"):\r\n        run(f'\"{python}\" -m {torch_command}', \"Installing torch and torchvision\", \"Couldn't install torch\", live=True)\r\n\r\n    if not skip_torch_cuda_test:\r\n        run_python(\"import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'\")\r\n\r\n    if not is_installed(\"gfpgan\"):\r\n        run_pip(f\"install {gfpgan_package}\", \"gfpgan\")\r\n\r\n    if not is_installed(\"clip\"):\r\n        run_pip(f\"install {clip_package}\", \"clip\")\r\n\r\n    if not is_installed(\"open_clip\"):\r\n        run_pip(f\"install {openclip_package}\", \"open_clip\")\r\n\r\n    if (not is_installed(\"xformers\") or 
reinstall_xformers) and xformers:\r\n        if platform.system() == \"Windows\":\r\n            if platform.python_version().startswith(\"3.10\"):\r\n                run_pip(f\"install -U -I --no-deps {xformers_package}\", \"xformers\")\r\n            else:\r\n                print(\"Installation of xformers is not supported in this version of Python.\")\r\n                print(\"You can also check this and build manually: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers#building-xformers-on-windows-by-duckness\")\r\n                if not is_installed(\"xformers\"):\r\n                    exit(0)\r\n        elif platform.system() == \"Linux\":\r\n            run_pip(f\"install {xformers_package}\", \"xformers\")\r\n\r\n    if not is_installed(\"pyngrok\") and ngrok:\r\n        run_pip(\"install pyngrok\", \"ngrok\")\r\n\r\n    os.makedirs(dir_repos, exist_ok=True)\r\n\r\n    git_clone(stable_diffusion_repo, repo_dir('stable-diffusion-stability-ai'), \"Stable Diffusion\", stable_diffusion_commit_hash)\r\n    git_clone(taming_transformers_repo, repo_dir('taming-transformers'), \"Taming Transformers\", taming_transformers_commit_hash)\r\n    git_clone(k_diffusion_repo, repo_dir('k-diffusion'), \"K-diffusion\", k_diffusion_commit_hash)\r\n    git_clone(codeformer_repo, repo_dir('CodeFormer'), \"CodeFormer\", codeformer_commit_hash)\r\n    git_clone(blip_repo, repo_dir('BLIP'), \"BLIP\", blip_commit_hash)\r\n\r\n    if not is_installed(\"lpips\"):\r\n        run_pip(f\"install -r {os.path.join(repo_dir('CodeFormer'), 'requirements.txt')}\", \"requirements for CodeFormer\")\r\n\r\n    run_pip(f\"install -r {requirements_file}\", \"requirements for Web UI\")\r\n\r\n    run_extensions_installers(settings_file=args.ui_settings_file)\r\n\r\n    if update_check:\r\n        version_check(commit)\r\n    \r\n    if \"--exit\" in sys.argv:\r\n        print(\"Exiting because of --exit argument\")\r\n        exit(0)\r\n\r\n    if run_tests:\r\n        
exitcode = tests(test_dir)\r\n        exit(exitcode)\r\n\r\n\r\ndef tests(test_dir):\r\n    if \"--api\" not in sys.argv:\r\n        sys.argv.append(\"--api\")\r\n    if \"--ckpt\" not in sys.argv:\r\n        sys.argv.append(\"--ckpt\")\r\n        sys.argv.append(\"./test/test_files/empty.pt\")\r\n    if \"--skip-torch-cuda-test\" not in sys.argv:\r\n        sys.argv.append(\"--skip-torch-cuda-test\")\r\n    if \"--disable-nan-check\" not in sys.argv:\r\n        sys.argv.append(\"--disable-nan-check\")\r\n\r\n    print(f\"Launching Web UI in another process for testing with arguments: {' '.join(sys.argv[1:])}\")\r\n\r\n    os.environ['COMMANDLINE_ARGS'] = \"\"\r\n    with open('test/stdout.txt', \"w\", encoding=\"utf8\") as stdout, open('test/stderr.txt', \"w\", encoding=\"utf8\") as stderr:\r\n        proc = subprocess.Popen([sys.executable, *sys.argv], stdout=stdout, stderr=stderr)\r\n\r\n    import test.server_poll\r\n    exitcode = test.server_poll.run_tests(proc, test_dir)\r\n\r\n    print(f\"Stopping Web UI process with id {proc.pid}\")\r\n    proc.kill()\r\n    return exitcode\r\n\r\n\r\ndef start():\r\n    print(f\"Launching {'API server' if '--nowebui' in sys.argv else 'Web UI'} with arguments: {' '.join(sys.argv[1:])}\")\r\n    import webui\r\n    if '--nowebui' in sys.argv:\r\n        webui.api_only()\r\n    else:\r\n        webui.webui()\r\n\r\n\r\nif __name__ == \"__main__\":\r\n    prepare_environment()\r\n    start()\r\n"
  },
  {
    "path": "params.txt",
    "content": "masterpiece, best quality, 1girl, \r\nNegative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry\r\nSteps: 20, Sampler: Euler a, CFG scale: 7, Seed: 4170070523, Size: 512x512, Model hash: 89d59c3dde, Model: final-prune, Clip skip: 2, ENSD: 31337"
  },
  {
    "path": "requirements.txt",
    "content": "blendmodes\r\naccelerate\r\nbasicsr\r\nfonts\r\nfont-roboto\r\ngfpgan\r\ngradio==3.16.2\r\ninvisible-watermark\r\nnumpy\r\nomegaconf\r\nopencv-contrib-python\r\nrequests\r\npiexif\r\nPillow\r\npytorch_lightning==1.7.7\r\nrealesrgan\r\nscikit-image>=0.19\r\ntimm==0.4.12\r\ntransformers==4.25.1\r\ntorch\r\neinops\r\njsonmerge\r\nclean-fid\r\nresize-right\r\ntorchdiffeq\r\nkornia\r\nlark\r\ninflection\r\nGitPython\r\ntorchsde\r\nsafetensors\r\npsutil\r\n"
  },
  {
    "path": "requirements_versions.txt",
    "content": "blendmodes==2022\r\ntransformers==4.25.1\r\naccelerate==0.12.0\r\nbasicsr==1.4.2\r\ngfpgan==1.3.8\r\ngradio==3.16.2\r\nnumpy==1.23.3\r\nPillow==9.4.0\r\nrealesrgan==0.3.0\r\ntorch\r\nomegaconf==2.2.3\r\npytorch_lightning==1.7.6\r\nscikit-image==0.19.2\r\nfonts\r\nfont-roboto\r\ntimm==0.6.7\r\npiexif==1.1.3\r\neinops==0.4.1\r\njsonmerge==1.8.0\r\nclean-fid==0.1.29\r\nresize-right==0.0.2\r\ntorchdiffeq==0.2.3\r\nkornia==0.6.7\r\nlark==1.1.2\r\ninflection==0.5.1\r\nGitPython==3.1.27\r\ntorchsde==0.2.5\r\nsafetensors==0.2.7\r\nhttpcore<=0.15\r\nfastapi==0.90.1\r\n"
  },
  {
    "path": "script.js",
    "content": "function gradioApp() {\r\n    const elems = document.getElementsByTagName('gradio-app')\r\n    const gradioShadowRoot = elems.length == 0 ? null : elems[0].shadowRoot\r\n    return !!gradioShadowRoot ? gradioShadowRoot : document;\r\n}\r\n\r\nfunction get_uiCurrentTab() {\r\n    return gradioApp().querySelector('#tabs button:not(.border-transparent)')\r\n}\r\n\r\nfunction get_uiCurrentTabContent() {\r\n    return gradioApp().querySelector('.tabitem[id^=tab_]:not([style*=\"display: none\"])')\r\n}\r\n\r\nuiUpdateCallbacks = []\r\nuiLoadedCallbacks = []\r\nuiTabChangeCallbacks = []\r\noptionsChangedCallbacks = []\r\nlet uiCurrentTab = null\r\n\r\nfunction onUiUpdate(callback){\r\n    uiUpdateCallbacks.push(callback)\r\n}\r\nfunction onUiLoaded(callback){\r\n    uiLoadedCallbacks.push(callback)\r\n}\r\nfunction onUiTabChange(callback){\r\n    uiTabChangeCallbacks.push(callback)\r\n}\r\nfunction onOptionsChanged(callback){\r\n    optionsChangedCallbacks.push(callback)\r\n}\r\n\r\nfunction runCallback(x, m){\r\n    try {\r\n        x(m)\r\n    } catch (e) {\r\n        (console.error || console.log).call(console, e.message, e);\r\n    }\r\n}\r\nfunction executeCallbacks(queue, m) {\r\n    queue.forEach(function(x){runCallback(x, m)})\r\n}\r\n\r\nvar executedOnLoaded = false;\r\n\r\ndocument.addEventListener(\"DOMContentLoaded\", function() {\r\n    var mutationObserver = new MutationObserver(function(m){\r\n        if(!executedOnLoaded && gradioApp().querySelector('#txt2img_prompt')){\r\n            executedOnLoaded = true;\r\n            executeCallbacks(uiLoadedCallbacks);\r\n        }\r\n\r\n        executeCallbacks(uiUpdateCallbacks, m);\r\n        const newTab = get_uiCurrentTab();\r\n        if ( newTab && ( newTab !== uiCurrentTab ) ) {\r\n            uiCurrentTab = newTab;\r\n            executeCallbacks(uiTabChangeCallbacks);\r\n        }\r\n    });\r\n    mutationObserver.observe( gradioApp(), { childList:true, subtree:true 
})\r\n});\r\n\r\n/**\r\n * Add a ctrl+enter as a shortcut to start a generation\r\n */\r\ndocument.addEventListener('keydown', function(e) {\r\n    var handled = false;\r\n    if (e.key !== undefined) {\r\n        if((e.key == \"Enter\" && (e.metaKey || e.ctrlKey || e.altKey))) handled = true;\r\n    } else if (e.keyCode !== undefined) {\r\n        if((e.keyCode == 13 && (e.metaKey || e.ctrlKey || e.altKey))) handled = true;\r\n    }\r\n    if (handled) {\r\n        button = get_uiCurrentTabContent().querySelector('button[id$=_generate]');\r\n        if (button) {\r\n            button.click();\r\n        }\r\n        e.preventDefault();\r\n    }\r\n})\r\n\r\n/**\r\n * checks that a UI element is not in another hidden element or tab content\r\n */\r\nfunction uiElementIsVisible(el) {\r\n    let isVisible = !el.closest('.\\\\!hidden');\r\n    if ( ! isVisible ) {\r\n        return false;\r\n    }\r\n\r\n    while( isVisible = el.closest('.tabitem')?.style.display !== 'none' ) {\r\n        if ( ! isVisible ) {\r\n            return false;\r\n        } else if ( el.parentElement ) {\r\n            el = el.parentElement\r\n        } else {\r\n            break;\r\n        }\r\n    }\r\n    return isVisible;\r\n}\r\n"
  },
  {
    "path": "scripts/custom_code.py",
    "content": "import modules.scripts as scripts\r\nimport gradio as gr\r\n\r\nfrom modules.processing import Processed\r\nfrom modules.shared import opts, cmd_opts, state\r\n\r\nclass Script(scripts.Script):\r\n\r\n    def title(self):\r\n        return \"Custom code\"\r\n\r\n    def show(self, is_img2img):\r\n        return cmd_opts.allow_code\r\n\r\n    def ui(self, is_img2img):\r\n        code = gr.Textbox(label=\"Python code\", lines=1, elem_id=self.elem_id(\"code\"))\r\n\r\n        return [code]\r\n\r\n\r\n    def run(self, p, code):\r\n        assert cmd_opts.allow_code, '--allow-code option must be enabled'\r\n\r\n        display_result_data = [[], -1, \"\"]\r\n\r\n        def display(imgs, s=display_result_data[1], i=display_result_data[2]):\r\n            display_result_data[0] = imgs\r\n            display_result_data[1] = s\r\n            display_result_data[2] = i\r\n\r\n        from types import ModuleType\r\n        compiled = compile(code, '', 'exec')\r\n        module = ModuleType(\"testmodule\")\r\n        module.__dict__.update(globals())\r\n        module.p = p\r\n        module.display = display\r\n        exec(compiled, module.__dict__)\r\n\r\n        return Processed(p, *display_result_data)\r\n    \r\n    "
  },
  {
    "path": "scripts/img2imgalt.py",
    "content": "from collections import namedtuple\r\n\r\nimport numpy as np\r\nfrom tqdm import trange\r\n\r\nimport modules.scripts as scripts\r\nimport gradio as gr\r\n\r\nfrom modules import processing, shared, sd_samplers, prompt_parser, sd_samplers_common\r\nfrom modules.processing import Processed\r\nfrom modules.shared import opts, cmd_opts, state\r\n\r\nimport torch\r\nimport k_diffusion as K\r\n\r\nfrom PIL import Image\r\nfrom torch import autocast\r\nfrom einops import rearrange, repeat\r\n\r\n\r\ndef find_noise_for_image(p, cond, uncond, cfg_scale, steps):\r\n    x = p.init_latent\r\n\r\n    s_in = x.new_ones([x.shape[0]])\r\n    dnw = K.external.CompVisDenoiser(shared.sd_model)\r\n    sigmas = dnw.get_sigmas(steps).flip(0)\r\n\r\n    shared.state.sampling_steps = steps\r\n\r\n    for i in trange(1, len(sigmas)):\r\n        shared.state.sampling_step += 1\r\n\r\n        x_in = torch.cat([x] * 2)\r\n        sigma_in = torch.cat([sigmas[i] * s_in] * 2)\r\n        cond_in = torch.cat([uncond, cond])\r\n\r\n        image_conditioning = torch.cat([p.image_conditioning] * 2)\r\n        cond_in = {\"c_concat\": [image_conditioning], \"c_crossattn\": [cond_in]}\r\n\r\n        c_out, c_in = [K.utils.append_dims(k, x_in.ndim) for k in dnw.get_scalings(sigma_in)]\r\n        t = dnw.sigma_to_t(sigma_in)\r\n\r\n        eps = shared.sd_model.apply_model(x_in * c_in, t, cond=cond_in)\r\n        denoised_uncond, denoised_cond = (x_in + eps * c_out).chunk(2)\r\n\r\n        denoised = denoised_uncond + (denoised_cond - denoised_uncond) * cfg_scale\r\n\r\n        d = (x - denoised) / sigmas[i]\r\n        dt = sigmas[i] - sigmas[i - 1]\r\n\r\n        x = x + d * dt\r\n\r\n        sd_samplers_common.store_latent(x)\r\n\r\n        # This shouldn't be necessary, but solved some VRAM issues\r\n        del x_in, sigma_in, cond_in, c_out, c_in, t,\r\n        del eps, denoised_uncond, denoised_cond, denoised, d, dt\r\n\r\n    shared.state.nextjob()\r\n\r\n    return x / 
x.std()\r\n\r\n\r\nCached = namedtuple(\"Cached\", [\"noise\", \"cfg_scale\", \"steps\", \"latent\", \"original_prompt\", \"original_negative_prompt\", \"sigma_adjustment\"])\r\n\r\n\r\n# Based on changes suggested by briansemrau in https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/736\r\ndef find_noise_for_image_sigma_adjustment(p, cond, uncond, cfg_scale, steps):\r\n    x = p.init_latent\r\n\r\n    s_in = x.new_ones([x.shape[0]])\r\n    dnw = K.external.CompVisDenoiser(shared.sd_model)\r\n    sigmas = dnw.get_sigmas(steps).flip(0)\r\n\r\n    shared.state.sampling_steps = steps\r\n\r\n    for i in trange(1, len(sigmas)):\r\n        shared.state.sampling_step += 1\r\n\r\n        x_in = torch.cat([x] * 2)\r\n        sigma_in = torch.cat([sigmas[i - 1] * s_in] * 2)\r\n        cond_in = torch.cat([uncond, cond])\r\n\r\n        image_conditioning = torch.cat([p.image_conditioning] * 2)\r\n        cond_in = {\"c_concat\": [image_conditioning], \"c_crossattn\": [cond_in]}\r\n\r\n        c_out, c_in = [K.utils.append_dims(k, x_in.ndim) for k in dnw.get_scalings(sigma_in)]\r\n\r\n        if i == 1:\r\n            t = dnw.sigma_to_t(torch.cat([sigmas[i] * s_in] * 2))\r\n        else:\r\n            t = dnw.sigma_to_t(sigma_in)\r\n\r\n        eps = shared.sd_model.apply_model(x_in * c_in, t, cond=cond_in)\r\n        denoised_uncond, denoised_cond = (x_in + eps * c_out).chunk(2)\r\n\r\n        denoised = denoised_uncond + (denoised_cond - denoised_uncond) * cfg_scale\r\n\r\n        if i == 1:\r\n            d = (x - denoised) / (2 * sigmas[i])\r\n        else:\r\n            d = (x - denoised) / sigmas[i - 1]\r\n\r\n        dt = sigmas[i] - sigmas[i - 1]\r\n        x = x + d * dt\r\n\r\n        sd_samplers_common.store_latent(x)\r\n\r\n        # This shouldn't be necessary, but solved some VRAM issues\r\n        del x_in, sigma_in, cond_in, c_out, c_in, t,\r\n        del eps, denoised_uncond, denoised_cond, denoised, d, dt\r\n\r\n    
shared.state.nextjob()\r\n\r\n    return x / sigmas[-1]\r\n\r\n\r\nclass Script(scripts.Script):\r\n    def __init__(self):\r\n        self.cache = None\r\n\r\n    def title(self):\r\n        return \"img2img alternative test\"\r\n\r\n    def show(self, is_img2img):\r\n        return is_img2img\r\n\r\n    def ui(self, is_img2img):     \r\n        info = gr.Markdown('''\r\n        * `CFG Scale` should be 2 or lower.\r\n        ''')\r\n\r\n        override_sampler = gr.Checkbox(label=\"Override `Sampling method` to Euler?(this method is built for it)\", value=True, elem_id=self.elem_id(\"override_sampler\"))\r\n\r\n        override_prompt = gr.Checkbox(label=\"Override `prompt` to the same value as `original prompt`?(and `negative prompt`)\", value=True, elem_id=self.elem_id(\"override_prompt\"))\r\n        original_prompt = gr.Textbox(label=\"Original prompt\", lines=1, elem_id=self.elem_id(\"original_prompt\"))\r\n        original_negative_prompt = gr.Textbox(label=\"Original negative prompt\", lines=1, elem_id=self.elem_id(\"original_negative_prompt\"))\r\n\r\n        override_steps = gr.Checkbox(label=\"Override `Sampling Steps` to the same value as `Decode steps`?\", value=True, elem_id=self.elem_id(\"override_steps\"))\r\n        st = gr.Slider(label=\"Decode steps\", minimum=1, maximum=150, step=1, value=50, elem_id=self.elem_id(\"st\"))\r\n\r\n        override_strength = gr.Checkbox(label=\"Override `Denoising strength` to 1?\", value=True, elem_id=self.elem_id(\"override_strength\"))\r\n\r\n        cfg = gr.Slider(label=\"Decode CFG scale\", minimum=0.0, maximum=15.0, step=0.1, value=1.0, elem_id=self.elem_id(\"cfg\"))\r\n        randomness = gr.Slider(label=\"Randomness\", minimum=0.0, maximum=1.0, step=0.01, value=0.0, elem_id=self.elem_id(\"randomness\"))\r\n        sigma_adjustment = gr.Checkbox(label=\"Sigma adjustment for finding noise for image\", value=False, elem_id=self.elem_id(\"sigma_adjustment\"))\r\n\r\n        return [\r\n            info, 
\r\n            override_sampler,\r\n            override_prompt, original_prompt, original_negative_prompt, \r\n            override_steps, st,\r\n            override_strength,\r\n            cfg, randomness, sigma_adjustment,\r\n        ]\r\n\r\n    def run(self, p, _, override_sampler, override_prompt, original_prompt, original_negative_prompt, override_steps, st, override_strength, cfg, randomness, sigma_adjustment):\r\n        # Override\r\n        if override_sampler:\r\n            p.sampler_name = \"Euler\"\r\n        if override_prompt:\r\n            p.prompt = original_prompt\r\n            p.negative_prompt = original_negative_prompt\r\n        if override_steps:\r\n            p.steps = st\r\n        if override_strength:\r\n            p.denoising_strength = 1.0\r\n\r\n        def sample_extra(conditioning, unconditional_conditioning, seeds, subseeds, subseed_strength, prompts):\r\n            lat = (p.init_latent.cpu().numpy() * 10).astype(int)\r\n\r\n            same_params = self.cache is not None and self.cache.cfg_scale == cfg and self.cache.steps == st \\\r\n                                and self.cache.original_prompt == original_prompt \\\r\n                                and self.cache.original_negative_prompt == original_negative_prompt \\\r\n                                and self.cache.sigma_adjustment == sigma_adjustment\r\n            same_everything = same_params and self.cache.latent.shape == lat.shape and np.abs(self.cache.latent-lat).sum() < 100\r\n\r\n            if same_everything:\r\n                rec_noise = self.cache.noise\r\n            else:\r\n                shared.state.job_count += 1\r\n                cond = p.sd_model.get_learned_conditioning(p.batch_size * [original_prompt])\r\n                uncond = p.sd_model.get_learned_conditioning(p.batch_size * [original_negative_prompt])\r\n                if sigma_adjustment:\r\n                    rec_noise = find_noise_for_image_sigma_adjustment(p, cond, uncond, cfg, 
st)\r\n                else:\r\n                    rec_noise = find_noise_for_image(p, cond, uncond, cfg, st)\r\n                self.cache = Cached(rec_noise, cfg, st, lat, original_prompt, original_negative_prompt, sigma_adjustment)\r\n\r\n            rand_noise = processing.create_random_tensors(p.init_latent.shape[1:], seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, seed_resize_from_h=p.seed_resize_from_h, seed_resize_from_w=p.seed_resize_from_w, p=p)\r\n            \r\n            combined_noise = ((1 - randomness) * rec_noise + randomness * rand_noise) / ((randomness**2 + (1-randomness)**2) ** 0.5)\r\n            \r\n            sampler = sd_samplers.create_sampler(p.sampler_name, p.sd_model)\r\n\r\n            sigmas = sampler.model_wrap.get_sigmas(p.steps)\r\n            \r\n            noise_dt = combined_noise - (p.init_latent / sigmas[0])\r\n            \r\n            p.seed = p.seed + 1\r\n            \r\n            return sampler.sample_img2img(p, p.init_latent, noise_dt, conditioning, unconditional_conditioning, image_conditioning=p.image_conditioning)\r\n\r\n        p.sample = sample_extra\r\n\r\n        p.extra_generation_params[\"Decode prompt\"] = original_prompt\r\n        p.extra_generation_params[\"Decode negative prompt\"] = original_negative_prompt\r\n        p.extra_generation_params[\"Decode CFG scale\"] = cfg\r\n        p.extra_generation_params[\"Decode steps\"] = st\r\n        p.extra_generation_params[\"Randomness\"] = randomness\r\n        p.extra_generation_params[\"Sigma Adjustment\"] = sigma_adjustment\r\n\r\n        processed = processing.process_images(p)\r\n\r\n        return processed\r\n\r\n"
  },
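The `sample_extra` hook above mixes the recovered noise with fresh random noise as `((1 - r)*rec + r*rand) / sqrt(r**2 + (1-r)**2)`. The denominator is what keeps the blend usable as sampler noise: for independent unit-variance inputs it restores unit variance. A minimal standalone NumPy sketch (the `combine_noise` name is mine, not webui's):

```python
import numpy as np

def combine_noise(rec_noise, rand_noise, randomness):
    # Variance-preserving blend: for independent unit-variance inputs,
    # Var((1-r)*a + r*b) = (1-r)**2 + r**2, so dividing by the square
    # root of that restores unit variance in the mixed noise.
    return ((1 - randomness) * rec_noise + randomness * rand_noise) / (
        (randomness ** 2 + (1 - randomness) ** 2) ** 0.5)

rng = np.random.default_rng(0)
rec = rng.standard_normal(200_000)
rand = rng.standard_normal(200_000)
print(round(float(combine_noise(rec, rand, 0.3).std()), 2))
```

At `randomness=0` the reconstructed noise is used unchanged; at `1` the result is pure random noise; in between the scaling prevents the mixture from being "quieter" than either input.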
  {
    "path": "scripts/loopback.py",
    "content": "import numpy as np\r\nfrom tqdm import trange\r\n\r\nimport modules.scripts as scripts\r\nimport gradio as gr\r\n\r\nfrom modules import processing, shared, sd_samplers, images\r\nfrom modules.processing import Processed\r\nfrom modules.sd_samplers import samplers\r\nfrom modules.shared import opts, cmd_opts, state\r\nfrom modules import deepbooru\r\n\r\n\r\nclass Script(scripts.Script):\r\n    def title(self):\r\n        return \"Loopback\"\r\n\r\n    def show(self, is_img2img):\r\n        return is_img2img\r\n\r\n    def ui(self, is_img2img):        \r\n        loops = gr.Slider(minimum=1, maximum=32, step=1, label='Loops', value=4, elem_id=self.elem_id(\"loops\"))\r\n        denoising_strength_change_factor = gr.Slider(minimum=0.9, maximum=1.1, step=0.01, label='Denoising strength change factor', value=1, elem_id=self.elem_id(\"denoising_strength_change_factor\"))\r\n        append_interrogation = gr.Dropdown(label=\"Append interrogated prompt at each iteration\", choices=[\"None\", \"CLIP\", \"DeepBooru\"], value=\"None\")\r\n\r\n        return [loops, denoising_strength_change_factor, append_interrogation]\r\n\r\n    def run(self, p, loops, denoising_strength_change_factor, append_interrogation):\r\n        processing.fix_seed(p)\r\n        batch_count = p.n_iter\r\n        p.extra_generation_params = {\r\n            \"Denoising strength change factor\": denoising_strength_change_factor,\r\n        }\r\n\r\n        p.batch_size = 1\r\n        p.n_iter = 1\r\n\r\n        output_images, info = None, None\r\n        initial_seed = None\r\n        initial_info = None\r\n\r\n        grids = []\r\n        all_images = []\r\n        original_init_image = p.init_images\r\n        original_prompt = p.prompt\r\n        state.job_count = loops * batch_count\r\n\r\n        initial_color_corrections = [processing.setup_color_correction(p.init_images[0])]\r\n\r\n        for n in range(batch_count):\r\n            history = []\r\n\r\n            # Reset to 
original init image at the start of each batch\r\n            p.init_images = original_init_image\r\n\r\n            for i in range(loops):\r\n                p.n_iter = 1\r\n                p.batch_size = 1\r\n                p.do_not_save_grid = True\r\n\r\n                if opts.img2img_color_correction:\r\n                    p.color_corrections = initial_color_corrections\r\n\r\n                if append_interrogation != \"None\":\r\n                    p.prompt = original_prompt + \", \" if original_prompt != \"\" else \"\"\r\n                    if append_interrogation == \"CLIP\":\r\n                        p.prompt += shared.interrogator.interrogate(p.init_images[0])\r\n                    elif append_interrogation == \"DeepBooru\":\r\n                        p.prompt += deepbooru.model.tag(p.init_images[0])\r\n\r\n                state.job = f\"Iteration {i + 1}/{loops}, batch {n + 1}/{batch_count}\"\r\n\r\n                processed = processing.process_images(p)\r\n\r\n                if initial_seed is None:\r\n                    initial_seed = processed.seed\r\n                    initial_info = processed.info\r\n\r\n                init_img = processed.images[0]\r\n\r\n                p.init_images = [init_img]\r\n                p.seed = processed.seed + 1\r\n                p.denoising_strength = min(max(p.denoising_strength * denoising_strength_change_factor, 0.1), 1)\r\n                history.append(processed.images[0])\r\n\r\n            grid = images.image_grid(history, rows=1)\r\n            if opts.grid_save:\r\n                # use initial_info here; the local `info` variable is never assigned and would always be None\r\n                images.save_image(grid, p.outpath_grids, \"grid\", initial_seed, p.prompt, opts.grid_format, info=initial_info, short_filename=not opts.grid_extended_filename, grid=True, p=p)\r\n\r\n            grids.append(grid)\r\n            all_images += history\r\n\r\n        if opts.return_grid:\r\n            all_images = grids + all_images\r\n\r\n        processed = Processed(p, all_images, initial_seed, initial_info)\r\n\r\n        
return processed\r\n"
  },
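`loopback.py` feeds each result back as the next init image and multiplies the denoising strength by the change factor every iteration, clamped to [0.1, 1]. The resulting schedule, as a standalone sketch (`loop_strengths` is my name for illustration):

```python
def loop_strengths(initial, factor, loops):
    """Denoising-strength schedule used by the loopback loop: each
    iteration multiplies by `factor` and clamps to the [0.1, 1] range."""
    strengths = []
    s = initial
    for _ in range(loops):
        s = min(max(s * factor, 0.1), 1)
        strengths.append(round(s, 4))
    return strengths

print(loop_strengths(0.5, 0.9, 3))  # → [0.45, 0.405, 0.3645]
```

A factor below 1 makes each loop progressively more faithful to the previous frame; above 1 the image drifts further each pass until the strength saturates at 1.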
  {
    "path": "scripts/outpainting_mk_2.py",
    "content": "import math\r\n\r\nimport numpy as np\r\nimport skimage\r\n\r\nimport modules.scripts as scripts\r\nimport gradio as gr\r\nfrom PIL import Image, ImageDraw\r\n\r\nfrom modules import images, processing, devices\r\nfrom modules.processing import Processed, process_images\r\nfrom modules.shared import opts, cmd_opts, state\r\n\r\n\r\n# this function is taken from https://github.com/parlance-zz/g-diffuser-bot\r\ndef get_matched_noise(_np_src_image, np_mask_rgb, noise_q=1, color_variation=0.05):\r\n    # helper fft routines that keep ortho normalization and auto-shift before and after fft\r\n    def _fft2(data):\r\n        if data.ndim > 2:  # has channels\r\n            out_fft = np.zeros((data.shape[0], data.shape[1], data.shape[2]), dtype=np.complex128)\r\n            for c in range(data.shape[2]):\r\n                c_data = data[:, :, c]\r\n                out_fft[:, :, c] = np.fft.fft2(np.fft.fftshift(c_data), norm=\"ortho\")\r\n                out_fft[:, :, c] = np.fft.ifftshift(out_fft[:, :, c])\r\n        else:  # one channel\r\n            out_fft = np.zeros((data.shape[0], data.shape[1]), dtype=np.complex128)\r\n            out_fft[:, :] = np.fft.fft2(np.fft.fftshift(data), norm=\"ortho\")\r\n            out_fft[:, :] = np.fft.ifftshift(out_fft[:, :])\r\n\r\n        return out_fft\r\n\r\n    def _ifft2(data):\r\n        if data.ndim > 2:  # has channels\r\n            out_ifft = np.zeros((data.shape[0], data.shape[1], data.shape[2]), dtype=np.complex128)\r\n            for c in range(data.shape[2]):\r\n                c_data = data[:, :, c]\r\n                out_ifft[:, :, c] = np.fft.ifft2(np.fft.fftshift(c_data), norm=\"ortho\")\r\n                out_ifft[:, :, c] = np.fft.ifftshift(out_ifft[:, :, c])\r\n        else:  # one channel\r\n            out_ifft = np.zeros((data.shape[0], data.shape[1]), dtype=np.complex128)\r\n            out_ifft[:, :] = np.fft.ifft2(np.fft.fftshift(data), norm=\"ortho\")\r\n            out_ifft[:, :] = 
np.fft.ifftshift(out_ifft[:, :])\r\n\r\n        return out_ifft\r\n\r\n    def _get_gaussian_window(width, height, std=3.14, mode=0):\r\n        window_scale_x = float(width / min(width, height))\r\n        window_scale_y = float(height / min(width, height))\r\n\r\n        window = np.zeros((width, height))\r\n        x = (np.arange(width) / width * 2. - 1.) * window_scale_x\r\n        for y in range(height):\r\n            fy = (y / height * 2. - 1.) * window_scale_y\r\n            if mode == 0:\r\n                window[:, y] = np.exp(-(x ** 2 + fy ** 2) * std)\r\n            else:\r\n                window[:, y] = (1 / ((x ** 2 + 1.) * (fy ** 2 + 1.))) ** (std / 3.14)  # hey wait a minute that's not gaussian\r\n\r\n        return window\r\n\r\n    def _get_masked_window_rgb(np_mask_grey, hardness=1.):\r\n        np_mask_rgb = np.zeros((np_mask_grey.shape[0], np_mask_grey.shape[1], 3))\r\n        if hardness != 1.:\r\n            hardened = np_mask_grey[:] ** hardness\r\n        else:\r\n            hardened = np_mask_grey[:]\r\n        for c in range(3):\r\n            np_mask_rgb[:, :, c] = hardened[:]\r\n        return np_mask_rgb\r\n\r\n    width = _np_src_image.shape[0]\r\n    height = _np_src_image.shape[1]\r\n    num_channels = _np_src_image.shape[2]\r\n\r\n    np_src_image = _np_src_image[:] * (1. - np_mask_rgb)\r\n    np_mask_grey = (np.sum(np_mask_rgb, axis=2) / 3.)\r\n    img_mask = np_mask_grey > 1e-6\r\n    ref_mask = np_mask_grey < 1e-3\r\n\r\n    windowed_image = _np_src_image * (1. 
- _get_masked_window_rgb(np_mask_grey))\r\n    windowed_image /= np.max(windowed_image)\r\n    windowed_image += np.average(_np_src_image) * np_mask_rgb  # / (1.-np.average(np_mask_rgb))  # rather than leave the masked area black, we get better results from fft by filling the average unmasked color\r\n\r\n    src_fft = _fft2(windowed_image)  # get feature statistics from masked src img\r\n    src_dist = np.absolute(src_fft)\r\n    src_phase = src_fft / src_dist\r\n\r\n    # create a generator with a static seed to make outpainting deterministic / only follow global seed\r\n    rng = np.random.default_rng(0)\r\n\r\n    noise_window = _get_gaussian_window(width, height, mode=1)  # start with simple gaussian noise\r\n    noise_rgb = rng.random((width, height, num_channels))\r\n    noise_grey = (np.sum(noise_rgb, axis=2) / 3.)\r\n    noise_rgb *= color_variation  # the colorfulness of the starting noise is blended to greyscale with a parameter\r\n    for c in range(num_channels):\r\n        noise_rgb[:, :, c] += (1. - color_variation) * noise_grey\r\n\r\n    noise_fft = _fft2(noise_rgb)\r\n    for c in range(num_channels):\r\n        noise_fft[:, :, c] *= noise_window\r\n    noise_rgb = np.real(_ifft2(noise_fft))\r\n    shaped_noise_fft = _fft2(noise_rgb)\r\n    shaped_noise_fft[:, :, :] = np.absolute(shaped_noise_fft[:, :, :]) ** 2 * (src_dist ** noise_q) * src_phase  # perform the actual shaping\r\n\r\n    brightness_variation = 0.  # color_variation # todo: temporarily tying brightness variation to color variation for now\r\n    contrast_adjusted_np_src = _np_src_image[:] * (brightness_variation + 1.) 
- brightness_variation * 2.\r\n\r\n    # scikit-image is used for histogram matching, very convenient!\r\n    shaped_noise = np.real(_ifft2(shaped_noise_fft))\r\n    shaped_noise -= np.min(shaped_noise)\r\n    shaped_noise /= np.max(shaped_noise)\r\n    shaped_noise[img_mask, :] = skimage.exposure.match_histograms(shaped_noise[img_mask, :] ** 1., contrast_adjusted_np_src[ref_mask, :], channel_axis=1)\r\n    shaped_noise = _np_src_image[:] * (1. - np_mask_rgb) + shaped_noise * np_mask_rgb\r\n\r\n    matched_noise = shaped_noise[:]\r\n\r\n    return np.clip(matched_noise, 0., 1.)\r\n\r\n\r\n\r\nclass Script(scripts.Script):\r\n    def title(self):\r\n        return \"Outpainting mk2\"\r\n\r\n    def show(self, is_img2img):\r\n        return is_img2img\r\n\r\n    def ui(self, is_img2img):\r\n        if not is_img2img:\r\n            return None\r\n\r\n        info = gr.HTML(\"<p style=\\\"margin-bottom:0.75em\\\">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>\")\r\n\r\n        pixels = gr.Slider(label=\"Pixels to expand\", minimum=8, maximum=256, step=8, value=128, elem_id=self.elem_id(\"pixels\"))\r\n        mask_blur = gr.Slider(label='Mask blur', minimum=0, maximum=64, step=1, value=8, elem_id=self.elem_id(\"mask_blur\"))\r\n        direction = gr.CheckboxGroup(label=\"Outpainting direction\", choices=['left', 'right', 'up', 'down'], value=['left', 'right', 'up', 'down'], elem_id=self.elem_id(\"direction\"))\r\n        noise_q = gr.Slider(label=\"Fall-off exponent (lower=higher detail)\", minimum=0.0, maximum=4.0, step=0.01, value=1.0, elem_id=self.elem_id(\"noise_q\"))\r\n        color_variation = gr.Slider(label=\"Color variation\", minimum=0.0, maximum=1.0, step=0.01, value=0.05, elem_id=self.elem_id(\"color_variation\"))\r\n\r\n        return [info, pixels, mask_blur, direction, noise_q, color_variation]\r\n\r\n    def run(self, p, _, pixels, mask_blur, direction, noise_q, color_variation):\r\n        
initial_seed_and_info = [None, None]\r\n\r\n        process_width = p.width\r\n        process_height = p.height\r\n\r\n        p.mask_blur = mask_blur*4\r\n        p.inpaint_full_res = False\r\n        p.inpainting_fill = 1\r\n        p.do_not_save_samples = True\r\n        p.do_not_save_grid = True\r\n\r\n        left = pixels if \"left\" in direction else 0\r\n        right = pixels if \"right\" in direction else 0\r\n        up = pixels if \"up\" in direction else 0\r\n        down = pixels if \"down\" in direction else 0\r\n\r\n        init_img = p.init_images[0]\r\n        target_w = math.ceil((init_img.width + left + right) / 64) * 64\r\n        target_h = math.ceil((init_img.height + up + down) / 64) * 64\r\n\r\n        if left > 0:\r\n            left = left * (target_w - init_img.width) // (left + right)\r\n\r\n        if right > 0:\r\n            right = target_w - init_img.width - left\r\n\r\n        if up > 0:\r\n            up = up * (target_h - init_img.height) // (up + down)\r\n\r\n        if down > 0:\r\n            down = target_h - init_img.height - up\r\n\r\n        def expand(init, count, expand_pixels, is_left=False, is_right=False, is_top=False, is_bottom=False):\r\n            is_horiz = is_left or is_right\r\n            is_vert = is_top or is_bottom\r\n            pixels_horiz = expand_pixels if is_horiz else 0\r\n            pixels_vert = expand_pixels if is_vert else 0\r\n\r\n            images_to_process = []\r\n            output_images = []\r\n            for n in range(count):\r\n                res_w = init[n].width + pixels_horiz\r\n                res_h = init[n].height + pixels_vert\r\n                process_res_w = math.ceil(res_w / 64) * 64\r\n                process_res_h = math.ceil(res_h / 64) * 64\r\n\r\n                img = Image.new(\"RGB\", (process_res_w, process_res_h))\r\n                img.paste(init[n], (pixels_horiz if is_left else 0, pixels_vert if is_top else 0))\r\n                mask = Image.new(\"RGB\", 
(process_res_w, process_res_h), \"white\")\r\n                draw = ImageDraw.Draw(mask)\r\n                draw.rectangle((\r\n                    expand_pixels + mask_blur if is_left else 0,\r\n                    expand_pixels + mask_blur if is_top else 0,\r\n                    mask.width - expand_pixels - mask_blur if is_right else res_w,\r\n                    mask.height - expand_pixels - mask_blur if is_bottom else res_h,\r\n                ), fill=\"black\")\r\n\r\n                np_image = (np.asarray(img) / 255.0).astype(np.float64)\r\n                np_mask = (np.asarray(mask) / 255.0).astype(np.float64)\r\n                noised = get_matched_noise(np_image, np_mask, noise_q, color_variation)\r\n                output_images.append(Image.fromarray(np.clip(noised * 255., 0., 255.).astype(np.uint8), mode=\"RGB\"))\r\n\r\n                target_width = min(process_width, init[n].width + pixels_horiz) if is_horiz else img.width\r\n                target_height = min(process_height, init[n].height + pixels_vert) if is_vert else img.height\r\n                p.width = target_width if is_horiz else img.width\r\n                p.height = target_height if is_vert else img.height\r\n\r\n                crop_region = (\r\n                    0 if is_left else output_images[n].width - target_width,\r\n                    0 if is_top else output_images[n].height - target_height,\r\n                    target_width if is_left else output_images[n].width,\r\n                    target_height if is_top else output_images[n].height,\r\n                )\r\n                mask = mask.crop(crop_region)\r\n                p.image_mask = mask\r\n\r\n                image_to_process = output_images[n].crop(crop_region)\r\n                images_to_process.append(image_to_process)\r\n\r\n            p.init_images = images_to_process\r\n\r\n            latent_mask = Image.new(\"RGB\", (p.width, p.height), \"white\")\r\n            draw = ImageDraw.Draw(latent_mask)\r\n   
         draw.rectangle((\r\n                expand_pixels + mask_blur * 2 if is_left else 0,\r\n                expand_pixels + mask_blur * 2 if is_top else 0,\r\n                mask.width - expand_pixels - mask_blur * 2 if is_right else res_w,\r\n                mask.height - expand_pixels - mask_blur * 2 if is_bottom else res_h,\r\n            ), fill=\"black\")\r\n            p.latent_mask = latent_mask\r\n\r\n            proc = process_images(p)\r\n\r\n            if initial_seed_and_info[0] is None:\r\n                initial_seed_and_info[0] = proc.seed\r\n                initial_seed_and_info[1] = proc.info\r\n\r\n            for n in range(count):\r\n                output_images[n].paste(proc.images[n], (0 if is_left else output_images[n].width - proc.images[n].width, 0 if is_top else output_images[n].height - proc.images[n].height))\r\n                output_images[n] = output_images[n].crop((0, 0, res_w, res_h))\r\n\r\n            return output_images\r\n\r\n        batch_count = p.n_iter\r\n        batch_size = p.batch_size\r\n        p.n_iter = 1\r\n        state.job_count = batch_count * ((1 if left > 0 else 0) + (1 if right > 0 else 0) + (1 if up > 0 else 0) + (1 if down > 0 else 0))\r\n        all_processed_images = []\r\n\r\n        for i in range(batch_count):\r\n            imgs = [init_img] * batch_size\r\n            state.job = f\"Batch {i + 1} out of {batch_count}\"\r\n\r\n            if left > 0:\r\n                imgs = expand(imgs, batch_size, left, is_left=True)\r\n            if right > 0:\r\n                imgs = expand(imgs, batch_size, right, is_right=True)\r\n            if up > 0:\r\n                imgs = expand(imgs, batch_size, up, is_top=True)\r\n            if down > 0:\r\n                imgs = expand(imgs, batch_size, down, is_bottom=True)\r\n\r\n            all_processed_images += imgs\r\n\r\n        all_images = all_processed_images\r\n\r\n        combined_grid_image = images.image_grid(all_processed_images)\r\n        
unwanted_grid_because_of_img_count = len(all_processed_images) < 2 and opts.grid_only_if_multiple\r\n        if opts.return_grid and not unwanted_grid_because_of_img_count:\r\n            all_images = [combined_grid_image] + all_processed_images\r\n\r\n        res = Processed(p, all_images, initial_seed_and_info[0], initial_seed_and_info[1])\r\n\r\n        if opts.samples_save:\r\n            for img in all_processed_images:\r\n                images.save_image(img, p.outpath_samples, \"\", res.seed, p.prompt, opts.grid_format, info=res.info, p=p)\r\n\r\n        if opts.grid_save and not unwanted_grid_because_of_img_count:\r\n            images.save_image(combined_grid_image, p.outpath_grids, \"grid\", res.seed, p.prompt, opts.grid_format, info=res.info, short_filename=not opts.grid_extended_filename, grid=True, p=p)\r\n\r\n        return res\r\n"
  },
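Both outpainting scripts round the expanded canvas up to a multiple of 64 (the Stable Diffusion latent granularity) and then redistribute the rounding slack across the requested directions. That arithmetic, extracted into a standalone sketch (`expansion_layout` is a hypothetical helper name):

```python
import math

def expansion_layout(width, height, left, right, up, down):
    # Round the canvas up to the next multiple of 64, then split the
    # extra pixels between opposite sides proportionally to the request.
    target_w = math.ceil((width + left + right) / 64) * 64
    target_h = math.ceil((height + up + down) / 64) * 64
    if left > 0:
        left = left * (target_w - width) // (left + right)
    if right > 0:
        right = target_w - width - left
    if up > 0:
        up = up * (target_h - height) // (up + down)
    if down > 0:
        down = target_h - height - up
    return target_w, target_h, left, right, up, down
```

Note that a direction requested with 0 pixels stays at 0, so all rounding slack lands on the sides that were actually selected.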
  {
    "path": "scripts/poor_mans_outpainting.py",
    "content": "import math\r\n\r\nimport modules.scripts as scripts\r\nimport gradio as gr\r\nfrom PIL import Image, ImageDraw\r\n\r\nfrom modules import images, processing, devices\r\nfrom modules.processing import Processed, process_images\r\nfrom modules.shared import opts, cmd_opts, state\r\n\r\n\r\nclass Script(scripts.Script):\r\n    def title(self):\r\n        return \"Poor man's outpainting\"\r\n\r\n    def show(self, is_img2img):\r\n        return is_img2img\r\n\r\n    def ui(self, is_img2img):\r\n        if not is_img2img:\r\n            return None\r\n        \r\n        pixels = gr.Slider(label=\"Pixels to expand\", minimum=8, maximum=256, step=8, value=128, elem_id=self.elem_id(\"pixels\"))\r\n        mask_blur = gr.Slider(label='Mask blur', minimum=0, maximum=64, step=1, value=4, elem_id=self.elem_id(\"mask_blur\"))\r\n        inpainting_fill = gr.Radio(label='Masked content', choices=['fill', 'original', 'latent noise', 'latent nothing'], value='fill', type=\"index\", elem_id=self.elem_id(\"inpainting_fill\"))\r\n        direction = gr.CheckboxGroup(label=\"Outpainting direction\", choices=['left', 'right', 'up', 'down'], value=['left', 'right', 'up', 'down'], elem_id=self.elem_id(\"direction\"))\r\n\r\n        return [pixels, mask_blur, inpainting_fill, direction]\r\n\r\n    def run(self, p, pixels, mask_blur, inpainting_fill, direction):\r\n        initial_seed = None\r\n        initial_info = None\r\n\r\n        p.mask_blur = mask_blur * 2\r\n        p.inpainting_fill = inpainting_fill\r\n        p.inpaint_full_res = False\r\n\r\n        left = pixels if \"left\" in direction else 0\r\n        right = pixels if \"right\" in direction else 0\r\n        up = pixels if \"up\" in direction else 0\r\n        down = pixels if \"down\" in direction else 0\r\n\r\n        init_img = p.init_images[0]\r\n        target_w = math.ceil((init_img.width + left + right) / 64) * 64\r\n        target_h = math.ceil((init_img.height + up + down) / 64) * 64\r\n\r\n    
    if left > 0:\r\n            left = left * (target_w - init_img.width) // (left + right)\r\n        if right > 0:\r\n            right = target_w - init_img.width - left\r\n\r\n        if up > 0:\r\n            up = up * (target_h - init_img.height) // (up + down)\r\n\r\n        if down > 0:\r\n            down = target_h - init_img.height - up\r\n\r\n        img = Image.new(\"RGB\", (target_w, target_h))\r\n        img.paste(init_img, (left, up))\r\n\r\n        mask = Image.new(\"L\", (img.width, img.height), \"white\")\r\n        draw = ImageDraw.Draw(mask)\r\n        draw.rectangle((\r\n            left + (mask_blur * 2 if left > 0 else 0),\r\n            up + (mask_blur * 2 if up > 0 else 0),\r\n            mask.width - right - (mask_blur * 2 if right > 0 else 0),\r\n            mask.height - down - (mask_blur * 2 if down > 0 else 0)\r\n        ), fill=\"black\")\r\n\r\n        latent_mask = Image.new(\"L\", (img.width, img.height), \"white\")\r\n        latent_draw = ImageDraw.Draw(latent_mask)\r\n        latent_draw.rectangle((\r\n             left + (mask_blur//2 if left > 0 else 0),\r\n             up + (mask_blur//2 if up > 0 else 0),\r\n             mask.width - right - (mask_blur//2 if right > 0 else 0),\r\n             mask.height - down - (mask_blur//2 if down > 0 else 0)\r\n        ), fill=\"black\")\r\n\r\n        devices.torch_gc()\r\n\r\n        grid = images.split_grid(img, tile_w=p.width, tile_h=p.height, overlap=pixels)\r\n        grid_mask = images.split_grid(mask, tile_w=p.width, tile_h=p.height, overlap=pixels)\r\n        grid_latent_mask = images.split_grid(latent_mask, tile_w=p.width, tile_h=p.height, overlap=pixels)\r\n\r\n        p.n_iter = 1\r\n        p.batch_size = 1\r\n        p.do_not_save_grid = True\r\n        p.do_not_save_samples = True\r\n\r\n        work = []\r\n        work_mask = []\r\n        work_latent_mask = []\r\n        work_results = []\r\n\r\n        for (y, h, row), (_, _, row_mask), (_, _, row_latent_mask) in 
zip(grid.tiles, grid_mask.tiles, grid_latent_mask.tiles):\r\n            for tiledata, tiledata_mask, tiledata_latent_mask in zip(row, row_mask, row_latent_mask):\r\n                x, w = tiledata[0:2]\r\n\r\n                if x >= left and x+w <= img.width - right and y >= up and y+h <= img.height - down:\r\n                    continue\r\n\r\n                work.append(tiledata[2])\r\n                work_mask.append(tiledata_mask[2])\r\n                work_latent_mask.append(tiledata_latent_mask[2])\r\n\r\n        batch_count = len(work)\r\n        print(f\"Poor man's outpainting will process a total of {len(work)} images tiled as {len(grid.tiles[0][2])}x{len(grid.tiles)}.\")\r\n\r\n        state.job_count = batch_count\r\n\r\n        for i in range(batch_count):\r\n            p.init_images = [work[i]]\r\n            p.image_mask = work_mask[i]\r\n            p.latent_mask = work_latent_mask[i]\r\n\r\n            state.job = f\"Batch {i + 1} out of {batch_count}\"\r\n            processed = process_images(p)\r\n\r\n            if initial_seed is None:\r\n                initial_seed = processed.seed\r\n                initial_info = processed.info\r\n\r\n            p.seed = processed.seed + 1\r\n            work_results += processed.images\r\n\r\n\r\n        image_index = 0\r\n        for y, h, row in grid.tiles:\r\n            for tiledata in row:\r\n                x, w = tiledata[0:2]\r\n\r\n                if x >= left and x+w <= img.width - right and y >= up and y+h <= img.height - down:\r\n                    continue\r\n\r\n                tiledata[2] = work_results[image_index] if image_index < len(work_results) else Image.new(\"RGB\", (p.width, p.height))\r\n                image_index += 1\r\n\r\n        combined_image = images.combine_grid(grid)\r\n\r\n        if opts.samples_save:\r\n            images.save_image(combined_image, p.outpath_samples, \"\", initial_seed, p.prompt, opts.grid_format, info=initial_info, p=p)\r\n\r\n        processed = 
Processed(p, [combined_image], initial_seed, initial_info)\r\n\r\n        return processed\r\n\r\n"
  },
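`poor_mans_outpainting.py` splits the padded canvas into tiles and only reprocesses those that overlap the new border; tiles fully inside the original image are skipped (the `continue` in both loops). The skip condition as a standalone predicate (`tile_needs_outpainting` is my name, and `img_w`/`img_h` are the padded canvas dimensions):

```python
def tile_needs_outpainting(x, y, w, h, img_w, img_h, left, right, up, down):
    """True when a tile at (x, y) of size (w, h) touches the expanded
    border region; tiles fully inside the original image are skipped."""
    inside = (x >= left and x + w <= img_w - right
              and y >= up and y + h <= img_h - down)
    return not inside
```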
  {
    "path": "scripts/postprocessing_codeformer.py",
    "content": "from PIL import Image\r\nimport numpy as np\r\n\r\nfrom modules import scripts_postprocessing, codeformer_model\r\nimport gradio as gr\r\n\r\nfrom modules.ui_components import FormRow\r\n\r\n\r\nclass ScriptPostprocessingCodeFormer(scripts_postprocessing.ScriptPostprocessing):\r\n    name = \"CodeFormer\"\r\n    order = 3000\r\n\r\n    def ui(self):\r\n        with FormRow():\r\n            codeformer_visibility = gr.Slider(minimum=0.0, maximum=1.0, step=0.001, label=\"CodeFormer visibility\", value=0, elem_id=\"extras_codeformer_visibility\")\r\n            codeformer_weight = gr.Slider(minimum=0.0, maximum=1.0, step=0.001, label=\"CodeFormer weight (0 = maximum effect, 1 = minimum effect)\", value=0, elem_id=\"extras_codeformer_weight\")\r\n\r\n        return {\r\n            \"codeformer_visibility\": codeformer_visibility,\r\n            \"codeformer_weight\": codeformer_weight,\r\n        }\r\n\r\n    def process(self, pp: scripts_postprocessing.PostprocessedImage, codeformer_visibility, codeformer_weight):\r\n        if codeformer_visibility == 0:\r\n            return\r\n\r\n        restored_img = codeformer_model.codeformer.restore(np.array(pp.image, dtype=np.uint8), w=codeformer_weight)\r\n        res = Image.fromarray(restored_img)\r\n\r\n        if codeformer_visibility < 1.0:\r\n            res = Image.blend(pp.image, res, codeformer_visibility)\r\n\r\n        pp.image = res\r\n        pp.info[\"CodeFormer visibility\"] = round(codeformer_visibility, 3)\r\n        pp.info[\"CodeFormer weight\"] = round(codeformer_weight, 3)\r\n"
  },
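The visibility sliders in the face-restoration postprocessors work by passing the restored image through `Image.blend`, which is a plain linear interpolation per pixel. An equivalent NumPy sketch (`blend_visibility` is an illustrative name, not a webui function):

```python
import numpy as np

def blend_visibility(original, restored, visibility):
    # Equivalent of PIL's Image.blend(original, restored, visibility):
    # out = original * (1 - visibility) + restored * visibility
    original = np.asarray(original, dtype=np.float64)
    restored = np.asarray(restored, dtype=np.float64)
    return original * (1.0 - visibility) + restored * visibility
```

At visibility 0 the step is skipped entirely (the early `return` in `process`), and at 1 the restored image replaces the input, so the blend only runs for intermediate values.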
  {
    "path": "scripts/postprocessing_gfpgan.py",
    "content": "from PIL import Image\r\nimport numpy as np\r\n\r\nfrom modules import scripts_postprocessing, gfpgan_model\r\nimport gradio as gr\r\n\r\nfrom modules.ui_components import FormRow\r\n\r\n\r\nclass ScriptPostprocessingGfpGan(scripts_postprocessing.ScriptPostprocessing):\r\n    name = \"GFPGAN\"\r\n    order = 2000\r\n\r\n    def ui(self):\r\n        with FormRow():\r\n            gfpgan_visibility = gr.Slider(minimum=0.0, maximum=1.0, step=0.001, label=\"GFPGAN visibility\", value=0, elem_id=\"extras_gfpgan_visibility\")\r\n\r\n        return {\r\n            \"gfpgan_visibility\": gfpgan_visibility,\r\n        }\r\n\r\n    def process(self, pp: scripts_postprocessing.PostprocessedImage, gfpgan_visibility):\r\n        if gfpgan_visibility == 0:\r\n            return\r\n\r\n        restored_img = gfpgan_model.gfpgan_fix_faces(np.array(pp.image, dtype=np.uint8))\r\n        res = Image.fromarray(restored_img)\r\n\r\n        if gfpgan_visibility < 1.0:\r\n            res = Image.blend(pp.image, res, gfpgan_visibility)\r\n\r\n        pp.image = res\r\n        pp.info[\"GFPGAN visibility\"] = round(gfpgan_visibility, 3)\r\n"
  },
  {
    "path": "scripts/postprocessing_upscale.py",
    "content": "from PIL import Image\r\nimport numpy as np\r\n\r\nfrom modules import scripts_postprocessing, shared\r\nimport gradio as gr\r\n\r\nfrom modules.ui_components import FormRow\r\n\r\n\r\nupscale_cache = {}\r\n\r\n\r\nclass ScriptPostprocessingUpscale(scripts_postprocessing.ScriptPostprocessing):\r\n    name = \"Upscale\"\r\n    order = 1000\r\n\r\n    def ui(self):\r\n        selected_tab = gr.State(value=0)\r\n\r\n        with gr.Tabs(elem_id=\"extras_resize_mode\"):\r\n            with gr.TabItem('Scale by', elem_id=\"extras_scale_by_tab\") as tab_scale_by:\r\n                upscaling_resize = gr.Slider(minimum=1.0, maximum=8.0, step=0.05, label=\"Resize\", value=4, elem_id=\"extras_upscaling_resize\")\r\n\r\n            with gr.TabItem('Scale to', elem_id=\"extras_scale_to_tab\") as tab_scale_to:\r\n                with FormRow():\r\n                    upscaling_resize_w = gr.Number(label=\"Width\", value=512, precision=0, elem_id=\"extras_upscaling_resize_w\")\r\n                    upscaling_resize_h = gr.Number(label=\"Height\", value=512, precision=0, elem_id=\"extras_upscaling_resize_h\")\r\n                    upscaling_crop = gr.Checkbox(label='Crop to fit', value=True, elem_id=\"extras_upscaling_crop\")\r\n\r\n        with FormRow():\r\n            extras_upscaler_1 = gr.Dropdown(label='Upscaler 1', elem_id=\"extras_upscaler_1\", choices=[x.name for x in shared.sd_upscalers], value=shared.sd_upscalers[0].name)\r\n\r\n        with FormRow():\r\n            extras_upscaler_2 = gr.Dropdown(label='Upscaler 2', elem_id=\"extras_upscaler_2\", choices=[x.name for x in shared.sd_upscalers], value=shared.sd_upscalers[0].name)\r\n            extras_upscaler_2_visibility = gr.Slider(minimum=0.0, maximum=1.0, step=0.001, label=\"Upscaler 2 visibility\", value=0.0, elem_id=\"extras_upscaler_2_visibility\")\r\n\r\n        tab_scale_by.select(fn=lambda: 0, inputs=[], outputs=[selected_tab])\r\n        tab_scale_to.select(fn=lambda: 1, inputs=[], 
outputs=[selected_tab])\r\n\r\n        return {\r\n            \"upscale_mode\": selected_tab,\r\n            \"upscale_by\": upscaling_resize,\r\n            \"upscale_to_width\": upscaling_resize_w,\r\n            \"upscale_to_height\": upscaling_resize_h,\r\n            \"upscale_crop\": upscaling_crop,\r\n            \"upscaler_1_name\": extras_upscaler_1,\r\n            \"upscaler_2_name\": extras_upscaler_2,\r\n            \"upscaler_2_visibility\": extras_upscaler_2_visibility,\r\n        }\r\n\r\n    def upscale(self, image, info, upscaler, upscale_mode, upscale_by,  upscale_to_width, upscale_to_height, upscale_crop):\r\n        if upscale_mode == 1:\r\n            upscale_by = max(upscale_to_width/image.width, upscale_to_height/image.height)\r\n            info[\"Postprocess upscale to\"] = f\"{upscale_to_width}x{upscale_to_height}\"\r\n        else:\r\n            info[\"Postprocess upscale by\"] = upscale_by\r\n\r\n        cache_key = (hash(np.array(image.getdata()).tobytes()), upscaler.name, upscale_mode, upscale_by,  upscale_to_width, upscale_to_height, upscale_crop)\r\n        cached_image = upscale_cache.pop(cache_key, None)\r\n\r\n        if cached_image is not None:\r\n            image = cached_image\r\n        else:\r\n            image = upscaler.scaler.upscale(image, upscale_by, upscaler.data_path)\r\n\r\n        upscale_cache[cache_key] = image\r\n        if len(upscale_cache) > shared.opts.upscaling_max_images_in_cache:\r\n            upscale_cache.pop(next(iter(upscale_cache), None), None)\r\n\r\n        if upscale_mode == 1 and upscale_crop:\r\n            cropped = Image.new(\"RGB\", (upscale_to_width, upscale_to_height))\r\n            cropped.paste(image, box=(upscale_to_width // 2 - image.width // 2, upscale_to_height // 2 - image.height // 2))\r\n            image = cropped\r\n            info[\"Postprocess crop to\"] = f\"{image.width}x{image.height}\"\r\n\r\n        return image\r\n\r\n    def process(self, pp: 
scripts_postprocessing.PostprocessedImage, upscale_mode=1, upscale_by=2.0, upscale_to_width=None, upscale_to_height=None, upscale_crop=False, upscaler_1_name=None, upscaler_2_name=None, upscaler_2_visibility=0.0):\r\n        if upscaler_1_name == \"None\":\r\n            upscaler_1_name = None\r\n\r\n        upscaler1 = next(iter([x for x in shared.sd_upscalers if x.name == upscaler_1_name]), None)\r\n        assert upscaler1 or (upscaler_1_name is None), f'could not find upscaler named {upscaler_1_name}'\r\n\r\n        if not upscaler1:\r\n            return\r\n\r\n        if upscaler_2_name == \"None\":\r\n            upscaler_2_name = None\r\n\r\n        upscaler2 = next(iter([x for x in shared.sd_upscalers if x.name == upscaler_2_name and x.name != \"None\"]), None)\r\n        assert upscaler2 or (upscaler_2_name is None), f'could not find upscaler named {upscaler_2_name}'\r\n\r\n        upscaled_image = self.upscale(pp.image, pp.info, upscaler1, upscale_mode, upscale_by, upscale_to_width, upscale_to_height, upscale_crop)\r\n        pp.info[f\"Postprocess upscaler\"] = upscaler1.name\r\n\r\n        if upscaler2 and upscaler_2_visibility > 0:\r\n            second_upscale = self.upscale(pp.image, pp.info, upscaler2, upscale_mode, upscale_by, upscale_to_width, upscale_to_height, upscale_crop)\r\n            upscaled_image = Image.blend(upscaled_image, second_upscale, upscaler_2_visibility)\r\n\r\n            pp.info[f\"Postprocess upscaler 2\"] = upscaler2.name\r\n\r\n        pp.image = upscaled_image\r\n\r\n    def image_changed(self):\r\n        upscale_cache.clear()\r\n\r\n\r\nclass ScriptPostprocessingUpscaleSimple(ScriptPostprocessingUpscale):\r\n    name = \"Simple Upscale\"\r\n    order = 900\r\n\r\n    def ui(self):\r\n        with FormRow():\r\n            upscaler_name = gr.Dropdown(label='Upscaler', choices=[x.name for x in shared.sd_upscalers], value=shared.sd_upscalers[0].name)\r\n            upscale_by = gr.Slider(minimum=0.05, maximum=8.0, 
step=0.05, label=\"Upscale by\", value=2)\r\n\r\n        return {\r\n            \"upscale_by\": upscale_by,\r\n            \"upscaler_name\": upscaler_name,\r\n        }\r\n\r\n    def process(self, pp: scripts_postprocessing.PostprocessedImage, upscale_by=2.0, upscaler_name=None):\r\n        if upscaler_name is None or upscaler_name == \"None\":\r\n            return\r\n\r\n        upscaler1 = next(iter([x for x in shared.sd_upscalers if x.name == upscaler_name]), None)\r\n        assert upscaler1, f'could not find upscaler named {upscaler_name}'\r\n\r\n        pp.image = self.upscale(pp.image, pp.info, upscaler1, 0, upscale_by, 0, 0, False)\r\n        pp.info[f\"Postprocess upscaler\"] = upscaler1.name\r\n"
  },
  {
    "path": "scripts/prompt_matrix.py",
    "content": "import math\r\nfrom collections import namedtuple\r\nfrom copy import copy\r\nimport random\r\n\r\nimport modules.scripts as scripts\r\nimport gradio as gr\r\n\r\nfrom modules import images\r\nfrom modules.processing import process_images, Processed\r\nfrom modules.shared import opts, cmd_opts, state\r\nimport modules.sd_samplers\r\n\r\n\r\ndef draw_xy_grid(xs, ys, x_label, y_label, cell):\r\n    res = []\r\n\r\n    ver_texts = [[images.GridAnnotation(y_label(y))] for y in ys]\r\n    hor_texts = [[images.GridAnnotation(x_label(x))] for x in xs]\r\n\r\n    first_processed = None\r\n\r\n    state.job_count = len(xs) * len(ys)\r\n\r\n    for iy, y in enumerate(ys):\r\n        for ix, x in enumerate(xs):\r\n            state.job = f\"{ix + iy * len(xs) + 1} out of {len(xs) * len(ys)}\"\r\n\r\n            processed = cell(x, y)\r\n            if first_processed is None:\r\n                first_processed = processed\r\n\r\n            res.append(processed.images[0])\r\n\r\n    grid = images.image_grid(res, rows=len(ys))\r\n    grid = images.draw_grid_annotations(grid, res[0].width, res[0].height, hor_texts, ver_texts)\r\n\r\n    first_processed.images = [grid]\r\n\r\n    return first_processed\r\n\r\n\r\nclass Script(scripts.Script):\r\n    def title(self):\r\n        return \"Prompt matrix\"\r\n\r\n    def ui(self, is_img2img):\r\n        gr.HTML('<br />')\r\n        with gr.Row():\r\n            with gr.Column():\r\n                put_at_start = gr.Checkbox(label='Put variable parts at start of prompt', value=False, elem_id=self.elem_id(\"put_at_start\"))\r\n                different_seeds = gr.Checkbox(label='Use different seed for each picture', value=False, elem_id=self.elem_id(\"different_seeds\"))\r\n            with gr.Column():\r\n                prompt_type = gr.Radio([\"positive\", \"negative\"], label=\"Select prompt\", elem_id=self.elem_id(\"prompt_type\"), value=\"positive\")\r\n                variations_delimiter = gr.Radio([\"comma\", 
\"space\"], label=\"Select joining char\", elem_id=self.elem_id(\"variations_delimiter\"), value=\"comma\")\r\n            with gr.Column():\r\n                margin_size = gr.Slider(label=\"Grid margins (px)\", minimum=0, maximum=500, value=0, step=2, elem_id=self.elem_id(\"margin_size\"))\r\n\r\n        return [put_at_start, different_seeds, prompt_type, variations_delimiter, margin_size]\r\n\r\n    def run(self, p, put_at_start, different_seeds, prompt_type, variations_delimiter, margin_size):\r\n        modules.processing.fix_seed(p)\r\n        # Raise error if prompt type is not positive or negative\r\n        if prompt_type not in [\"positive\", \"negative\"]:\r\n            raise ValueError(f\"Unknown prompt type {prompt_type}\")\r\n        # Raise error if variations delimiter is not comma or space\r\n        if variations_delimiter not in [\"comma\", \"space\"]:\r\n            raise ValueError(f\"Unknown variations delimiter {variations_delimiter}\")\r\n\r\n        prompt = p.prompt if prompt_type == \"positive\" else p.negative_prompt\r\n        original_prompt = prompt[0] if type(prompt) == list else prompt\r\n        positive_prompt = p.prompt[0] if type(p.prompt) == list else p.prompt\r\n\r\n        delimiter = \", \" if variations_delimiter == \"comma\" else \" \"\r\n\r\n        all_prompts = []\r\n        prompt_matrix_parts = original_prompt.split(\"|\")\r\n        combination_count = 2 ** (len(prompt_matrix_parts) - 1)\r\n        for combination_num in range(combination_count):\r\n            selected_prompts = [text.strip().strip(',') for n, text in enumerate(prompt_matrix_parts[1:]) if combination_num & (1 << n)]\r\n\r\n            if put_at_start:\r\n                selected_prompts = selected_prompts + [prompt_matrix_parts[0]]\r\n            else:\r\n                selected_prompts = [prompt_matrix_parts[0]] + selected_prompts\r\n\r\n            all_prompts.append(delimiter.join(selected_prompts))\r\n\r\n        p.n_iter = 
math.ceil(len(all_prompts) / p.batch_size)\r\n        p.do_not_save_grid = True\r\n\r\n        print(f\"Prompt matrix will create {len(all_prompts)} images using a total of {p.n_iter} batches.\")\r\n\r\n        if prompt_type == \"positive\":\r\n            p.prompt = all_prompts\r\n        else:\r\n            p.negative_prompt = all_prompts\r\n        p.seed = [p.seed + (i if different_seeds else 0) for i in range(len(all_prompts))]\r\n        p.prompt_for_display = positive_prompt\r\n        processed = process_images(p)\r\n\r\n        grid = images.image_grid(processed.images, p.batch_size, rows=1 << ((len(prompt_matrix_parts) - 1) // 2))\r\n        grid = images.draw_prompt_matrix(grid, processed.images[0].width, processed.images[0].height, prompt_matrix_parts, margin_size)\r\n        processed.images.insert(0, grid)\r\n        processed.index_of_first_image = 1\r\n        processed.infotexts.insert(0, processed.infotexts[0])\r\n\r\n        if opts.grid_save:\r\n            images.save_image(processed.images[0], p.outpath_grids, \"prompt_matrix\", extension=opts.grid_format, prompt=original_prompt, seed=processed.seed, grid=True, p=p)\r\n\r\n        return processed\r\n"
  },
  {
    "path": "scripts/prompts_from_file.py",
    "content": "import copy\r\nimport math\r\nimport os\r\nimport random\r\nimport sys\r\nimport traceback\r\nimport shlex\r\n\r\nimport modules.scripts as scripts\r\nimport gradio as gr\r\n\r\nfrom modules import sd_samplers\r\nfrom modules.processing import Processed, process_images\r\nfrom PIL import Image\r\nfrom modules.shared import opts, cmd_opts, state\r\n\r\n\r\ndef process_string_tag(tag):\r\n    return tag\r\n\r\n\r\ndef process_int_tag(tag):\r\n    return int(tag)\r\n\r\n\r\ndef process_float_tag(tag):\r\n    return float(tag)\r\n\r\n\r\ndef process_boolean_tag(tag):\r\n    return True if (tag == \"true\") else False\r\n\r\n\r\nprompt_tags = {\r\n    \"sd_model\": None,\r\n    \"outpath_samples\": process_string_tag,\r\n    \"outpath_grids\": process_string_tag,\r\n    \"prompt_for_display\": process_string_tag,\r\n    \"prompt\": process_string_tag,\r\n    \"negative_prompt\": process_string_tag,\r\n    \"styles\": process_string_tag,\r\n    \"seed\": process_int_tag,\r\n    \"subseed_strength\": process_float_tag,\r\n    \"subseed\": process_int_tag,\r\n    \"seed_resize_from_h\": process_int_tag,\r\n    \"seed_resize_from_w\": process_int_tag,\r\n    \"sampler_index\": process_int_tag,\r\n    \"sampler_name\": process_string_tag,\r\n    \"batch_size\": process_int_tag,\r\n    \"n_iter\": process_int_tag,\r\n    \"steps\": process_int_tag,\r\n    \"cfg_scale\": process_float_tag,\r\n    \"width\": process_int_tag,\r\n    \"height\": process_int_tag,\r\n    \"restore_faces\": process_boolean_tag,\r\n    \"tiling\": process_boolean_tag,\r\n    \"do_not_save_samples\": process_boolean_tag,\r\n    \"do_not_save_grid\": process_boolean_tag\r\n}\r\n\r\n\r\ndef cmdargs(line):\r\n    args = shlex.split(line)\r\n    pos = 0\r\n    res = {}\r\n\r\n    while pos < len(args):\r\n        arg = args[pos]\r\n\r\n        assert arg.startswith(\"--\"), f'must start with \"--\": {arg}'\r\n        assert pos+1 < len(args), f'missing argument for command line option 
{arg}'\r\n\r\n        tag = arg[2:]\r\n\r\n        if tag == \"prompt\" or tag == \"negative_prompt\":\r\n            pos += 1\r\n            prompt = args[pos]\r\n            pos += 1\r\n            while pos < len(args) and not args[pos].startswith(\"--\"):\r\n                prompt += \" \"\r\n                prompt += args[pos]\r\n                pos += 1\r\n            res[tag] = prompt\r\n            continue\r\n\r\n\r\n        func = prompt_tags.get(tag, None)\r\n        assert func, f'unknown commandline option: {arg}'\r\n\r\n        val = args[pos+1]\r\n        if tag == \"sampler_name\":\r\n            val = sd_samplers.samplers_map.get(val.lower(), None)\r\n\r\n        res[tag] = func(val)\r\n\r\n        pos += 2\r\n\r\n    return res\r\n\r\n\r\ndef load_prompt_file(file):\r\n    if file is None:\r\n        lines = []\r\n    else:\r\n        lines = [x.strip() for x in file.decode('utf8', errors='ignore').split(\"\\n\")]\r\n\r\n    return None, \"\\n\".join(lines), gr.update(lines=7)\r\n\r\n\r\nclass Script(scripts.Script):\r\n    def title(self):\r\n        return \"Prompts from file or textbox\"\r\n\r\n    def ui(self, is_img2img):       \r\n        checkbox_iterate = gr.Checkbox(label=\"Iterate seed every line\", value=False, elem_id=self.elem_id(\"checkbox_iterate\"))\r\n        checkbox_iterate_batch = gr.Checkbox(label=\"Use same random seed for all lines\", value=False, elem_id=self.elem_id(\"checkbox_iterate_batch\"))\r\n\r\n        prompt_txt = gr.Textbox(label=\"List of prompt inputs\", lines=1, elem_id=self.elem_id(\"prompt_txt\"))\r\n        file = gr.File(label=\"Upload prompt inputs\", type='binary', elem_id=self.elem_id(\"file\"))\r\n\r\n        file.change(fn=load_prompt_file, inputs=[file], outputs=[file, prompt_txt, prompt_txt])\r\n\r\n        # We start at one line. 
When the text changes, we jump to seven lines, or two lines if no \\n.\r\n        # We don't shrink back to 1, because that causes the control to ignore [enter], and it may\r\n        # be unclear to the user that shift-enter is needed.\r\n        prompt_txt.change(lambda tb: gr.update(lines=7) if (\"\\n\" in tb) else gr.update(lines=2), inputs=[prompt_txt], outputs=[prompt_txt])\r\n        return [checkbox_iterate, checkbox_iterate_batch, prompt_txt]\r\n\r\n    def run(self, p, checkbox_iterate, checkbox_iterate_batch, prompt_txt: str):\r\n        lines = [x.strip() for x in prompt_txt.splitlines()]\r\n        lines = [x for x in lines if len(x) > 0]\r\n\r\n        p.do_not_save_grid = True\r\n\r\n        job_count = 0\r\n        jobs = []\r\n\r\n        for line in lines:\r\n            if \"--\" in line:\r\n                try:\r\n                    args = cmdargs(line)\r\n                except Exception:\r\n                    print(f\"Error parsing line {line} as commandline:\", file=sys.stderr)\r\n                    print(traceback.format_exc(), file=sys.stderr)\r\n                    args = {\"prompt\": line}\r\n            else:\r\n                args = {\"prompt\": line}\r\n\r\n            job_count += args.get(\"n_iter\", p.n_iter)\r\n\r\n            jobs.append(args)\r\n\r\n        print(f\"Will process {len(lines)} lines in {job_count} jobs.\")\r\n        if (checkbox_iterate or checkbox_iterate_batch) and p.seed == -1:\r\n            p.seed = int(random.randrange(4294967294))\r\n\r\n        state.job_count = job_count\r\n\r\n        images = []\r\n        all_prompts = []\r\n        infotexts = []\r\n        for n, args in enumerate(jobs):\r\n            state.job = f\"{state.job_no + 1} out of {state.job_count}\"\r\n\r\n            copy_p = copy.copy(p)\r\n            for k, v in args.items():\r\n                setattr(copy_p, k, v)\r\n\r\n            proc = process_images(copy_p)\r\n            images += proc.images\r\n            \r\n           
 if checkbox_iterate:\r\n                p.seed = p.seed + (p.batch_size * p.n_iter)\r\n            all_prompts += proc.all_prompts\r\n            infotexts += proc.infotexts\r\n\r\n        return Processed(p, images, p.seed, \"\", all_prompts=all_prompts, infotexts=infotexts)\r\n"
  },
  {
    "path": "scripts/sd_upscale.py",
    "content": "import math\r\n\r\nimport modules.scripts as scripts\r\nimport gradio as gr\r\nfrom PIL import Image\r\n\r\nfrom modules import processing, shared, sd_samplers, images, devices\r\nfrom modules.processing import Processed\r\nfrom modules.shared import opts, cmd_opts, state\r\n\r\n\r\nclass Script(scripts.Script):\r\n    def title(self):\r\n        return \"SD upscale\"\r\n\r\n    def show(self, is_img2img):\r\n        return is_img2img\r\n\r\n    def ui(self, is_img2img):        \r\n        info = gr.HTML(\"<p style=\\\"margin-bottom:0.75em\\\">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>\")\r\n        overlap = gr.Slider(minimum=0, maximum=256, step=16, label='Tile overlap', value=64, elem_id=self.elem_id(\"overlap\"))\r\n        scale_factor = gr.Slider(minimum=1.0, maximum=4.0, step=0.05, label='Scale Factor', value=2.0, elem_id=self.elem_id(\"scale_factor\"))\r\n        upscaler_index = gr.Radio(label='Upscaler', choices=[x.name for x in shared.sd_upscalers], value=shared.sd_upscalers[0].name, type=\"index\", elem_id=self.elem_id(\"upscaler_index\"))\r\n\r\n        return [info, overlap, upscaler_index, scale_factor]\r\n\r\n    def run(self, p, _, overlap, upscaler_index, scale_factor):\r\n        if isinstance(upscaler_index, str):\r\n            upscaler_index = [x.name.lower() for x in shared.sd_upscalers].index(upscaler_index.lower())\r\n        processing.fix_seed(p)\r\n        upscaler = shared.sd_upscalers[upscaler_index]\r\n\r\n        p.extra_generation_params[\"SD upscale overlap\"] = overlap\r\n        p.extra_generation_params[\"SD upscale upscaler\"] = upscaler.name\r\n\r\n        initial_info = None\r\n        seed = p.seed\r\n\r\n        init_img = p.init_images[0]\r\n        init_img = images.flatten(init_img, opts.img2img_background_color)\r\n\r\n        if upscaler.name != \"None\":\r\n            img = upscaler.scaler.upscale(init_img, scale_factor, 
upscaler.data_path)\r\n        else:\r\n            img = init_img\r\n\r\n        devices.torch_gc()\r\n\r\n        grid = images.split_grid(img, tile_w=p.width, tile_h=p.height, overlap=overlap)\r\n\r\n        batch_size = p.batch_size\r\n        upscale_count = p.n_iter\r\n        p.n_iter = 1\r\n        p.do_not_save_grid = True\r\n        p.do_not_save_samples = True\r\n\r\n        work = []\r\n\r\n        for y, h, row in grid.tiles:\r\n            for tiledata in row:\r\n                work.append(tiledata[2])\r\n\r\n        batch_count = math.ceil(len(work) / batch_size)\r\n        state.job_count = batch_count * upscale_count\r\n\r\n        print(f\"SD upscaling will process a total of {len(work)} images tiled as {len(grid.tiles[0][2])}x{len(grid.tiles)} per upscale in a total of {state.job_count} batches.\")\r\n\r\n        result_images = []\r\n        for n in range(upscale_count):\r\n            start_seed = seed + n\r\n            p.seed = start_seed\r\n\r\n            work_results = []\r\n            for i in range(batch_count):\r\n                p.batch_size = batch_size\r\n                p.init_images = work[i * batch_size:(i + 1) * batch_size]\r\n\r\n                state.job = f\"Batch {i + 1 + n * batch_count} out of {state.job_count}\"\r\n                processed = processing.process_images(p)\r\n\r\n                if initial_info is None:\r\n                    initial_info = processed.info\r\n\r\n                p.seed = processed.seed + 1\r\n                work_results += processed.images\r\n\r\n            image_index = 0\r\n            for y, h, row in grid.tiles:\r\n                for tiledata in row:\r\n                    tiledata[2] = work_results[image_index] if image_index < len(work_results) else Image.new(\"RGB\", (p.width, p.height))\r\n                    image_index += 1\r\n\r\n            combined_image = images.combine_grid(grid)\r\n            result_images.append(combined_image)\r\n\r\n            if 
opts.samples_save:\r\n                images.save_image(combined_image, p.outpath_samples, \"\", start_seed, p.prompt, opts.samples_format, info=initial_info, p=p)\r\n\r\n        processed = Processed(p, result_images, seed, initial_info)\r\n\r\n        return processed\r\n"
  },
  {
    "path": "scripts/xyz_grid.py",
    "content": "from collections import namedtuple\r\nfrom copy import copy\r\nfrom itertools import permutations, chain\r\nimport random\r\nimport csv\r\nfrom io import StringIO\r\nfrom PIL import Image\r\nimport numpy as np\r\n\r\nimport modules.scripts as scripts\r\nimport gradio as gr\r\n\r\nfrom modules import images, paths, sd_samplers, processing, sd_models, sd_vae\r\nfrom modules.processing import process_images, Processed, StableDiffusionProcessingTxt2Img\r\nfrom modules.shared import opts, cmd_opts, state\r\nimport modules.shared as shared\r\nimport modules.sd_samplers\r\nimport modules.sd_models\r\nimport modules.sd_vae\r\nimport glob\r\nimport os\r\nimport re\r\n\r\nfrom modules.ui_components import ToolButton\r\n\r\nfill_values_symbol = \"\\U0001f4d2\"  # 📒\r\n\r\nAxisInfo = namedtuple('AxisInfo', ['axis', 'values'])\r\n\r\n\r\ndef apply_field(field):\r\n    def fun(p, x, xs):\r\n        setattr(p, field, x)\r\n\r\n    return fun\r\n\r\n\r\ndef apply_prompt(p, x, xs):\r\n    if xs[0] not in p.prompt and xs[0] not in p.negative_prompt:\r\n        raise RuntimeError(f\"Prompt S/R did not find {xs[0]} in prompt or negative prompt.\")\r\n\r\n    p.prompt = p.prompt.replace(xs[0], x)\r\n    p.negative_prompt = p.negative_prompt.replace(xs[0], x)\r\n\r\n\r\ndef apply_order(p, x, xs):\r\n    token_order = []\r\n\r\n    # Initially grab the tokens from the prompt, so they can be replaced in order of earliest seen\r\n    for token in x:\r\n        token_order.append((p.prompt.find(token), token))\r\n\r\n    token_order.sort(key=lambda t: t[0])\r\n\r\n    prompt_parts = []\r\n\r\n    # Split the prompt up, taking out the tokens\r\n    for _, token in token_order:\r\n        n = p.prompt.find(token)\r\n        prompt_parts.append(p.prompt[0:n])\r\n        p.prompt = p.prompt[n + len(token):]\r\n\r\n    # Rebuild the prompt with the tokens in the order we want\r\n    prompt_tmp = \"\"\r\n    for idx, part in enumerate(prompt_parts):\r\n        prompt_tmp += 
part\r\n        prompt_tmp += x[idx]\r\n    p.prompt = prompt_tmp + p.prompt\r\n\r\n\r\ndef apply_sampler(p, x, xs):\r\n    sampler_name = sd_samplers.samplers_map.get(x.lower(), None)\r\n    if sampler_name is None:\r\n        raise RuntimeError(f\"Unknown sampler: {x}\")\r\n\r\n    p.sampler_name = sampler_name\r\n\r\n\r\ndef confirm_samplers(p, xs):\r\n    for x in xs:\r\n        if x.lower() not in sd_samplers.samplers_map:\r\n            raise RuntimeError(f\"Unknown sampler: {x}\")\r\n\r\n\r\ndef apply_checkpoint(p, x, xs):\r\n    info = modules.sd_models.get_closet_checkpoint_match(x)\r\n    if info is None:\r\n        raise RuntimeError(f\"Unknown checkpoint: {x}\")\r\n    modules.sd_models.reload_model_weights(shared.sd_model, info)\r\n\r\n\r\ndef confirm_checkpoints(p, xs):\r\n    for x in xs:\r\n        if modules.sd_models.get_closet_checkpoint_match(x) is None:\r\n            raise RuntimeError(f\"Unknown checkpoint: {x}\")\r\n\r\n\r\ndef apply_clip_skip(p, x, xs):\r\n    opts.data[\"CLIP_stop_at_last_layers\"] = x\r\n\r\n\r\ndef apply_upscale_latent_space(p, x, xs):\r\n    if x.lower().strip() != '0':\r\n        opts.data[\"use_scale_latent_for_hires_fix\"] = True\r\n    else:\r\n        opts.data[\"use_scale_latent_for_hires_fix\"] = False\r\n\r\n\r\ndef find_vae(name: str):\r\n    if name.lower() in ['auto', 'automatic']:\r\n        return modules.sd_vae.unspecified\r\n    if name.lower() == 'none':\r\n        return None\r\n    else:\r\n        choices = [x for x in sorted(modules.sd_vae.vae_dict, key=lambda x: len(x)) if name.lower().strip() in x.lower()]\r\n        if len(choices) == 0:\r\n            print(f\"No VAE found for {name}; using automatic\")\r\n            return modules.sd_vae.unspecified\r\n        else:\r\n            return modules.sd_vae.vae_dict[choices[0]]\r\n\r\n\r\ndef apply_vae(p, x, xs):\r\n    modules.sd_vae.reload_vae_weights(shared.sd_model, vae_file=find_vae(x))\r\n\r\n\r\ndef apply_styles(p: 
StableDiffusionProcessingTxt2Img, x: str, _):\r\n    p.styles.extend(x.split(','))\r\n\r\n\r\ndef format_value_add_label(p, opt, x):\r\n    if type(x) == float:\r\n        x = round(x, 8)\r\n\r\n    return f\"{opt.label}: {x}\"\r\n\r\n\r\ndef format_value(p, opt, x):\r\n    if type(x) == float:\r\n        x = round(x, 8)\r\n    return x\r\n\r\n\r\ndef format_value_join_list(p, opt, x):\r\n    return \", \".join(x)\r\n\r\n\r\ndef do_nothing(p, x, xs):\r\n    pass\r\n\r\n\r\ndef format_nothing(p, opt, x):\r\n    return \"\"\r\n\r\n\r\ndef str_permutations(x):\r\n    \"\"\"dummy function for specifying it in AxisOption's type when you want to get a list of permutations\"\"\"\r\n    return x\r\n\r\n\r\nclass AxisOption:\r\n    def __init__(self, label, type, apply, format_value=format_value_add_label, confirm=None, cost=0.0, choices=None):\r\n        self.label = label\r\n        self.type = type\r\n        self.apply = apply\r\n        self.format_value = format_value\r\n        self.confirm = confirm\r\n        self.cost = cost\r\n        self.choices = choices\r\n\r\n\r\nclass AxisOptionImg2Img(AxisOption):\r\n    def __init__(self, *args, **kwargs):\r\n        super().__init__(*args, **kwargs)\r\n        self.is_img2img = True\r\n\r\nclass AxisOptionTxt2Img(AxisOption):\r\n    def __init__(self, *args, **kwargs):\r\n        super().__init__(*args, **kwargs)\r\n        self.is_img2img = False\r\n\r\n\r\naxis_options = [\r\n    AxisOption(\"Nothing\", str, do_nothing, format_value=format_nothing),\r\n    AxisOption(\"Seed\", int, apply_field(\"seed\")),\r\n    AxisOption(\"Var. seed\", int, apply_field(\"subseed\")),\r\n    AxisOption(\"Var. 
strength\", float, apply_field(\"subseed_strength\")),\r\n    AxisOption(\"Steps\", int, apply_field(\"steps\")),\r\n    AxisOptionTxt2Img(\"Hires steps\", int, apply_field(\"hr_second_pass_steps\")),\r\n    AxisOption(\"CFG Scale\", float, apply_field(\"cfg_scale\")),\r\n    AxisOptionImg2Img(\"Image CFG Scale\", float, apply_field(\"image_cfg_scale\")),\r\n    AxisOption(\"Prompt S/R\", str, apply_prompt, format_value=format_value),\r\n    AxisOption(\"Prompt order\", str_permutations, apply_order, format_value=format_value_join_list),\r\n    AxisOptionTxt2Img(\"Sampler\", str, apply_sampler, format_value=format_value, confirm=confirm_samplers, choices=lambda: [x.name for x in sd_samplers.samplers]),\r\n    AxisOptionImg2Img(\"Sampler\", str, apply_sampler, format_value=format_value, confirm=confirm_samplers, choices=lambda: [x.name for x in sd_samplers.samplers_for_img2img]),\r\n    AxisOption(\"Checkpoint name\", str, apply_checkpoint, format_value=format_value, confirm=confirm_checkpoints, cost=1.0, choices=lambda: list(sd_models.checkpoints_list)),\r\n    AxisOption(\"Sigma Churn\", float, apply_field(\"s_churn\")),\r\n    AxisOption(\"Sigma min\", float, apply_field(\"s_tmin\")),\r\n    AxisOption(\"Sigma max\", float, apply_field(\"s_tmax\")),\r\n    AxisOption(\"Sigma noise\", float, apply_field(\"s_noise\")),\r\n    AxisOption(\"Eta\", float, apply_field(\"eta\")),\r\n    AxisOption(\"Clip skip\", int, apply_clip_skip),\r\n    AxisOption(\"Denoising\", float, apply_field(\"denoising_strength\")),\r\n    AxisOptionTxt2Img(\"Hires upscaler\", str, apply_field(\"hr_upscaler\"), choices=lambda: [*shared.latent_upscale_modes, *[x.name for x in shared.sd_upscalers]]),\r\n    AxisOptionImg2Img(\"Cond. 
Image Mask Weight\", float, apply_field(\"inpainting_mask_weight\")),\r\n    AxisOption(\"VAE\", str, apply_vae, cost=0.7, choices=lambda: list(sd_vae.vae_dict)),\r\n    AxisOption(\"Styles\", str, apply_styles, choices=lambda: list(shared.prompt_styles.styles)),\r\n]\r\n\r\n\r\ndef draw_xyz_grid(p, xs, ys, zs, x_labels, y_labels, z_labels, cell, draw_legend, include_lone_images, include_sub_grids, first_axes_processed, second_axes_processed, margin_size):\r\n    hor_texts = [[images.GridAnnotation(x)] for x in x_labels]\r\n    ver_texts = [[images.GridAnnotation(y)] for y in y_labels]\r\n    title_texts = [[images.GridAnnotation(z)] for z in z_labels]\r\n\r\n    # Temporary list of all the images that are generated to be populated into the grid.\r\n    # Will be filled with empty images for any individual step that fails to process properly\r\n    image_cache = [None] * (len(xs) * len(ys) * len(zs))\r\n\r\n    processed_result = None\r\n    cell_mode = \"P\"\r\n    cell_size = (1, 1)\r\n\r\n    state.job_count = len(xs) * len(ys) * len(zs) * p.n_iter\r\n\r\n    def process_cell(x, y, z, ix, iy, iz):\r\n        nonlocal image_cache, processed_result, cell_mode, cell_size\r\n\r\n        def index(ix, iy, iz):\r\n            return ix + iy * len(xs) + iz * len(xs) * len(ys)\r\n\r\n        state.job = f\"{index(ix, iy, iz) + 1} out of {len(xs) * len(ys) * len(zs)}\"\r\n\r\n        processed: Processed = cell(x, y, z)\r\n\r\n        try:\r\n            # this dereference will throw an exception if the image was not processed\r\n            # (this happens in cases such as if the user stops the process from the UI)\r\n            processed_image = processed.images[0]\r\n\r\n            if processed_result is None:\r\n                # Use our first valid processed result as a template container to hold our full results\r\n                processed_result = copy(processed)\r\n                cell_mode = processed_image.mode\r\n                cell_size = 
processed_image.size\r\n                processed_result.images = [Image.new(cell_mode, cell_size)]\r\n                processed_result.all_prompts = [processed.prompt]\r\n                processed_result.all_seeds = [processed.seed]\r\n                processed_result.infotexts = [processed.infotexts[0]]\r\n\r\n            image_cache[index(ix, iy, iz)] = processed_image\r\n            if include_lone_images:\r\n                processed_result.images.append(processed_image)\r\n                processed_result.all_prompts.append(processed.prompt)\r\n                processed_result.all_seeds.append(processed.seed)\r\n                processed_result.infotexts.append(processed.infotexts[0])\r\n        except:\r\n            image_cache[index(ix, iy, iz)] = Image.new(cell_mode, cell_size)\r\n\r\n    if first_axes_processed == 'x':\r\n        for ix, x in enumerate(xs):\r\n            if second_axes_processed == 'y':\r\n                for iy, y in enumerate(ys):\r\n                    for iz, z in enumerate(zs):\r\n                        process_cell(x, y, z, ix, iy, iz)\r\n            else:\r\n                for iz, z in enumerate(zs):\r\n                    for iy, y in enumerate(ys):\r\n                        process_cell(x, y, z, ix, iy, iz)\r\n    elif first_axes_processed == 'y':\r\n        for iy, y in enumerate(ys):\r\n            if second_axes_processed == 'x':\r\n                for ix, x in enumerate(xs):\r\n                    for iz, z in enumerate(zs):\r\n                        process_cell(x, y, z, ix, iy, iz)\r\n            else:\r\n                for iz, z in enumerate(zs):\r\n                    for ix, x in enumerate(xs):\r\n                        process_cell(x, y, z, ix, iy, iz)\r\n    elif first_axes_processed == 'z':\r\n        for iz, z in enumerate(zs):\r\n            if second_axes_processed == 'x':\r\n                for ix, x in enumerate(xs):\r\n                    for iy, y in enumerate(ys):\r\n                        
process_cell(x, y, z, ix, iy, iz)\r\n            else:\r\n                for iy, y in enumerate(ys):\r\n                    for ix, x in enumerate(xs):\r\n                        process_cell(x, y, z, ix, iy, iz)\r\n\r\n    if not processed_result:\r\n        print(\"Unexpected error: draw_xyz_grid failed to return even a single processed image\")\r\n        return Processed(p, [])\r\n\r\n    sub_grids = [None] * len(zs)\r\n    for i in range(len(zs)):\r\n        start_index = i * len(xs) * len(ys)\r\n        end_index = start_index + len(xs) * len(ys)\r\n        grid = images.image_grid(image_cache[start_index:end_index], rows=len(ys))\r\n        if draw_legend:\r\n            grid = images.draw_grid_annotations(grid, cell_size[0], cell_size[1], hor_texts, ver_texts, margin_size)\r\n        sub_grids[i] = grid\r\n        if include_sub_grids and len(zs) > 1:\r\n            processed_result.images.insert(i+1, grid)\r\n\r\n    sub_grid_size = sub_grids[0].size\r\n    z_grid = images.image_grid(sub_grids, rows=1)\r\n    if draw_legend:\r\n        z_grid = images.draw_grid_annotations(z_grid, sub_grid_size[0], sub_grid_size[1], title_texts, [[images.GridAnnotation()]])\r\n    processed_result.images[0] = z_grid\r\n\r\n    return processed_result, sub_grids\r\n\r\n\r\nclass SharedSettingsStackHelper(object):\r\n    def __enter__(self):\r\n        self.CLIP_stop_at_last_layers = opts.CLIP_stop_at_last_layers\r\n        self.vae = opts.sd_vae\r\n  \r\n    def __exit__(self, exc_type, exc_value, tb):\r\n        opts.data[\"sd_vae\"] = self.vae\r\n        modules.sd_models.reload_model_weights()\r\n        modules.sd_vae.reload_vae_weights()\r\n\r\n        opts.data[\"CLIP_stop_at_last_layers\"] = self.CLIP_stop_at_last_layers\r\n\r\n\r\nre_range = re.compile(r\"\\s*([+-]?\\s*\\d+)\\s*-\\s*([+-]?\\s*\\d+)(?:\\s*\\(([+-]\\d+)\\s*\\))?\\s*\")\r\nre_range_float = 
re.compile(r\"\\s*([+-]?\\s*\\d+(?:.\\d*)?)\\s*-\\s*([+-]?\\s*\\d+(?:.\\d*)?)(?:\\s*\\(([+-]\\d+(?:.\\d*)?)\\s*\\))?\\s*\")\r\n\r\nre_range_count = re.compile(r\"\\s*([+-]?\\s*\\d+)\\s*-\\s*([+-]?\\s*\\d+)(?:\\s*\\[(\\d+)\\s*\\])?\\s*\")\r\nre_range_count_float = re.compile(r\"\\s*([+-]?\\s*\\d+(?:.\\d*)?)\\s*-\\s*([+-]?\\s*\\d+(?:.\\d*)?)(?:\\s*\\[(\\d+(?:.\\d*)?)\\s*\\])?\\s*\")\r\n\r\n\r\nclass Script(scripts.Script):\r\n    def title(self):\r\n        return \"X/Y/Z plot\"\r\n\r\n    def ui(self, is_img2img):\r\n        self.current_axis_options = [x for x in axis_options if type(x) == AxisOption or x.is_img2img == is_img2img]\r\n\r\n        with gr.Row():\r\n            with gr.Column(scale=19):\r\n                with gr.Row():\r\n                    x_type = gr.Dropdown(label=\"X type\", choices=[x.label for x in self.current_axis_options], value=self.current_axis_options[1].label, type=\"index\", elem_id=self.elem_id(\"x_type\"))\r\n                    x_values = gr.Textbox(label=\"X values\", lines=1, elem_id=self.elem_id(\"x_values\"))\r\n                    fill_x_button = ToolButton(value=fill_values_symbol, elem_id=\"xyz_grid_fill_x_tool_button\", visible=False)\r\n\r\n                with gr.Row():\r\n                    y_type = gr.Dropdown(label=\"Y type\", choices=[x.label for x in self.current_axis_options], value=self.current_axis_options[0].label, type=\"index\", elem_id=self.elem_id(\"y_type\"))\r\n                    y_values = gr.Textbox(label=\"Y values\", lines=1, elem_id=self.elem_id(\"y_values\"))\r\n                    fill_y_button = ToolButton(value=fill_values_symbol, elem_id=\"xyz_grid_fill_y_tool_button\", visible=False)\r\n\r\n                with gr.Row():\r\n                    z_type = gr.Dropdown(label=\"Z type\", choices=[x.label for x in self.current_axis_options], value=self.current_axis_options[0].label, type=\"index\", elem_id=self.elem_id(\"z_type\"))\r\n                    z_values = gr.Textbox(label=\"Z values\", 
lines=1, elem_id=self.elem_id(\"z_values\"))\r\n                    fill_z_button = ToolButton(value=fill_values_symbol, elem_id=\"xyz_grid_fill_z_tool_button\", visible=False)\r\n\r\n        with gr.Row(variant=\"compact\", elem_id=\"axis_options\"):\r\n            with gr.Column():\r\n                draw_legend = gr.Checkbox(label='Draw legend', value=True, elem_id=self.elem_id(\"draw_legend\"))\r\n                no_fixed_seeds = gr.Checkbox(label='Keep -1 for seeds', value=False, elem_id=self.elem_id(\"no_fixed_seeds\"))\r\n            with gr.Column():\r\n                include_lone_images = gr.Checkbox(label='Include Sub Images', value=False, elem_id=self.elem_id(\"include_lone_images\"))\r\n                include_sub_grids = gr.Checkbox(label='Include Sub Grids', value=False, elem_id=self.elem_id(\"include_sub_grids\"))\r\n            with gr.Column():\r\n                margin_size = gr.Slider(label=\"Grid margins (px)\", minimum=0, maximum=500, value=0, step=2, elem_id=self.elem_id(\"margin_size\"))\r\n        \r\n        with gr.Row(variant=\"compact\", elem_id=\"swap_axes\"):\r\n            swap_xy_axes_button = gr.Button(value=\"Swap X/Y axes\", elem_id=\"xy_grid_swap_axes_button\")\r\n            swap_yz_axes_button = gr.Button(value=\"Swap Y/Z axes\", elem_id=\"yz_grid_swap_axes_button\")\r\n            swap_xz_axes_button = gr.Button(value=\"Swap X/Z axes\", elem_id=\"xz_grid_swap_axes_button\")\r\n\r\n        def swap_axes(axis1_type, axis1_values, axis2_type, axis2_values):\r\n            return self.current_axis_options[axis2_type].label, axis2_values, self.current_axis_options[axis1_type].label, axis1_values\r\n\r\n        xy_swap_args = [x_type, x_values, y_type, y_values]\r\n        swap_xy_axes_button.click(swap_axes, inputs=xy_swap_args, outputs=xy_swap_args)\r\n        yz_swap_args = [y_type, y_values, z_type, z_values]\r\n        swap_yz_axes_button.click(swap_axes, inputs=yz_swap_args, outputs=yz_swap_args)\r\n        xz_swap_args = 
[x_type, x_values, z_type, z_values]\r\n        swap_xz_axes_button.click(swap_axes, inputs=xz_swap_args, outputs=xz_swap_args)\r\n\r\n        def fill(x_type):\r\n            axis = self.current_axis_options[x_type]\r\n            return \", \".join(axis.choices()) if axis.choices else gr.update()\r\n\r\n        fill_x_button.click(fn=fill, inputs=[x_type], outputs=[x_values])\r\n        fill_y_button.click(fn=fill, inputs=[y_type], outputs=[y_values])\r\n        fill_z_button.click(fn=fill, inputs=[z_type], outputs=[z_values])\r\n\r\n        def select_axis(x_type):\r\n            return gr.Button.update(visible=self.current_axis_options[x_type].choices is not None)\r\n\r\n        x_type.change(fn=select_axis, inputs=[x_type], outputs=[fill_x_button])\r\n        y_type.change(fn=select_axis, inputs=[y_type], outputs=[fill_y_button])\r\n        z_type.change(fn=select_axis, inputs=[z_type], outputs=[fill_z_button])\r\n\r\n        self.infotext_fields = (\r\n            (x_type, \"X Type\"),\r\n            (x_values, \"X Values\"),\r\n            (y_type, \"Y Type\"),\r\n            (y_values, \"Y Values\"),\r\n            (z_type, \"Z Type\"),\r\n            (z_values, \"Z Values\"),\r\n        )\r\n\r\n        return [x_type, x_values, y_type, y_values, z_type, z_values, draw_legend, include_lone_images, include_sub_grids, no_fixed_seeds, margin_size]\r\n\r\n    def run(self, p, x_type, x_values, y_type, y_values, z_type, z_values, draw_legend, include_lone_images, include_sub_grids, no_fixed_seeds, margin_size):\r\n        if not no_fixed_seeds:\r\n            modules.processing.fix_seed(p)\r\n\r\n        if not opts.return_grid:\r\n            p.batch_size = 1\r\n\r\n        def process_axis(opt, vals):\r\n            if opt.label == 'Nothing':\r\n                return [0]\r\n\r\n            valslist = [x.strip() for x in chain.from_iterable(csv.reader(StringIO(vals)))]\r\n\r\n            if opt.type == int:\r\n                valslist_ext = []\r\n\r\n         
       for val in valslist:\r\n                    m = re_range.fullmatch(val)\r\n                    mc = re_range_count.fullmatch(val)\r\n                    if m is not None:\r\n                        start = int(m.group(1))\r\n                        end = int(m.group(2))+1\r\n                        step = int(m.group(3)) if m.group(3) is not None else 1\r\n\r\n                        valslist_ext += list(range(start, end, step))\r\n                    elif mc is not None:\r\n                        start = int(mc.group(1))\r\n                        end   = int(mc.group(2))\r\n                        num   = int(mc.group(3)) if mc.group(3) is not None else 1\r\n                        \r\n                        valslist_ext += [int(x) for x in np.linspace(start=start, stop=end, num=num).tolist()]\r\n                    else:\r\n                        valslist_ext.append(val)\r\n\r\n                valslist = valslist_ext\r\n            elif opt.type == float:\r\n                valslist_ext = []\r\n\r\n                for val in valslist:\r\n                    m = re_range_float.fullmatch(val)\r\n                    mc = re_range_count_float.fullmatch(val)\r\n                    if m is not None:\r\n                        start = float(m.group(1))\r\n                        end = float(m.group(2))\r\n                        step = float(m.group(3)) if m.group(3) is not None else 1\r\n\r\n                        valslist_ext += np.arange(start, end + step, step).tolist()\r\n                    elif mc is not None:\r\n                        start = float(mc.group(1))\r\n                        end   = float(mc.group(2))\r\n                        num   = int(mc.group(3)) if mc.group(3) is not None else 1\r\n                        \r\n                        valslist_ext += np.linspace(start=start, stop=end, num=num).tolist()\r\n                    else:\r\n                        valslist_ext.append(val)\r\n\r\n                valslist = valslist_ext\r\n 
           elif opt.type == str_permutations:\r\n                valslist = list(permutations(valslist))\r\n\r\n            valslist = [opt.type(x) for x in valslist]\r\n\r\n            # Confirm options are valid before starting\r\n            if opt.confirm:\r\n                opt.confirm(p, valslist)\r\n\r\n            return valslist\r\n\r\n        x_opt = self.current_axis_options[x_type]\r\n        xs = process_axis(x_opt, x_values)\r\n\r\n        y_opt = self.current_axis_options[y_type]\r\n        ys = process_axis(y_opt, y_values)\r\n\r\n        z_opt = self.current_axis_options[z_type]\r\n        zs = process_axis(z_opt, z_values)\r\n\r\n        def fix_axis_seeds(axis_opt, axis_list):\r\n            if axis_opt.label in ['Seed', 'Var. seed']:\r\n                return [int(random.randrange(4294967294)) if val is None or val == '' or val == -1 else val for val in axis_list]\r\n            else:\r\n                return axis_list\r\n\r\n        if not no_fixed_seeds:\r\n            xs = fix_axis_seeds(x_opt, xs)\r\n            ys = fix_axis_seeds(y_opt, ys)\r\n            zs = fix_axis_seeds(z_opt, zs)\r\n\r\n        if x_opt.label == 'Steps':\r\n            total_steps = sum(xs) * len(ys) * len(zs)\r\n        elif y_opt.label == 'Steps':\r\n            total_steps = sum(ys) * len(xs) * len(zs)\r\n        elif z_opt.label == 'Steps':\r\n            total_steps = sum(zs) * len(xs) * len(ys)\r\n        else:\r\n            total_steps = p.steps * len(xs) * len(ys) * len(zs)\r\n\r\n        if isinstance(p, StableDiffusionProcessingTxt2Img) and p.enable_hr:\r\n            if x_opt.label == \"Hires steps\":\r\n                total_steps += sum(xs) * len(ys) * len(zs)\r\n            elif y_opt.label == \"Hires steps\":\r\n                total_steps += sum(ys) * len(xs) * len(zs)\r\n            elif z_opt.label == \"Hires steps\":\r\n                total_steps += sum(zs) * len(xs) * len(ys)\r\n            elif p.hr_second_pass_steps:\r\n                
total_steps += p.hr_second_pass_steps * len(xs) * len(ys) * len(zs)\r\n            else:\r\n                total_steps *= 2\r\n\r\n        total_steps *= p.n_iter\r\n\r\n        image_cell_count = p.n_iter * p.batch_size\r\n        cell_console_text = f\"; {image_cell_count} images per cell\" if image_cell_count > 1 else \"\"\r\n        plural_s = 's' if len(zs) > 1 else ''\r\n        print(f\"X/Y/Z plot will create {len(xs) * len(ys) * len(zs) * image_cell_count} images on {len(zs)} {len(xs)}x{len(ys)} grid{plural_s}{cell_console_text}. (Total steps to process: {total_steps})\")\r\n        shared.total_tqdm.updateTotal(total_steps)\r\n\r\n        grid_infotext = [None]\r\n\r\n        state.xyz_plot_x = AxisInfo(x_opt, xs)\r\n        state.xyz_plot_y = AxisInfo(y_opt, ys)\r\n        state.xyz_plot_z = AxisInfo(z_opt, zs)\r\n\r\n        # If one of the axes is very slow to change between (like SD model\r\n        # checkpoint), then make sure it is in the outer iteration of the nested\r\n        # `for` loop.\r\n        first_axes_processed = 'x'\r\n        second_axes_processed = 'y'\r\n        if x_opt.cost > y_opt.cost and x_opt.cost > z_opt.cost:\r\n            first_axes_processed = 'x'\r\n            if y_opt.cost > z_opt.cost:\r\n                second_axes_processed = 'y'\r\n            else:\r\n                second_axes_processed = 'z'\r\n        elif y_opt.cost > x_opt.cost and y_opt.cost > z_opt.cost:\r\n            first_axes_processed = 'y'\r\n            if x_opt.cost > z_opt.cost:\r\n                second_axes_processed = 'x'\r\n            else:\r\n                second_axes_processed = 'z'\r\n        elif z_opt.cost > x_opt.cost and z_opt.cost > y_opt.cost:\r\n            first_axes_processed = 'z'\r\n            if x_opt.cost > y_opt.cost:\r\n                second_axes_processed = 'x'\r\n            else:\r\n                second_axes_processed = 'y'\r\n\r\n        def cell(x, y, z):\r\n            if shared.state.interrupted:\r\n            
    return Processed(p, [], p.seed, \"\")\r\n\r\n            pc = copy(p)\r\n            pc.styles = pc.styles[:]\r\n            x_opt.apply(pc, x, xs)\r\n            y_opt.apply(pc, y, ys)\r\n            z_opt.apply(pc, z, zs)\r\n\r\n            res = process_images(pc)\r\n\r\n            if grid_infotext[0] is None:\r\n                pc.extra_generation_params = copy(pc.extra_generation_params)\r\n                pc.extra_generation_params['Script'] = self.title()\r\n\r\n                if x_opt.label != 'Nothing':\r\n                    pc.extra_generation_params[\"X Type\"] = x_opt.label\r\n                    pc.extra_generation_params[\"X Values\"] = x_values\r\n                    if x_opt.label in [\"Seed\", \"Var. seed\"] and not no_fixed_seeds:\r\n                        pc.extra_generation_params[\"Fixed X Values\"] = \", \".join([str(x) for x in xs])\r\n\r\n                if y_opt.label != 'Nothing':\r\n                    pc.extra_generation_params[\"Y Type\"] = y_opt.label\r\n                    pc.extra_generation_params[\"Y Values\"] = y_values\r\n                    if y_opt.label in [\"Seed\", \"Var. seed\"] and not no_fixed_seeds:\r\n                        pc.extra_generation_params[\"Fixed Y Values\"] = \", \".join([str(y) for y in ys])\r\n\r\n                if z_opt.label != 'Nothing':\r\n                    pc.extra_generation_params[\"Z Type\"] = z_opt.label\r\n                    pc.extra_generation_params[\"Z Values\"] = z_values\r\n                    if z_opt.label in [\"Seed\", \"Var. 
seed\"] and not no_fixed_seeds:\r\n                        pc.extra_generation_params[\"Fixed Z Values\"] = \", \".join([str(z) for z in zs])\r\n\r\n                grid_infotext[0] = processing.create_infotext(pc, pc.all_prompts, pc.all_seeds, pc.all_subseeds)\r\n\r\n            return res\r\n\r\n        with SharedSettingsStackHelper():\r\n            processed, sub_grids = draw_xyz_grid(\r\n                p,\r\n                xs=xs,\r\n                ys=ys,\r\n                zs=zs,\r\n                x_labels=[x_opt.format_value(p, x_opt, x) for x in xs],\r\n                y_labels=[y_opt.format_value(p, y_opt, y) for y in ys],\r\n                z_labels=[z_opt.format_value(p, z_opt, z) for z in zs],\r\n                cell=cell,\r\n                draw_legend=draw_legend,\r\n                include_lone_images=include_lone_images,\r\n                include_sub_grids=include_sub_grids,\r\n                first_axes_processed=first_axes_processed,\r\n                second_axes_processed=second_axes_processed,\r\n                margin_size=margin_size\r\n            )\r\n\r\n        if opts.grid_save and len(sub_grids) > 1:\r\n            for sub_grid in sub_grids:\r\n                images.save_image(sub_grid, p.outpath_grids, \"xyz_grid\", info=grid_infotext[0], extension=opts.grid_format, prompt=p.prompt, seed=processed.seed, grid=True, p=p)\r\n\r\n        if opts.grid_save:\r\n            images.save_image(processed.images[0], p.outpath_grids, \"xyz_grid\", info=grid_infotext[0], extension=opts.grid_format, prompt=p.prompt, seed=processed.seed, grid=True, p=p)\r\n\r\n        return processed\r\n"
  },
  {
    "path": "style.css",
    "content": ".container {\r\n    max-width: 100%;\r\n}\r\n\r\n.token-counter{\r\n    position: absolute;\r\n    display: inline-block;\r\n    right: 2em;\r\n    min-width: 0 !important;\r\n    width: auto;\r\n    z-index: 100;\r\n}\r\n\r\n.token-counter.error span{\r\n    box-shadow: 0 0 0.0 0.3em rgba(255,0,0,0.15), inset 0 0 0.6em rgba(255,0,0,0.075);\r\n    border: 2px solid rgba(255,0,0,0.4) !important;\r\n}\r\n\r\n.token-counter div{\r\n    display: inline;\r\n}\r\n\r\n.token-counter span{\r\n    padding: 0.1em 0.75em;\r\n}\r\n\r\n#sh{\r\n    min-width: 2em;\r\n    min-height: 2em;\r\n    max-width: 2em;\r\n    max-height: 2em;\r\n    flex-grow: 0;\r\n    padding-left: 0.25em;\r\n    padding-right: 0.25em;\r\n    margin: 0.1em 0;\r\n    opacity: 0%;\r\n    cursor: default;\r\n}\r\n\r\n.output-html p {margin: 0 0.5em;}\r\n\r\n.row > *,\r\n.row > .gr-form > * {\r\n    min-width: min(120px, 100%);\r\n    flex: 1 1 0%;\r\n}\r\n\r\n.performance {\r\n    font-size: 0.85em;\r\n    color: #444;\r\n}\r\n\r\n.performance p{\r\n    display: inline-block;\r\n}\r\n\r\n.performance .time {\r\n    margin-right: 0;\r\n}\r\n\r\n.performance .vram {\r\n}\r\n\r\n#txt2img_generate, #img2img_generate {\r\n    min-height: 4.5em;\r\n}\r\n\r\n@media screen and (min-width: 2500px) {\r\n    #txt2img_gallery, #img2img_gallery {\r\n        min-height: 768px;\r\n    }\r\n}\r\n\r\n#txt2img_gallery img, #img2img_gallery img{\r\n    object-fit: scale-down;\r\n}\r\n#txt2img_actions_column, #img2img_actions_column {\r\n\tmargin: 0.35rem 0.75rem 0.35rem 0;\t\r\n}\r\n#script_list {\r\n    padding: .625rem .75rem 0 .625rem;\r\n}\r\n.justify-center.overflow-x-scroll {\r\n    justify-content: left;\r\n}\r\n\r\n.justify-center.overflow-x-scroll button:first-of-type {\r\n    margin-left: auto;\r\n}\r\n\r\n.justify-center.overflow-x-scroll button:last-of-type {\r\n    margin-right: auto;\r\n}\r\n\r\n[id$=_random_seed], [id$=_random_subseed], [id$=_reuse_seed], [id$=_reuse_subseed], 
#open_folder{\r\n    min-width: 2.3em;\r\n    height: 2.5em;\r\n    flex-grow: 0;\r\n    padding-left: 0.25em;\r\n    padding-right: 0.25em;\r\n}\r\n\r\n#hidden_element{\r\n    display: none;\r\n}\r\n\r\n[id$=_seed_row], [id$=_subseed_row]{\r\n    gap: 0.5rem;\r\n    padding: 0.6em;\r\n}\r\n\r\n[id$=_subseed_show_box]{\r\n    min-width: auto;\r\n    flex-grow: 0;\r\n}\r\n\r\n[id$=_subseed_show_box] > div{\r\n    border: 0;\r\n    height: 100%;\r\n}\r\n\r\n[id$=_subseed_show]{\r\n    min-width: auto;\r\n    flex-grow: 0;\r\n    padding: 0;\r\n}\r\n\r\n[id$=_subseed_show] label{\r\n    height: 100%;\r\n}\r\n\r\n#txt2img_actions_column, #img2img_actions_column{\r\n    gap: 0;\r\n    margin-right: .75rem;\r\n}\r\n\r\n#txt2img_tools, #img2img_tools{\r\n    gap: 0.4em;\r\n}\r\n\r\n#interrogate_col{\r\n    min-width: 0 !important;\r\n    max-width: 8em !important;\r\n    margin-right: 1em;\r\n    gap: 0;\r\n}\r\n#interrogate, #deepbooru{\r\n    margin: 0em 0.25em 0.5em 0.25em;\r\n    min-width: 8em;\r\n    max-width: 8em;\r\n}\r\n\r\n#style_pos_col, #style_neg_col{\r\n    min-width: 8em !important;\r\n}\r\n\r\n#txt2img_styles_row, #img2img_styles_row{\r\n    gap: 0.25em;\r\n    margin-top: 0.3em;\r\n}\r\n\r\n#txt2img_styles_row > button, #img2img_styles_row > button{\r\n    margin: 0;\r\n}\r\n\r\n#txt2img_styles, #img2img_styles{\r\n    padding: 0;\r\n}\r\n\r\n#txt2img_styles > label > div, #img2img_styles > label > div{\r\n    min-height: 3.2em;\r\n}\r\n\r\nul.list-none{\r\n    max-height: 35em;\r\n    z-index: 2000;\r\n}\r\n\r\n.gr-form{\r\n    background: transparent;\r\n}\r\n\r\n.my-4{\r\n    margin-top: 0;\r\n    margin-bottom: 0;\r\n}\r\n\r\n#resize_mode{\r\n    flex: 1.5;\r\n}\r\n\r\nbutton{\r\n    align-self: stretch !important;\r\n}\r\n\r\n.overflow-hidden, .gr-panel{\r\n    overflow: visible !important;\r\n}\r\n\r\n#x_type, #y_type{\r\n    max-width: 10em;\r\n}\r\n\r\n#txt2img_preview, #img2img_preview, #ti_preview{\r\n    position: absolute;\r\n    width: 
320px;\r\n    left: 0;\r\n    right: 0;\r\n    margin-left: auto;\r\n    margin-right: auto;\r\n    margin-top: 34px;\r\n    z-index: 100;\r\n    border: none;\r\n    border-top-left-radius: 0;\r\n    border-top-right-radius: 0;\r\n}\r\n\r\n@media screen and (min-width: 768px) {\r\n    #txt2img_preview, #img2img_preview, #ti_preview {\r\n        position: absolute;\r\n    }\r\n}\r\n\r\n@media screen and (max-width: 767px) {\r\n    #txt2img_preview, #img2img_preview, #ti_preview {\r\n        position: relative;\r\n    }\r\n}\r\n\r\n#txt2img_preview div.left-0.top-0, #img2img_preview div.left-0.top-0, #ti_preview div.left-0.top-0{\r\n    display: none;\r\n}\r\n\r\nfieldset span.text-gray-500, .gr-block.gr-box span.text-gray-500,  label.block span{\r\n    position: absolute;\r\n    top: -0.7em;\r\n    line-height: 1.2em;\r\n    padding: 0;\r\n    margin: 0 0.5em;\r\n\r\n    background-color: white;\r\n    box-shadow:  6px 0 6px 0px white, -6px 0 6px 0px white;\r\n\r\n    z-index: 300;\r\n}\r\n\r\n.dark fieldset span.text-gray-500, .dark .gr-block.gr-box span.text-gray-500, .dark label.block span{\r\n    background-color: rgb(31, 41, 55);\r\n    box-shadow: none;\r\n    border: 1px solid rgba(128, 128, 128, 0.1);\r\n    border-radius: 6px;\r\n    padding: 0.1em 0.5em;\r\n}\r\n\r\n#txt2img_column_batch, #img2img_column_batch{\r\n    min-width: min(13.5em, 100%) !important;\r\n}\r\n\r\n#settings fieldset span.text-gray-500, #settings .gr-block.gr-box span.text-gray-500, #settings label.block span{\r\n    position: relative;\r\n    border: none;\r\n    margin-right: 8em;\r\n}\r\n\r\n#settings .gr-panel div.flex-col div.justify-between div{\r\n    position: relative;\r\n    z-index: 200;\r\n}\r\n\r\n#settings{\r\n    display: block;\r\n}\r\n\r\n#settings > div{\r\n    border: none;\r\n    margin-left: 10em;\r\n}\r\n\r\n#settings > div.flex-wrap{\r\n    float: left;\r\n    display: block;\r\n    margin-left: 0;\r\n    width: 10em;\r\n}\r\n\r\n#settings > div.flex-wrap 
button{\r\n    display: block;\r\n    border: none;\r\n    text-align: left;\r\n}\r\n\r\n#settings_result{\r\n    height: 1.4em;\r\n    margin: 0 1.2em;\r\n}\r\n\r\ninput[type=\"range\"]{\r\n    margin: 0.5em 0 -0.3em 0;\r\n}\r\n\r\n#mask_bug_info {\r\n  text-align: center;\r\n  display: block;\r\n  margin-top: -0.75em;\r\n  margin-bottom: -0.75em;\r\n}\r\n\r\n#txt2img_negative_prompt, #img2img_negative_prompt{\r\n}\r\n\r\n/* gradio 3.8 adds opacity to progressbar which makes it blink; disable it here */\r\n.transition.opacity-20 {\r\n  opacity: 1 !important;\r\n}\r\n\r\n/* more gradio's garbage cleanup */\r\n.min-h-\\[4rem\\] { min-height: unset !important; }\r\n.min-h-\\[6rem\\] { min-height: unset !important; }\r\n\r\n.progressDiv{\r\n    position: relative;\r\n    height: 20px;\r\n    background: #b4c0cc;\r\n    border-radius: 3px !important;\r\n    margin-bottom: -3px;\r\n}\r\n\r\n.dark .progressDiv{\r\n    background: #424c5b;\r\n}\r\n\r\n.progressDiv .progress{\r\n    width: 0%;\r\n    height: 20px;\r\n    background: #0060df;\r\n    color: white;\r\n    font-weight: bold;\r\n    line-height: 20px;\r\n    padding: 0 8px 0 0;\r\n    text-align: right;\r\n    border-radius: 3px;\r\n    overflow: visible;\r\n    white-space: nowrap;\r\n    padding: 0 0.5em;\r\n}\r\n\r\n.livePreview{\r\n    position: absolute;\r\n    z-index: 300;\r\n    background-color: white;\r\n    margin: -4px;\r\n}\r\n\r\n.dark .livePreview{\r\n    background-color: rgb(17 24 39 / var(--tw-bg-opacity));\r\n}\r\n\r\n.livePreview img{\r\n    position: absolute;\r\n    object-fit: contain;\r\n    width: 100%;\r\n    height: 100%;\r\n}\r\n\r\n#lightboxModal{\r\n  display: none;\r\n  position: fixed;\r\n  z-index: 1001;\r\n  padding-top: 100px;\r\n  left: 0;\r\n  top: 0;\r\n  width: 100%;\r\n  height: 100%;\r\n  overflow: auto;\r\n  background-color: rgba(20, 20, 20, 0.95);\r\n  user-select: none;\r\n  -webkit-user-select: none;\r\n}\r\n\r\n.modalControls {\r\n    display: grid;\r\n    
grid-template-columns: 32px 32px 32px 1fr 32px;\r\n    grid-template-areas: \"zoom tile save space close\";\r\n    position: absolute;\r\n    top: 0;\r\n    left: 0;\r\n    right: 0;\r\n    padding: 16px;\r\n    gap: 16px;\r\n    background-color: rgba(0,0,0,0.2);\r\n}\r\n\r\n.modalClose {\r\n    grid-area: close;\r\n}\r\n\r\n.modalZoom {\r\n    grid-area: zoom;\r\n}\r\n\r\n.modalSave {\r\n    grid-area: save;\r\n}\r\n\r\n.modalTileImage {\r\n    grid-area: tile;\r\n}\r\n\r\n.modalClose,\r\n.modalZoom,\r\n.modalTileImage {\r\n  color: white;\r\n  font-size: 35px;\r\n  font-weight: bold;\r\n  cursor: pointer;\r\n}\r\n\r\n.modalSave {\r\n    color: white;\r\n    font-size: 28px;\r\n    margin-top: 8px;\r\n    font-weight: bold;\r\n    cursor: pointer;\r\n}\r\n\r\n.modalClose:hover,\r\n.modalClose:focus,\r\n.modalSave:hover,\r\n.modalSave:focus,\r\n.modalZoom:hover,\r\n.modalZoom:focus {\r\n  color: #999;\r\n  text-decoration: none;\r\n  cursor: pointer;\r\n}\r\n\r\n#modalImage {\r\n    display: block;\r\n    margin-left: auto;\r\n    margin-right: auto;\r\n    margin-top: auto;\r\n    width: auto;\r\n}\r\n\r\n.modalImageFullscreen {\r\n    object-fit: contain;\r\n    height: 90%;\r\n}\r\n\r\n.modalPrev,\r\n.modalNext {\r\n  cursor: pointer;\r\n  position: absolute;\r\n  top: 50%;\r\n  width: auto;\r\n  padding: 16px;\r\n  margin-top: -50px;\r\n  color: white;\r\n  font-weight: bold;\r\n  font-size: 20px;\r\n  transition: 0.6s ease;\r\n  border-radius: 0 3px 3px 0;\r\n  user-select: none;\r\n  -webkit-user-select: none;\r\n}\r\n\r\n.modalNext {\r\n  right: 0;\r\n  border-radius: 3px 0 0 3px;\r\n}\r\n\r\n.modalPrev:hover,\r\n.modalNext:hover {\r\n  background-color: rgba(0, 0, 0, 0.8);\r\n}\r\n\r\n#imageARPreview{\r\n    position:absolute;\r\n    top:0px;\r\n    left:0px;\r\n    border:2px solid red;\r\n    background:rgba(255, 0, 0, 0.3);\r\n    z-index: 900;\r\n    pointer-events:none;\r\n    display:none\r\n}\r\n\r\n#txt2img_generate_box, #img2img_generate_box{\r\n  
  position: relative;\r\n}\r\n\r\n#txt2img_interrupt, #img2img_interrupt, #txt2img_skip, #img2img_skip{\r\n    position: absolute;\r\n    width: 50%;\r\n    height: 100%;\r\n    background: #b4c0cc;\r\n    display: none;\r\n}\r\n\r\n#txt2img_interrupt, #img2img_interrupt{\r\n    left: 0;\r\n    border-radius: 0.5rem 0 0 0.5rem;\r\n}\r\n#txt2img_skip, #img2img_skip{\r\n    right: 0;\r\n    border-radius: 0 0.5rem 0.5rem 0;\r\n}\r\n\r\n.red {\r\n\tcolor: red;\r\n}\r\n\r\n.gallery-item {\r\n    --tw-bg-opacity: 0 !important;\r\n}\r\n\r\n#context-menu{\r\n    z-index:9999;\r\n    position:absolute;\r\n    display:block;\r\n    padding:0px 0;\r\n    border:2px solid #a55000;\r\n    border-radius:8px;\r\n    box-shadow:1px 1px 2px #CE6400;\r\n    width: 200px;\r\n}\r\n\r\n.context-menu-items{\r\n    list-style: none;\r\n    margin: 0;\r\n    padding: 0;\r\n}\r\n\r\n.context-menu-items a{\r\n    display:block;\r\n    padding:5px;\r\n    cursor:pointer;\r\n}\r\n\r\n.context-menu-items a:hover{\r\n    background: #a55000;\r\n}\r\n\r\n#quicksettings {\r\n    width: fit-content;\r\n}\r\n\r\n#quicksettings > div, #quicksettings > fieldset{\r\n    max-width: 24em;\r\n    min-width: 24em;\r\n    padding: 0;\r\n    border: none;\r\n    box-shadow: none;\r\n    background: none;\r\n    margin-right: 10px;\r\n}\r\n\r\n#quicksettings > div > div > div > label > span {\r\n    position: relative;\r\n    margin-right: 9em;\r\n    margin-bottom: -1em;\r\n}\r\n\r\ncanvas[key=\"mask\"] {\r\n    z-index: 12 !important;\r\n    filter: invert();\r\n    mix-blend-mode: multiply;\r\n    pointer-events: none;\r\n}\r\n\r\n\r\n/* gradio 3.4.1 stuff for editable scrollbar values */\r\n.gr-box > div > div > input.gr-text-input{\r\n    position: absolute;\r\n    right: 0.5em;\r\n    top: -0.6em;\r\n    z-index: 400;\r\n    width: 6em;\r\n}\r\n#quicksettings .gr-box > div > div > input.gr-text-input {\r\n  top: -1.12em;\r\n}\r\n\r\n.row.gr-compact{\r\n    overflow: 
visible;\r\n}\r\n\r\n#img2img_image, #img2img_image > .h-60, #img2img_image > .h-60 > div, #img2img_image > .h-60 > div > img,\r\n#img2img_sketch, #img2img_sketch > .h-60, #img2img_sketch > .h-60 > div, #img2img_sketch > .h-60 > div > img,\r\n#img2maskimg, #img2maskimg > .h-60, #img2maskimg > .h-60 > div, #img2maskimg > .h-60 > div > img,\r\n#inpaint_sketch, #inpaint_sketch > .h-60, #inpaint_sketch > .h-60 > div, #inpaint_sketch > .h-60 > div > img\r\n{\r\n    height: 480px !important;\r\n    max-height: 480px !important;\r\n    min-height: 480px !important;\r\n}\r\n\r\n/* Extensions */\r\n\r\n#tab_extensions table{\r\n    border-collapse: collapse;\r\n}\r\n\r\n#tab_extensions table td, #tab_extensions table th{\r\n    border: 1px solid #ccc;\r\n    padding: 0.25em 0.5em;\r\n}\r\n\r\n#tab_extensions table input[type=\"checkbox\"]{\r\n    margin-right: 0.5em;\r\n}\r\n\r\n#tab_extensions button{\r\n    max-width: 16em;\r\n}\r\n\r\n#tab_extensions input[disabled=\"disabled\"]{\r\n    opacity: 0.5;\r\n}\r\n\r\n.extension-tag{\r\n    font-weight: bold;\r\n    font-size: 95%;\r\n}\r\n\r\n#available_extensions .info{\r\n    margin: 0;\r\n}\r\n\r\n#available_extensions .date_added{\r\n    opacity: 0.85;\r\n    font-size: 90%;\r\n}\r\n\r\n#image_buttons_txt2img button, #image_buttons_img2img button, #image_buttons_extras button{\r\n    min-width: auto;\r\n    padding-left: 0.5em;\r\n    padding-right: 0.5em;\r\n}\r\n\r\n.gr-form{\r\n    background-color: white;\r\n}\r\n\r\n.dark .gr-form{\r\n    background-color: rgb(31 41 55 / var(--tw-bg-opacity));\r\n}\r\n\r\n.gr-button-tool, .gr-button-tool-top{\r\n    max-width: 2.5em;\r\n    min-width: 2.5em !important;\r\n    height: 2.4em;\r\n}\r\n\r\n.gr-button-tool{\r\n    margin: 0.6em 0em 0.55em 0;\r\n}\r\n\r\n.gr-button-tool-top, #settings .gr-button-tool{\r\n    margin: 1.6em 0.7em 0.55em 0;\r\n}\r\n\r\n\r\n#modelmerger_results_container{\r\n    margin-top: 1em;\r\n    overflow: visible;\r\n}\r\n\r\n#modelmerger_models{\r\n    
gap: 0;\r\n}\r\n\r\n\r\n#quicksettings .gr-button-tool{\r\n    margin: 0;\r\n    border-color: unset;\r\n\tbackground-color: unset;\r\n}\r\n\r\n#modelmerger_interp_description>p {\r\n    margin: 0!important;\r\n    text-align: center;\r\n}\r\n#modelmerger_interp_description {\r\n    margin: 0.35rem 0.75rem 1.23rem;\r\n}\r\n#img2img_settings > div.gr-form, #txt2img_settings > div.gr-form {\r\n    padding-top: 0.9em;\r\n    padding-bottom: 0.9em;\r\n}\r\n#txt2img_settings {\r\n    padding-top: 1.16em;\r\n    padding-bottom: 0.9em;\r\n}\r\n#img2img_settings {\r\n    padding-bottom: 0.9em;\r\n}\r\n\r\n#img2img_settings div.gr-form .gr-form, #txt2img_settings div.gr-form .gr-form, #train_tabs div.gr-form .gr-form{\r\n    border: none;\r\n    padding-bottom: 0.5em;\r\n}\r\n\r\nfooter {\r\n    display: none !important;\r\n}\r\n\r\n#footer{\r\n    text-align: center;\r\n}\r\n\r\n#footer div{\r\n    display: inline-block;\r\n}\r\n\r\n#footer .versions{\r\n    font-size: 85%;\r\n    opacity: 0.85;\r\n}\r\n\r\n#txtimg_hr_finalres{\r\n    min-height: 0 !important;\r\n    padding: .625rem .75rem;\r\n    margin-left: -0.75em\r\n\r\n}\r\n\r\n#txtimg_hr_finalres .resolution{\r\n    font-weight: bold;\r\n}\r\n\r\n#txt2img_checkboxes, #img2img_checkboxes{\r\n    margin-bottom: 0.5em;\r\n    margin-left: 0em;\r\n}\r\n#txt2img_checkboxes > div, #img2img_checkboxes > div{\r\n    flex: 0;\r\n    white-space: nowrap;\r\n    min-width: auto;\r\n}\r\n\r\n#img2img_copy_to_img2img, #img2img_copy_to_sketch, #img2img_copy_to_inpaint, #img2img_copy_to_inpaint_sketch{\r\n    margin-left: 0em;\r\n}\r\n\r\n#axis_options {\r\n    margin-left: 0em;\r\n}\r\n\r\n.inactive{\r\n    opacity: 0.5;\r\n}\r\n\r\n[id*='_prompt_container']{\r\n    gap: 0;\r\n}\r\n\r\n[id*='_prompt_container'] > div{\r\n    margin: -0.4em 0 0 0;\r\n}\r\n\r\n.gr-compact {\r\n    border: none;\r\n}\r\n\r\n.dark .gr-compact{\r\n    background-color: rgb(31 41 55 / var(--tw-bg-opacity));\r\n\tmargin-left: 
0;\r\n}\r\n\r\n.gr-compact{\r\n    overflow: visible;\r\n}\r\n\r\n.gr-compact > *{\r\n}\r\n\r\n.gr-compact .gr-block, .gr-compact .gr-form{\r\n    border: none;\r\n    box-shadow: none;\r\n}\r\n\r\n.gr-compact .gr-box{\r\n    border-radius: .5rem !important;\r\n    border-width: 1px !important;\r\n}\r\n\r\n#mode_img2img > div > div{\r\n    gap: 0 !important;\r\n}\r\n\r\n[id*='img2img_copy_to_'] {\r\n    border: none;\r\n}\r\n\r\n[id*='img2img_copy_to_'] > button {\r\n}\r\n\r\n[id*='img2img_label_copy_to_'] {\r\n    font-size: 1.0em;\r\n    font-weight: bold;\r\n    text-align: center;\r\n    line-height: 2.4em;\r\n}\r\n\r\n.extra-networks > div > [id *= '_extra_']{\r\n    margin: 0.3em;\r\n}\r\n\r\n.extra-network-subdirs{\r\n    padding: 0.2em 0.35em;\r\n}\r\n\r\n.extra-network-subdirs button{\r\n    margin: 0 0.15em;\r\n}\r\n\r\n#txt2img_extra_networks .search, #img2img_extra_networks .search{\r\n    display: inline-block;\r\n    max-width: 16em;\r\n    margin: 0.3em;\r\n    align-self: center;\r\n}\r\n\r\n#txt2img_extra_view, #img2img_extra_view {\r\n    width: auto;\r\n}\r\n\r\n.extra-network-cards .nocards, .extra-network-thumbs .nocards{\r\n    margin: 1.25em 0.5em 0.5em 0.5em;\r\n}\r\n\r\n.extra-network-cards .nocards h1, .extra-network-thumbs .nocards h1{\r\n    font-size: 1.5em;\r\n    margin-bottom: 1em;\r\n}\r\n\r\n.extra-network-cards .nocards li, .extra-network-thumbs .nocards li{\r\n    margin-left: 0.5em;\r\n}\r\n\r\n.extra-network-thumbs {\r\n    display: flex;\r\n    flex-flow: row wrap;\r\n    gap: 10px;\r\n}\r\n\r\n.extra-network-thumbs .card {\r\n    height: 6em;\r\n    width: 6em;\r\n    cursor: pointer;\r\n    background-image: url('./file=html/card-no-preview.png');\r\n    background-size: cover;\r\n    background-position: center center;\r\n    position: relative;\r\n}\r\n\r\n.extra-network-thumbs .card:hover .additional a {\r\n    display: block;\r\n}\r\n\r\n.extra-network-thumbs .actions .additional a {\r\n    background-image: 
url('./file=html/image-update.svg');\r\n    background-repeat: no-repeat;\r\n    background-size: cover;\r\n    background-position: center center;\r\n    position: absolute;\r\n    top: 0;\r\n    left: 0;\r\n    width: 24px;\r\n    height: 24px;\r\n    display: none;\r\n    font-size: 0;\r\n    text-indent: -9999px;\r\n}\r\n\r\n.extra-network-thumbs .actions .name {\r\n    position: absolute;\r\n    bottom: 0;\r\n    font-size: 10px;\r\n    padding: 3px;\r\n    width: 100%;\r\n    overflow: hidden;\r\n    white-space: nowrap;\r\n    text-overflow: ellipsis;\r\n    background: rgba(0,0,0,.5);\r\n    color: white;\r\n}\r\n\r\n.extra-network-thumbs .card:hover .actions .name {\r\n    white-space: normal;\r\n    word-break: break-all;\r\n}\r\n\r\n.extra-network-cards .card{\r\n    display: inline-block;\r\n    margin: 0.5em;\r\n    width: 16em;\r\n    height: 24em;\r\n    box-shadow: 0 0 5px rgba(128, 128, 128, 0.5);\r\n    border-radius: 0.2em;\r\n    position: relative;\r\n\r\n    background-size: auto 100%;\r\n    background-position: center;\r\n    overflow: hidden;\r\n    cursor: pointer;\r\n\r\n    background-image: url('./file=html/card-no-preview.png');\r\n}\r\n\r\n.extra-network-cards .card:hover{\r\n    box-shadow: 0 0 2px 0.3em rgba(0, 128, 255, 0.35);\r\n}\r\n\r\n.extra-network-cards .card .actions .additional{\r\n    display: none;\r\n}\r\n\r\n.extra-network-cards .card .actions{\r\n    position: absolute;\r\n    bottom: 0;\r\n    left: 0;\r\n    right: 0;\r\n    padding: 0.5em;\r\n    color: white;\r\n    background: rgba(0,0,0,0.5);\r\n    box-shadow: 0 0 0.25em 0.25em rgba(0,0,0,0.5);\r\n    text-shadow: 0 0 0.2em black;\r\n}\r\n\r\n.extra-network-cards .card .actions:hover{\r\n    box-shadow: 0 0 0.75em 0.75em rgba(0,0,0,0.5) !important;\r\n}\r\n\r\n.extra-network-cards .card .actions .name{\r\n    font-size: 1.7em;\r\n    font-weight: bold;\r\n    line-break: anywhere;\r\n}\r\n\r\n.extra-network-cards .card .actions:hover .additional{\r\n    display: 
block;\r\n}\r\n\r\n.extra-network-cards .card ul{\r\n    margin: 0.25em 0 0.75em 0.25em;\r\n    cursor: unset;\r\n}\r\n\r\n.extra-network-cards .card ul a{\r\n    cursor: pointer;\r\n}\r\n\r\n.extra-network-cards .card ul a:hover{\r\n    color: red;\r\n}\r\n\r\n[id*='_prompt_container'] > div {\r\n\tmargin: 0!important;\r\n}\r\n"
  },
  {
    "path": "styles.csv",
    "content": "﻿name,prompt,negative_prompt\r\nNone,,\r\nnaifu基础起手式,\"masterpiece, best quality, \",\"lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry\"\r\n"
  },
  {
    "path": "tags/temp/emb.txt",
    "content": ""
  },
  {
    "path": "tags/temp/wc.txt",
    "content": ""
  },
  {
    "path": "test/__init__.py",
    "content": ""
  },
  {
    "path": "test/basic_features/__init__.py",
    "content": ""
  },
  {
    "path": "test/basic_features/extras_test.py",
    "content": "import unittest\r\nimport requests\r\nfrom gradio.processing_utils import encode_pil_to_base64\r\nfrom PIL import Image\r\n\r\nclass TestExtrasWorking(unittest.TestCase):\r\n    def setUp(self):\r\n        self.url_extras_single = \"http://localhost:7860/sdapi/v1/extra-single-image\"\r\n        self.extras_single = {\r\n            \"resize_mode\": 0,\r\n            \"show_extras_results\": True,\r\n            \"gfpgan_visibility\": 0,\r\n            \"codeformer_visibility\": 0,\r\n            \"codeformer_weight\": 0,\r\n            \"upscaling_resize\": 2,\r\n            \"upscaling_resize_w\": 128,\r\n            \"upscaling_resize_h\": 128,\r\n            \"upscaling_crop\": True,\r\n            \"upscaler_1\": \"None\",\r\n            \"upscaler_2\": \"None\",\r\n            \"extras_upscaler_2_visibility\": 0,\r\n            \"image\": encode_pil_to_base64(Image.open(r\"test/test_files/img2img_basic.png\"))\r\n            }\r\n\r\n    def test_simple_upscaling_performed(self):\r\n        self.extras_single[\"upscaler_1\"] = \"Lanczos\"\r\n        self.assertEqual(requests.post(self.url_extras_single, json=self.extras_single).status_code, 200)\r\n\r\n\r\nclass TestPngInfoWorking(unittest.TestCase):\r\n    def setUp(self):\r\n        self.url_png_info = \"http://localhost:7860/sdapi/v1/extra-single-image\"\r\n        self.png_info = {\r\n            \"image\": encode_pil_to_base64(Image.open(r\"test/test_files/img2img_basic.png\"))\r\n        }\r\n\r\n    def test_png_info_performed(self):\r\n        self.assertEqual(requests.post(self.url_png_info, json=self.png_info).status_code, 200)\r\n\r\n\r\nclass TestInterrogateWorking(unittest.TestCase):\r\n    def setUp(self):\r\n        self.url_interrogate = \"http://localhost:7860/sdapi/v1/extra-single-image\"\r\n        self.interrogate = {\r\n            \"image\": encode_pil_to_base64(Image.open(r\"test/test_files/img2img_basic.png\")),\r\n            \"model\": \"clip\"\r\n        }\r\n\r\n    
def test_interrogate_performed(self):\r\n        self.assertEqual(requests.post(self.url_interrogate, json=self.interrogate).status_code, 200)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n    unittest.main()\r\n"
  },
  {
    "path": "test/basic_features/img2img_test.py",
    "content": "import unittest\r\nimport requests\r\nfrom gradio.processing_utils import encode_pil_to_base64\r\nfrom PIL import Image\r\n\r\n\r\nclass TestImg2ImgWorking(unittest.TestCase):\r\n    def setUp(self):\r\n        self.url_img2img = \"http://localhost:7860/sdapi/v1/img2img\"\r\n        self.simple_img2img = {\r\n            \"init_images\": [encode_pil_to_base64(Image.open(r\"test/test_files/img2img_basic.png\"))],\r\n            \"resize_mode\": 0,\r\n            \"denoising_strength\": 0.75,\r\n            \"mask\": None,\r\n            \"mask_blur\": 4,\r\n            \"inpainting_fill\": 0,\r\n            \"inpaint_full_res\": False,\r\n            \"inpaint_full_res_padding\": 0,\r\n            \"inpainting_mask_invert\": False,\r\n            \"prompt\": \"example prompt\",\r\n            \"styles\": [],\r\n            \"seed\": -1,\r\n            \"subseed\": -1,\r\n            \"subseed_strength\": 0,\r\n            \"seed_resize_from_h\": -1,\r\n            \"seed_resize_from_w\": -1,\r\n            \"batch_size\": 1,\r\n            \"n_iter\": 1,\r\n            \"steps\": 3,\r\n            \"cfg_scale\": 7,\r\n            \"width\": 64,\r\n            \"height\": 64,\r\n            \"restore_faces\": False,\r\n            \"tiling\": False,\r\n            \"negative_prompt\": \"\",\r\n            \"eta\": 0,\r\n            \"s_churn\": 0,\r\n            \"s_tmax\": 0,\r\n            \"s_tmin\": 0,\r\n            \"s_noise\": 1,\r\n            \"override_settings\": {},\r\n            \"sampler_index\": \"Euler a\",\r\n            \"include_init_images\": False\r\n            }\r\n\r\n    def test_img2img_simple_performed(self):\r\n        self.assertEqual(requests.post(self.url_img2img, json=self.simple_img2img).status_code, 200)\r\n\r\n    def test_inpainting_masked_performed(self):\r\n        self.simple_img2img[\"mask\"] = encode_pil_to_base64(Image.open(r\"test/test_files/mask_basic.png\"))\r\n        
self.assertEqual(requests.post(self.url_img2img, json=self.simple_img2img).status_code, 200)\r\n\r\n    def test_inpainting_with_inverted_masked_performed(self):\r\n        self.simple_img2img[\"mask\"] = encode_pil_to_base64(Image.open(r\"test/test_files/mask_basic.png\"))\r\n        self.simple_img2img[\"inpainting_mask_invert\"] = True\r\n        self.assertEqual(requests.post(self.url_img2img, json=self.simple_img2img).status_code, 200)\r\n\r\n    def test_img2img_sd_upscale_performed(self):\r\n        self.simple_img2img[\"script_name\"] = \"sd upscale\"\r\n        self.simple_img2img[\"script_args\"] = [\"\", 8, \"Lanczos\", 2.0]\r\n\r\n        self.assertEqual(requests.post(self.url_img2img, json=self.simple_img2img).status_code, 200)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n    unittest.main()\r\n"
  },
  {
    "path": "test/basic_features/txt2img_test.py",
    "content": "import unittest\r\nimport requests\r\n\r\n\r\nclass TestTxt2ImgWorking(unittest.TestCase):\r\n    def setUp(self):\r\n        self.url_txt2img = \"http://localhost:7860/sdapi/v1/txt2img\"\r\n        self.simple_txt2img = {\r\n            \"enable_hr\": False,\r\n            \"denoising_strength\": 0,\r\n            \"firstphase_width\": 0,\r\n            \"firstphase_height\": 0,\r\n            \"prompt\": \"example prompt\",\r\n            \"styles\": [],\r\n            \"seed\": -1,\r\n            \"subseed\": -1,\r\n            \"subseed_strength\": 0,\r\n            \"seed_resize_from_h\": -1,\r\n            \"seed_resize_from_w\": -1,\r\n            \"batch_size\": 1,\r\n            \"n_iter\": 1,\r\n            \"steps\": 3,\r\n            \"cfg_scale\": 7,\r\n            \"width\": 64,\r\n            \"height\": 64,\r\n            \"restore_faces\": False,\r\n            \"tiling\": False,\r\n            \"negative_prompt\": \"\",\r\n            \"eta\": 0,\r\n            \"s_churn\": 0,\r\n            \"s_tmax\": 0,\r\n            \"s_tmin\": 0,\r\n            \"s_noise\": 1,\r\n            \"sampler_index\": \"Euler a\"\r\n        }\r\n\r\n    def test_txt2img_simple_performed(self):\r\n        self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200)\r\n\r\n    def test_txt2img_with_negative_prompt_performed(self):\r\n        self.simple_txt2img[\"negative_prompt\"] = \"example negative prompt\"\r\n        self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200)\r\n\r\n    def test_txt2img_with_complex_prompt_performed(self):\r\n        self.simple_txt2img[\"prompt\"] = \"((emphasis)), (emphasis1:1.1), [to:1], [from::2], [from:to:0.3], [alt|alt1]\"\r\n        self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200)\r\n\r\n    def test_txt2img_not_square_image_performed(self):\r\n        self.simple_txt2img[\"height\"] = 128\r\n        
self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200)\r\n\r\n    def test_txt2img_with_hrfix_performed(self):\r\n        self.simple_txt2img[\"enable_hr\"] = True\r\n        self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200)\r\n\r\n    def test_txt2img_with_tiling_performed(self):\r\n        self.simple_txt2img[\"tiling\"] = True\r\n        self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200)\r\n\r\n    def test_txt2img_with_restore_faces_performed(self):\r\n        self.simple_txt2img[\"restore_faces\"] = True\r\n        self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200)\r\n\r\n    def test_txt2img_with_vanilla_sampler_performed(self):\r\n        self.simple_txt2img[\"sampler_index\"] = \"PLMS\"\r\n        self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200)\r\n        self.simple_txt2img[\"sampler_index\"] = \"DDIM\"\r\n        self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200)\r\n\r\n    def test_txt2img_multiple_batches_performed(self):\r\n        self.simple_txt2img[\"n_iter\"] = 2\r\n        self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200)\r\n\r\n    def test_txt2img_batch_performed(self):\r\n        self.simple_txt2img[\"batch_size\"] = 2\r\n        self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n    unittest.main()\r\n"
  },
  {
    "path": "test/basic_features/utils_test.py",
    "content": "import unittest\r\nimport requests\r\n\r\nclass UtilsTests(unittest.TestCase):\r\n  def setUp(self):\r\n    self.url_options = \"http://localhost:7860/sdapi/v1/options\"\r\n    self.url_cmd_flags = \"http://localhost:7860/sdapi/v1/cmd-flags\"\r\n    self.url_samplers = \"http://localhost:7860/sdapi/v1/samplers\"\r\n    self.url_upscalers = \"http://localhost:7860/sdapi/v1/upscalers\"\r\n    self.url_sd_models = \"http://localhost:7860/sdapi/v1/sd-models\"\r\n    self.url_hypernetworks = \"http://localhost:7860/sdapi/v1/hypernetworks\"\r\n    self.url_face_restorers = \"http://localhost:7860/sdapi/v1/face-restorers\"\r\n    self.url_realesrgan_models = \"http://localhost:7860/sdapi/v1/realesrgan-models\"\r\n    self.url_prompt_styles = \"http://localhost:7860/sdapi/v1/prompt-styles\"\r\n    self.url_embeddings = \"http://localhost:7860/sdapi/v1/embeddings\"\r\n\r\n  def test_options_get(self):\r\n    self.assertEqual(requests.get(self.url_options).status_code, 200)\r\n\r\n  def test_options_write(self):\r\n    response = requests.get(self.url_options)\r\n    self.assertEqual(response.status_code, 200)\r\n\r\n    pre_value = response.json()[\"send_seed\"]\r\n\r\n    self.assertEqual(requests.post(self.url_options, json={\"send_seed\":not pre_value}).status_code, 200)\r\n\r\n    response = requests.get(self.url_options)\r\n    self.assertEqual(response.status_code, 200)\r\n    self.assertEqual(response.json()[\"send_seed\"], not pre_value)\r\n\r\n    requests.post(self.url_options, json={\"send_seed\": pre_value})\r\n\r\n  def test_cmd_flags(self):\r\n    self.assertEqual(requests.get(self.url_cmd_flags).status_code, 200)\r\n\r\n  def test_samplers(self):\r\n    self.assertEqual(requests.get(self.url_samplers).status_code, 200)\r\n\r\n  def test_upscalers(self):\r\n    self.assertEqual(requests.get(self.url_upscalers).status_code, 200)\r\n\r\n  def test_sd_models(self):\r\n    self.assertEqual(requests.get(self.url_sd_models).status_code, 200)\r\n\r\n  
def test_hypernetworks(self):\r\n    self.assertEqual(requests.get(self.url_hypernetworks).status_code, 200)\r\n\r\n  def test_face_restorers(self):\r\n    self.assertEqual(requests.get(self.url_face_restorers).status_code, 200)\r\n  \r\n  def test_realesrgan_models(self):\r\n    self.assertEqual(requests.get(self.url_realesrgan_models).status_code, 200)\r\n  \r\n  def test_prompt_styles(self):\r\n    self.assertEqual(requests.get(self.url_prompt_styles).status_code, 200)\r\n\r\n  def test_embeddings(self):\r\n    self.assertEqual(requests.get(self.url_embeddings).status_code, 200)\r\n\r\nif __name__ == \"__main__\":\r\n    unittest.main()\r\n"
  },
  {
    "path": "test/server_poll.py",
    "content": "import unittest\r\nimport requests\r\nimport time\r\n\r\n\r\ndef run_tests(proc, test_dir):\r\n    timeout_threshold = 240\r\n    start_time = time.time()\r\n    while time.time()-start_time < timeout_threshold:\r\n        try:\r\n            requests.head(\"http://localhost:7860/\")\r\n            break\r\n        except requests.exceptions.ConnectionError:\r\n            if proc.poll() is not None:\r\n                break\r\n    if proc.poll() is None:\r\n        if test_dir is None:\r\n            test_dir = \"test\"\r\n        suite = unittest.TestLoader().discover(test_dir, pattern=\"*_test.py\", top_level_dir=\"test\")\r\n        result = unittest.TextTestRunner(verbosity=2).run(suite)\r\n        return len(result.failures) + len(result.errors)\r\n    else:\r\n        print(\"Launch unsuccessful\")\r\n        return 1\r\n"
  },
  {
    "path": "textual_inversion_templates/hypernetwork.txt",
    "content": "a photo of a [filewords]\r\na rendering of a [filewords]\r\na cropped photo of the [filewords]\r\nthe photo of a [filewords]\r\na photo of a clean [filewords]\r\na photo of a dirty [filewords]\r\na dark photo of the [filewords]\r\na photo of my [filewords]\r\na photo of the cool [filewords]\r\na close-up photo of a [filewords]\r\na bright photo of the [filewords]\r\na cropped photo of a [filewords]\r\na photo of the [filewords]\r\na good photo of the [filewords]\r\na photo of one [filewords]\r\na close-up photo of the [filewords]\r\na rendition of the [filewords]\r\na photo of the clean [filewords]\r\na rendition of a [filewords]\r\na photo of a nice [filewords]\r\na good photo of a [filewords]\r\na photo of the nice [filewords]\r\na photo of the small [filewords]\r\na photo of the weird [filewords]\r\na photo of the large [filewords]\r\na photo of a cool [filewords]\r\na photo of a small [filewords]\r\n"
  },
  {
    "path": "textual_inversion_templates/none.txt",
    "content": "picture\r\n"
  },
  {
    "path": "textual_inversion_templates/style.txt",
    "content": "a painting, art by [name]\r\na rendering, art by [name]\r\na cropped painting, art by [name]\r\nthe painting, art by [name]\r\na clean painting, art by [name]\r\na dirty painting, art by [name]\r\na dark painting, art by [name]\r\na picture, art by [name]\r\na cool painting, art by [name]\r\na close-up painting, art by [name]\r\na bright painting, art by [name]\r\na cropped painting, art by [name]\r\na good painting, art by [name]\r\na close-up painting, art by [name]\r\na rendition, art by [name]\r\na nice painting, art by [name]\r\na small painting, art by [name]\r\na weird painting, art by [name]\r\na large painting, art by [name]\r\n"
  },
  {
    "path": "textual_inversion_templates/style_filewords.txt",
    "content": "a painting of [filewords], art by [name]\r\na rendering of [filewords], art by [name]\r\na cropped painting of [filewords], art by [name]\r\nthe painting of [filewords], art by [name]\r\na clean painting of [filewords], art by [name]\r\na dirty painting of [filewords], art by [name]\r\na dark painting of [filewords], art by [name]\r\na picture of [filewords], art by [name]\r\na cool painting of [filewords], art by [name]\r\na close-up painting of [filewords], art by [name]\r\na bright painting of [filewords], art by [name]\r\na cropped painting of [filewords], art by [name]\r\na good painting of [filewords], art by [name]\r\na close-up painting of [filewords], art by [name]\r\na rendition of [filewords], art by [name]\r\na nice painting of [filewords], art by [name]\r\na small painting of [filewords], art by [name]\r\na weird painting of [filewords], art by [name]\r\na large painting of [filewords], art by [name]\r\n"
  },
  {
    "path": "textual_inversion_templates/subject.txt",
    "content": "a photo of a [name]\r\na rendering of a [name]\r\na cropped photo of the [name]\r\nthe photo of a [name]\r\na photo of a clean [name]\r\na photo of a dirty [name]\r\na dark photo of the [name]\r\na photo of my [name]\r\na photo of the cool [name]\r\na close-up photo of a [name]\r\na bright photo of the [name]\r\na cropped photo of a [name]\r\na photo of the [name]\r\na good photo of the [name]\r\na photo of one [name]\r\na close-up photo of the [name]\r\na rendition of the [name]\r\na photo of the clean [name]\r\na rendition of a [name]\r\na photo of a nice [name]\r\na good photo of a [name]\r\na photo of the nice [name]\r\na photo of the small [name]\r\na photo of the weird [name]\r\na photo of the large [name]\r\na photo of a cool [name]\r\na photo of a small [name]\r\n"
  },
  {
    "path": "textual_inversion_templates/subject_filewords.txt",
    "content": "a photo of a [name], [filewords]\r\na rendering of a [name], [filewords]\r\na cropped photo of the [name], [filewords]\r\nthe photo of a [name], [filewords]\r\na photo of a clean [name], [filewords]\r\na photo of a dirty [name], [filewords]\r\na dark photo of the [name], [filewords]\r\na photo of my [name], [filewords]\r\na photo of the cool [name], [filewords]\r\na close-up photo of a [name], [filewords]\r\na bright photo of the [name], [filewords]\r\na cropped photo of a [name], [filewords]\r\na photo of the [name], [filewords]\r\na good photo of the [name], [filewords]\r\na photo of one [name], [filewords]\r\na close-up photo of the [name], [filewords]\r\na rendition of the [name], [filewords]\r\na photo of the clean [name], [filewords]\r\na rendition of a [name], [filewords]\r\na photo of a nice [name], [filewords]\r\na good photo of a [name], [filewords]\r\na photo of the nice [name], [filewords]\r\na photo of the small [name], [filewords]\r\na photo of the weird [name], [filewords]\r\na photo of the large [name], [filewords]\r\na photo of a cool [name], [filewords]\r\na photo of a small [name], [filewords]\r\n"
  },
  {
    "path": "tmp/stderr.txt",
    "content": "^C"
  },
  {
    "path": "tmp/stdout.txt",
    "content": ""
  },
  {
    "path": "tmp/tagAutocompletePath.txt",
    "content": "extensions/a1111-sd-webui-tagcomplete/tags"
  },
  {
    "path": "ui-config.json",
    "content": "{\r\n    \"txt2img/Prompt/visible\": true,\r\n    \"txt2img/Prompt/value\": \"\",\r\n    \"txt2img/Negative prompt/visible\": true,\r\n    \"txt2img/Negative prompt/value\": \"\",\r\n    \"txt2img/Styles/visible\": true,\r\n    \"txt2img/Styles/value\": [],\r\n    \"txt2img/Sampling method/visible\": true,\r\n    \"txt2img/Sampling method/value\": \"Euler a\",\r\n    \"txt2img/Sampling steps/visible\": true,\r\n    \"txt2img/Sampling steps/value\": 20,\r\n    \"txt2img/Sampling steps/minimum\": 1,\r\n    \"txt2img/Sampling steps/maximum\": 150,\r\n    \"txt2img/Sampling steps/step\": 1,\r\n    \"txt2img/Width/visible\": true,\r\n    \"txt2img/Width/value\": 512,\r\n    \"txt2img/Width/minimum\": 64,\r\n    \"txt2img/Width/maximum\": 2048,\r\n    \"txt2img/Width/step\": 8,\r\n    \"txt2img/Height/visible\": true,\r\n    \"txt2img/Height/value\": 512,\r\n    \"txt2img/Height/minimum\": 64,\r\n    \"txt2img/Height/maximum\": 2048,\r\n    \"txt2img/Height/step\": 8,\r\n    \"txt2img/Batch count/visible\": true,\r\n    \"txt2img/Batch count/value\": 1,\r\n    \"txt2img/Batch count/minimum\": 1,\r\n    \"txt2img/Batch count/maximum\": 100,\r\n    \"txt2img/Batch count/step\": 1,\r\n    \"txt2img/Batch size/visible\": true,\r\n    \"txt2img/Batch size/value\": 1,\r\n    \"txt2img/Batch size/minimum\": 1,\r\n    \"txt2img/Batch size/maximum\": 8,\r\n    \"txt2img/Batch size/step\": 1,\r\n    \"txt2img/CFG Scale/visible\": true,\r\n    \"txt2img/CFG Scale/value\": 7.0,\r\n    \"txt2img/CFG Scale/minimum\": 1.0,\r\n    \"txt2img/CFG Scale/maximum\": 30.0,\r\n    \"txt2img/CFG Scale/step\": 0.5,\r\n    \"txt2img/Seed/visible\": true,\r\n    \"txt2img/Seed/value\": -1.0,\r\n    \"txt2img/Extra/visible\": true,\r\n    \"txt2img/Extra/value\": false,\r\n    \"txt2img/Variation seed/visible\": true,\r\n    \"txt2img/Variation seed/value\": -1.0,\r\n    \"txt2img/Variation strength/visible\": true,\r\n    \"txt2img/Variation strength/value\": 0.0,\r\n    
\"txt2img/Variation strength/minimum\": 0,\r\n    \"txt2img/Variation strength/maximum\": 1,\r\n    \"txt2img/Variation strength/step\": 0.01,\r\n    \"txt2img/Resize seed from width/visible\": true,\r\n    \"txt2img/Resize seed from width/value\": 0,\r\n    \"txt2img/Resize seed from width/minimum\": 0,\r\n    \"txt2img/Resize seed from width/maximum\": 2048,\r\n    \"txt2img/Resize seed from width/step\": 8,\r\n    \"txt2img/Resize seed from height/visible\": true,\r\n    \"txt2img/Resize seed from height/value\": 0,\r\n    \"txt2img/Resize seed from height/minimum\": 0,\r\n    \"txt2img/Resize seed from height/maximum\": 2048,\r\n    \"txt2img/Resize seed from height/step\": 8,\r\n    \"txt2img/Restore faces/visible\": true,\r\n    \"txt2img/Restore faces/value\": false,\r\n    \"txt2img/Tiling/visible\": true,\r\n    \"txt2img/Tiling/value\": false,\r\n    \"txt2img/Hires. fix/visible\": true,\r\n    \"txt2img/Hires. fix/value\": false,\r\n    \"txt2img/Upscaler/visible\": true,\r\n    \"txt2img/Upscaler/value\": \"Latent\",\r\n    \"txt2img/Hires steps/visible\": true,\r\n    \"txt2img/Hires steps/value\": 0,\r\n    \"txt2img/Hires steps/minimum\": 0,\r\n    \"txt2img/Hires steps/maximum\": 150,\r\n    \"txt2img/Hires steps/step\": 1,\r\n    \"txt2img/Denoising strength/visible\": true,\r\n    \"txt2img/Denoising strength/value\": 0.7,\r\n    \"txt2img/Denoising strength/minimum\": 0.0,\r\n    \"txt2img/Denoising strength/maximum\": 1.0,\r\n    \"txt2img/Denoising strength/step\": 0.01,\r\n    \"txt2img/Upscale by/visible\": true,\r\n    \"txt2img/Upscale by/value\": 2.0,\r\n    \"txt2img/Upscale by/minimum\": 1.0,\r\n    \"txt2img/Upscale by/maximum\": 4.0,\r\n    \"txt2img/Upscale by/step\": 0.05,\r\n    \"txt2img/Resize width to/visible\": true,\r\n    \"txt2img/Resize width to/value\": 0,\r\n    \"txt2img/Resize width to/minimum\": 0,\r\n    \"txt2img/Resize width to/maximum\": 2048,\r\n    \"txt2img/Resize width to/step\": 8,\r\n    \"txt2img/Resize 
height to/visible\": true,\r\n    \"txt2img/Resize height to/value\": 0,\r\n    \"txt2img/Resize height to/minimum\": 0,\r\n    \"txt2img/Resize height to/maximum\": 2048,\r\n    \"txt2img/Resize height to/step\": 8,\r\n    \"txt2img/Override settings/value\": null,\r\n    \"customscript/additional_networks.py/txt2img/Enable/visible\": true,\r\n    \"customscript/additional_networks.py/txt2img/Enable/value\": false,\r\n    \"customscript/additional_networks.py/txt2img/Network module 1/visible\": true,\r\n    \"customscript/additional_networks.py/txt2img/Network module 1/value\": \"LoRA\",\r\n    \"customscript/additional_networks.py/txt2img/Model 1/visible\": true,\r\n    \"customscript/additional_networks.py/txt2img/Model 1/value\": \"None\",\r\n    \"customscript/additional_networks.py/txt2img/Weight 1/visible\": true,\r\n    \"customscript/additional_networks.py/txt2img/Weight 1/value\": 1.0,\r\n    \"customscript/additional_networks.py/txt2img/Weight 1/minimum\": -1.0,\r\n    \"customscript/additional_networks.py/txt2img/Weight 1/maximum\": 2.0,\r\n    \"customscript/additional_networks.py/txt2img/Weight 1/step\": 0.05,\r\n    \"customscript/additional_networks.py/txt2img/Network module 2/visible\": true,\r\n    \"customscript/additional_networks.py/txt2img/Network module 2/value\": \"LoRA\",\r\n    \"customscript/additional_networks.py/txt2img/Model 2/visible\": true,\r\n    \"customscript/additional_networks.py/txt2img/Model 2/value\": \"None\",\r\n    \"customscript/additional_networks.py/txt2img/Weight 2/visible\": true,\r\n    \"customscript/additional_networks.py/txt2img/Weight 2/value\": 1.0,\r\n    \"customscript/additional_networks.py/txt2img/Weight 2/minimum\": -1.0,\r\n    \"customscript/additional_networks.py/txt2img/Weight 2/maximum\": 2.0,\r\n    \"customscript/additional_networks.py/txt2img/Weight 2/step\": 0.05,\r\n    \"customscript/additional_networks.py/txt2img/Network module 3/visible\": true,\r\n    
\"customscript/additional_networks.py/txt2img/Network module 3/value\": \"LoRA\",\r\n    \"customscript/additional_networks.py/txt2img/Model 3/visible\": true,\r\n    \"customscript/additional_networks.py/txt2img/Model 3/value\": \"None\",\r\n    \"customscript/additional_networks.py/txt2img/Weight 3/visible\": true,\r\n    \"customscript/additional_networks.py/txt2img/Weight 3/value\": 1.0,\r\n    \"customscript/additional_networks.py/txt2img/Weight 3/minimum\": -1.0,\r\n    \"customscript/additional_networks.py/txt2img/Weight 3/maximum\": 2.0,\r\n    \"customscript/additional_networks.py/txt2img/Weight 3/step\": 0.05,\r\n    \"customscript/additional_networks.py/txt2img/Network module 4/visible\": true,\r\n    \"customscript/additional_networks.py/txt2img/Network module 4/value\": \"LoRA\",\r\n    \"customscript/additional_networks.py/txt2img/Model 4/visible\": true,\r\n    \"customscript/additional_networks.py/txt2img/Model 4/value\": \"None\",\r\n    \"customscript/additional_networks.py/txt2img/Weight 4/visible\": true,\r\n    \"customscript/additional_networks.py/txt2img/Weight 4/value\": 1.0,\r\n    \"customscript/additional_networks.py/txt2img/Weight 4/minimum\": -1.0,\r\n    \"customscript/additional_networks.py/txt2img/Weight 4/maximum\": 2.0,\r\n    \"customscript/additional_networks.py/txt2img/Weight 4/step\": 0.05,\r\n    \"customscript/additional_networks.py/txt2img/Network module 5/visible\": true,\r\n    \"customscript/additional_networks.py/txt2img/Network module 5/value\": \"LoRA\",\r\n    \"customscript/additional_networks.py/txt2img/Model 5/visible\": true,\r\n    \"customscript/additional_networks.py/txt2img/Model 5/value\": \"None\",\r\n    \"customscript/additional_networks.py/txt2img/Weight 5/visible\": true,\r\n    \"customscript/additional_networks.py/txt2img/Weight 5/value\": 1.0,\r\n    \"customscript/additional_networks.py/txt2img/Weight 5/minimum\": -1.0,\r\n    \"customscript/additional_networks.py/txt2img/Weight 5/maximum\": 2.0,\r\n 
   \"customscript/additional_networks.py/txt2img/Weight 5/step\": 0.05,\r\n    \"customscript/aesthetic.py/txt2img/Aesthetic weight/visible\": true,\r\n    \"customscript/aesthetic.py/txt2img/Aesthetic weight/value\": 0.9,\r\n    \"customscript/aesthetic.py/txt2img/Aesthetic weight/minimum\": 0,\r\n    \"customscript/aesthetic.py/txt2img/Aesthetic weight/maximum\": 1,\r\n    \"customscript/aesthetic.py/txt2img/Aesthetic weight/step\": 0.01,\r\n    \"customscript/aesthetic.py/txt2img/Aesthetic steps/visible\": true,\r\n    \"customscript/aesthetic.py/txt2img/Aesthetic steps/value\": 5,\r\n    \"customscript/aesthetic.py/txt2img/Aesthetic steps/minimum\": 0,\r\n    \"customscript/aesthetic.py/txt2img/Aesthetic steps/maximum\": 50,\r\n    \"customscript/aesthetic.py/txt2img/Aesthetic steps/step\": 1,\r\n    \"customscript/aesthetic.py/txt2img/Aesthetic learning rate/visible\": true,\r\n    \"customscript/aesthetic.py/txt2img/Aesthetic learning rate/value\": \"0.0001\",\r\n    \"customscript/aesthetic.py/txt2img/Slerp interpolation/visible\": true,\r\n    \"customscript/aesthetic.py/txt2img/Slerp interpolation/value\": false,\r\n    \"customscript/aesthetic.py/txt2img/Aesthetic imgs embedding/visible\": true,\r\n    \"customscript/aesthetic.py/txt2img/Aesthetic imgs embedding/value\": \"None\",\r\n    \"customscript/aesthetic.py/txt2img/Aesthetic text for imgs/visible\": true,\r\n    \"customscript/aesthetic.py/txt2img/Aesthetic text for imgs/value\": \"\",\r\n    \"customscript/aesthetic.py/txt2img/Slerp angle/visible\": true,\r\n    \"customscript/aesthetic.py/txt2img/Slerp angle/value\": 0.1,\r\n    \"customscript/aesthetic.py/txt2img/Slerp angle/minimum\": 0,\r\n    \"customscript/aesthetic.py/txt2img/Slerp angle/maximum\": 1,\r\n    \"customscript/aesthetic.py/txt2img/Slerp angle/step\": 0.01,\r\n    \"customscript/aesthetic.py/txt2img/Is negative text/visible\": true,\r\n    \"customscript/aesthetic.py/txt2img/Is negative text/value\": false,\r\n    
\"txt2img/Script/visible\": true,\r\n    \"txt2img/Script/value\": \"None\",\r\n    \"customscript/prompt_matrix.py/txt2img/Put variable parts at start of prompt/visible\": true,\r\n    \"customscript/prompt_matrix.py/txt2img/Put variable parts at start of prompt/value\": false,\r\n    \"customscript/prompt_matrix.py/txt2img/Use different seed for each picture/visible\": true,\r\n    \"customscript/prompt_matrix.py/txt2img/Use different seed for each picture/value\": false,\r\n    \"customscript/prompt_matrix.py/txt2img/Select prompt/visible\": true,\r\n    \"customscript/prompt_matrix.py/txt2img/Select prompt/value\": \"positive\",\r\n    \"customscript/prompt_matrix.py/txt2img/Select joining char/visible\": true,\r\n    \"customscript/prompt_matrix.py/txt2img/Select joining char/value\": \"comma\",\r\n    \"customscript/prompt_matrix.py/txt2img/Grid margins (px)/visible\": true,\r\n    \"customscript/prompt_matrix.py/txt2img/Grid margins (px)/value\": 0,\r\n    \"customscript/prompt_matrix.py/txt2img/Grid margins (px)/minimum\": 0,\r\n    \"customscript/prompt_matrix.py/txt2img/Grid margins (px)/maximum\": 500,\r\n    \"customscript/prompt_matrix.py/txt2img/Grid margins (px)/step\": 2,\r\n    \"customscript/prompts_from_file.py/txt2img/Iterate seed every line/visible\": true,\r\n    \"customscript/prompts_from_file.py/txt2img/Iterate seed every line/value\": false,\r\n    \"customscript/prompts_from_file.py/txt2img/Use same random seed for all lines/visible\": true,\r\n    \"customscript/prompts_from_file.py/txt2img/Use same random seed for all lines/value\": false,\r\n    \"customscript/prompts_from_file.py/txt2img/List of prompt inputs/visible\": true,\r\n    \"customscript/prompts_from_file.py/txt2img/List of prompt inputs/value\": \"\",\r\n    \"customscript/xyz_grid.py/txt2img/X type/visible\": true,\r\n    \"customscript/xyz_grid.py/txt2img/X type/value\": \"Seed\",\r\n    \"customscript/xyz_grid.py/txt2img/X values/visible\": true,\r\n    
\"customscript/xyz_grid.py/txt2img/X values/value\": \"\",\r\n    \"customscript/xyz_grid.py/txt2img/Y type/visible\": true,\r\n    \"customscript/xyz_grid.py/txt2img/Y type/value\": \"Nothing\",\r\n    \"customscript/xyz_grid.py/txt2img/Y values/visible\": true,\r\n    \"customscript/xyz_grid.py/txt2img/Y values/value\": \"\",\r\n    \"customscript/xyz_grid.py/txt2img/Z type/visible\": true,\r\n    \"customscript/xyz_grid.py/txt2img/Z type/value\": \"Nothing\",\r\n    \"customscript/xyz_grid.py/txt2img/Z values/visible\": true,\r\n    \"customscript/xyz_grid.py/txt2img/Z values/value\": \"\",\r\n    \"customscript/xyz_grid.py/txt2img/Draw legend/visible\": true,\r\n    \"customscript/xyz_grid.py/txt2img/Draw legend/value\": true,\r\n    \"customscript/xyz_grid.py/txt2img/Keep -1 for seeds/visible\": true,\r\n    \"customscript/xyz_grid.py/txt2img/Keep -1 for seeds/value\": false,\r\n    \"customscript/xyz_grid.py/txt2img/Include Sub Images/visible\": true,\r\n    \"customscript/xyz_grid.py/txt2img/Include Sub Images/value\": false,\r\n    \"customscript/xyz_grid.py/txt2img/Include Sub Grids/visible\": true,\r\n    \"customscript/xyz_grid.py/txt2img/Include Sub Grids/value\": false,\r\n    \"customscript/xyz_grid.py/txt2img/Grid margins (px)/visible\": true,\r\n    \"customscript/xyz_grid.py/txt2img/Grid margins (px)/value\": 0,\r\n    \"customscript/xyz_grid.py/txt2img/Grid margins (px)/minimum\": 0,\r\n    \"customscript/xyz_grid.py/txt2img/Grid margins (px)/maximum\": 500,\r\n    \"customscript/xyz_grid.py/txt2img/Grid margins (px)/step\": 2,\r\n    \"img2img/Prompt/visible\": true,\r\n    \"img2img/Prompt/value\": \"\",\r\n    \"img2img/Negative prompt/visible\": true,\r\n    \"img2img/Negative prompt/value\": \"\",\r\n    \"img2img/Styles/visible\": true,\r\n    \"img2img/Styles/value\": [],\r\n    \"img2img/Input directory/visible\": true,\r\n    \"img2img/Input directory/value\": \"\",\r\n    \"img2img/Output directory/visible\": true,\r\n    
\"img2img/Output directory/value\": \"\",\r\n    \"img2img/Inpaint batch mask directory (required for inpaint batch processing only)/visible\": true,\r\n    \"img2img/Inpaint batch mask directory (required for inpaint batch processing only)/value\": \"\",\r\n    \"img2img/Resize mode/visible\": true,\r\n    \"img2img/Resize mode/value\": \"Just resize\",\r\n    \"img2img/Mask blur/visible\": true,\r\n    \"img2img/Mask blur/value\": 4,\r\n    \"img2img/Mask blur/minimum\": 0,\r\n    \"img2img/Mask blur/maximum\": 64,\r\n    \"img2img/Mask blur/step\": 1,\r\n    \"img2img/Mask transparency/value\": 0,\r\n    \"img2img/Mask transparency/minimum\": 0,\r\n    \"img2img/Mask transparency/maximum\": 100,\r\n    \"img2img/Mask transparency/step\": 1,\r\n    \"img2img/Mask mode/visible\": true,\r\n    \"img2img/Mask mode/value\": \"Inpaint masked\",\r\n    \"img2img/Masked content/visible\": true,\r\n    \"img2img/Masked content/value\": \"original\",\r\n    \"img2img/Inpaint area/visible\": true,\r\n    \"img2img/Inpaint area/value\": \"Whole picture\",\r\n    \"img2img/Only masked padding, pixels/visible\": true,\r\n    \"img2img/Only masked padding, pixels/value\": 32,\r\n    \"img2img/Only masked padding, pixels/minimum\": 0,\r\n    \"img2img/Only masked padding, pixels/maximum\": 256,\r\n    \"img2img/Only masked padding, pixels/step\": 4,\r\n    \"img2img/Sampling method/visible\": true,\r\n    \"img2img/Sampling method/value\": \"Euler a\",\r\n    \"img2img/Sampling steps/visible\": true,\r\n    \"img2img/Sampling steps/value\": 20,\r\n    \"img2img/Sampling steps/minimum\": 1,\r\n    \"img2img/Sampling steps/maximum\": 150,\r\n    \"img2img/Sampling steps/step\": 1,\r\n    \"img2img/Width/visible\": true,\r\n    \"img2img/Width/value\": 512,\r\n    \"img2img/Width/minimum\": 64,\r\n    \"img2img/Width/maximum\": 2048,\r\n    \"img2img/Width/step\": 8,\r\n    \"img2img/Height/visible\": true,\r\n    \"img2img/Height/value\": 512,\r\n    \"img2img/Height/minimum\": 
64,\r\n    \"img2img/Height/maximum\": 2048,\r\n    \"img2img/Height/step\": 8,\r\n    \"img2img/Batch count/visible\": true,\r\n    \"img2img/Batch count/value\": 1,\r\n    \"img2img/Batch count/minimum\": 1,\r\n    \"img2img/Batch count/maximum\": 100,\r\n    \"img2img/Batch count/step\": 1,\r\n    \"img2img/Batch size/visible\": true,\r\n    \"img2img/Batch size/value\": 1,\r\n    \"img2img/Batch size/minimum\": 1,\r\n    \"img2img/Batch size/maximum\": 8,\r\n    \"img2img/Batch size/step\": 1,\r\n    \"img2img/CFG Scale/visible\": true,\r\n    \"img2img/CFG Scale/value\": 7.0,\r\n    \"img2img/CFG Scale/minimum\": 1.0,\r\n    \"img2img/CFG Scale/maximum\": 30.0,\r\n    \"img2img/CFG Scale/step\": 0.5,\r\n    \"img2img/Image CFG Scale/value\": 1.5,\r\n    \"img2img/Image CFG Scale/minimum\": 0,\r\n    \"img2img/Image CFG Scale/maximum\": 3.0,\r\n    \"img2img/Image CFG Scale/step\": 0.05,\r\n    \"img2img/Denoising strength/visible\": true,\r\n    \"img2img/Denoising strength/value\": 0.75,\r\n    \"img2img/Denoising strength/minimum\": 0.0,\r\n    \"img2img/Denoising strength/maximum\": 1.0,\r\n    \"img2img/Denoising strength/step\": 0.01,\r\n    \"img2img/Seed/visible\": true,\r\n    \"img2img/Seed/value\": -1.0,\r\n    \"img2img/Extra/visible\": true,\r\n    \"img2img/Extra/value\": false,\r\n    \"img2img/Variation seed/visible\": true,\r\n    \"img2img/Variation seed/value\": -1.0,\r\n    \"img2img/Variation strength/visible\": true,\r\n    \"img2img/Variation strength/value\": 0.0,\r\n    \"img2img/Variation strength/minimum\": 0,\r\n    \"img2img/Variation strength/maximum\": 1,\r\n    \"img2img/Variation strength/step\": 0.01,\r\n    \"img2img/Resize seed from width/visible\": true,\r\n    \"img2img/Resize seed from width/value\": 0,\r\n    \"img2img/Resize seed from width/minimum\": 0,\r\n    \"img2img/Resize seed from width/maximum\": 2048,\r\n    \"img2img/Resize seed from width/step\": 8,\r\n    \"img2img/Resize seed from height/visible\": true,\r\n 
   \"img2img/Resize seed from height/value\": 0,\r\n    \"img2img/Resize seed from height/minimum\": 0,\r\n    \"img2img/Resize seed from height/maximum\": 2048,\r\n    \"img2img/Resize seed from height/step\": 8,\r\n    \"img2img/Restore faces/visible\": true,\r\n    \"img2img/Restore faces/value\": false,\r\n    \"img2img/Tiling/visible\": true,\r\n    \"img2img/Tiling/value\": false,\r\n    \"img2img/Override settings/value\": null,\r\n    \"customscript/additional_networks.py/img2img/Enable/visible\": true,\r\n    \"customscript/additional_networks.py/img2img/Enable/value\": false,\r\n    \"customscript/additional_networks.py/img2img/Network module 1/visible\": true,\r\n    \"customscript/additional_networks.py/img2img/Network module 1/value\": \"LoRA\",\r\n    \"customscript/additional_networks.py/img2img/Model 1/visible\": true,\r\n    \"customscript/additional_networks.py/img2img/Model 1/value\": \"None\",\r\n    \"customscript/additional_networks.py/img2img/Weight 1/visible\": true,\r\n    \"customscript/additional_networks.py/img2img/Weight 1/value\": 1.0,\r\n    \"customscript/additional_networks.py/img2img/Weight 1/minimum\": -1.0,\r\n    \"customscript/additional_networks.py/img2img/Weight 1/maximum\": 2.0,\r\n    \"customscript/additional_networks.py/img2img/Weight 1/step\": 0.05,\r\n    \"customscript/additional_networks.py/img2img/Network module 2/visible\": true,\r\n    \"customscript/additional_networks.py/img2img/Network module 2/value\": \"LoRA\",\r\n    \"customscript/additional_networks.py/img2img/Model 2/visible\": true,\r\n    \"customscript/additional_networks.py/img2img/Model 2/value\": \"None\",\r\n    \"customscript/additional_networks.py/img2img/Weight 2/visible\": true,\r\n    \"customscript/additional_networks.py/img2img/Weight 2/value\": 1.0,\r\n    \"customscript/additional_networks.py/img2img/Weight 2/minimum\": -1.0,\r\n    \"customscript/additional_networks.py/img2img/Weight 2/maximum\": 2.0,\r\n    
\"customscript/additional_networks.py/img2img/Weight 2/step\": 0.05,\r\n    \"customscript/additional_networks.py/img2img/Network module 3/visible\": true,\r\n    \"customscript/additional_networks.py/img2img/Network module 3/value\": \"LoRA\",\r\n    \"customscript/additional_networks.py/img2img/Model 3/visible\": true,\r\n    \"customscript/additional_networks.py/img2img/Model 3/value\": \"None\",\r\n    \"customscript/additional_networks.py/img2img/Weight 3/visible\": true,\r\n    \"customscript/additional_networks.py/img2img/Weight 3/value\": 1.0,\r\n    \"customscript/additional_networks.py/img2img/Weight 3/minimum\": -1.0,\r\n    \"customscript/additional_networks.py/img2img/Weight 3/maximum\": 2.0,\r\n    \"customscript/additional_networks.py/img2img/Weight 3/step\": 0.05,\r\n    \"customscript/additional_networks.py/img2img/Network module 4/visible\": true,\r\n    \"customscript/additional_networks.py/img2img/Network module 4/value\": \"LoRA\",\r\n    \"customscript/additional_networks.py/img2img/Model 4/visible\": true,\r\n    \"customscript/additional_networks.py/img2img/Model 4/value\": \"None\",\r\n    \"customscript/additional_networks.py/img2img/Weight 4/visible\": true,\r\n    \"customscript/additional_networks.py/img2img/Weight 4/value\": 1.0,\r\n    \"customscript/additional_networks.py/img2img/Weight 4/minimum\": -1.0,\r\n    \"customscript/additional_networks.py/img2img/Weight 4/maximum\": 2.0,\r\n    \"customscript/additional_networks.py/img2img/Weight 4/step\": 0.05,\r\n    \"customscript/additional_networks.py/img2img/Network module 5/visible\": true,\r\n    \"customscript/additional_networks.py/img2img/Network module 5/value\": \"LoRA\",\r\n    \"customscript/additional_networks.py/img2img/Model 5/visible\": true,\r\n    \"customscript/additional_networks.py/img2img/Model 5/value\": \"None\",\r\n    \"customscript/additional_networks.py/img2img/Weight 5/visible\": true,\r\n    \"customscript/additional_networks.py/img2img/Weight 5/value\": 
1.0,\r\n    \"customscript/additional_networks.py/img2img/Weight 5/minimum\": -1.0,\r\n    \"customscript/additional_networks.py/img2img/Weight 5/maximum\": 2.0,\r\n    \"customscript/additional_networks.py/img2img/Weight 5/step\": 0.05,\r\n    \"customscript/aesthetic.py/img2img/Aesthetic weight/visible\": true,\r\n    \"customscript/aesthetic.py/img2img/Aesthetic weight/value\": 0.9,\r\n    \"customscript/aesthetic.py/img2img/Aesthetic weight/minimum\": 0,\r\n    \"customscript/aesthetic.py/img2img/Aesthetic weight/maximum\": 1,\r\n    \"customscript/aesthetic.py/img2img/Aesthetic weight/step\": 0.01,\r\n    \"customscript/aesthetic.py/img2img/Aesthetic steps/visible\": true,\r\n    \"customscript/aesthetic.py/img2img/Aesthetic steps/value\": 5,\r\n    \"customscript/aesthetic.py/img2img/Aesthetic steps/minimum\": 0,\r\n    \"customscript/aesthetic.py/img2img/Aesthetic steps/maximum\": 50,\r\n    \"customscript/aesthetic.py/img2img/Aesthetic steps/step\": 1,\r\n    \"customscript/aesthetic.py/img2img/Aesthetic learning rate/visible\": true,\r\n    \"customscript/aesthetic.py/img2img/Aesthetic learning rate/value\": \"0.0001\",\r\n    \"customscript/aesthetic.py/img2img/Slerp interpolation/visible\": true,\r\n    \"customscript/aesthetic.py/img2img/Slerp interpolation/value\": false,\r\n    \"customscript/aesthetic.py/img2img/Aesthetic imgs embedding/visible\": true,\r\n    \"customscript/aesthetic.py/img2img/Aesthetic imgs embedding/value\": \"None\",\r\n    \"customscript/aesthetic.py/img2img/Aesthetic text for imgs/visible\": true,\r\n    \"customscript/aesthetic.py/img2img/Aesthetic text for imgs/value\": \"\",\r\n    \"customscript/aesthetic.py/img2img/Slerp angle/visible\": true,\r\n    \"customscript/aesthetic.py/img2img/Slerp angle/value\": 0.1,\r\n    \"customscript/aesthetic.py/img2img/Slerp angle/minimum\": 0,\r\n    \"customscript/aesthetic.py/img2img/Slerp angle/maximum\": 1,\r\n    \"customscript/aesthetic.py/img2img/Slerp angle/step\": 0.01,\r\n    
\"customscript/aesthetic.py/img2img/Is negative text/visible\": true,\r\n    \"customscript/aesthetic.py/img2img/Is negative text/value\": false,\r\n    \"img2img/Script/visible\": true,\r\n    \"img2img/Script/value\": \"None\",\r\n    \"customscript/img2imgalt.py/img2img/Override `Sampling method` to Euler?(this method is built for it)/visible\": true,\r\n    \"customscript/img2imgalt.py/img2img/Override `Sampling method` to Euler?(this method is built for it)/value\": true,\r\n    \"customscript/img2imgalt.py/img2img/Override `prompt` to the same value as `original prompt`?(and `negative prompt`)/visible\": true,\r\n    \"customscript/img2imgalt.py/img2img/Override `prompt` to the same value as `original prompt`?(and `negative prompt`)/value\": true,\r\n    \"customscript/img2imgalt.py/img2img/Original prompt/visible\": true,\r\n    \"customscript/img2imgalt.py/img2img/Original prompt/value\": \"\",\r\n    \"customscript/img2imgalt.py/img2img/Original negative prompt/visible\": true,\r\n    \"customscript/img2imgalt.py/img2img/Original negative prompt/value\": \"\",\r\n    \"customscript/img2imgalt.py/img2img/Override `Sampling Steps` to the same value as `Decode steps`?/visible\": true,\r\n    \"customscript/img2imgalt.py/img2img/Override `Sampling Steps` to the same value as `Decode steps`?/value\": true,\r\n    \"customscript/img2imgalt.py/img2img/Decode steps/visible\": true,\r\n    \"customscript/img2imgalt.py/img2img/Decode steps/value\": 50,\r\n    \"customscript/img2imgalt.py/img2img/Decode steps/minimum\": 1,\r\n    \"customscript/img2imgalt.py/img2img/Decode steps/maximum\": 150,\r\n    \"customscript/img2imgalt.py/img2img/Decode steps/step\": 1,\r\n    \"customscript/img2imgalt.py/img2img/Override `Denoising strength` to 1?/visible\": true,\r\n    \"customscript/img2imgalt.py/img2img/Override `Denoising strength` to 1?/value\": true,\r\n    \"customscript/img2imgalt.py/img2img/Decode CFG scale/visible\": true,\r\n    
\"customscript/img2imgalt.py/img2img/Decode CFG scale/value\": 1.0,\r\n    \"customscript/img2imgalt.py/img2img/Decode CFG scale/minimum\": 0.0,\r\n    \"customscript/img2imgalt.py/img2img/Decode CFG scale/maximum\": 15.0,\r\n    \"customscript/img2imgalt.py/img2img/Decode CFG scale/step\": 0.1,\r\n    \"customscript/img2imgalt.py/img2img/Randomness/visible\": true,\r\n    \"customscript/img2imgalt.py/img2img/Randomness/value\": 0.0,\r\n    \"customscript/img2imgalt.py/img2img/Randomness/minimum\": 0.0,\r\n    \"customscript/img2imgalt.py/img2img/Randomness/maximum\": 1.0,\r\n    \"customscript/img2imgalt.py/img2img/Randomness/step\": 0.01,\r\n    \"customscript/img2imgalt.py/img2img/Sigma adjustment for finding noise for image/visible\": true,\r\n    \"customscript/img2imgalt.py/img2img/Sigma adjustment for finding noise for image/value\": false,\r\n    \"customscript/loopback.py/img2img/Loops/visible\": true,\r\n    \"customscript/loopback.py/img2img/Loops/value\": 4,\r\n    \"customscript/loopback.py/img2img/Loops/minimum\": 1,\r\n    \"customscript/loopback.py/img2img/Loops/maximum\": 32,\r\n    \"customscript/loopback.py/img2img/Loops/step\": 1,\r\n    \"customscript/loopback.py/img2img/Denoising strength change factor/visible\": true,\r\n    \"customscript/loopback.py/img2img/Denoising strength change factor/value\": 1,\r\n    \"customscript/loopback.py/img2img/Denoising strength change factor/minimum\": 0.9,\r\n    \"customscript/loopback.py/img2img/Denoising strength change factor/maximum\": 1.1,\r\n    \"customscript/loopback.py/img2img/Denoising strength change factor/step\": 0.01,\r\n    \"customscript/loopback.py/img2img/Append interrogated prompt at each iteration/visible\": true,\r\n    \"customscript/loopback.py/img2img/Append interrogated prompt at each iteration/value\": \"None\",\r\n    \"customscript/outpainting_mk_2.py/img2img/Pixels to expand/visible\": true,\r\n    \"customscript/outpainting_mk_2.py/img2img/Pixels to expand/value\": 128,\r\n   
 \"customscript/outpainting_mk_2.py/img2img/Pixels to expand/minimum\": 8,\r\n    \"customscript/outpainting_mk_2.py/img2img/Pixels to expand/maximum\": 256,\r\n    \"customscript/outpainting_mk_2.py/img2img/Pixels to expand/step\": 8,\r\n    \"customscript/outpainting_mk_2.py/img2img/Mask blur/visible\": true,\r\n    \"customscript/outpainting_mk_2.py/img2img/Mask blur/value\": 8,\r\n    \"customscript/outpainting_mk_2.py/img2img/Mask blur/minimum\": 0,\r\n    \"customscript/outpainting_mk_2.py/img2img/Mask blur/maximum\": 64,\r\n    \"customscript/outpainting_mk_2.py/img2img/Mask blur/step\": 1,\r\n    \"customscript/outpainting_mk_2.py/img2img/Fall-off exponent (lower=higher detail)/visible\": true,\r\n    \"customscript/outpainting_mk_2.py/img2img/Fall-off exponent (lower=higher detail)/value\": 1.0,\r\n    \"customscript/outpainting_mk_2.py/img2img/Fall-off exponent (lower=higher detail)/minimum\": 0.0,\r\n    \"customscript/outpainting_mk_2.py/img2img/Fall-off exponent (lower=higher detail)/maximum\": 4.0,\r\n    \"customscript/outpainting_mk_2.py/img2img/Fall-off exponent (lower=higher detail)/step\": 0.01,\r\n    \"customscript/outpainting_mk_2.py/img2img/Color variation/visible\": true,\r\n    \"customscript/outpainting_mk_2.py/img2img/Color variation/value\": 0.05,\r\n    \"customscript/outpainting_mk_2.py/img2img/Color variation/minimum\": 0.0,\r\n    \"customscript/outpainting_mk_2.py/img2img/Color variation/maximum\": 1.0,\r\n    \"customscript/outpainting_mk_2.py/img2img/Color variation/step\": 0.01,\r\n    \"customscript/poor_mans_outpainting.py/img2img/Pixels to expand/visible\": true,\r\n    \"customscript/poor_mans_outpainting.py/img2img/Pixels to expand/value\": 128,\r\n    \"customscript/poor_mans_outpainting.py/img2img/Pixels to expand/minimum\": 8,\r\n    \"customscript/poor_mans_outpainting.py/img2img/Pixels to expand/maximum\": 256,\r\n    \"customscript/poor_mans_outpainting.py/img2img/Pixels to expand/step\": 8,\r\n    
\"customscript/poor_mans_outpainting.py/img2img/Mask blur/visible\": true,\r\n    \"customscript/poor_mans_outpainting.py/img2img/Mask blur/value\": 4,\r\n    \"customscript/poor_mans_outpainting.py/img2img/Mask blur/minimum\": 0,\r\n    \"customscript/poor_mans_outpainting.py/img2img/Mask blur/maximum\": 64,\r\n    \"customscript/poor_mans_outpainting.py/img2img/Mask blur/step\": 1,\r\n    \"customscript/poor_mans_outpainting.py/img2img/Masked content/visible\": true,\r\n    \"customscript/poor_mans_outpainting.py/img2img/Masked content/value\": \"fill\",\r\n    \"customscript/prompt_matrix.py/img2img/Put variable parts at start of prompt/visible\": true,\r\n    \"customscript/prompt_matrix.py/img2img/Put variable parts at start of prompt/value\": false,\r\n    \"customscript/prompt_matrix.py/img2img/Use different seed for each picture/visible\": true,\r\n    \"customscript/prompt_matrix.py/img2img/Use different seed for each picture/value\": false,\r\n    \"customscript/prompt_matrix.py/img2img/Select prompt/visible\": true,\r\n    \"customscript/prompt_matrix.py/img2img/Select prompt/value\": \"positive\",\r\n    \"customscript/prompt_matrix.py/img2img/Select joining char/visible\": true,\r\n    \"customscript/prompt_matrix.py/img2img/Select joining char/value\": \"comma\",\r\n    \"customscript/prompt_matrix.py/img2img/Grid margins (px)/visible\": true,\r\n    \"customscript/prompt_matrix.py/img2img/Grid margins (px)/value\": 0,\r\n    \"customscript/prompt_matrix.py/img2img/Grid margins (px)/minimum\": 0,\r\n    \"customscript/prompt_matrix.py/img2img/Grid margins (px)/maximum\": 500,\r\n    \"customscript/prompt_matrix.py/img2img/Grid margins (px)/step\": 2,\r\n    \"customscript/prompts_from_file.py/img2img/Iterate seed every line/visible\": true,\r\n    \"customscript/prompts_from_file.py/img2img/Iterate seed every line/value\": false,\r\n    \"customscript/prompts_from_file.py/img2img/Use same random seed for all lines/visible\": true,\r\n    
\"customscript/prompts_from_file.py/img2img/Use same random seed for all lines/value\": false,\r\n    \"customscript/prompts_from_file.py/img2img/List of prompt inputs/visible\": true,\r\n    \"customscript/prompts_from_file.py/img2img/List of prompt inputs/value\": \"\",\r\n    \"customscript/sd_upscale.py/img2img/Tile overlap/visible\": true,\r\n    \"customscript/sd_upscale.py/img2img/Tile overlap/value\": 64,\r\n    \"customscript/sd_upscale.py/img2img/Tile overlap/minimum\": 0,\r\n    \"customscript/sd_upscale.py/img2img/Tile overlap/maximum\": 256,\r\n    \"customscript/sd_upscale.py/img2img/Tile overlap/step\": 16,\r\n    \"customscript/sd_upscale.py/img2img/Scale Factor/visible\": true,\r\n    \"customscript/sd_upscale.py/img2img/Scale Factor/value\": 2.0,\r\n    \"customscript/sd_upscale.py/img2img/Scale Factor/minimum\": 1.0,\r\n    \"customscript/sd_upscale.py/img2img/Scale Factor/maximum\": 4.0,\r\n    \"customscript/sd_upscale.py/img2img/Scale Factor/step\": 0.05,\r\n    \"customscript/sd_upscale.py/img2img/Upscaler/visible\": true,\r\n    \"customscript/sd_upscale.py/img2img/Upscaler/value\": \"None\",\r\n    \"customscript/xyz_grid.py/img2img/X type/visible\": true,\r\n    \"customscript/xyz_grid.py/img2img/X type/value\": \"Seed\",\r\n    \"customscript/xyz_grid.py/img2img/X values/visible\": true,\r\n    \"customscript/xyz_grid.py/img2img/X values/value\": \"\",\r\n    \"customscript/xyz_grid.py/img2img/Y type/visible\": true,\r\n    \"customscript/xyz_grid.py/img2img/Y type/value\": \"Nothing\",\r\n    \"customscript/xyz_grid.py/img2img/Y values/visible\": true,\r\n    \"customscript/xyz_grid.py/img2img/Y values/value\": \"\",\r\n    \"customscript/xyz_grid.py/img2img/Z type/visible\": true,\r\n    \"customscript/xyz_grid.py/img2img/Z type/value\": \"Nothing\",\r\n    \"customscript/xyz_grid.py/img2img/Z values/visible\": true,\r\n    \"customscript/xyz_grid.py/img2img/Z values/value\": \"\",\r\n    \"customscript/xyz_grid.py/img2img/Draw 
legend/visible\": true,\r\n    \"customscript/xyz_grid.py/img2img/Draw legend/value\": true,\r\n    \"customscript/xyz_grid.py/img2img/Keep -1 for seeds/visible\": true,\r\n    \"customscript/xyz_grid.py/img2img/Keep -1 for seeds/value\": false,\r\n    \"customscript/xyz_grid.py/img2img/Include Sub Images/visible\": true,\r\n    \"customscript/xyz_grid.py/img2img/Include Sub Images/value\": false,\r\n    \"customscript/xyz_grid.py/img2img/Include Sub Grids/visible\": true,\r\n    \"customscript/xyz_grid.py/img2img/Include Sub Grids/value\": false,\r\n    \"customscript/xyz_grid.py/img2img/Grid margins (px)/visible\": true,\r\n    \"customscript/xyz_grid.py/img2img/Grid margins (px)/value\": 0,\r\n    \"customscript/xyz_grid.py/img2img/Grid margins (px)/minimum\": 0,\r\n    \"customscript/xyz_grid.py/img2img/Grid margins (px)/maximum\": 500,\r\n    \"customscript/xyz_grid.py/img2img/Grid margins (px)/step\": 2,\r\n    \"extras/Input directory/visible\": true,\r\n    \"extras/Input directory/value\": \"\",\r\n    \"extras/Output directory/visible\": true,\r\n    \"extras/Output directory/value\": \"\",\r\n    \"extras/Show result images/visible\": true,\r\n    \"extras/Show result images/value\": true,\r\n    \"customscript/postprocessing_upscale.py/extras/Resize/visible\": true,\r\n    \"customscript/postprocessing_upscale.py/extras/Resize/value\": 4,\r\n    \"customscript/postprocessing_upscale.py/extras/Resize/minimum\": 1.0,\r\n    \"customscript/postprocessing_upscale.py/extras/Resize/maximum\": 8.0,\r\n    \"customscript/postprocessing_upscale.py/extras/Resize/step\": 0.05,\r\n    \"customscript/postprocessing_upscale.py/extras/Width/visible\": true,\r\n    \"customscript/postprocessing_upscale.py/extras/Width/value\": 512,\r\n    \"customscript/postprocessing_upscale.py/extras/Height/visible\": true,\r\n    \"customscript/postprocessing_upscale.py/extras/Height/value\": 512,\r\n    \"customscript/postprocessing_upscale.py/extras/Crop to fit/visible\": 
true,\r\n    \"customscript/postprocessing_upscale.py/extras/Crop to fit/value\": true,\r\n    \"customscript/postprocessing_upscale.py/extras/Upscaler 1/visible\": true,\r\n    \"customscript/postprocessing_upscale.py/extras/Upscaler 1/value\": \"None\",\r\n    \"customscript/postprocessing_upscale.py/extras/Upscaler 2/visible\": true,\r\n    \"customscript/postprocessing_upscale.py/extras/Upscaler 2/value\": \"None\",\r\n    \"customscript/postprocessing_upscale.py/extras/Upscaler 2 visibility/visible\": true,\r\n    \"customscript/postprocessing_upscale.py/extras/Upscaler 2 visibility/value\": 0.0,\r\n    \"customscript/postprocessing_upscale.py/extras/Upscaler 2 visibility/minimum\": 0.0,\r\n    \"customscript/postprocessing_upscale.py/extras/Upscaler 2 visibility/maximum\": 1.0,\r\n    \"customscript/postprocessing_upscale.py/extras/Upscaler 2 visibility/step\": 0.001,\r\n    \"customscript/postprocessing_gfpgan.py/extras/GFPGAN visibility/visible\": true,\r\n    \"customscript/postprocessing_gfpgan.py/extras/GFPGAN visibility/value\": 0,\r\n    \"customscript/postprocessing_gfpgan.py/extras/GFPGAN visibility/minimum\": 0.0,\r\n    \"customscript/postprocessing_gfpgan.py/extras/GFPGAN visibility/maximum\": 1.0,\r\n    \"customscript/postprocessing_gfpgan.py/extras/GFPGAN visibility/step\": 0.001,\r\n    \"customscript/postprocessing_codeformer.py/extras/CodeFormer visibility/visible\": true,\r\n    \"customscript/postprocessing_codeformer.py/extras/CodeFormer visibility/value\": 0,\r\n    \"customscript/postprocessing_codeformer.py/extras/CodeFormer visibility/minimum\": 0.0,\r\n    \"customscript/postprocessing_codeformer.py/extras/CodeFormer visibility/maximum\": 1.0,\r\n    \"customscript/postprocessing_codeformer.py/extras/CodeFormer visibility/step\": 0.001,\r\n    \"customscript/postprocessing_codeformer.py/extras/CodeFormer weight (0 = maximum effect, 1 = minimum effect)/visible\": true,\r\n    
\"customscript/postprocessing_codeformer.py/extras/CodeFormer weight (0 = maximum effect, 1 = minimum effect)/value\": 0,\r\n    \"customscript/postprocessing_codeformer.py/extras/CodeFormer weight (0 = maximum effect, 1 = minimum effect)/minimum\": 0.0,\r\n    \"customscript/postprocessing_codeformer.py/extras/CodeFormer weight (0 = maximum effect, 1 = minimum effect)/maximum\": 1.0,\r\n    \"customscript/postprocessing_codeformer.py/extras/CodeFormer weight (0 = maximum effect, 1 = minimum effect)/step\": 0.001,\r\n    \"modelmerger/Primary model (A)/visible\": true,\r\n    \"modelmerger/Primary model (A)/value\": null,\r\n    \"modelmerger/Secondary model (B)/visible\": true,\r\n    \"modelmerger/Secondary model (B)/value\": null,\r\n    \"modelmerger/Tertiary model (C)/visible\": true,\r\n    \"modelmerger/Tertiary model (C)/value\": null,\r\n    \"modelmerger/Custom Name (Optional)/visible\": true,\r\n    \"modelmerger/Custom Name (Optional)/value\": \"\",\r\n    \"modelmerger/Multiplier (M) - set to 0 to get model A/visible\": true,\r\n    \"modelmerger/Multiplier (M) - set to 0 to get model A/value\": 0.3,\r\n    \"modelmerger/Multiplier (M) - set to 0 to get model A/minimum\": 0.0,\r\n    \"modelmerger/Multiplier (M) - set to 0 to get model A/maximum\": 1.0,\r\n    \"modelmerger/Multiplier (M) - set to 0 to get model A/step\": 0.05,\r\n    \"modelmerger/Interpolation Method/visible\": true,\r\n    \"modelmerger/Interpolation Method/value\": \"Weighted sum\",\r\n    \"modelmerger/Checkpoint format/visible\": true,\r\n    \"modelmerger/Checkpoint format/value\": \"ckpt\",\r\n    \"modelmerger/Save as float16/visible\": true,\r\n    \"modelmerger/Save as float16/value\": false,\r\n    \"modelmerger/Copy config from/visible\": true,\r\n    \"modelmerger/Copy config from/value\": \"A, B or C\",\r\n    \"modelmerger/Bake in VAE/visible\": true,\r\n    \"modelmerger/Bake in VAE/value\": \"None\",\r\n    \"modelmerger/Discard weights with matching name/visible\": 
true,\r\n    \"modelmerger/Discard weights with matching name/value\": \"\",\r\n    \"train/Name/visible\": true,\r\n    \"train/Name/value\": \"\",\r\n    \"train/Initialization text/visible\": true,\r\n    \"train/Initialization text/value\": \"*\",\r\n    \"train/Number of vectors per token/visible\": true,\r\n    \"train/Number of vectors per token/value\": 1,\r\n    \"train/Number of vectors per token/minimum\": 1,\r\n    \"train/Number of vectors per token/maximum\": 75,\r\n    \"train/Number of vectors per token/step\": 1,\r\n    \"train/Overwrite Old Embedding/visible\": true,\r\n    \"train/Overwrite Old Embedding/value\": false,\r\n    \"train/Enter hypernetwork layer structure/visible\": true,\r\n    \"train/Enter hypernetwork layer structure/value\": \"1, 2, 1\",\r\n    \"train/Select activation function of hypernetwork. Recommended : Swish / Linear(none)/visible\": true,\r\n    \"train/Select activation function of hypernetwork. Recommended : Swish / Linear(none)/value\": \"linear\",\r\n    \"train/Select Layer weights initialization. Recommended: Kaiming for relu-like, Xavier for sigmoid-like, Normal otherwise/visible\": true,\r\n    \"train/Select Layer weights initialization. Recommended: Kaiming for relu-like, Xavier for sigmoid-like, Normal otherwise/value\": \"Normal\",\r\n    \"train/Add layer normalization/visible\": true,\r\n    \"train/Add layer normalization/value\": false,\r\n    \"train/Use dropout/visible\": true,\r\n    \"train/Use dropout/value\": false,\r\n    \"train/Enter hypernetwork Dropout structure (or empty). Recommended : 0~0.35 incrementing sequence: 0, 0.05, 0.15/visible\": true,\r\n    \"train/Enter hypernetwork Dropout structure (or empty). 
Recommended : 0~0.35 incrementing sequence: 0, 0.05, 0.15/value\": \"0, 0, 0\",\r\n    \"train/Overwrite Old Hypernetwork/visible\": true,\r\n    \"train/Overwrite Old Hypernetwork/value\": false,\r\n    \"train/Source directory/visible\": true,\r\n    \"train/Source directory/value\": \"\",\r\n    \"train/Destination directory/visible\": true,\r\n    \"train/Destination directory/value\": \"\",\r\n    \"train/Width/visible\": true,\r\n    \"train/Width/value\": 512,\r\n    \"train/Width/minimum\": 64,\r\n    \"train/Width/maximum\": 2048,\r\n    \"train/Width/step\": 8,\r\n    \"train/Height/visible\": true,\r\n    \"train/Height/value\": 512,\r\n    \"train/Height/minimum\": 64,\r\n    \"train/Height/maximum\": 2048,\r\n    \"train/Height/step\": 8,\r\n    \"train/Existing Caption txt Action/visible\": true,\r\n    \"train/Existing Caption txt Action/value\": \"ignore\",\r\n    \"train/Create flipped copies/visible\": true,\r\n    \"train/Create flipped copies/value\": false,\r\n    \"train/Split oversized images/visible\": true,\r\n    \"train/Split oversized images/value\": false,\r\n    \"train/Auto focal point crop/visible\": true,\r\n    \"train/Auto focal point crop/value\": false,\r\n    \"train/Auto-sized crop/visible\": true,\r\n    \"train/Auto-sized crop/value\": false,\r\n    \"train/Use BLIP for caption/visible\": true,\r\n    \"train/Use BLIP for caption/value\": false,\r\n    \"train/Use deepbooru for caption/visible\": true,\r\n    \"train/Use deepbooru for caption/value\": false,\r\n    \"train/Split image threshold/visible\": true,\r\n    \"train/Split image threshold/value\": 0.5,\r\n    \"train/Split image threshold/minimum\": 0.0,\r\n    \"train/Split image threshold/maximum\": 1.0,\r\n    \"train/Split image threshold/step\": 0.05,\r\n    \"train/Split image overlap ratio/visible\": true,\r\n    \"train/Split image overlap ratio/value\": 0.2,\r\n    \"train/Split image overlap ratio/minimum\": 0.0,\r\n    \"train/Split image overlap 
ratio/maximum\": 0.9,\r\n    \"train/Split image overlap ratio/step\": 0.05,\r\n    \"train/Focal point face weight/visible\": true,\r\n    \"train/Focal point face weight/value\": 0.9,\r\n    \"train/Focal point face weight/minimum\": 0.0,\r\n    \"train/Focal point face weight/maximum\": 1.0,\r\n    \"train/Focal point face weight/step\": 0.05,\r\n    \"train/Focal point entropy weight/visible\": true,\r\n    \"train/Focal point entropy weight/value\": 0.15,\r\n    \"train/Focal point entropy weight/minimum\": 0.0,\r\n    \"train/Focal point entropy weight/maximum\": 1.0,\r\n    \"train/Focal point entropy weight/step\": 0.05,\r\n    \"train/Focal point edges weight/visible\": true,\r\n    \"train/Focal point edges weight/value\": 0.5,\r\n    \"train/Focal point edges weight/minimum\": 0.0,\r\n    \"train/Focal point edges weight/maximum\": 1.0,\r\n    \"train/Focal point edges weight/step\": 0.05,\r\n    \"train/Create debug image/visible\": true,\r\n    \"train/Create debug image/value\": false,\r\n    \"train/Dimension lower bound/visible\": true,\r\n    \"train/Dimension lower bound/value\": 384,\r\n    \"train/Dimension lower bound/minimum\": 64,\r\n    \"train/Dimension lower bound/maximum\": 2048,\r\n    \"train/Dimension lower bound/step\": 8,\r\n    \"train/Dimension upper bound/visible\": true,\r\n    \"train/Dimension upper bound/value\": 768,\r\n    \"train/Dimension upper bound/minimum\": 64,\r\n    \"train/Dimension upper bound/maximum\": 2048,\r\n    \"train/Dimension upper bound/step\": 8,\r\n    \"train/Area lower bound/visible\": true,\r\n    \"train/Area lower bound/value\": 4096,\r\n    \"train/Area lower bound/minimum\": 4096,\r\n    \"train/Area lower bound/maximum\": 4194304,\r\n    \"train/Area lower bound/step\": 1,\r\n    \"train/Area upper bound/visible\": true,\r\n    \"train/Area upper bound/value\": 409600,\r\n    \"train/Area upper bound/minimum\": 4096,\r\n    \"train/Area upper bound/maximum\": 4194304,\r\n    \"train/Area upper 
bound/step\": 1,\r\n    \"train/Resizing objective/visible\": true,\r\n    \"train/Resizing objective/value\": \"Maximize area\",\r\n    \"train/Error threshold/visible\": true,\r\n    \"train/Error threshold/value\": 0.1,\r\n    \"train/Error threshold/minimum\": 0,\r\n    \"train/Error threshold/maximum\": 1,\r\n    \"train/Error threshold/step\": 0.01,\r\n    \"train/Embedding/visible\": true,\r\n    \"train/Embedding/value\": null,\r\n    \"train/Hypernetwork/visible\": true,\r\n    \"train/Hypernetwork/value\": null,\r\n    \"train/Embedding Learning rate/visible\": true,\r\n    \"train/Embedding Learning rate/value\": \"0.005\",\r\n    \"train/Hypernetwork Learning rate/visible\": true,\r\n    \"train/Hypernetwork Learning rate/value\": \"0.00001\",\r\n    \"train/Gradient Clipping/visible\": true,\r\n    \"train/Gradient Clipping/value\": \"disabled\",\r\n    \"train/Batch size/visible\": true,\r\n    \"train/Batch size/value\": 1,\r\n    \"train/Gradient accumulation steps/visible\": true,\r\n    \"train/Gradient accumulation steps/value\": 1,\r\n    \"train/Dataset directory/visible\": true,\r\n    \"train/Dataset directory/value\": \"\",\r\n    \"train/Log directory/visible\": true,\r\n    \"train/Log directory/value\": \"textual_inversion\",\r\n    \"train/Prompt template/visible\": true,\r\n    \"train/Prompt template/value\": \"style_filewords.txt\",\r\n    \"train/Do not resize images/visible\": true,\r\n    \"train/Do not resize images/value\": false,\r\n    \"train/Max steps/visible\": true,\r\n    \"train/Max steps/value\": 100000,\r\n    \"train/Save an image to log directory every N steps, 0 to disable/visible\": true,\r\n    \"train/Save an image to log directory every N steps, 0 to disable/value\": 500,\r\n    \"train/Save a copy of embedding to log directory every N steps, 0 to disable/visible\": true,\r\n    \"train/Save a copy of embedding to log directory every N steps, 0 to disable/value\": 500,\r\n    \"train/Use PNG alpha channel as loss 
weight/visible\": true,\r\n    \"train/Use PNG alpha channel as loss weight/value\": false,\r\n    \"train/Save images with embedding in PNG chunks/visible\": true,\r\n    \"train/Save images with embedding in PNG chunks/value\": true,\r\n    \"train/Read parameters (prompt, etc...) from txt2img tab when making previews/visible\": true,\r\n    \"train/Read parameters (prompt, etc...) from txt2img tab when making previews/value\": false,\r\n    \"train/Shuffle tags by ',' when creating prompts./visible\": true,\r\n    \"train/Shuffle tags by ',' when creating prompts./value\": false,\r\n    \"train/Drop out tags when creating prompts./visible\": true,\r\n    \"train/Drop out tags when creating prompts./value\": 0,\r\n    \"train/Drop out tags when creating prompts./minimum\": 0,\r\n    \"train/Drop out tags when creating prompts./maximum\": 1,\r\n    \"train/Drop out tags when creating prompts./step\": 0.1,\r\n    \"train/Choose latent sampling method/visible\": true,\r\n    \"train/Choose latent sampling method/value\": \"once\",\r\n    \"customscript/additional_networks.py/txt2img/Separate UNet/Text Encoder weights/visible\": true,\r\n    \"customscript/additional_networks.py/txt2img/Separate UNet/Text Encoder weights/value\": false,\r\n    \"txt2img/Weight 1/visible\": true,\r\n    \"txt2img/Weight 1/value\": 1.0,\r\n    \"txt2img/Weight 1/minimum\": -1.0,\r\n    \"txt2img/Weight 1/maximum\": 2.0,\r\n    \"txt2img/Weight 1/step\": 0.05,\r\n    \"customscript/additional_networks.py/txt2img/UNet Weight 1/value\": 1.0,\r\n    \"customscript/additional_networks.py/txt2img/UNet Weight 1/minimum\": -1.0,\r\n    \"customscript/additional_networks.py/txt2img/UNet Weight 1/maximum\": 2.0,\r\n    \"customscript/additional_networks.py/txt2img/UNet Weight 1/step\": 0.05,\r\n    \"customscript/additional_networks.py/txt2img/TEnc Weight 1/value\": 1.0,\r\n    \"customscript/additional_networks.py/txt2img/TEnc Weight 1/minimum\": -1.0,\r\n    
\"customscript/additional_networks.py/txt2img/TEnc Weight 1/maximum\": 2.0,\r\n    \"customscript/additional_networks.py/txt2img/TEnc Weight 1/step\": 0.05,\r\n    \"txt2img/Weight 2/visible\": true,\r\n    \"txt2img/Weight 2/value\": 1.0,\r\n    \"txt2img/Weight 2/minimum\": -1.0,\r\n    \"txt2img/Weight 2/maximum\": 2.0,\r\n    \"txt2img/Weight 2/step\": 0.05,\r\n    \"customscript/additional_networks.py/txt2img/UNet Weight 2/value\": 1.0,\r\n    \"customscript/additional_networks.py/txt2img/UNet Weight 2/minimum\": -1.0,\r\n    \"customscript/additional_networks.py/txt2img/UNet Weight 2/maximum\": 2.0,\r\n    \"customscript/additional_networks.py/txt2img/UNet Weight 2/step\": 0.05,\r\n    \"customscript/additional_networks.py/txt2img/TEnc Weight 2/value\": 1.0,\r\n    \"customscript/additional_networks.py/txt2img/TEnc Weight 2/minimum\": -1.0,\r\n    \"customscript/additional_networks.py/txt2img/TEnc Weight 2/maximum\": 2.0,\r\n    \"customscript/additional_networks.py/txt2img/TEnc Weight 2/step\": 0.05,\r\n    \"txt2img/Weight 3/visible\": true,\r\n    \"txt2img/Weight 3/value\": 1.0,\r\n    \"txt2img/Weight 3/minimum\": -1.0,\r\n    \"txt2img/Weight 3/maximum\": 2.0,\r\n    \"txt2img/Weight 3/step\": 0.05,\r\n    \"customscript/additional_networks.py/txt2img/UNet Weight 3/value\": 1.0,\r\n    \"customscript/additional_networks.py/txt2img/UNet Weight 3/minimum\": -1.0,\r\n    \"customscript/additional_networks.py/txt2img/UNet Weight 3/maximum\": 2.0,\r\n    \"customscript/additional_networks.py/txt2img/UNet Weight 3/step\": 0.05,\r\n    \"customscript/additional_networks.py/txt2img/TEnc Weight 3/value\": 1.0,\r\n    \"customscript/additional_networks.py/txt2img/TEnc Weight 3/minimum\": -1.0,\r\n    \"customscript/additional_networks.py/txt2img/TEnc Weight 3/maximum\": 2.0,\r\n    \"customscript/additional_networks.py/txt2img/TEnc Weight 3/step\": 0.05,\r\n    \"txt2img/Weight 4/visible\": true,\r\n    \"txt2img/Weight 4/value\": 1.0,\r\n    \"txt2img/Weight 
4/minimum\": -1.0,\r\n    \"txt2img/Weight 4/maximum\": 2.0,\r\n    \"txt2img/Weight 4/step\": 0.05,\r\n    \"customscript/additional_networks.py/txt2img/UNet Weight 4/value\": 1.0,\r\n    \"customscript/additional_networks.py/txt2img/UNet Weight 4/minimum\": -1.0,\r\n    \"customscript/additional_networks.py/txt2img/UNet Weight 4/maximum\": 2.0,\r\n    \"customscript/additional_networks.py/txt2img/UNet Weight 4/step\": 0.05,\r\n    \"customscript/additional_networks.py/txt2img/TEnc Weight 4/value\": 1.0,\r\n    \"customscript/additional_networks.py/txt2img/TEnc Weight 4/minimum\": -1.0,\r\n    \"customscript/additional_networks.py/txt2img/TEnc Weight 4/maximum\": 2.0,\r\n    \"customscript/additional_networks.py/txt2img/TEnc Weight 4/step\": 0.05,\r\n    \"txt2img/Weight 5/visible\": true,\r\n    \"txt2img/Weight 5/value\": 1.0,\r\n    \"txt2img/Weight 5/minimum\": -1.0,\r\n    \"txt2img/Weight 5/maximum\": 2.0,\r\n    \"txt2img/Weight 5/step\": 0.05,\r\n    \"customscript/additional_networks.py/txt2img/UNet Weight 5/value\": 1.0,\r\n    \"customscript/additional_networks.py/txt2img/UNet Weight 5/minimum\": -1.0,\r\n    \"customscript/additional_networks.py/txt2img/UNet Weight 5/maximum\": 2.0,\r\n    \"customscript/additional_networks.py/txt2img/UNet Weight 5/step\": 0.05,\r\n    \"customscript/additional_networks.py/txt2img/TEnc Weight 5/value\": 1.0,\r\n    \"customscript/additional_networks.py/txt2img/TEnc Weight 5/minimum\": -1.0,\r\n    \"customscript/additional_networks.py/txt2img/TEnc Weight 5/maximum\": 2.0,\r\n    \"customscript/additional_networks.py/txt2img/TEnc Weight 5/step\": 0.05,\r\n    \"customscript/additional_networks.py/img2img/Separate UNet/Text Encoder weights/visible\": true,\r\n    \"customscript/additional_networks.py/img2img/Separate UNet/Text Encoder weights/value\": false,\r\n    \"img2img/Weight 1/visible\": true,\r\n    \"img2img/Weight 1/value\": 1.0,\r\n    \"img2img/Weight 1/minimum\": -1.0,\r\n    \"img2img/Weight 1/maximum\": 
2.0,\r\n    \"img2img/Weight 1/step\": 0.05,\r\n    \"customscript/additional_networks.py/img2img/UNet Weight 1/value\": 1.0,\r\n    \"customscript/additional_networks.py/img2img/UNet Weight 1/minimum\": -1.0,\r\n    \"customscript/additional_networks.py/img2img/UNet Weight 1/maximum\": 2.0,\r\n    \"customscript/additional_networks.py/img2img/UNet Weight 1/step\": 0.05,\r\n    \"customscript/additional_networks.py/img2img/TEnc Weight 1/value\": 1.0,\r\n    \"customscript/additional_networks.py/img2img/TEnc Weight 1/minimum\": -1.0,\r\n    \"customscript/additional_networks.py/img2img/TEnc Weight 1/maximum\": 2.0,\r\n    \"customscript/additional_networks.py/img2img/TEnc Weight 1/step\": 0.05,\r\n    \"img2img/Weight 2/visible\": true,\r\n    \"img2img/Weight 2/value\": 1.0,\r\n    \"img2img/Weight 2/minimum\": -1.0,\r\n    \"img2img/Weight 2/maximum\": 2.0,\r\n    \"img2img/Weight 2/step\": 0.05,\r\n    \"customscript/additional_networks.py/img2img/UNet Weight 2/value\": 1.0,\r\n    \"customscript/additional_networks.py/img2img/UNet Weight 2/minimum\": -1.0,\r\n    \"customscript/additional_networks.py/img2img/UNet Weight 2/maximum\": 2.0,\r\n    \"customscript/additional_networks.py/img2img/UNet Weight 2/step\": 0.05,\r\n    \"customscript/additional_networks.py/img2img/TEnc Weight 2/value\": 1.0,\r\n    \"customscript/additional_networks.py/img2img/TEnc Weight 2/minimum\": -1.0,\r\n    \"customscript/additional_networks.py/img2img/TEnc Weight 2/maximum\": 2.0,\r\n    \"customscript/additional_networks.py/img2img/TEnc Weight 2/step\": 0.05,\r\n    \"img2img/Weight 3/visible\": true,\r\n    \"img2img/Weight 3/value\": 1.0,\r\n    \"img2img/Weight 3/minimum\": -1.0,\r\n    \"img2img/Weight 3/maximum\": 2.0,\r\n    \"img2img/Weight 3/step\": 0.05,\r\n    \"customscript/additional_networks.py/img2img/UNet Weight 3/value\": 1.0,\r\n    \"customscript/additional_networks.py/img2img/UNet Weight 3/minimum\": -1.0,\r\n    \"customscript/additional_networks.py/img2img/UNet 
Weight 3/maximum\": 2.0,\r\n    \"customscript/additional_networks.py/img2img/UNet Weight 3/step\": 0.05,\r\n    \"customscript/additional_networks.py/img2img/TEnc Weight 3/value\": 1.0,\r\n    \"customscript/additional_networks.py/img2img/TEnc Weight 3/minimum\": -1.0,\r\n    \"customscript/additional_networks.py/img2img/TEnc Weight 3/maximum\": 2.0,\r\n    \"customscript/additional_networks.py/img2img/TEnc Weight 3/step\": 0.05,\r\n    \"img2img/Weight 4/visible\": true,\r\n    \"img2img/Weight 4/value\": 1.0,\r\n    \"img2img/Weight 4/minimum\": -1.0,\r\n    \"img2img/Weight 4/maximum\": 2.0,\r\n    \"img2img/Weight 4/step\": 0.05,\r\n    \"customscript/additional_networks.py/img2img/UNet Weight 4/value\": 1.0,\r\n    \"customscript/additional_networks.py/img2img/UNet Weight 4/minimum\": -1.0,\r\n    \"customscript/additional_networks.py/img2img/UNet Weight 4/maximum\": 2.0,\r\n    \"customscript/additional_networks.py/img2img/UNet Weight 4/step\": 0.05,\r\n    \"customscript/additional_networks.py/img2img/TEnc Weight 4/value\": 1.0,\r\n    \"customscript/additional_networks.py/img2img/TEnc Weight 4/minimum\": -1.0,\r\n    \"customscript/additional_networks.py/img2img/TEnc Weight 4/maximum\": 2.0,\r\n    \"customscript/additional_networks.py/img2img/TEnc Weight 4/step\": 0.05,\r\n    \"img2img/Weight 5/visible\": true,\r\n    \"img2img/Weight 5/value\": 1.0,\r\n    \"img2img/Weight 5/minimum\": -1.0,\r\n    \"img2img/Weight 5/maximum\": 2.0,\r\n    \"img2img/Weight 5/step\": 0.05,\r\n    \"customscript/additional_networks.py/img2img/UNet Weight 5/value\": 1.0,\r\n    \"customscript/additional_networks.py/img2img/UNet Weight 5/minimum\": -1.0,\r\n    \"customscript/additional_networks.py/img2img/UNet Weight 5/maximum\": 2.0,\r\n    \"customscript/additional_networks.py/img2img/UNet Weight 5/step\": 0.05,\r\n    \"customscript/additional_networks.py/img2img/TEnc Weight 5/value\": 1.0,\r\n    \"customscript/additional_networks.py/img2img/TEnc Weight 5/minimum\": 
-1.0,\r\n    \"customscript/additional_networks.py/img2img/TEnc Weight 5/maximum\": 2.0,\r\n    \"customscript/additional_networks.py/img2img/TEnc Weight 5/step\": 0.05\r\n}"
  },
  {
    "path": "webui-macos-env.sh",
    "content": "#!/bin/bash\r\n####################################################################\r\n#                          macOS defaults                          #\r\n# Please modify webui-user.sh to change these instead of this file #\r\n####################################################################\r\n\r\nif [[ -x \"$(command -v python3.10)\" ]]\r\nthen\r\n    python_cmd=\"python3.10\"\r\nfi\r\n\r\nexport install_dir=\"$HOME\"\r\nexport COMMANDLINE_ARGS=\"--skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate\"\r\nexport TORCH_COMMAND=\"pip install torch==1.12.1 torchvision==0.13.1\"\r\nexport K_DIFFUSION_REPO=\"https://github.com/brkirch/k-diffusion.git\"\r\nexport K_DIFFUSION_COMMIT_HASH=\"51c9778f269cedb55a4d88c79c0246d35bdadb71\"\r\nexport PYTORCH_ENABLE_MPS_FALLBACK=1\r\n\r\n####################################################################\r\n"
  },
  {
    "path": "webui-user.bat",
    "content": "@echo off\r\n\r\nset PYTHON=\r\nset GIT=\r\nset VENV_DIR=\r\nset COMMANDLINE_ARGS=\r\n\r\ncall webui.bat\r\n"
  },
  {
    "path": "webui-user.sh",
    "content": "#!/bin/bash\r\n#########################################################\r\n# Uncomment and change the variables below to your need:#\r\n#########################################################\r\n\r\n# Install directory without trailing slash\r\n#install_dir=\"/home/$(whoami)\"\r\n\r\n# Name of the subdirectory\r\n#clone_dir=\"stable-diffusion-webui\"\r\n\r\n# Commandline arguments for webui.py, for example: export COMMANDLINE_ARGS=\"--medvram --opt-split-attention\"\r\n#export COMMANDLINE_ARGS=\"\"\r\n\r\n# python3 executable\r\n#python_cmd=\"python3\"\r\n\r\n# git executable\r\n#export GIT=\"git\"\r\n\r\n# python3 venv without trailing slash (defaults to ${install_dir}/${clone_dir}/venv)\r\n#venv_dir=\"venv\"\r\n\r\n# script to launch to start the app\r\n#export LAUNCH_SCRIPT=\"launch.py\"\r\n\r\n# install command for torch\r\n#export TORCH_COMMAND=\"pip install torch==1.12.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113\"\r\n\r\n# Requirements file to use for stable-diffusion-webui\r\n#export REQS_FILE=\"requirements_versions.txt\"\r\n\r\n# Fixed git repos\r\n#export K_DIFFUSION_PACKAGE=\"\"\r\n#export GFPGAN_PACKAGE=\"\"\r\n\r\n# Fixed git commits\r\n#export STABLE_DIFFUSION_COMMIT_HASH=\"\"\r\n#export TAMING_TRANSFORMERS_COMMIT_HASH=\"\"\r\n#export CODEFORMER_COMMIT_HASH=\"\"\r\n#export BLIP_COMMIT_HASH=\"\"\r\n\r\n# Uncomment to enable accelerated launch\r\n#export ACCELERATE=\"True\"\r\n\r\n###########################################\r\n"
  },
  {
    "path": "webui.bat",
    "content": "@echo off\r\n\r\nif not defined PYTHON (set PYTHON=python)\r\nif not defined VENV_DIR (set \"VENV_DIR=%~dp0%venv\")\r\n\r\n\r\nset ERROR_REPORTING=FALSE\r\n\r\nmkdir tmp 2>NUL\r\n\r\n%PYTHON% -c \"\" >tmp/stdout.txt 2>tmp/stderr.txt\r\nif %ERRORLEVEL% == 0 goto :check_pip\r\necho Couldn't launch python\r\ngoto :show_stdout_stderr\r\n\r\n:check_pip\r\n%PYTHON% -mpip --help >tmp/stdout.txt 2>tmp/stderr.txt\r\nif %ERRORLEVEL% == 0 goto :start_venv\r\nif \"%PIP_INSTALLER_LOCATION%\" == \"\" goto :show_stdout_stderr\r\n%PYTHON% \"%PIP_INSTALLER_LOCATION%\" >tmp/stdout.txt 2>tmp/stderr.txt\r\nif %ERRORLEVEL% == 0 goto :start_venv\r\necho Couldn't install pip\r\ngoto :show_stdout_stderr\r\n\r\n:start_venv\r\nif [\"%VENV_DIR%\"] == [\"-\"] goto :skip_venv\r\nif [\"%SKIP_VENV%\"] == [\"1\"] goto :skip_venv\r\n\r\ndir \"%VENV_DIR%\\Scripts\\Python.exe\" >tmp/stdout.txt 2>tmp/stderr.txt\r\nif %ERRORLEVEL% == 0 goto :activate_venv\r\n\r\nfor /f \"delims=\" %%i in ('CALL %PYTHON% -c \"import sys; print(sys.executable)\"') do set PYTHON_FULLNAME=\"%%i\"\r\necho Creating venv in directory %VENV_DIR% using python %PYTHON_FULLNAME%\r\n%PYTHON_FULLNAME% -m venv \"%VENV_DIR%\" >tmp/stdout.txt 2>tmp/stderr.txt\r\nif %ERRORLEVEL% == 0 goto :activate_venv\r\necho Unable to create venv in directory \"%VENV_DIR%\"\r\ngoto :show_stdout_stderr\r\n\r\n:activate_venv\r\nset PYTHON=\"%VENV_DIR%\\Scripts\\Python.exe\"\r\necho venv %PYTHON%\r\n\r\n:skip_venv\r\nif [%ACCELERATE%] == [\"True\"] goto :accelerate\r\ngoto :launch\r\n\r\n:accelerate\r\necho Checking for accelerate\r\nset ACCELERATE=\"%VENV_DIR%\\Scripts\\accelerate.exe\"\r\nif EXIST %ACCELERATE% goto :accelerate_launch\r\n\r\n:launch\r\n%PYTHON% launch.py %*\r\npause\r\nexit /b\r\n\r\n:accelerate_launch\r\necho Accelerating\r\n%ACCELERATE% launch --num_cpu_threads_per_process=6 launch.py\r\npause\r\nexit /b\r\n\r\n:show_stdout_stderr\r\n\r\necho.\r\necho exit code: %errorlevel%\r\n\r\nfor /f %%i in 
(\"tmp\\stdout.txt\") do set size=%%~zi\r\nif %size% equ 0 goto :show_stderr\r\necho.\r\necho stdout:\r\ntype tmp\\stdout.txt\r\n\r\n:show_stderr\r\nfor /f %%i in (\"tmp\\stderr.txt\") do set size=%%~zi\r\nif %size% equ 0 goto :endofscript\r\necho.\r\necho stderr:\r\ntype tmp\\stderr.txt\r\n\r\n:endofscript\r\n\r\necho.\r\necho Launch unsuccessful. Exiting.\r\npause\r\n"
  },
  {
    "path": "webui.py",
    "content": "import os\r\nimport sys\r\nimport time\r\nimport importlib\r\nimport signal\r\nimport re\r\nfrom fastapi import FastAPI\r\nfrom fastapi.middleware.cors import CORSMiddleware\r\nfrom fastapi.middleware.gzip import GZipMiddleware\r\nfrom packaging import version\r\n\r\nimport logging\r\nlogging.getLogger(\"xformers\").addFilter(lambda record: 'A matching Triton is not available' not in record.getMessage())\r\n\r\nfrom modules import import_hook, errors, extra_networks, ui_extra_networks_checkpoints\r\nfrom modules import extra_networks_hypernet, ui_extra_networks_hypernets, ui_extra_networks_textual_inversion\r\nfrom modules.call_queue import wrap_queued_call, queue_lock, wrap_gradio_gpu_call\r\n\r\nimport torch\r\n\r\n# Truncate version number of nightly/local build of PyTorch to not cause exceptions with CodeFormer or Safetensors\r\nif \".dev\" in torch.__version__ or \"+git\" in torch.__version__:\r\n    torch.__long_version__ = torch.__version__\r\n    torch.__version__ = re.search(r'[\\d.]+[\\d]', torch.__version__).group(0)\r\n\r\nfrom modules import shared, devices, sd_samplers, upscaler, extensions, localization, ui_tempdir, ui_extra_networks\r\nimport modules.codeformer_model as codeformer\r\nimport modules.face_restoration\r\nimport modules.gfpgan_model as gfpgan\r\nimport modules.img2img\r\n\r\nimport modules.lowvram\r\nimport modules.paths\r\nimport modules.scripts\r\nimport modules.sd_hijack\r\nimport modules.sd_models\r\nimport modules.sd_vae\r\nimport modules.txt2img\r\nimport modules.script_callbacks\r\nimport modules.textual_inversion.textual_inversion\r\nimport modules.progress\r\n\r\nimport modules.ui\r\nfrom modules import modelloader\r\nfrom modules.shared import cmd_opts\r\nimport modules.hypernetworks.hypernetwork\r\n\r\n\r\nif cmd_opts.server_name:\r\n    server_name = cmd_opts.server_name\r\nelse:\r\n    server_name = \"0.0.0.0\" if cmd_opts.listen else None\r\n\r\n\r\ndef check_versions():\r\n    if 
shared.cmd_opts.skip_version_check:\r\n        return\r\n\r\n    expected_torch_version = \"1.13.1\"\r\n\r\n    if version.parse(torch.__version__) < version.parse(expected_torch_version):\r\n        errors.print_error_explanation(f\"\"\"\r\nYou are running torch {torch.__version__}.\r\nThe program is tested to work with torch {expected_torch_version}.\r\nTo reinstall the desired version, run with commandline flag --reinstall-torch.\r\nBeware that this will cause a lot of large files to be downloaded, as well as\r\nthere are reports of issues with training tab on the latest version.\r\n\r\nUse --skip-version-check commandline argument to disable this check.\r\n        \"\"\".strip())\r\n\r\n    expected_xformers_version = \"0.0.16rc425\"\r\n    if shared.xformers_available:\r\n        import xformers\r\n\r\n        if version.parse(xformers.__version__) < version.parse(expected_xformers_version):\r\n            errors.print_error_explanation(f\"\"\"\r\nYou are running xformers {xformers.__version__}.\r\nThe program is tested to work with xformers {expected_xformers_version}.\r\nTo reinstall the desired version, run with commandline flag --reinstall-xformers.\r\n\r\nUse --skip-version-check commandline argument to disable this check.\r\n            \"\"\".strip())\r\n\r\n\r\ndef initialize():\r\n    check_versions()\r\n\r\n    extensions.list_extensions()\r\n    localization.list_localizations(cmd_opts.localizations_dir)\r\n\r\n    if cmd_opts.ui_debug_mode:\r\n        shared.sd_upscalers = upscaler.UpscalerLanczos().scalers\r\n        modules.scripts.load_scripts()\r\n        return\r\n\r\n    modelloader.cleanup_models()\r\n    modules.sd_models.setup_model()\r\n    codeformer.setup_model(cmd_opts.codeformer_models_path)\r\n    gfpgan.setup_model(cmd_opts.gfpgan_models_path)\r\n\r\n    modelloader.list_builtin_upscalers()\r\n    modules.scripts.load_scripts()\r\n    modelloader.load_upscalers()\r\n\r\n    modules.sd_vae.refresh_vae_list()\r\n\r\n    
modules.textual_inversion.textual_inversion.list_textual_inversion_templates()\r\n\r\n    try:\r\n        modules.sd_models.load_model()\r\n    except Exception as e:\r\n        errors.display(e, \"loading stable diffusion model\")\r\n        print(\"\", file=sys.stderr)\r\n        print(\"Stable diffusion model failed to load, exiting\", file=sys.stderr)\r\n        exit(1)\r\n\r\n    shared.opts.data[\"sd_model_checkpoint\"] = shared.sd_model.sd_checkpoint_info.title\r\n\r\n    shared.opts.onchange(\"sd_model_checkpoint\", wrap_queued_call(lambda: modules.sd_models.reload_model_weights()))\r\n    shared.opts.onchange(\"sd_vae\", wrap_queued_call(lambda: modules.sd_vae.reload_vae_weights()), call=False)\r\n    shared.opts.onchange(\"sd_vae_as_default\", wrap_queued_call(lambda: modules.sd_vae.reload_vae_weights()), call=False)\r\n    shared.opts.onchange(\"temp_dir\", ui_tempdir.on_tmpdir_changed)\r\n\r\n    shared.reload_hypernetworks()\r\n\r\n    ui_extra_networks.intialize()\r\n    ui_extra_networks.register_page(ui_extra_networks_textual_inversion.ExtraNetworksPageTextualInversion())\r\n    ui_extra_networks.register_page(ui_extra_networks_hypernets.ExtraNetworksPageHypernetworks())\r\n    ui_extra_networks.register_page(ui_extra_networks_checkpoints.ExtraNetworksPageCheckpoints())\r\n\r\n    extra_networks.initialize()\r\n    extra_networks.register_extra_network(extra_networks_hypernet.ExtraNetworkHypernet())\r\n\r\n    if cmd_opts.tls_keyfile is not None and cmd_opts.tls_certfile is not None:\r\n\r\n        try:\r\n            if not os.path.exists(cmd_opts.tls_keyfile):\r\n                print(\"Invalid path to TLS keyfile given\")\r\n            if not os.path.exists(cmd_opts.tls_certfile):\r\n                print(f\"Invalid path to TLS certfile: '{cmd_opts.tls_certfile}'\")\r\n        except TypeError:\r\n            cmd_opts.tls_keyfile = cmd_opts.tls_certfile = None\r\n            print(\"TLS setup invalid, running webui without TLS\")\r\n        
else:\r\n            print(\"Running with TLS\")\r\n\r\n    # make the program just exit at ctrl+c without waiting for anything\r\n    def sigint_handler(sig, frame):\r\n        print(f'Interrupted with signal {sig} in {frame}')\r\n        os._exit(0)\r\n\r\n    signal.signal(signal.SIGINT, sigint_handler)\r\n\r\n\r\ndef setup_cors(app):\r\n    if cmd_opts.cors_allow_origins and cmd_opts.cors_allow_origins_regex:\r\n        app.add_middleware(CORSMiddleware, allow_origins=cmd_opts.cors_allow_origins.split(','), allow_origin_regex=cmd_opts.cors_allow_origins_regex, allow_methods=['*'], allow_credentials=True, allow_headers=['*'])\r\n    elif cmd_opts.cors_allow_origins:\r\n        app.add_middleware(CORSMiddleware, allow_origins=cmd_opts.cors_allow_origins.split(','), allow_methods=['*'], allow_credentials=True, allow_headers=['*'])\r\n    elif cmd_opts.cors_allow_origins_regex:\r\n        app.add_middleware(CORSMiddleware, allow_origin_regex=cmd_opts.cors_allow_origins_regex, allow_methods=['*'], allow_credentials=True, allow_headers=['*'])\r\n\r\n\r\ndef create_api(app):\r\n    from modules.api.api import Api\r\n    api = Api(app, queue_lock)\r\n    return api\r\n\r\n\r\ndef wait_on_server(demo=None):\r\n    while 1:\r\n        time.sleep(0.5)\r\n        if shared.state.need_restart:\r\n            shared.state.need_restart = False\r\n            time.sleep(0.5)\r\n            demo.close()\r\n            time.sleep(0.5)\r\n            break\r\n\r\n\r\ndef api_only():\r\n    initialize()\r\n\r\n    app = FastAPI()\r\n    setup_cors(app)\r\n    app.add_middleware(GZipMiddleware, minimum_size=1000)\r\n    api = create_api(app)\r\n\r\n    modules.script_callbacks.app_started_callback(None, app)\r\n\r\n    api.launch(server_name=\"0.0.0.0\" if cmd_opts.listen else \"127.0.0.1\", port=cmd_opts.port if cmd_opts.port else 7861)\r\n\r\n\r\ndef webui():\r\n    launch_api = cmd_opts.api\r\n    initialize()\r\n\r\n    while 1:\r\n        if 
shared.opts.clean_temp_dir_at_start:\r\n            ui_tempdir.cleanup_tmpdr()\r\n\r\n        modules.script_callbacks.before_ui_callback()\r\n\r\n        shared.demo = modules.ui.create_ui()\r\n\r\n        if cmd_opts.gradio_queue:\r\n            shared.demo.queue(64)\r\n\r\n        gradio_auth_creds = []\r\n        if cmd_opts.gradio_auth:\r\n            gradio_auth_creds += cmd_opts.gradio_auth.strip('\"').replace('\\n', '').split(',')\r\n        if cmd_opts.gradio_auth_path:\r\n            with open(cmd_opts.gradio_auth_path, 'r', encoding=\"utf8\") as file:\r\n                for line in file.readlines():\r\n                    gradio_auth_creds += [x.strip() for x in line.split(',')]\r\n\r\n        app, local_url, share_url = shared.demo.launch(\r\n            share=cmd_opts.share,\r\n            server_name=server_name,\r\n            server_port=cmd_opts.port,\r\n            ssl_keyfile=cmd_opts.tls_keyfile,\r\n            ssl_certfile=cmd_opts.tls_certfile,\r\n            debug=cmd_opts.gradio_debug,\r\n            auth=[tuple(cred.split(':')) for cred in gradio_auth_creds] if gradio_auth_creds else None,\r\n            inbrowser=cmd_opts.autolaunch,\r\n            prevent_thread_lock=True\r\n        )\r\n        # after initial launch, disable --autolaunch for subsequent restarts\r\n        cmd_opts.autolaunch = False\r\n\r\n        # gradio uses a very open CORS policy via app.user_middleware, which makes it possible for\r\n        # an attacker to trick the user into opening a malicious HTML page, which makes a request to the\r\n        # running web ui and do whatever the attacker wants, including installing an extension and\r\n        # running its code. We disable this here. 
Suggested by RyotaK.\r\n        app.user_middleware = [x for x in app.user_middleware if x.cls.__name__ != 'CORSMiddleware']\r\n\r\n        setup_cors(app)\r\n\r\n        app.add_middleware(GZipMiddleware, minimum_size=1000)\r\n\r\n        modules.progress.setup_progress_api(app)\r\n\r\n        if launch_api:\r\n            create_api(app)\r\n\r\n        ui_extra_networks.add_pages_to_demo(app)\r\n\r\n        modules.script_callbacks.app_started_callback(shared.demo, app)\r\n\r\n        wait_on_server(shared.demo)\r\n        print('Restarting UI...')\r\n\r\n        sd_samplers.set_samplers()\r\n\r\n        modules.script_callbacks.script_unloaded_callback()\r\n        extensions.list_extensions()\r\n\r\n        localization.list_localizations(cmd_opts.localizations_dir)\r\n\r\n        modelloader.forbid_loaded_nonbuiltin_upscalers()\r\n        modules.scripts.reload_scripts()\r\n        modules.script_callbacks.model_loaded_callback(shared.sd_model)\r\n        modelloader.load_upscalers()\r\n\r\n        for module in [module for name, module in sys.modules.items() if name.startswith(\"modules.ui\")]:\r\n            importlib.reload(module)\r\n\r\n        modules.sd_models.list_models()\r\n\r\n        shared.reload_hypernetworks()\r\n\r\n        ui_extra_networks.intialize()\r\n        ui_extra_networks.register_page(ui_extra_networks_textual_inversion.ExtraNetworksPageTextualInversion())\r\n        ui_extra_networks.register_page(ui_extra_networks_hypernets.ExtraNetworksPageHypernetworks())\r\n        ui_extra_networks.register_page(ui_extra_networks_checkpoints.ExtraNetworksPageCheckpoints())\r\n\r\n        extra_networks.initialize()\r\n        extra_networks.register_extra_network(extra_networks_hypernet.ExtraNetworkHypernet())\r\n\r\n\r\nif __name__ == \"__main__\":\r\n    if cmd_opts.nowebui:\r\n        api_only()\r\n    else:\r\n        webui()\r\n"
  },
  {
    "path": "webui.sh",
    "content": "#!/usr/bin/env bash\r\n#################################################\r\n# Please do not make any changes to this file,  #\r\n# change the variables in webui-user.sh instead #\r\n#################################################\r\n\r\n# If run from macOS, load defaults from webui-macos-env.sh\r\nif [[ \"$OSTYPE\" == \"darwin\"* ]]; then\r\n    if [[ -f webui-macos-env.sh ]]\r\n        then\r\n        source ./webui-macos-env.sh\r\n    fi\r\nfi\r\n\r\n# Read variables from webui-user.sh\r\n# shellcheck source=/dev/null\r\nif [[ -f webui-user.sh ]]\r\nthen\r\n    source ./webui-user.sh\r\nfi\r\n\r\n# Set defaults\r\n# Install directory without trailing slash\r\nif [[ -z \"${install_dir}\" ]]\r\nthen\r\n    install_dir=\"/home/$(whoami)\"\r\nfi\r\n\r\n# Name of the subdirectory (defaults to stable-diffusion-webui)\r\nif [[ -z \"${clone_dir}\" ]]\r\nthen\r\n    clone_dir=\"stable-diffusion-webui\"\r\nfi\r\n\r\n# python3 executable\r\nif [[ -z \"${python_cmd}\" ]]\r\nthen\r\n    python_cmd=\"python3\"\r\nfi\r\n\r\n# git executable\r\nif [[ -z \"${GIT}\" ]]\r\nthen\r\n    export GIT=\"git\"\r\nfi\r\n\r\n# python3 venv without trailing slash (defaults to ${install_dir}/${clone_dir}/venv)\r\nif [[ -z \"${venv_dir}\" ]]\r\nthen\r\n    venv_dir=\"venv\"\r\nfi\r\n\r\nif [[ -z \"${LAUNCH_SCRIPT}\" ]]\r\nthen\r\n    LAUNCH_SCRIPT=\"launch.py\"\r\nfi\r\n\r\n# this script cannot be run as root by default\r\ncan_run_as_root=0\r\n\r\n# read any command line flags to the webui.sh script\r\nwhile getopts \"f\" flag > /dev/null 2>&1\r\ndo\r\n    case ${flag} in\r\n        f) can_run_as_root=1;;\r\n        *) break;;\r\n    esac\r\ndone\r\n\r\n# Disable sentry logging\r\nexport ERROR_REPORTING=FALSE\r\n\r\n# Do not reinstall existing pip packages on Debian/Ubuntu\r\nexport PIP_IGNORE_INSTALLED=0\r\n\r\n# Pretty print\r\ndelimiter=\"################################################################\"\r\n\r\nprintf \"\\n%s\\n\" \"${delimiter}\"\r\nprintf 
\"\\e[1m\\e[32mInstall script for stable-diffusion + Web UI\\n\"\r\nprintf \"\\e[1m\\e[34mTested on Debian 11 (Bullseye)\\e[0m\"\r\nprintf \"\\n%s\\n\" \"${delimiter}\"\r\n\r\n# Do not run as root\r\nif [[ $(id -u) -eq 0 && can_run_as_root -eq 0 ]]\r\nthen\r\n    printf \"\\n%s\\n\" \"${delimiter}\"\r\n    printf \"\\e[1m\\e[31mERROR: This script must not be launched as root, aborting...\\e[0m\"\r\n    printf \"\\n%s\\n\" \"${delimiter}\"\r\n    exit 1\r\nelse\r\n    printf \"\\n%s\\n\" \"${delimiter}\"\r\n    printf \"Running on \\e[1m\\e[32m%s\\e[0m user\" \"$(whoami)\"\r\n    printf \"\\n%s\\n\" \"${delimiter}\"\r\nfi\r\n\r\nif [[ -d .git ]]\r\nthen\r\n    printf \"\\n%s\\n\" \"${delimiter}\"\r\n    printf \"Repo already cloned, using it as install directory\"\r\n    printf \"\\n%s\\n\" \"${delimiter}\"\r\n    install_dir=\"${PWD}/../\"\r\n    clone_dir=\"${PWD##*/}\"\r\nfi\r\n\r\n# Check prerequisites\r\ngpu_info=$(lspci 2>/dev/null | grep VGA)\r\ncase \"$gpu_info\" in\r\n    *\"Navi 1\"*|*\"Navi 2\"*) export HSA_OVERRIDE_GFX_VERSION=10.3.0\r\n    ;;\r\n    *\"Renoir\"*) export HSA_OVERRIDE_GFX_VERSION=9.0.0\r\n        printf \"\\n%s\\n\" \"${delimiter}\"\r\n        printf \"Experimental support for Renoir: make sure to have at least 4GB of VRAM and 10GB of RAM or enable cpu mode: --use-cpu all --no-half\"\r\n        printf \"\\n%s\\n\" \"${delimiter}\"\r\n    ;;\r\n    *) \r\n    ;;\r\nesac\r\nif echo \"$gpu_info\" | grep -q \"AMD\" && [[ -z \"${TORCH_COMMAND}\" ]]\r\nthen\r\n    export TORCH_COMMAND=\"pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.2\"\r\nfi  \r\n\r\nfor preq in \"${GIT}\" \"${python_cmd}\"\r\ndo\r\n    if ! hash \"${preq}\" &>/dev/null\r\n    then\r\n        printf \"\\n%s\\n\" \"${delimiter}\"\r\n        printf \"\\e[1m\\e[31mERROR: %s is not installed, aborting...\\e[0m\" \"${preq}\"\r\n        printf \"\\n%s\\n\" \"${delimiter}\"\r\n        exit 1\r\n    fi\r\ndone\r\n\r\nif ! 
\"${python_cmd}\" -c \"import venv\" &>/dev/null\r\nthen\r\n    printf \"\\n%s\\n\" \"${delimiter}\"\r\n    printf \"\\e[1m\\e[31mERROR: python3-venv is not installed, aborting...\\e[0m\"\r\n    printf \"\\n%s\\n\" \"${delimiter}\"\r\n    exit 1\r\nfi\r\n\r\ncd \"${install_dir}\"/ || { printf \"\\e[1m\\e[31mERROR: Can't cd to %s/, aborting...\\e[0m\" \"${install_dir}\"; exit 1; }\r\nif [[ -d \"${clone_dir}\" ]]\r\nthen\r\n    cd \"${clone_dir}\"/ || { printf \"\\e[1m\\e[31mERROR: Can't cd to %s/%s/, aborting...\\e[0m\" \"${install_dir}\" \"${clone_dir}\"; exit 1; }\r\nelse\r\n    printf \"\\n%s\\n\" \"${delimiter}\"\r\n    printf \"Clone stable-diffusion-webui\"\r\n    printf \"\\n%s\\n\" \"${delimiter}\"\r\n    \"${GIT}\" clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git \"${clone_dir}\"\r\n    cd \"${clone_dir}\"/ || { printf \"\\e[1m\\e[31mERROR: Can't cd to %s/%s/, aborting...\\e[0m\" \"${install_dir}\" \"${clone_dir}\"; exit 1; }\r\nfi\r\n\r\nprintf \"\\n%s\\n\" \"${delimiter}\"\r\nprintf \"Create and activate python venv\"\r\nprintf \"\\n%s\\n\" \"${delimiter}\"\r\ncd \"${install_dir}\"/\"${clone_dir}\"/ || { printf \"\\e[1m\\e[31mERROR: Can't cd to %s/%s/, aborting...\\e[0m\" \"${install_dir}\" \"${clone_dir}\"; exit 1; }\r\nif [[ ! -d \"${venv_dir}\" ]]\r\nthen\r\n    \"${python_cmd}\" -m venv \"${venv_dir}\"\r\n    first_launch=1\r\nfi\r\n# shellcheck source=/dev/null\r\nif [[ -f \"${venv_dir}\"/bin/activate ]]\r\nthen\r\n    source \"${venv_dir}\"/bin/activate\r\nelse\r\n    printf \"\\n%s\\n\" \"${delimiter}\"\r\n    printf \"\\e[1m\\e[31mERROR: Cannot activate python venv, aborting...\\e[0m\"\r\n    printf \"\\n%s\\n\" \"${delimiter}\"\r\n    exit 1\r\nfi\r\n\r\nif [[ ! 
-z \"${ACCELERATE}\" ]] && [ \"${ACCELERATE}\" = \"True\" ] && [ -x \"$(command -v accelerate)\" ]\r\nthen\r\n    printf \"\\n%s\\n\" \"${delimiter}\"\r\n    printf \"Accelerating launch.py...\"\r\n    printf \"\\n%s\\n\" \"${delimiter}\"\r\n    exec accelerate launch --num_cpu_threads_per_process=6 \"${LAUNCH_SCRIPT}\" \"$@\"\r\nelse\r\n    printf \"\\n%s\\n\" \"${delimiter}\"\r\n    printf \"Launching launch.py...\"\r\n    printf \"\\n%s\\n\" \"${delimiter}\"\r\n    exec \"${python_cmd}\" \"${LAUNCH_SCRIPT}\" \"$@\"\r\nfi\r\n"
  }
]