[
  {
    "path": ".gitignore",
    "content": "/results/\n/.idea/\n"
  },
  {
    "path": "LICENSE.txt",
    "content": "                    GNU GENERAL PUBLIC LICENSE\n                       Version 3, 29 June 2007\n\n Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>\n Everyone is permitted to copy and distribute verbatim copies\n of this license document, but changing it is not allowed.\n\n                            Preamble\n\n  The GNU General Public License is a free, copyleft license for\nsoftware and other kinds of works.\n\n  The licenses for most software and other practical works are designed\nto take away your freedom to share and change the works.  By contrast,\nthe GNU General Public License is intended to guarantee your freedom to\nshare and change all versions of a program--to make sure it remains free\nsoftware for all its users.  We, the Free Software Foundation, use the\nGNU General Public License for most of our software; it applies also to\nany other work released this way by its authors.  You can apply it to\nyour programs, too.\n\n  When we speak of free software, we are referring to freedom, not\nprice.  Our General Public Licenses are designed to make sure that you\nhave the freedom to distribute copies of free software (and charge for\nthem if you wish), that you receive source code or can get it if you\nwant it, that you can change the software or use pieces of it in new\nfree programs, and that you know you can do these things.\n\n  To protect your rights, we need to prevent others from denying you\nthese rights or asking you to surrender the rights.  Therefore, you have\ncertain responsibilities if you distribute copies of the software, or if\nyou modify it: responsibilities to respect the freedom of others.\n\n  For example, if you distribute copies of such a program, whether\ngratis or for a fee, you must pass on to the recipients the same\nfreedoms that you received.  You must make sure that they, too, receive\nor can get the source code.  And you must show them these terms so they\nknow their rights.\n\n  Developers that use the GNU GPL protect your rights with two steps:\n(1) assert copyright on the software, and (2) offer you this License\ngiving you legal permission to copy, distribute and/or modify it.\n\n  For the developers' and authors' protection, the GPL clearly explains\nthat there is no warranty for this free software.  For both users' and\nauthors' sake, the GPL requires that modified versions be marked as\nchanged, so that their problems will not be attributed erroneously to\nauthors of previous versions.\n\n  Some devices are designed to deny users access to install or run\nmodified versions of the software inside them, although the manufacturer\ncan do so.  This is fundamentally incompatible with the aim of\nprotecting users' freedom to change the software.  The systematic\npattern of such abuse occurs in the area of products for individuals to\nuse, which is precisely where it is most unacceptable.  Therefore, we\nhave designed this version of the GPL to prohibit the practice for those\nproducts.  If such problems arise substantially in other domains, we\nstand ready to extend this provision to those domains in future versions\nof the GPL, as needed to protect the freedom of users.\n\n  Finally, every program is threatened constantly by software patents.\nStates should not allow patents to restrict development and use of\nsoftware on general-purpose computers, but in those that do, we wish to\navoid the special danger that patents applied to a free program could\nmake it effectively proprietary.  
To prevent this, the GPL assures that\npatents cannot be used to render the program non-free.\n\n  The precise terms and conditions for copying, distribution and\nmodification follow.\n\n                       TERMS AND CONDITIONS\n\n  0. Definitions.\n\n  \"This License\" refers to version 3 of the GNU General Public License.\n\n  \"Copyright\" also means copyright-like laws that apply to other kinds of\nworks, such as semiconductor masks.\n\n  \"The Program\" refers to any copyrightable work licensed under this\nLicense.  Each licensee is addressed as \"you\".  \"Licensees\" and\n\"recipients\" may be individuals or organizations.\n\n  To \"modify\" a work means to copy from or adapt all or part of the work\nin a fashion requiring copyright permission, other than the making of an\nexact copy.  The resulting work is called a \"modified version\" of the\nearlier work or a work \"based on\" the earlier work.\n\n  A \"covered work\" means either the unmodified Program or a work based\non the Program.\n\n  To \"propagate\" a work means to do anything with it that, without\npermission, would make you directly or secondarily liable for\ninfringement under applicable copyright law, except executing it on a\ncomputer or modifying a private copy.  Propagation includes copying,\ndistribution (with or without modification), making available to the\npublic, and in some countries other activities as well.\n\n  To \"convey\" a work means any kind of propagation that enables other\nparties to make or receive copies.  Mere interaction with a user through\na computer network, with no transfer of a copy, is not conveying.\n\n  An interactive user interface displays \"Appropriate Legal Notices\"\nto the extent that it includes a convenient and prominently visible\nfeature that (1) displays an appropriate copyright notice, and (2)\ntells the user that there is no warranty for the work (except to the\nextent that warranties are provided), that licensees may convey the\nwork under this License, and how to view a copy of this License.  If\nthe interface presents a list of user commands or options, such as a\nmenu, a prominent item in the list meets this criterion.\n\n  1. Source Code.\n\n  The \"source code\" for a work means the preferred form of the work\nfor making modifications to it.  \"Object code\" means any non-source\nform of a work.\n\n  A \"Standard Interface\" means an interface that either is an official\nstandard defined by a recognized standards body, or, in the case of\ninterfaces specified for a particular programming language, one that\nis widely used among developers working in that language.\n\n  The \"System Libraries\" of an executable work include anything, other\nthan the work as a whole, that (a) is included in the normal form of\npackaging a Major Component, but which is not part of that Major\nComponent, and (b) serves only to enable use of the work with that\nMajor Component, or to implement a Standard Interface for which an\nimplementation is available to the public in source code form.  
A\n\"Major Component\", in this context, means a major essential component\n(kernel, window system, and so on) of the specific operating system\n(if any) on which the executable work runs, or a compiler used to\nproduce the work, or an object code interpreter used to run it.\n\n  The \"Corresponding Source\" for a work in object code form means all\nthe source code needed to generate, install, and (for an executable\nwork) run the object code and to modify the work, including scripts to\ncontrol those activities.  However, it does not include the work's\nSystem Libraries, or general-purpose tools or generally available free\nprograms which are used unmodified in performing those activities but\nwhich are not part of the work.  For example, Corresponding Source\nincludes interface definition files associated with source files for\nthe work, and the source code for shared libraries and dynamically\nlinked subprograms that the work is specifically designed to require,\nsuch as by intimate data communication or control flow between those\nsubprograms and other parts of the work.\n\n  The Corresponding Source need not include anything that users\ncan regenerate automatically from other parts of the Corresponding\nSource.\n\n  The Corresponding Source for a work in source code form is that\nsame work.\n\n  2. Basic Permissions.\n\n  All rights granted under this License are granted for the term of\ncopyright on the Program, and are irrevocable provided the stated\nconditions are met.  This License explicitly affirms your unlimited\npermission to run the unmodified Program.  The output from running a\ncovered work is covered by this License only if the output, given its\ncontent, constitutes a covered work.  This License acknowledges your\nrights of fair use or other equivalent, as provided by copyright law.\n\n  You may make, run and propagate covered works that you do not\nconvey, without conditions so long as your license otherwise remains\nin force.  You may convey covered works to others for the sole purpose\nof having them make modifications exclusively for you, or provide you\nwith facilities for running those works, provided that you comply with\nthe terms of this License in conveying all material for which you do\nnot control copyright.  Those thus making or running the covered works\nfor you must do so exclusively on your behalf, under your direction\nand control, on terms that prohibit them from making any copies of\nyour copyrighted material outside their relationship with you.\n\n  Conveying under any other circumstances is permitted solely under\nthe conditions stated below.  Sublicensing is not allowed; section 10\nmakes it unnecessary.\n\n  3. Protecting Users' Legal Rights From Anti-Circumvention Law.\n\n  No covered work shall be deemed part of an effective technological\nmeasure under any applicable law fulfilling obligations under article\n11 of the WIPO copyright treaty adopted on 20 December 1996, or\nsimilar laws prohibiting or restricting circumvention of such\nmeasures.\n\n  When you convey a covered work, you waive any legal power to forbid\ncircumvention of technological measures to the extent such circumvention\nis effected by exercising rights under this License with respect to\nthe covered work, and you disclaim any intention to limit operation or\nmodification of the work as a means of enforcing, against the work's\nusers, your or third parties' legal rights to forbid circumvention of\ntechnological measures.\n\n  4. 
Conveying Verbatim Copies.\n\n  You may convey verbatim copies of the Program's source code as you\nreceive it, in any medium, provided that you conspicuously and\nappropriately publish on each copy an appropriate copyright notice;\nkeep intact all notices stating that this License and any\nnon-permissive terms added in accord with section 7 apply to the code;\nkeep intact all notices of the absence of any warranty; and give all\nrecipients a copy of this License along with the Program.\n\n  You may charge any price or no price for each copy that you convey,\nand you may offer support or warranty protection for a fee.\n\n  5. Conveying Modified Source Versions.\n\n  You may convey a work based on the Program, or the modifications to\nproduce it from the Program, in the form of source code under the\nterms of section 4, provided that you also meet all of these conditions:\n\n    a) The work must carry prominent notices stating that you modified\n    it, and giving a relevant date.\n\n    b) The work must carry prominent notices stating that it is\n    released under this License and any conditions added under section\n    7.  This requirement modifies the requirement in section 4 to\n    \"keep intact all notices\".\n\n    c) You must license the entire work, as a whole, under this\n    License to anyone who comes into possession of a copy.  This\n    License will therefore apply, along with any applicable section 7\n    additional terms, to the whole of the work, and all its parts,\n    regardless of how they are packaged.  This License gives no\n    permission to license the work in any other way, but it does not\n    invalidate such permission if you have separately received it.\n\n    d) If the work has interactive user interfaces, each must display\n    Appropriate Legal Notices; however, if the Program has interactive\n    interfaces that do not display Appropriate Legal Notices, your\n    work need not make them do so.\n\n  A compilation of a covered work with other separate and independent\nworks, which are not by their nature extensions of the covered work,\nand which are not combined with it such as to form a larger program,\nin or on a volume of a storage or distribution medium, is called an\n\"aggregate\" if the compilation and its resulting copyright are not\nused to limit the access or legal rights of the compilation's users\nbeyond what the individual works permit.  Inclusion of a covered work\nin an aggregate does not cause this License to apply to the other\nparts of the aggregate.\n\n  6. 
Conveying Non-Source Forms.\n\n  You may convey a covered work in object code form under the terms\nof sections 4 and 5, provided that you also convey the\nmachine-readable Corresponding Source under the terms of this License,\nin one of these ways:\n\n    a) Convey the object code in, or embodied in, a physical product\n    (including a physical distribution medium), accompanied by the\n    Corresponding Source fixed on a durable physical medium\n    customarily used for software interchange.\n\n    b) Convey the object code in, or embodied in, a physical product\n    (including a physical distribution medium), accompanied by a\n    written offer, valid for at least three years and valid for as\n    long as you offer spare parts or customer support for that product\n    model, to give anyone who possesses the object code either (1) a\n    copy of the Corresponding Source for all the software in the\n    product that is covered by this License, on a durable physical\n    medium customarily used for software interchange, for a price no\n    more than your reasonable cost of physically performing this\n    conveying of source, or (2) access to copy the\n    Corresponding Source from a network server at no charge.\n\n    c) Convey individual copies of the object code with a copy of the\n    written offer to provide the Corresponding Source.  This\n    alternative is allowed only occasionally and noncommercially, and\n    only if you received the object code with such an offer, in accord\n    with subsection 6b.\n\n    d) Convey the object code by offering access from a designated\n    place (gratis or for a charge), and offer equivalent access to the\n    Corresponding Source in the same way through the same place at no\n    further charge.  You need not require recipients to copy the\n    Corresponding Source along with the object code.  If the place to\n    copy the object code is a network server, the Corresponding Source\n    may be on a different server (operated by you or a third party)\n    that supports equivalent copying facilities, provided you maintain\n    clear directions next to the object code saying where to find the\n    Corresponding Source.  Regardless of what server hosts the\n    Corresponding Source, you remain obligated to ensure that it is\n    available for as long as needed to satisfy these requirements.\n\n    e) Convey the object code using peer-to-peer transmission, provided\n    you inform other peers where the object code and Corresponding\n    Source of the work are being offered to the general public at no\n    charge under subsection 6d.\n\n  A separable portion of the object code, whose source code is excluded\nfrom the Corresponding Source as a System Library, need not be\nincluded in conveying the object code work.\n\n  A \"User Product\" is either (1) a \"consumer product\", which means any\ntangible personal property which is normally used for personal, family,\nor household purposes, or (2) anything designed or sold for incorporation\ninto a dwelling.  In determining whether a product is a consumer product,\ndoubtful cases shall be resolved in favor of coverage.  For a particular\nproduct received by a particular user, \"normally used\" refers to a\ntypical or common use of that class of product, regardless of the status\nof the particular user or of the way in which the particular user\nactually uses, or expects or is expected to use, the product.  
A product\nis a consumer product regardless of whether the product has substantial\ncommercial, industrial or non-consumer uses, unless such uses represent\nthe only significant mode of use of the product.\n\n  \"Installation Information\" for a User Product means any methods,\nprocedures, authorization keys, or other information required to install\nand execute modified versions of a covered work in that User Product from\na modified version of its Corresponding Source.  The information must\nsuffice to ensure that the continued functioning of the modified object\ncode is in no case prevented or interfered with solely because\nmodification has been made.\n\n  If you convey an object code work under this section in, or with, or\nspecifically for use in, a User Product, and the conveying occurs as\npart of a transaction in which the right of possession and use of the\nUser Product is transferred to the recipient in perpetuity or for a\nfixed term (regardless of how the transaction is characterized), the\nCorresponding Source conveyed under this section must be accompanied\nby the Installation Information.  But this requirement does not apply\nif neither you nor any third party retains the ability to install\nmodified object code on the User Product (for example, the work has\nbeen installed in ROM).\n\n  The requirement to provide Installation Information does not include a\nrequirement to continue to provide support service, warranty, or updates\nfor a work that has been modified or installed by the recipient, or for\nthe User Product in which it has been modified or installed.  Access to a\nnetwork may be denied when the modification itself materially and\nadversely affects the operation of the network or violates the rules and\nprotocols for communication across the network.\n\n  Corresponding Source conveyed, and Installation Information provided,\nin accord with this section must be in a format that is publicly\ndocumented (and with an implementation available to the public in\nsource code form), and must require no special password or key for\nunpacking, reading or copying.\n\n  7. Additional Terms.\n\n  \"Additional permissions\" are terms that supplement the terms of this\nLicense by making exceptions from one or more of its conditions.\nAdditional permissions that are applicable to the entire Program shall\nbe treated as though they were included in this License, to the extent\nthat they are valid under applicable law.  If additional permissions\napply only to part of the Program, that part may be used separately\nunder those permissions, but the entire Program remains governed by\nthis License without regard to the additional permissions.\n\n  When you convey a copy of a covered work, you may at your option\nremove any additional permissions from that copy, or from any part of\nit.  (Additional permissions may be written to require their own\nremoval in certain cases when you modify the work.)  
You may place\nadditional permissions on material, added by you to a covered work,\nfor which you have or can give appropriate copyright permission.\n\n  Notwithstanding any other provision of this License, for material you\nadd to a covered work, you may (if authorized by the copyright holders of\nthat material) supplement the terms of this License with terms:\n\n    a) Disclaiming warranty or limiting liability differently from the\n    terms of sections 15 and 16 of this License; or\n\n    b) Requiring preservation of specified reasonable legal notices or\n    author attributions in that material or in the Appropriate Legal\n    Notices displayed by works containing it; or\n\n    c) Prohibiting misrepresentation of the origin of that material, or\n    requiring that modified versions of such material be marked in\n    reasonable ways as different from the original version; or\n\n    d) Limiting the use for publicity purposes of names of licensors or\n    authors of the material; or\n\n    e) Declining to grant rights under trademark law for use of some\n    trade names, trademarks, or service marks; or\n\n    f) Requiring indemnification of licensors and authors of that\n    material by anyone who conveys the material (or modified versions of\n    it) with contractual assumptions of liability to the recipient, for\n    any liability that these contractual assumptions directly impose on\n    those licensors and authors.\n\n  All other non-permissive additional terms are considered \"further\nrestrictions\" within the meaning of section 10.  If the Program as you\nreceived it, or any part of it, contains a notice stating that it is\ngoverned by this License along with a term that is a further\nrestriction, you may remove that term.  If a license document contains\na further restriction but permits relicensing or conveying under this\nLicense, you may add to a covered work material governed by the terms\nof that license document, provided that the further restriction does\nnot survive such relicensing or conveying.\n\n  If you add terms to a covered work in accord with this section, you\nmust place, in the relevant source files, a statement of the\nadditional terms that apply to those files, or a notice indicating\nwhere to find the applicable terms.\n\n  Additional terms, permissive or non-permissive, may be stated in the\nform of a separately written license, or stated as exceptions;\nthe above requirements apply either way.\n\n  8. Termination.\n\n  You may not propagate or modify a covered work except as expressly\nprovided under this License.  
Any attempt otherwise to propagate or\nmodify it is void, and will automatically terminate your rights under\nthis License (including any patent licenses granted under the third\nparagraph of section 11).\n\n  However, if you cease all violation of this License, then your\nlicense from a particular copyright holder is reinstated (a)\nprovisionally, unless and until the copyright holder explicitly and\nfinally terminates your license, and (b) permanently, if the copyright\nholder fails to notify you of the violation by some reasonable means\nprior to 60 days after the cessation.\n\n  Moreover, your license from a particular copyright holder is\nreinstated permanently if the copyright holder notifies you of the\nviolation by some reasonable means, this is the first time you have\nreceived notice of violation of this License (for any work) from that\ncopyright holder, and you cure the violation prior to 30 days after\nyour receipt of the notice.\n\n  Termination of your rights under this section does not terminate the\nlicenses of parties who have received copies or rights from you under\nthis License.  If your rights have been terminated and not permanently\nreinstated, you do not qualify to receive new licenses for the same\nmaterial under section 10.\n\n  9. Acceptance Not Required for Having Copies.\n\n  You are not required to accept this License in order to receive or\nrun a copy of the Program.  Ancillary propagation of a covered work\noccurring solely as a consequence of using peer-to-peer transmission\nto receive a copy likewise does not require acceptance.  However,\nnothing other than this License grants you permission to propagate or\nmodify any covered work.  These actions infringe copyright if you do\nnot accept this License.  Therefore, by modifying or propagating a\ncovered work, you indicate your acceptance of this License to do so.\n\n  10. Automatic Licensing of Downstream Recipients.\n\n  Each time you convey a covered work, the recipient automatically\nreceives a license from the original licensors, to run, modify and\npropagate that work, subject to this License.  You are not responsible\nfor enforcing compliance by third parties with this License.\n\n  An \"entity transaction\" is a transaction transferring control of an\norganization, or substantially all assets of one, or subdividing an\norganization, or merging organizations.  If propagation of a covered\nwork results from an entity transaction, each party to that\ntransaction who receives a copy of the work also receives whatever\nlicenses to the work the party's predecessor in interest had or could\ngive under the previous paragraph, plus a right to possession of the\nCorresponding Source of the work from the predecessor in interest, if\nthe predecessor has it or can get it with reasonable efforts.\n\n  You may not impose any further restrictions on the exercise of the\nrights granted or affirmed under this License.  For example, you may\nnot impose a license fee, royalty, or other charge for exercise of\nrights granted under this License, and you may not initiate litigation\n(including a cross-claim or counterclaim in a lawsuit) alleging that\nany patent claim is infringed by making, using, selling, offering for\nsale, or importing the Program or any portion of it.\n\n  11. Patents.\n\n  A \"contributor\" is a copyright holder who authorizes use under this\nLicense of the Program or a work on which the Program is based.  
The\nwork thus licensed is called the contributor's \"contributor version\".\n\n  A contributor's \"essential patent claims\" are all patent claims\nowned or controlled by the contributor, whether already acquired or\nhereafter acquired, that would be infringed by some manner, permitted\nby this License, of making, using, or selling its contributor version,\nbut do not include claims that would be infringed only as a\nconsequence of further modification of the contributor version.  For\npurposes of this definition, \"control\" includes the right to grant\npatent sublicenses in a manner consistent with the requirements of\nthis License.\n\n  Each contributor grants you a non-exclusive, worldwide, royalty-free\npatent license under the contributor's essential patent claims, to\nmake, use, sell, offer for sale, import and otherwise run, modify and\npropagate the contents of its contributor version.\n\n  In the following three paragraphs, a \"patent license\" is any express\nagreement or commitment, however denominated, not to enforce a patent\n(such as an express permission to practice a patent or covenant not to\nsue for patent infringement).  To \"grant\" such a patent license to a\nparty means to make such an agreement or commitment not to enforce a\npatent against the party.\n\n  If you convey a covered work, knowingly relying on a patent license,\nand the Corresponding Source of the work is not available for anyone\nto copy, free of charge and under the terms of this License, through a\npublicly available network server or other readily accessible means,\nthen you must either (1) cause the Corresponding Source to be so\navailable, or (2) arrange to deprive yourself of the benefit of the\npatent license for this particular work, or (3) arrange, in a manner\nconsistent with the requirements of this License, to extend the patent\nlicense to downstream recipients.  \"Knowingly relying\" means you have\nactual knowledge that, but for the patent license, your conveying the\ncovered work in a country, or your recipient's use of the covered work\nin a country, would infringe one or more identifiable patents in that\ncountry that you have reason to believe are valid.\n\n  If, pursuant to or in connection with a single transaction or\narrangement, you convey, or propagate by procuring conveyance of, a\ncovered work, and grant a patent license to some of the parties\nreceiving the covered work authorizing them to use, propagate, modify\nor convey a specific copy of the covered work, then the patent license\nyou grant is automatically extended to all recipients of the covered\nwork and works based on it.\n\n  A patent license is \"discriminatory\" if it does not include within\nthe scope of its coverage, prohibits the exercise of, or is\nconditioned on the non-exercise of one or more of the rights that are\nspecifically granted under this License.  
You may not convey a covered\nwork if you are a party to an arrangement with a third party that is\nin the business of distributing software, under which you make payment\nto the third party based on the extent of your activity of conveying\nthe work, and under which the third party grants, to any of the\nparties who would receive the covered work from you, a discriminatory\npatent license (a) in connection with copies of the covered work\nconveyed by you (or copies made from those copies), or (b) primarily\nfor and in connection with specific products or compilations that\ncontain the covered work, unless you entered into that arrangement,\nor that patent license was granted, prior to 28 March 2007.\n\n  Nothing in this License shall be construed as excluding or limiting\nany implied license or other defenses to infringement that may\notherwise be available to you under applicable patent law.\n\n  12. No Surrender of Others' Freedom.\n\n  If conditions are imposed on you (whether by court order, agreement or\notherwise) that contradict the conditions of this License, they do not\nexcuse you from the conditions of this License.  If you cannot convey a\ncovered work so as to satisfy simultaneously your obligations under this\nLicense and any other pertinent obligations, then as a consequence you may\nnot convey it at all.  For example, if you agree to terms that obligate you\nto collect a royalty for further conveying from those to whom you convey\nthe Program, the only way you could satisfy both those terms and this\nLicense would be to refrain entirely from conveying the Program.\n\n  13. Use with the GNU Affero General Public License.\n\n  Notwithstanding any other provision of this License, you have\npermission to link or combine any covered work with a work licensed\nunder version 3 of the GNU Affero General Public License into a single\ncombined work, and to convey the resulting work.  The terms of this\nLicense will continue to apply to the part which is the covered work,\nbut the special requirements of the GNU Affero General Public License,\nsection 13, concerning interaction through a network will apply to the\ncombination as such.\n\n  14. Revised Versions of this License.\n\n  The Free Software Foundation may publish revised and/or new versions of\nthe GNU General Public License from time to time.  Such new versions will\nbe similar in spirit to the present version, but may differ in detail to\naddress new problems or concerns.\n\n  Each version is given a distinguishing version number.  If the\nProgram specifies that a certain numbered version of the GNU General\nPublic License \"or any later version\" applies to it, you have the\noption of following the terms and conditions either of that numbered\nversion or of any later version published by the Free Software\nFoundation.  If the Program does not specify a version number of the\nGNU General Public License, you may choose any version ever published\nby the Free Software Foundation.\n\n  If the Program specifies that a proxy can decide which future\nversions of the GNU General Public License can be used, that proxy's\npublic statement of acceptance of a version permanently authorizes you\nto choose that version for the Program.\n\n  Later license versions may give you additional or different\npermissions.  However, no additional obligations are imposed on any\nauthor or copyright holder as a result of your choosing to follow a\nlater version.\n\n  15. 
Disclaimer of Warranty.\n\n  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY\nAPPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT\nHOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM \"AS IS\" WITHOUT WARRANTY\nOF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,\nTHE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR\nPURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM\nIS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF\nALL NECESSARY SERVICING, REPAIR OR CORRECTION.\n\n  16. Limitation of Liability.\n\n  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING\nWILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS\nTHE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY\nGENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE\nUSE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF\nDATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD\nPARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),\nEVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF\nSUCH DAMAGES.\n\n  17. Interpretation of Sections 15 and 16.\n\n  If the disclaimer of warranty and limitation of liability provided\nabove cannot be given local legal effect according to their terms,\nreviewing courts shall apply local law that most closely approximates\nan absolute waiver of all civil liability in connection with the\nProgram, unless a warranty or assumption of liability accompanies a\ncopy of the Program in return for a fee.\n\n                     END OF TERMS AND CONDITIONS\n\n            How to Apply These Terms to Your New Programs\n\n  If you develop a new program, and you want it to be of the greatest\npossible use to the public, the best way to achieve this is to make it\nfree software which everyone can redistribute and change under these terms.\n\n  To do so, attach the following notices to the program.  It is safest\nto attach them to the start of each source file to most effectively\nstate the exclusion of warranty; and each file should have at least\nthe \"copyright\" line and a pointer to where the full notice is found.\n\n    <one line to give the program's name and a brief idea of what it does.>\n    Copyright (C) <year>  <name of author>\n\n    This program is free software: you can redistribute it and/or modify\n    it under the terms of the GNU General Public License as published by\n    the Free Software Foundation, either version 3 of the License, or\n    (at your option) any later version.\n\n    This program is distributed in the hope that it will be useful,\n    but WITHOUT ANY WARRANTY; without even the implied warranty of\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n    GNU General Public License for more details.\n\n    You should have received a copy of the GNU General Public License\n    along with this program.  
If not, see <https://www.gnu.org/licenses/>.\n\nAlso add information on how to contact you by electronic and paper mail.\n\n  If the program does terminal interaction, make it output a short\nnotice like this when it starts in an interactive mode:\n\n    <program>  Copyright (C) <year>  <name of author>\n    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.\n    This is free software, and you are welcome to redistribute it\n    under certain conditions; type `show c' for details.\n\nThe hypothetical commands `show w' and `show c' should show the appropriate\nparts of the General Public License.  Of course, your program's commands\nmight be different; for a GUI interface, you would use an \"about box\".\n\n  You should also get your employer (if you work as a programmer) or school,\nif any, to sign a \"copyright disclaimer\" for the program, if necessary.\nFor more information on this, and how to apply and follow the GNU GPL, see\n<https://www.gnu.org/licenses/>.\n\n  The GNU General Public License does not permit incorporating your program\ninto proprietary programs.  If your program is a subroutine library, you\nmay consider it more useful to permit linking proprietary applications with\nthe library.  If this is what you want to do, use the GNU Lesser General\nPublic License instead of this License.  But first, please read\n<https://www.gnu.org/licenses/why-not-lgpl.html>.\n"
  },
  {
    "path": "LICENSE_NVIDIA.txt",
    "content": "Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.\n\n\nAttribution-NonCommercial 4.0 International\n\n=======================================================================\n\nCreative Commons Corporation (\"Creative Commons\") is not a law firm and\ndoes not provide legal services or legal advice. Distribution of\nCreative Commons public licenses does not create a lawyer-client or\nother relationship. Creative Commons makes its licenses and related\ninformation available on an \"as-is\" basis. Creative Commons gives no\nwarranties regarding its licenses, any material licensed under their\nterms and conditions, or any related information. Creative Commons\ndisclaims all liability for damages resulting from their use to the\nfullest extent possible.\n\nUsing Creative Commons Public Licenses\n\nCreative Commons public licenses provide a standard set of terms and\nconditions that creators and other rights holders may use to share\noriginal works of authorship and other material subject to copyright\nand certain other rights specified in the public license below. The\nfollowing considerations are for informational purposes only, are not\nexhaustive, and do not form part of our licenses.\n\n     Considerations for licensors: Our public licenses are\n     intended for use by those authorized to give the public\n     permission to use material in ways otherwise restricted by\n     copyright and certain other rights. Our licenses are\n     irrevocable. Licensors should read and understand the terms\n     and conditions of the license they choose before applying it.\n     Licensors should also secure all rights necessary before\n     applying our licenses so that the public can reuse the\n     material as expected. Licensors should clearly mark any\n     material not subject to the license. This includes other CC-\n     licensed material, or material used under an exception or\n     limitation to copyright. More considerations for licensors:\n    wiki.creativecommons.org/Considerations_for_licensors\n\n     Considerations for the public: By using one of our public\n     licenses, a licensor grants the public permission to use the\n     licensed material under specified terms and conditions. If\n     the licensor's permission is not necessary for any reason--for\n     example, because of any applicable exception or limitation to\n     copyright--then that use is not regulated by the license. Our\n     licenses grant only permissions under copyright and certain\n     other rights that a licensor has authority to grant. Use of\n     the licensed material may still be restricted for other\n     reasons, including because others have copyright or other\n     rights in the material. A licensor may make special requests,\n     such as asking that all changes be marked or described.\n     Although not required by our licenses, you are encouraged to\n     respect those requests where reasonable. More_considerations\n     for the public: \n    wiki.creativecommons.org/Considerations_for_licensees\n\n=======================================================================\n\nCreative Commons Attribution-NonCommercial 4.0 International Public\nLicense\n\nBy exercising the Licensed Rights (defined below), You accept and agree\nto be bound by the terms and conditions of this Creative Commons\nAttribution-NonCommercial 4.0 International Public License (\"Public\nLicense\"). 
To the extent this Public License may be interpreted as a\ncontract, You are granted the Licensed Rights in consideration of Your\nacceptance of these terms and conditions, and the Licensor grants You\nsuch rights in consideration of benefits the Licensor receives from\nmaking the Licensed Material available under these terms and\nconditions.\n\n\nSection 1 -- Definitions.\n\n  a. Adapted Material means material subject to Copyright and Similar\n     Rights that is derived from or based upon the Licensed Material\n     and in which the Licensed Material is translated, altered,\n     arranged, transformed, or otherwise modified in a manner requiring\n     permission under the Copyright and Similar Rights held by the\n     Licensor. For purposes of this Public License, where the Licensed\n     Material is a musical work, performance, or sound recording,\n     Adapted Material is always produced where the Licensed Material is\n     synched in timed relation with a moving image.\n\n  b. Adapter's License means the license You apply to Your Copyright\n     and Similar Rights in Your contributions to Adapted Material in\n     accordance with the terms and conditions of this Public License.\n\n  c. Copyright and Similar Rights means copyright and/or similar rights\n     closely related to copyright including, without limitation,\n     performance, broadcast, sound recording, and Sui Generis Database\n     Rights, without regard to how the rights are labeled or\n     categorized. For purposes of this Public License, the rights\n     specified in Section 2(b)(1)-(2) are not Copyright and Similar\n     Rights.\n  d. Effective Technological Measures means those measures that, in the\n     absence of proper authority, may not be circumvented under laws\n     fulfilling obligations under Article 11 of the WIPO Copyright\n     Treaty adopted on December 20, 1996, and/or similar international\n     agreements.\n\n  e. Exceptions and Limitations means fair use, fair dealing, and/or\n     any other exception or limitation to Copyright and Similar Rights\n     that applies to Your use of the Licensed Material.\n\n  f. Licensed Material means the artistic or literary work, database,\n     or other material to which the Licensor applied this Public\n     License.\n\n  g. Licensed Rights means the rights granted to You subject to the\n     terms and conditions of this Public License, which are limited to\n     all Copyright and Similar Rights that apply to Your use of the\n     Licensed Material and that the Licensor has authority to license.\n\n  h. Licensor means the individual(s) or entity(ies) granting rights\n     under this Public License.\n\n  i. NonCommercial means not primarily intended for or directed towards\n     commercial advantage or monetary compensation. For purposes of\n     this Public License, the exchange of the Licensed Material for\n     other material subject to Copyright and Similar Rights by digital\n     file-sharing or similar means is NonCommercial provided there is\n     no payment of monetary compensation in connection with the\n     exchange.\n\n  j. 
Share means to provide material to the public by any means or\n     process that requires permission under the Licensed Rights, such\n     as reproduction, public display, public performance, distribution,\n     dissemination, communication, or importation, and to make material\n     available to the public including in ways that members of the\n     public may access the material from a place and at a time\n     individually chosen by them.\n\n  k. Sui Generis Database Rights means rights other than copyright\n     resulting from Directive 96/9/EC of the European Parliament and of\n     the Council of 11 March 1996 on the legal protection of databases,\n     as amended and/or succeeded, as well as other essentially\n     equivalent rights anywhere in the world.\n\n  l. You means the individual or entity exercising the Licensed Rights\n     under this Public License. Your has a corresponding meaning.\n\n\nSection 2 -- Scope.\n\n  a. License grant.\n\n       1. Subject to the terms and conditions of this Public License,\n          the Licensor hereby grants You a worldwide, royalty-free,\n          non-sublicensable, non-exclusive, irrevocable license to\n          exercise the Licensed Rights in the Licensed Material to:\n\n            a. reproduce and Share the Licensed Material, in whole or\n               in part, for NonCommercial purposes only; and\n\n            b. produce, reproduce, and Share Adapted Material for\n               NonCommercial purposes only.\n\n       2. Exceptions and Limitations. For the avoidance of doubt, where\n          Exceptions and Limitations apply to Your use, this Public\n          License does not apply, and You do not need to comply with\n          its terms and conditions.\n\n       3. Term. The term of this Public License is specified in Section\n          6(a).\n\n       4. Media and formats; technical modifications allowed. The\n          Licensor authorizes You to exercise the Licensed Rights in\n          all media and formats whether now known or hereafter created,\n          and to make technical modifications necessary to do so. The\n          Licensor waives and/or agrees not to assert any right or\n          authority to forbid You from making technical modifications\n          necessary to exercise the Licensed Rights, including\n          technical modifications necessary to circumvent Effective\n          Technological Measures. For purposes of this Public License,\n          simply making modifications authorized by this Section 2(a)\n          (4) never produces Adapted Material.\n\n       5. Downstream recipients.\n\n            a. Offer from the Licensor -- Licensed Material. Every\n               recipient of the Licensed Material automatically\n               receives an offer from the Licensor to exercise the\n               Licensed Rights under the terms and conditions of this\n               Public License.\n\n            b. No downstream restrictions. You may not offer or impose\n               any additional or different terms or conditions on, or\n               apply any Effective Technological Measures to, the\n               Licensed Material if doing so restricts exercise of the\n               Licensed Rights by any recipient of the Licensed\n               Material.\n\n       6. No endorsement. 
Nothing in this Public License constitutes or\n          may be construed as permission to assert or imply that You\n          are, or that Your use of the Licensed Material is, connected\n          with, or sponsored, endorsed, or granted official status by,\n          the Licensor or others designated to receive attribution as\n          provided in Section 3(a)(1)(A)(i).\n\n  b. Other rights.\n\n       1. Moral rights, such as the right of integrity, are not\n          licensed under this Public License, nor are publicity,\n          privacy, and/or other similar personality rights; however, to\n          the extent possible, the Licensor waives and/or agrees not to\n          assert any such rights held by the Licensor to the limited\n          extent necessary to allow You to exercise the Licensed\n          Rights, but not otherwise.\n\n       2. Patent and trademark rights are not licensed under this\n          Public License.\n\n       3. To the extent possible, the Licensor waives any right to\n          collect royalties from You for the exercise of the Licensed\n          Rights, whether directly or through a collecting society\n          under any voluntary or waivable statutory or compulsory\n          licensing scheme. In all other cases the Licensor expressly\n          reserves any right to collect such royalties, including when\n          the Licensed Material is used other than for NonCommercial\n          purposes.\n\n\nSection 3 -- License Conditions.\n\nYour exercise of the Licensed Rights is expressly made subject to the\nfollowing conditions.\n\n  a. Attribution.\n\n       1. If You Share the Licensed Material (including in modified\n          form), You must:\n\n            a. retain the following if it is supplied by the Licensor\n               with the Licensed Material:\n\n                 i. identification of the creator(s) of the Licensed\n                    Material and any others designated to receive\n                    attribution, in any reasonable manner requested by\n                    the Licensor (including by pseudonym if\n                    designated);\n\n                ii. a copyright notice;\n\n               iii. a notice that refers to this Public License;\n\n                iv. a notice that refers to the disclaimer of\n                    warranties;\n\n                 v. a URI or hyperlink to the Licensed Material to the\n                    extent reasonably practicable;\n\n            b. indicate if You modified the Licensed Material and\n               retain an indication of any previous modifications; and\n\n            c. indicate the Licensed Material is licensed under this\n               Public License, and include the text of, or the URI or\n               hyperlink to, this Public License.\n\n       2. You may satisfy the conditions in Section 3(a)(1) in any\n          reasonable manner based on the medium, means, and context in\n          which You Share the Licensed Material. For example, it may be\n          reasonable to satisfy the conditions by providing a URI or\n          hyperlink to a resource that includes the required\n          information.\n\n       3. If requested by the Licensor, You must remove any of the\n          information required by Section 3(a)(1)(A) to the extent\n          reasonably practicable.\n\n       4. 
If You Share Adapted Material You produce, the Adapter's\n          License You apply must not prevent recipients of the Adapted\n          Material from complying with this Public License.\n\n\nSection 4 -- Sui Generis Database Rights.\n\nWhere the Licensed Rights include Sui Generis Database Rights that\napply to Your use of the Licensed Material:\n\n  a. for the avoidance of doubt, Section 2(a)(1) grants You the right\n     to extract, reuse, reproduce, and Share all or a substantial\n     portion of the contents of the database for NonCommercial purposes\n     only;\n\n  b. if You include all or a substantial portion of the database\n     contents in a database in which You have Sui Generis Database\n     Rights, then the database in which You have Sui Generis Database\n     Rights (but not its individual contents) is Adapted Material; and\n\n  c. You must comply with the conditions in Section 3(a) if You Share\n     all or a substantial portion of the contents of the database.\n\nFor the avoidance of doubt, this Section 4 supplements and does not\nreplace Your obligations under this Public License where the Licensed\nRights include other Copyright and Similar Rights.\n\n\nSection 5 -- Disclaimer of Warranties and Limitation of Liability.\n\n  a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE\n     EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS\n     AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF\n     ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,\n     IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,\n     WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR\n     PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,\n     ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT\n     KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT\n     ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.\n\n  b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE\n     TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,\n     NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,\n     INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,\n     COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR\n     USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN\n     ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR\n     DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR\n     IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.\n\n  c. The disclaimer of warranties and limitation of liability provided\n     above shall be interpreted in a manner that, to the extent\n     possible, most closely approximates an absolute disclaimer and\n     waiver of all liability.\n\n\nSection 6 -- Term and Termination.\n\n  a. This Public License applies for the term of the Copyright and\n     Similar Rights licensed here. However, if You fail to comply with\n     this Public License, then Your rights under this Public License\n     terminate automatically.\n\n  b. Where Your right to use the Licensed Material has terminated under\n     Section 6(a), it reinstates:\n\n       1. automatically as of the date the violation is cured, provided\n          it is cured within 30 days of Your discovery of the\n          violation; or\n\n       2. 
upon express reinstatement by the Licensor.\n\n     For the avoidance of doubt, this Section 6(b) does not affect any\n     right the Licensor may have to seek remedies for Your violations\n     of this Public License.\n\n  c. For the avoidance of doubt, the Licensor may also offer the\n     Licensed Material under separate terms or conditions or stop\n     distributing the Licensed Material at any time; however, doing so\n     will not terminate this Public License.\n\n  d. Sections 1, 5, 6, 7, and 8 survive termination of this Public\n     License.\n\n\nSection 7 -- Other Terms and Conditions.\n\n  a. The Licensor shall not be bound by any additional or different\n     terms or conditions communicated by You unless expressly agreed.\n\n  b. Any arrangements, understandings, or agreements regarding the\n     Licensed Material not stated herein are separate from and\n     independent of the terms and conditions of this Public License.\n\n\nSection 8 -- Interpretation.\n\n  a. For the avoidance of doubt, this Public License does not, and\n     shall not be interpreted to, reduce, limit, restrict, or impose\n     conditions on any use of the Licensed Material that could lawfully\n     be made without permission under this Public License.\n\n  b. To the extent possible, if any provision of this Public License is\n     deemed unenforceable, it shall be automatically reformed to the\n     minimum extent necessary to make it enforceable. If the provision\n     cannot be reformed, it shall be severed from this Public License\n     without affecting the enforceability of the remaining terms and\n     conditions.\n\n  c. No term or condition of this Public License will be waived and no\n     failure to comply consented to unless expressly agreed to by the\n     Licensor.\n\n  d. Nothing in this Public License constitutes or may be interpreted\n     as a limitation upon, or waiver of, any privileges and immunities\n     that apply to the Licensor or You, including from the legal\n     processes of any jurisdiction or authority.\n\n=======================================================================\n\nCreative Commons is not a party to its public\nlicenses. Notwithstanding, Creative Commons may elect to apply one of\nits public licenses to material it publishes and in those instances\nwill be considered the \"Licensor.\" The text of the Creative Commons\npublic licenses is dedicated to the public domain under the CC0 Public\nDomain Dedication. Except for the limited purpose of indicating that\nmaterial is shared under a Creative Commons public license or as\notherwise permitted by the Creative Commons policies published at\ncreativecommons.org/policies, Creative Commons does not authorize the\nuse of the trademark \"Creative Commons\" or any other trademark or logo\nof Creative Commons without its prior written consent including,\nwithout limitation, in connection with any unauthorized modifications\nto any of its public licenses or any other arrangements,\nunderstandings, or agreements concerning use of licensed material. For\nthe avoidance of doubt, this paragraph does not form part of the\npublic licenses.\n\nCreative Commons may be contacted at creativecommons.org.\n"
  },
  {
    "path": "README.md",
    "content": "# The model is now available [HERE](https://ibug.doc.ic.ac.uk/resources/tbgan/) \n(Requires to sign End User License Agreement)\n\n\n\n\n# [Synthesizing Coupled 3D Face Modalities by Trunk-Branch Generative Adversarial Networks](https://barisgecer.github.io/files/gecer_tbgan_arxiv.pdf)\n[ArXiv](https://arxiv.org/pdf/1909.02215.pdf), [Supplementary Video](https://www.youtube.com/watch?v=wehBCetIb7E)\n\n [Baris Gecer](http://barisgecer.github.io)<sup> 1,2</sup>, [Alexander Lattas](https://alexanderlattas.com/)<sup> 1,2</sup>, [Stylianos Ploumpis](https://ibug.doc.ic.ac.uk/people/sploumpis)<sup> 1,2</sup>, [Jiankang Deng](https://jiankangdeng.github.io/)<sup> 1,2</sup>, [Athanasios Papaioannou](https://ibug.doc.ic.ac.uk/people/apapaioannou)<sup> 1,2</sup>, [Stylianos Moschoglou](https://ibug.doc.ic.ac.uk/people/smoschoglou)<sup> 1,2</sup>, & [Stefanos Zafeiriou](https://wp.doc.ic.ac.uk/szafeiri/)<sup> 1,2</sup>\n <br/>\n <sup>1 </sup>Imperial College London\n <br/>\n <sup>2 </sup>FaceSoft.io\n\n\n#### This repo provides Tensorflow implementation of above paper for training\n\n## Abstract\n\n<p align=\"center\"><img width=\"100%\" src=\"representative.png\" /></p>\n\n\nGenerating realistic 3D faces is of high importance for computer graphics and computer vision applications. Generally, research on 3D face generation revolves around linear statistical models of the facial surface. Nevertheless, these models cannot represent faithfully either the facial texture or the normals of the face, which are very crucial for photo-realistic face synthesis. Recently, it was demonstrated that Generative Adversarial Networks (GANs) can be used for generating high-quality textures of faces. Nevertheless, the generation process either omits the geometry and normals, or independent processes are used to produce 3D shape information. In this paper, we present the first methodology that generates high-quality texture, shape, and normals jointly, which can be used for photo-realistic synthesis. To do so, we propose a novel GAN that can generate data from different modalities while exploiting their correlations. Furthermore, we demonstrate how we can condition the generation on the expression and create faces with various facial expressions. The qualitative results shown in this paper are compressed due to size limitations, full-resolution results and the accompanying video can be found in the supplementary documents. \n\n<br/>\n\n\n## Supplementary Video\n\n[<p align=\"center\"><img width=\"100%\" alt=\"Watch the video\" title=\"Click to Watch on YouTube\" src=\"https://img.youtube.com/vi/wehBCetIb7E/sddefault.jpg\" /></p>](https://www.youtube.com/watch?v=wehBCetIb7E)\n\n\n## Testing the Model\n\n- Download the model after signing the agreement and place it under '/results' directory\n- Install menpo3d by\n> pip install menpo3d\n- And then Run the test script:\n> python test.py\n\n\n## Preparing datasets for training\n\nThe TBGAN code repository contains a command-line tool for recreating bit-exact replicas of the datasets that we used in the paper. 
The tool also provides various utilities for operating on the datasets:\n\n```\nusage: dataset_tool.py [-h] <command> ...\n\n    display             Display images in dataset.\n    extract             Extract images from dataset.\n    compare             Compare two datasets.\n    create_from_pkl_img_norm  Create dataset from a directory full of texture, normals and shape.\n\nType \"dataset_tool.py <command> -h\" for more information.\nPlease ignore other functions. The main function to prepare tf_records is 'create_from_pkl_img_norm'\n```\n\nThe datasets are represented by directories containing the same image data in several resolutions to enable efficient streaming. There is a separate `*.tfrecords` file for each resolution, and if the dataset contains labels, they are stored in a separate file as well:\n\n```\n> python dataset_tool.py create_from_pkl_img_norm datasets/tf_records datasets/texture(/*.png) dataset/shape(/*.pkl) dataset/normals(/*.pkl)\n```\n\nThe ```create_*``` commands take the standard version of a given dataset as input and produce the corresponding `*.tfrecords` files as output.\n\n\n## Training networks\n```\nPlease see how to start training with a PROGAN\nAdditionally, you will need to add \n> \"dynamic_range=[-1,1],dtype = 'float32'\" \narguments to 'dataset' EasyDict() in config.py\n```\n\nOnce the necessary datasets are set up, you can proceed to train your own networks. The general procedure is as follows:\n\n1. Edit `config.py` to specify the dataset and training configuration by uncommenting/editing specific lines.\n2. Run the training script with `python train.py`.\n3. The results are written into a newly created subdirectory under `config.result_dir`\n4. Wait several days (or weeks) for the training to converge, and analyze the results.\n\nBy default, `config.py` is configured to train a 1024x1024 network for CelebA-HQ using a single-GPU. This is expected to take about two weeks even on the highest-end NVIDIA GPUs. The key to enabling faster training is to employ multiple GPUs and/or go for a lower-resolution dataset. To this end, `config.py` contains several examples for commonly used datasets, as well as a set of \"configuration presets\" for multi-GPU training. All of the presets are expected to yield roughly the same image quality for CelebA-HQ, but their total training time can vary considerably:\n\n* `preset-v1-1gpu`: Original config that was used to produce the CelebA-HQ and LSUN results shown in the paper. Expected to take about 1 month on NVIDIA Tesla V100.\n* `preset-v2-1gpu`: Optimized config that converges considerably faster than the original one. Expected to take about 2 weeks on 1xV100.\n* `preset-v2-2gpus`: Optimized config for 2 GPUs. Takes about 1 week on 2xV100.\n* `preset-v2-4gpus`: Optimized config for 4 GPUs. Takes about 3 days on 4xV100.\n* `preset-v2-8gpus`: Optimized config for 8 GPUs. Takes about 2 days on 8xV100.\n\nFor reference, the expected output of each configuration preset for CelebA-HQ can be found in [`networks/tensorflow-version/example_training_runs`](https://drive.google.com/open?id=1A9SKoQ7Xu2fqK22GHdMw8LZTh6qLvR7H)\n\nOther noteworthy config options:\n\n* `fp16`: Enable [FP16 mixed-precision training](http://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html) to reduce the training times even further. 
The actual speedup is heavily dependent on GPU architecture and cuDNN version, and it can be expected to increase considerably in the future.\n* `BENCHMARK`: Quickly iterate through the resolutions to measure the raw training performance.\n* `BENCHMARK0`: Same as `BENCHMARK`, but only use the highest resolution.\n* `syn1024rgb`: Synthetic 1024x1024 dataset consisting of just black images. Useful for benchmarking.\n* `VERBOSE`: Save image and network snapshots very frequently to facilitate debugging.\n* `GRAPH` and `HIST`: Include additional data in the TensorBoard report.\n\n## Analyzing results\n\nTraining results can be analyzed in several ways:\n\n* **Manual inspection**: The training script saves a snapshot of randomly generated images at regular intervals in `fakes*.png` and reports the overall progress in `log.txt`.\n* **TensorBoard**: The training script also exports various running statistics in a `*.tfevents` file that can be visualized in TensorBoard with `tensorboard --logdir <result_subdir>`.\n* **Generating images and videos**: At the end of `config.py`, there are several pre-defined configs to launch utility scripts (`generate_*`). For example:\n  * Suppose you have an ongoing training run titled `010-pgan-celebahq-preset-v1-1gpu-fp32`, and you want to generate a video of random interpolations for the latest snapshot.\n  * Uncomment the `generate_interpolation_video` line in `config.py`, replace `run_id=10`, and run `python train.py`\n  * The script will automatically locate the latest network snapshot and create a new result directory containing a single MP4 file.\n* **Quality metrics**: Similar to the previous example, `config.py` also contains pre-defined configs to compute various quality metrics (Sliced Wasserstein distance, Fréchet inception distance, etc.) for an existing training run. The metrics are computed for each network snapshot in succession and stored in `metric-*.txt` in the original result directory.\n\n\n## Acknowledgement\nBaris Gecer is supported by the Turkish Ministry of National Education, Stylianos Ploumpis by the EPSRC Project EP/N007743/1 (FACER2VM), and Stefanos Zafeiriou by EPSRC Fellowship DEFORM (EP/S010203/1).\n\nCode borrows heavily from NVIDIA's [PRO-GAN implementation](https://github.com/tkarras/progressive_growing_of_gans), please check and comply with its [License](https://github.com/tkarras/progressive_growing_of_gans/blob/master/LICENSE.txt). and cite their paper:\n```\n@inproceedings{karras2018progressive,\n  title={Progressive Growing of GANs for Improved Quality, Stability, and Variation},\n  author={Karras, Tero and Aila, Timo and Laine, Samuli and Lehtinen, Jaakko},\n  booktitle={International Conference on Learning Representations},\n  year={2018}\n}\n```\n\n## Citation\nIf you find this work is useful for your research, please cite our [paper](https://arxiv.org/abs/1909.02215):\n\n```\n@inproceedings{gecer2020tbgan,\n  title={Synthesizing Coupled 3D Face Modalities by Trunk-Branch Generative Adversarial Networks},\n  author={{Gecer}, Baris and {Lattas}, Alexander and {Ploumpis}, Stylianos and\n         {Deng}, Jiankang and {Papaioannou}, Athanasios and\n         {Moschoglou}, Stylianos and {Zafeiriou}, Stefanos},\n  booktitle={Proceedings of the European conference on computer vision (ECCV)},\n  year={2020},\n  organization={Springer}\n  doi = {10.1007/978-3-030-58526-6_25}\n}\n```\n"
  },
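A minimal sketch of the `config.py` edit described in the README's "Training networks" section. `EasyDict` is copied verbatim from `config.py`; the `tfrecord_dir` value is a placeholder for whatever directory `dataset_tool.py` produced.

```python
# Minimal sketch of the config.py dataset edit the README asks for.
class EasyDict(dict):
    # dict subclass allowing attribute-style access (copied from config.py).
    def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs)
    def __getattr__(self, name): return self[name]
    def __setattr__(self, name, value): self[name] = value
    def __delattr__(self, name): del self[name]

# Float32 UV data spans [-1, 1] instead of the uint8 default [0, 255],
# hence the extra dynamic_range/dtype arguments. The tfrecord_dir below
# is a placeholder.
dataset = EasyDict(tfrecord_dir='datasets/tf_records',
                   dynamic_range=[-1, 1], dtype='float32')
dataset.max_label_size = 'full'  # keep expression labels for conditioning
```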
  {
    "path": "UV_manipulation_2.py",
    "content": "import os\n\nimport menpo3d.io as m3io\nfrom menpo3d import io\nimport menpo.io as mio\nimport numpy as np\nfrom menpo.shape import TriMesh, ColouredTriMesh, PointCloud, TexturedTriMesh\nfrom menpo.image import Image\nfrom scipy.interpolate import NearestNDInterpolator\nfrom menpo.image import MaskedImage\nfrom functools import lru_cache\nfrom menpo.transform import AlignmentSimilarity\n#-------------------------------------------------------------------#\n#Full face load dictionaries\n@lru_cache()\ndef load_512_ifo_dict():\n    return mio.import_pickle('512_UV_dict.pkl')\n\n#-------------------------------------------------------------------#\n@lru_cache()\ndef load_mean():\n    return mio.import_pickle('./pkls/all_all_all_mean.pkl'),mio.import_pickle('./pkls/all_all_all_lands_ids.pkl')\n#-------------------------------------------------------------------#\n\ndef alignment(mesh):\n    if mesh.n_points==53215:\n        template, idxs= load_mean()\n\n        \n    alignment = AlignmentSimilarity(PointCloud(mesh.points[idxs]), PointCloud(template.points[idxs]))\n    aligned_mesh = alignment.apply(mesh)\n    return aligned_mesh\n\n\ndef import_uv_info(instance, res, uv_layout='oval', topology='full'):\n    if np.logical_or(type(instance).__name__=='TriMesh',type(instance).__name__=='TexturedTriMesh'):\n        if instance.n_points == 53215:\n            topology = 'full'\n        else:\n            raise ValueError('Unknown topology')\n\n        if topology == 'full':\n            if res==512:\n                if uv_layout=='oval':\n                    info_dict = load_512_ifo_dict()\n                elif uv_layout=='stretch':\n                    info_dict = load_512_ifo_dict_strech()\n            else:\n                raise ValueError('Wrong resolution')\n    elif type(instance).__name__=='Image':\n        if topology == 'full':\n            if res==512:\n                if uv_layout=='oval':\n                    info_dict = load_512_ifo_dict()\n                elif uv_layout=='stretch':\n                    info_dict = load_512_ifo_dict_strech()\n            else:\n                raise ValueError('Wrong resolution')\n    return info_dict  \n\ndef from_UV_2_3D(uv, uv_layout='oval', topology='full', plot=False):\n    res = uv.shape[0]\n    info_dict = import_uv_info(uv,res,uv_layout=uv_layout,topology=topology)\n        \n    tmask = info_dict['tmask']\n    tc_ps = info_dict['tcoords_pixel_scaled']\n    tmask_im =  info_dict['tmask_image']\n    trilist = info_dict['trilist']\n    \n    #uv = interpolaton_of_uv_xyz(uv,tmask).as_unmasked()\n    x = uv.pixels[0][(tc_ps.points.astype(int).T[0,:], tc_ps.points.astype(int).T[1,:])]\n    y = uv.pixels[1][(tc_ps.points.astype(int).T[0,:], tc_ps.points.astype(int).T[1,:])] \n    z = uv.pixels[2][(tc_ps.points.astype(int).T[0,:], tc_ps.points.astype(int).T[1,:])]\n    points = np.hstack((x.T[:,None],y.T[:,None],z.T[:,None]))\n    if plot is True:\n        TriMesh(points,trilist).view()\n    return TriMesh(points,trilist)"
  },
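A hypothetical usage sketch for `from_UV_2_3D` and `alignment` above; the pickle path, and the assumption that it holds a 512x512 menpo `Image` whose three channels carry x/y/z coordinates, are illustrative rather than part of the repo.

```python
# Hypothetical usage of from_UV_2_3D; the pickle path below is a placeholder.
import menpo.io as mio
from UV_manipulation_2 import from_UV_2_3D, alignment

uv = mio.import_pickle('dataset/shape/subject_0001.pkl')   # 512x512 menpo Image, channels = (x, y, z)
mesh = from_UV_2_3D(uv, uv_layout='oval', topology='full') # -> TriMesh with 53215 vertices
mesh = alignment(mesh)                                     # rigid similarity alignment to the mean template
print(mesh.n_points, mesh.trilist.shape)
```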
  {
    "path": "__init__.py",
    "content": "__all__ = [\"config\", \"dataset\", \"dataset_tool\",\"legacy\",\"loss\",\"misc\",\"myutil\",\"networks\",\"tfutil\",\"util_scripts\",\"train\"]\nimport tfutil\nimport util_scripts"
  },
  {
    "path": "config.py",
    "content": "# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.\n#\n# This work is licensed under the Creative Commons Attribution-NonCommercial\n# 4.0 International License. To view a copy of this license, visit\n# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to\n# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.\n\n#----------------------------------------------------------------------------\n# Convenience class that behaves exactly like dict(), but allows accessing\n# the keys and values using the attribute syntax, i.e., \"mydict.key = value\".\n\n\nclass EasyDict(dict):\n    def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs)\n    def __getattr__(self, name): return self[name]\n    def __setattr__(self, name, value): self[name] = value\n    def __delattr__(self, name): del self[name]\n\n#----------------------------------------------------------------------------\n# Paths.\n\ndata_dir = './'\nresult_dir = './results'\n\n#----------------------------------------------------------------------------\n# TensorFlow options.\n\ntf_config = EasyDict()  # TensorFlow session config, set by tfutil.init_tf().\nenv = EasyDict()        # Environment variables, set by the main program in train.py.\n\ntf_config['graph_options.place_pruned_graph']   = False      # False (default) = Check that all ops are available on the designated device. True = Skip the check for ops that are not used.\ntf_config['gpu_options.allow_growth']          = False     # False (default) = Allocate all GPU memory at the beginning. True = Allocate only as much GPU memory as needed.\n#env.CUDA_VISIBLE_DEVICES                       = '0'       # Unspecified (default) = Use all available GPUs. List of ints = CUDA device numbers to use.\nenv.TF_CPP_MIN_LOG_LEVEL                        = '1'       # 0 (default) = Print all available debug info from TensorFlow. 
1 = Print warnings and errors, but disable debug info.\n\n#----------------------------------------------------------------------------\n# Official training configs, targeted mainly for CelebA-HQ.\n# To run, comment/uncomment the lines as appropriate and launch train.py.\n\ndesc        = 'pgan'                                        # Description string included in result subdir name.\nrandom_seed = 1000                                          # Global random seed.\ndataset     = EasyDict()                                    # Options for dataset.load_dataset().\ntrain       = EasyDict(func='train.train_progressive_gan')  # Options for main training func.\nG           = EasyDict(func='networks.G_paper')             # Options for generator network.\nD           = EasyDict(func='networks.D_paper')             # Options for discriminator network.\nG_opt       = EasyDict(beta1=0.0, beta2=0.99, epsilon=1e-8) # Options for generator optimizer.\nD_opt       = EasyDict(beta1=0.0, beta2=0.99, epsilon=1e-8) # Options for discriminator optimizer.\nG_loss      = EasyDict(func='loss.G_wgan_acgan')            # Options for generator loss.\nD_loss      = EasyDict(func='loss.D_wgangp_acgan')          # Options for discriminator loss.\nsched       = EasyDict()                                    # Options for train.TrainingSchedule.\ngrid        = EasyDict(size='1080p', layout='random')       # Options for train.setup_snapshot_image_grid().\n\n# desc += '-mein3d_texture_uv_tf_512';            dataset = EasyDict(tfrecord_dir='mein3d_texture_uv_tf_512');\n# desc += '-mein3d_shape_uv_tf_512_bary';            dataset = EasyDict(tfrecord_dir='mein3d_shape_uv_tf_512_bary',dynamic_range=[-1,1],dtype = 'float32');\ndesc += '-3dmd_all_newuv_crop_tf';            dataset = EasyDict(tfrecord_dir='3dmd_all_newuv_crop_tf',dynamic_range=[-1,1],dtype = 'float32');\nG.lod_sep = 7\nD.lod_sep = 7\ndataset.max_label_size = 'full'\ngrid.layout = 'row_per_class'\ngrid.size = '4k'\n\n# Continue\n#train.resume_run_id = 30\n#train.resume_kimg = 12000\n#train.resume_time = 7*24*60*60 + 5*60*60 + 0*60\n\n\n# Conditioning & snapshot options.\n#desc += '-cond'; dataset.max_label_size = 'full' # conditioned on full label\n#desc += '-cond1'; dataset.max_label_size = 1 # conditioned on first component of the label\n#desc += '-g4k'; grid.size = '4k'\n#desc += '-grpc'; grid.layout = 'row_per_class'\n\n# Config presets (choose one).\n#desc += '-preset-v1-1gpu'; num_gpus = 1; D.mbstd_group_size = 16; sched.minibatch_base = 16; sched.minibatch_dict = {256: 14, 512: 6, 1024: 3}; sched.lod_training_kimg = 800; sched.lod_transition_kimg = 800; train.total_kimg = 19000\n# desc += '-preset-v2-1gpu'; num_gpus = 1; sched.minibatch_base = 4; sched.minibatch_dict = {4: 128, 8: 128, 16: 128, 32: 64, 64: 32, 128: 16, 256: 8, 512: 4}; sched.G_lrate_dict = {1024: 0.0015}; sched.D_lrate_dict = EasyDict(sched.G_lrate_dict); train.total_kimg = 12000\n# desc += '-preset-v2-2gpus'; num_gpus = 2; sched.minibatch_base = 8; sched.minibatch_dict = {4: 256, 8: 256, 16: 128, 32: 64, 64: 32, 128: 16, 256: 8, 512: 4}; sched.G_lrate_dict = {512: 0.0015, 1024: 0.002}; sched.D_lrate_dict = EasyDict(sched.G_lrate_dict); train.total_kimg = 12000\ndesc += '-preset-v2-4gpus'; num_gpus = 4; sched.minibatch_base = 16; sched.minibatch_dict = {4: 512, 8: 256, 16: 128, 32: 64, 64: 32, 128: 16}; sched.G_lrate_dict = {256: 0.0015, 512: 0.002, 1024: 0.003}; sched.D_lrate_dict = EasyDict(sched.G_lrate_dict); train.total_kimg = 16000\n#desc += '-preset-v2-8gpus'; num_gpus = 8; 
sched.minibatch_base = 32; sched.minibatch_dict = {4: 512, 8: 256, 16: 128, 32: 64, 64: 32}; sched.G_lrate_dict = {128: 0.0015, 256: 0.002, 512: 0.003, 1024: 0.003}; sched.D_lrate_dict = EasyDict(sched.G_lrate_dict); train.total_kimg = 12000\n\n# Numerical precision (choose one).\n# desc += '-fp32'; sched.max_minibatch_per_gpu = {256: 16, 512: 8, 1024: 4}\n#desc += '-fp16'; G.dtype = 'float16'; D.dtype = 'float16'; G.pixelnorm_epsilon=1e-4; G_opt.use_loss_scaling = True; D_opt.use_loss_scaling = True; sched.max_minibatch_per_gpu = {512: 16, 1024: 8}\n\n# Disable individual features.\n#desc += '-nogrowing'; sched.lod_initial_resolution = 1024; sched.lod_training_kimg = 0; sched.lod_transition_kimg = 0; train.total_kimg = 10000\n#desc += '-nopixelnorm'; G.use_pixelnorm = False\n#desc += '-nowscale'; G.use_wscale = False; D.use_wscale = False\n#desc += '-noleakyrelu'; G.use_leakyrelu = False\n#desc += '-nosmoothing'; train.G_smoothing = 0.0\n#desc += '-norepeat'; train.minibatch_repeats = 1\n#desc += '-noreset'; train.reset_opt_for_new_lod = False\n\n# Special modes.\n#desc += '-BENCHMARK'; sched.lod_initial_resolution = 4; sched.lod_training_kimg = 3; sched.lod_transition_kimg = 3; train.total_kimg = (8*2+1)*3; sched.tick_kimg_base = 1; sched.tick_kimg_dict = {}; train.image_snapshot_ticks = 1000; train.network_snapshot_ticks = 1000\n#desc += '-BENCHMARK0'; sched.lod_initial_resolution = 1024; train.total_kimg = 10; sched.tick_kimg_base = 1; sched.tick_kimg_dict = {}; train.image_snapshot_ticks = 1000; train.network_snapshot_ticks = 1000\ndesc += '-VERBOSE'; sched.tick_kimg_base = 100; sched.tick_kimg_dict = {}; train.image_snapshot_ticks = 2; train.network_snapshot_ticks = 2\n#desc += '-GRAPH'; train.save_tf_graph = True\n#desc += '-HIST'; train.save_weight_histograms = True\n\n#----------------------------------------------------------------------------\n# Utility scripts.\n# To run, uncomment the appropriate line and launch train.py.\n\n# train = EasyDict(func='util_scripts.fit_real_images', run_id=0, png_prefix='', num_pngs=1000); num_gpus = 1; desc = 'real-images-' + str(train.run_id)\n#train = EasyDict(func='util_scripts.generate_fake_images_glob', run_id=0, num_pngs=1000); num_gpus = 1; desc = 'fake-images-' + str(train.run_id)\n#train = EasyDict(func='util_scripts.generate_fake_images', run_id=0, png_prefix='', num_pngs=100000); num_gpus = 1; desc = 'fake-images-' + str(train.run_id)\n#train = EasyDict(func='util_scripts.generate_fake_images', run_id=23, grid_size=[15,8], num_pngs=10, image_shrink=4); num_gpus = 1; desc = 'fake-grids-' + str(train.run_id)\n#train = EasyDict(func='util_scripts.generate_interpolation_video', run_id=0, grid_size=[1,1], duration_sec=600.0, smoothing_sec=1.0); num_gpus = 1; desc = 'interpolation-video-' + str(train.run_id)\n#train = EasyDict(func='util_scripts.generate_interpolation_images', run_id=30, grid_size=[1,1], duration_sec=60.0, smoothing_sec=1.0); num_gpus = 1; desc = 'interpolation-images-' + str(train.run_id)\n#train = EasyDict(func='util_scripts.generate_interpolation_video_bydim', run_id=0, grid_size=[1,1], duration_sec=10.0, mp4_fps=30, smoothing_sec=1.0,dim=3); num_gpus = 1; desc = 'interpolation-video-' + str(train.run_id) + '_dim'+str(train.dim)\n#train = EasyDict(func='util_scripts.generate_training_video', run_id=0, duration_sec=20.0); num_gpus = 1; desc = 'training-video-' + str(train.run_id)\n\n#train = EasyDict(func='util_scripts.evaluate_metrics', run_id=23, log='metric-swd-16k.txt', metrics=['swd'], num_images=16384, 
real_passes=2); num_gpus = 1; desc = train.log.split('.')[0] + '-' + str(train.run_id)\n#train = EasyDict(func='util_scripts.evaluate_metrics', run_id=23, log='metric-fid-10k.txt', metrics=['fid'], num_images=10000, real_passes=1); num_gpus = 1; desc = train.log.split('.')[0] + '-' + str(train.run_id)\n#train = EasyDict(func='util_scripts.evaluate_metrics', run_id=23, log='metric-fid-50k.txt', metrics=['fid'], num_images=50000, real_passes=1); num_gpus = 1; desc = train.log.split('.')[0] + '-' + str(train.run_id)\n#train = EasyDict(func='util_scripts.evaluate_metrics', run_id=23, log='metric-is-50k.txt', metrics=['is'], num_images=50000, real_passes=1); num_gpus = 1; desc = train.log.split('.')[0] + '-' + str(train.run_id)\n#train = EasyDict(func='util_scripts.evaluate_metrics', run_id=23, log='metric-msssim-20k.txt', metrics=['msssim'], num_images=20000, real_passes=1); num_gpus = 1; desc = train.log.split('.')[0] + '-' + str(train.run_id)\n"
  },
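Not part of the repo, but a small sketch of how the uncommented lines in `config.py` compose the run description that names each result subdirectory; the values mirror the lines active above.

```python
# How the active (uncommented) config.py lines build up the run description.
desc = 'pgan'                        # base description string
desc += '-3dmd_all_newuv_crop_tf'    # dataset line
desc += '-preset-v2-4gpus'           # config preset line
desc += '-VERBOSE'                   # special-mode line
assert desc == 'pgan-3dmd_all_newuv_crop_tf-preset-v2-4gpus-VERBOSE'
print('result subdir name will contain:', desc)
```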
  {
    "path": "config_test.py",
    "content": "# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.\n#\n# This work is licensed under the Creative Commons Attribution-NonCommercial\n# 4.0 International License. To view a copy of this license, visit\n# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to\n# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.\n\n#----------------------------------------------------------------------------\n# Convenience class that behaves exactly like dict(), but allows accessing\n# the keys and values using the attribute syntax, i.e., \"mydict.key = value\".\n\n\nclass EasyDict(dict):\n    def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs)\n    def __getattr__(self, name): return self[name]\n    def __setattr__(self, name, value): self[name] = value\n    def __delattr__(self, name): del self[name]\n\n#----------------------------------------------------------------------------\n# Paths.\n\ndata_dir = './'\nresult_dir = './results'\n\n#----------------------------------------------------------------------------\n# TensorFlow options.\n\ntf_config = EasyDict()  # TensorFlow session config, set by tfutil.init_tf().\nenv = EasyDict()        # Environment variables, set by the main program in train.py.\n\ntf_config['graph_options.place_pruned_graph']   = False      # False (default) = Check that all ops are available on the designated device. True = Skip the check for ops that are not used.\ntf_config['gpu_options.allow_growth']          = False     # False (default) = Allocate all GPU memory at the beginning. True = Allocate only as much GPU memory as needed.\n#env.CUDA_VISIBLE_DEVICES                       = '0'       # Unspecified (default) = Use all available GPUs. List of ints = CUDA device numbers to use.\nenv.TF_CPP_MIN_LOG_LEVEL                        = '1'       # 0 (default) = Print all available debug info from TensorFlow. 
1 = Print warnings and errors, but disable debug info.\n\n#----------------------------------------------------------------------------\n# Official training configs, targeted mainly for CelebA-HQ.\n# To run, comment/uncomment the lines as appropriate and launch train.py.\n\ndesc        = 'pgan'                                        # Description string included in result subdir name.\nrandom_seed = 1000                                          # Global random seed.\ndataset     = EasyDict()                                    # Options for dataset.load_dataset().\ntrain       = EasyDict(func='train.train_progressive_gan')  # Options for main training func.\nG           = EasyDict(func='networks.G_paper')             # Options for generator network.\nD           = EasyDict(func='networks.D_paper')             # Options for discriminator network.\nG_opt       = EasyDict(beta1=0.0, beta2=0.99, epsilon=1e-8) # Options for generator optimizer.\nD_opt       = EasyDict(beta1=0.0, beta2=0.99, epsilon=1e-8) # Options for discriminator optimizer.\nG_loss      = EasyDict(func='loss.G_wgan_acgan')            # Options for generator loss.\nD_loss      = EasyDict(func='loss.D_wgangp_acgan')          # Options for discriminator loss.\nsched       = EasyDict()                                    # Options for train.TrainingSchedule.\ngrid        = EasyDict(size='1080p', layout='random')       # Options for train.setup_snapshot_image_grid().\n\n# desc += '-mein3d_texture_uv_tf_512';            dataset = EasyDict(tfrecord_dir='mein3d_texture_uv_tf_512');\n# desc += '-mein3d_shape_uv_tf_512_bary';            dataset = EasyDict(tfrecord_dir='mein3d_shape_uv_tf_512_bary',dynamic_range=[-1,1],dtype = 'float32');\ndesc += '-3dmd_all_newuv_crop_tf';            dataset = EasyDict(tfrecord_dir='3dmd_all_newuv_crop_tf',dynamic_range=[-1,1],dtype = 'float32');\nG.lod_sep = 7\nD.lod_sep = 7\ndataset.max_label_size = 'full'\ngrid.layout = 'row_per_class'\ngrid.size = '4k'\n\n# Continue\n#train.resume_run_id = 30\n#train.resume_kimg = 12000\n#train.resume_time = 7*24*60*60 + 5*60*60 + 0*60\n\n\n# Conditioning & snapshot options.\n#desc += '-cond'; dataset.max_label_size = 'full' # conditioned on full label\n#desc += '-cond1'; dataset.max_label_size = 1 # conditioned on first component of the label\n#desc += '-g4k'; grid.size = '4k'\n#desc += '-grpc'; grid.layout = 'row_per_class'\n\n# Config presets (choose one).\ndesc += '-preset-v1-1gpu'; num_gpus = 1; D.mbstd_group_size = 16; sched.minibatch_base = 16; sched.minibatch_dict = {256: 14, 512: 6, 1024: 3}; sched.lod_training_kimg = 800; sched.lod_transition_kimg = 800; train.total_kimg = 19000\n# desc += '-preset-v2-1gpu'; num_gpus = 1; sched.minibatch_base = 4; sched.minibatch_dict = {4: 128, 8: 128, 16: 128, 32: 64, 64: 32, 128: 16, 256: 8, 512: 4}; sched.G_lrate_dict = {1024: 0.0015}; sched.D_lrate_dict = EasyDict(sched.G_lrate_dict); train.total_kimg = 12000\n# desc += '-preset-v2-2gpus'; num_gpus = 2; sched.minibatch_base = 8; sched.minibatch_dict = {4: 256, 8: 256, 16: 128, 32: 64, 64: 32, 128: 16, 256: 8, 512: 4}; sched.G_lrate_dict = {512: 0.0015, 1024: 0.002}; sched.D_lrate_dict = EasyDict(sched.G_lrate_dict); train.total_kimg = 12000\n#desc += '-preset-v2-4gpus'; num_gpus = 4; sched.minibatch_base = 16; sched.minibatch_dict = {4: 512, 8: 256, 16: 128, 32: 64, 64: 32, 128: 16}; sched.G_lrate_dict = {256: 0.0015, 512: 0.002, 1024: 0.003}; sched.D_lrate_dict = EasyDict(sched.G_lrate_dict); train.total_kimg = 16000\n#desc += '-preset-v2-8gpus'; num_gpus = 8; 
sched.minibatch_base = 32; sched.minibatch_dict = {4: 512, 8: 256, 16: 128, 32: 64, 64: 32}; sched.G_lrate_dict = {128: 0.0015, 256: 0.002, 512: 0.003, 1024: 0.003}; sched.D_lrate_dict = EasyDict(sched.G_lrate_dict); train.total_kimg = 12000\n\n# Numerical precision (choose one).\n# desc += '-fp32'; sched.max_minibatch_per_gpu = {256: 16, 512: 8, 1024: 4}\n#desc += '-fp16'; G.dtype = 'float16'; D.dtype = 'float16'; G.pixelnorm_epsilon=1e-4; G_opt.use_loss_scaling = True; D_opt.use_loss_scaling = True; sched.max_minibatch_per_gpu = {512: 16, 1024: 8}\n\n# Disable individual features.\n#desc += '-nogrowing'; sched.lod_initial_resolution = 1024; sched.lod_training_kimg = 0; sched.lod_transition_kimg = 0; train.total_kimg = 10000\n#desc += '-nopixelnorm'; G.use_pixelnorm = False\n#desc += '-nowscale'; G.use_wscale = False; D.use_wscale = False\n#desc += '-noleakyrelu'; G.use_leakyrelu = False\n#desc += '-nosmoothing'; train.G_smoothing = 0.0\n#desc += '-norepeat'; train.minibatch_repeats = 1\n#desc += '-noreset'; train.reset_opt_for_new_lod = False\n\n# Special modes.\n#desc += '-BENCHMARK'; sched.lod_initial_resolution = 4; sched.lod_training_kimg = 3; sched.lod_transition_kimg = 3; train.total_kimg = (8*2+1)*3; sched.tick_kimg_base = 1; sched.tick_kimg_dict = {}; train.image_snapshot_ticks = 1000; train.network_snapshot_ticks = 1000\n#desc += '-BENCHMARK0'; sched.lod_initial_resolution = 1024; train.total_kimg = 10; sched.tick_kimg_base = 1; sched.tick_kimg_dict = {}; train.image_snapshot_ticks = 1000; train.network_snapshot_ticks = 1000\ndesc += '-VERBOSE'; sched.tick_kimg_base = 100; sched.tick_kimg_dict = {}; train.image_snapshot_ticks = 2; train.network_snapshot_ticks = 2\n#desc += '-GRAPH'; train.save_tf_graph = True\n#desc += '-HIST'; train.save_weight_histograms = True\n\n#----------------------------------------------------------------------------\n# Utility scripts.\n# To run, uncomment the appropriate line and launch train.py.\n\n# train = EasyDict(func='util_scripts.fit_real_images', run_id=0, png_prefix='', num_pngs=1000); num_gpus = 1; desc = 'real-images-' + str(train.run_id)\n#train = EasyDict(func='util_scripts.generate_fake_images_glob', run_id=0, num_pngs=1000); num_gpus = 1; desc = 'fake-images-' + str(train.run_id)\n# train = EasyDict(func='util_scripts.generate_fake_images', run_id=32, png_prefix='', num_pngs=100); num_gpus = 1; desc = 'fake-images-' + str(train.run_id)\n#train = EasyDict(func='util_scripts.generate_fake_images', run_id=23, grid_size=[15,8], num_pngs=10, image_shrink=4); num_gpus = 1; desc = 'fake-grids-' + str(train.run_id)\n#train = EasyDict(func='util_scripts.generate_interpolation_video', run_id=0, grid_size=[1,1], duration_sec=600.0, smoothing_sec=1.0); num_gpus = 1; desc = 'interpolation-video-' + str(train.run_id)\ntrain = EasyDict(func='util_scripts.generate_interpolation_images', run_id=32, grid_size=[1,1], duration_sec=5.0, smoothing_sec=0.1); num_gpus = 1; desc = 'interpolation-images-' + str(train.run_id)\n#train = EasyDict(func='util_scripts.generate_interpolation_video_bydim', run_id=0, grid_size=[1,1], duration_sec=10.0, mp4_fps=30, smoothing_sec=1.0,dim=3); num_gpus = 1; desc = 'interpolation-video-' + str(train.run_id) + '_dim'+str(train.dim)\n#train = EasyDict(func='util_scripts.generate_training_video', run_id=0, duration_sec=20.0); num_gpus = 1; desc = 'training-video-' + str(train.run_id)\n\n#train = EasyDict(func='util_scripts.evaluate_metrics', run_id=23, log='metric-swd-16k.txt', metrics=['swd'], num_images=16384, real_passes=2); 
num_gpus = 1; desc = train.log.split('.')[0] + '-' + str(train.run_id)\n#train = EasyDict(func='util_scripts.evaluate_metrics', run_id=23, log='metric-fid-10k.txt', metrics=['fid'], num_images=10000, real_passes=1); num_gpus = 1; desc = train.log.split('.')[0] + '-' + str(train.run_id)\n#train = EasyDict(func='util_scripts.evaluate_metrics', run_id=23, log='metric-fid-50k.txt', metrics=['fid'], num_images=50000, real_passes=1); num_gpus = 1; desc = train.log.split('.')[0] + '-' + str(train.run_id)\n#train = EasyDict(func='util_scripts.evaluate_metrics', run_id=23, log='metric-is-50k.txt', metrics=['is'], num_images=50000, real_passes=1); num_gpus = 1; desc = train.log.split('.')[0] + '-' + str(train.run_id)\n#train = EasyDict(func='util_scripts.evaluate_metrics', run_id=23, log='metric-msssim-20k.txt', metrics=['msssim'], num_images=20000, real_passes=1); num_gpus = 1; desc = train.log.split('.')[0] + '-' + str(train.run_id)\n"
  },
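The functional difference from `config.py` is the active utility-script line near the bottom: `train` is rebound so that `train.py` dispatches to `util_scripts.generate_interpolation_images` instead of the training loop. A hedged sketch of that dispatch pattern, assuming the `tfutil.call_func_by_name` helper from the original PROGAN codebase (the actual call site lives in `train.py`, which is not shown here):

```python
# Sketch of the config-driven dispatch, assuming PROGAN's tfutil helpers.
import os
import numpy as np
import config_test as config
import tfutil

np.random.seed(config.random_seed)
os.environ.update(config.env)          # e.g. TF_CPP_MIN_LOG_LEVEL
tfutil.init_tf(config.tf_config)
# With config_test.py, config.train['func'] is
# 'util_scripts.generate_interpolation_images', so this runs inference
# for run_id=32 rather than starting a training run.
tfutil.call_func_by_name(**config.train)
```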
  {
    "path": "dataset.py",
    "content": "# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.\n#\n# This work is licensed under the Creative Commons Attribution-NonCommercial\n# 4.0 International License. To view a copy of this license, visit\n# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to\n# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.\n\nimport os\nimport glob\nimport numpy as np\nimport tensorflow as tf\nimport tfutil\n\n#----------------------------------------------------------------------------\n# Parse individual image from a tfrecords file.\n\n\n#----------------------------------------------------------------------------\n# Dataset class that loads data from tfrecords files.\n\nclass TFRecordDataset:\n    def __init__(self,\n        tfrecord_dir,               # Directory containing a collection of tfrecords files.\n        resolution      = None,     # Dataset resolution, None = autodetect.\n        label_file      = None,     # Relative path of the labels file, None = autodetect.\n        max_label_size  = 0,        # 0 = no labels, 'full' = full labels, <int> = N first label components.\n        repeat          = True,     # Repeat dataset indefinitely.\n        shuffle_mb      = 4096,     # Shuffle data within specified window (megabytes), 0 = disable shuffling.\n        prefetch_mb     = 2048,     # Amount of data to prefetch (megabytes), 0 = disable prefetching.\n        buffer_mb       = 256,      # Read buffer size (megabytes).\n        num_threads     = 2,        # Number of concurrent threads.\n        dtype           ='uint8',\n        dynamic_range   =[0, 255]):\n\n        self.tfrecord_dir       = tfrecord_dir\n        self.resolution         = None\n        self.resolution_log2    = None\n        self.shape              = []        # [channel, height, width]\n        self.dtype              = dtype\n        self.dynamic_range      = dynamic_range\n        self.label_file         = label_file\n        self.label_size         = None      # [component]\n        self.label_dtype        = None\n        self._np_labels         = None\n        self._tf_minibatch_in   = None\n        self._tf_labels_var     = None\n        self._tf_labels_dataset = None\n        self._tf_datasets       = dict()\n        self._tf_iterator       = None\n        self._tf_init_ops       = dict()\n        self._tf_minibatch_np   = None\n        self._cur_minibatch     = -1\n        self._cur_lod           = -1\n\n        # List tfrecords files and inspect their shapes.\n        assert os.path.isdir(self.tfrecord_dir)\n        tfr_files = sorted(glob.glob(os.path.join(self.tfrecord_dir, '*.tfrecords')))\n        assert len(tfr_files) >= 1\n        tfr_shapes = []\n        for tfr_file in tfr_files:\n            tfr_opt = tf.python_io.TFRecordOptions(tf.python_io.TFRecordCompressionType.NONE)\n            for record in tf.python_io.tf_record_iterator(tfr_file, tfr_opt):\n                tfr_shapes.append(self.parse_tfrecord_np(record).shape)\n                break\n\n        # Autodetect label filename.\n        if self.label_file is None:\n            guess = sorted(glob.glob(os.path.join(self.tfrecord_dir, '*.labels')))\n            if len(guess):\n                self.label_file = guess[0]\n        elif not os.path.isfile(self.label_file):\n            guess = os.path.join(self.tfrecord_dir, self.label_file)\n            if os.path.isfile(guess):\n                self.label_file = guess\n\n        # Determine shape and resolution.\n        max_shape = max(tfr_shapes, 
key=lambda shape: np.prod(shape))\n        self.resolution = resolution if resolution is not None else max_shape[1]\n        self.resolution_log2 = int(np.log2(self.resolution))\n        self.shape = [max_shape[0], self.resolution, self.resolution]\n        tfr_lods = [self.resolution_log2 - int(np.log2(shape[1])) for shape in tfr_shapes]\n        assert all(shape[0] == max_shape[0] for shape in tfr_shapes)\n        assert all(shape[1] == shape[2] for shape in tfr_shapes)\n        assert all(shape[1] == self.resolution // (2**lod) for shape, lod in zip(tfr_shapes, tfr_lods))\n        assert all(lod in tfr_lods for lod in range(self.resolution_log2 - 1))\n\n        # Load labels.\n        assert max_label_size == 'full' or max_label_size >= 0\n        self._np_labels = np.zeros([1<<20, 0], dtype=np.float32)\n        if self.label_file is not None and max_label_size != 0:\n            self._np_labels = np.load(self.label_file)\n            assert self._np_labels.ndim == 2\n        if max_label_size != 'full' and self._np_labels.shape[1] > max_label_size:\n            self._np_labels = self._np_labels[:, :max_label_size]\n        self.label_size = self._np_labels.shape[1]\n        self.label_dtype = self._np_labels.dtype.name\n\n        # Build TF expressions.\n        with tf.name_scope('Dataset'), tf.device('/cpu:0'):\n            self._tf_minibatch_in = tf.placeholder(tf.int64, name='minibatch_in', shape=[])\n            tf_labels_init = tf.zeros(self._np_labels.shape, self._np_labels.dtype)\n            self._tf_labels_var = tf.Variable(tf_labels_init, name='labels_var')\n            tfutil.set_vars({self._tf_labels_var: self._np_labels})\n            self._tf_labels_dataset = tf.data.Dataset.from_tensor_slices(self._tf_labels_var)\n            for tfr_file, tfr_shape, tfr_lod in zip(tfr_files, tfr_shapes, tfr_lods):\n                if tfr_lod < 0:\n                    continue\n                dset = tf.data.TFRecordDataset(tfr_file, compression_type='', buffer_size=buffer_mb<<20)\n                dset = dset.map(self.parse_tfrecord_tf, num_parallel_calls=num_threads)\n                dset = tf.data.Dataset.zip((dset, self._tf_labels_dataset))\n                bytes_per_item = np.prod(tfr_shape) * np.dtype(self.dtype).itemsize\n                if shuffle_mb > 0:\n                    dset = dset.shuffle(((shuffle_mb << 20) - 1) // bytes_per_item + 1)\n                if repeat:\n                    dset = dset.repeat()\n                if prefetch_mb > 0:\n                    dset = dset.prefetch(((prefetch_mb << 20) - 1) // bytes_per_item + 1)\n                dset = dset.batch(self._tf_minibatch_in)\n                self._tf_datasets[tfr_lod] = dset\n            self._tf_iterator = tf.data.Iterator.from_structure(self._tf_datasets[0].output_types, self._tf_datasets[0].output_shapes)\n            self._tf_init_ops = {lod: self._tf_iterator.make_initializer(dset) for lod, dset in self._tf_datasets.items()}\n\n    # Use the given minibatch size and level-of-detail for the data returned by get_minibatch_tf().\n    def configure(self, minibatch_size, lod=0):\n        lod = int(np.floor(lod))\n        assert minibatch_size >= 1 and lod in self._tf_datasets\n        if self._cur_minibatch != minibatch_size or self._cur_lod != lod:\n            self._tf_init_ops[lod].run({self._tf_minibatch_in: minibatch_size})\n            self._cur_minibatch = minibatch_size\n            self._cur_lod = lod\n\n    # Get next minibatch as TensorFlow expressions.\n    def get_minibatch_tf(self): # => images, 
labels\n        return self._tf_iterator.get_next()\n\n    # Get next minibatch as NumPy arrays.\n    def get_minibatch_np(self, minibatch_size, lod=0): # => images, labels\n        self.configure(minibatch_size, lod)\n        if self._tf_minibatch_np is None:\n            self._tf_minibatch_np = self.get_minibatch_tf()\n        return tfutil.run(self._tf_minibatch_np)\n\n    # Get random labels as TensorFlow expression.\n    def get_random_labels_tf(self, minibatch_size): # => labels\n        if self.label_size > 0:\n            return tf.gather(self._tf_labels_var, tf.random_uniform([minibatch_size], 0, self._np_labels.shape[0], dtype=tf.int32))\n        else:\n            return tf.zeros([minibatch_size, 0], self.label_dtype)\n\n    # Get random labels as NumPy array.\n    def get_random_labels_np(self, minibatch_size): # => labels\n        if self.label_size > 0:\n            return self._np_labels[np.random.randint(self._np_labels.shape[0], size=[minibatch_size])]\n        else:\n            return np.zeros([minibatch_size, 0], self.label_dtype)\n\n    def parse_tfrecord_tf(self, record):\n        features = tf.parse_single_example(record, features={\n            'shape': tf.FixedLenFeature([3], tf.int64),\n            'data': tf.FixedLenFeature([], tf.string)})\n        data = tf.decode_raw(features['data'], tf.as_dtype(self.dtype))\n        return tf.reshape(data, features['shape'])\n\n    def parse_tfrecord_np(self, record):\n        ex = tf.train.Example()\n        ex.ParseFromString(record)\n        shape = ex.features.feature['shape'].int64_list.value\n        data = ex.features.feature['data'].bytes_list.value[0]\n        return np.fromstring(data, np.dtype(self.dtype).type).reshape(shape)\n#----------------------------------------------------------------------------\n# Base class for datasets that are generated on the fly.\n\nclass SyntheticDataset:\n    def __init__(self, resolution=1024, num_channels=3, dtype='uint8', dynamic_range=[0,255], label_size=0, label_dtype='float32'):\n        self.resolution         = resolution\n        self.resolution_log2    = int(np.log2(resolution))\n        self.shape              = [num_channels, resolution, resolution]\n        self.dtype              = dtype\n        self.dynamic_range      = dynamic_range\n        self.label_size         = label_size\n        self.label_dtype        = label_dtype\n        self._tf_minibatch_var  = None\n        self._tf_lod_var        = None\n        self._tf_minibatch_np   = None\n        self._tf_labels_np      = None\n\n        assert self.resolution == 2 ** self.resolution_log2\n        with tf.name_scope('Dataset'):\n            self._tf_minibatch_var = tf.Variable(np.int32(0), name='minibatch_var')\n            self._tf_lod_var = tf.Variable(np.int32(0), name='lod_var')\n\n    def configure(self, minibatch_size, lod=0):\n        lod = int(np.floor(lod))\n        assert minibatch_size >= 1 and lod >= 0 and lod <= self.resolution_log2\n        tfutil.set_vars({self._tf_minibatch_var: minibatch_size, self._tf_lod_var: lod})\n\n    def get_minibatch_tf(self): # => images, labels\n        with tf.name_scope('SyntheticDataset'):\n            shrink = tf.cast(2.0 ** tf.cast(self._tf_lod_var, tf.float32), tf.int32)\n            shape = [self.shape[0], self.shape[1] // shrink, self.shape[2] // shrink]\n            images = self._generate_images(self._tf_minibatch_var, self._tf_lod_var, shape)\n            labels = self._generate_labels(self._tf_minibatch_var)\n            return images, labels\n\n    def 
get_minibatch_np(self, minibatch_size, lod=0): # => images, labels\n        self.configure(minibatch_size, lod)\n        if self._tf_minibatch_np is None:\n            self._tf_minibatch_np = self.get_minibatch_tf()\n        return tfutil.run(self._tf_minibatch_np)\n\n    def get_random_labels_tf(self, minibatch_size): # => labels\n        with tf.name_scope('SyntheticDataset'):\n            return self._generate_labels(minibatch_size)\n\n    def get_random_labels_np(self, minibatch_size): # => labels\n        self.configure(minibatch_size)\n        if self._tf_labels_np is None:\n            self._tf_labels_np = self.get_random_labels_tf()\n        return tfutil.run(self._tf_labels_np)\n\n    def _generate_images(self, minibatch, lod, shape): # to be overridden by subclasses\n        return tf.zeros([minibatch] + shape, self.dtype)\n\n    def _generate_labels(self, minibatch): # to be overridden by subclasses\n        return tf.zeros([minibatch, self.label_size], self.label_dtype)\n\n#----------------------------------------------------------------------------\n# Helper func for constructing a dataset object using the given options.\n\ndef load_dataset(class_name='dataset.TFRecordDataset', data_dir=None, verbose=False, **kwargs):\n    adjusted_kwargs = dict(kwargs)\n    if 'tfrecord_dir' in adjusted_kwargs and data_dir is not None:\n        adjusted_kwargs['tfrecord_dir'] = os.path.join(data_dir, adjusted_kwargs['tfrecord_dir'])\n    if verbose:\n        print('Streaming data using %s...' % class_name)\n    dataset = tfutil.import_obj(class_name)(**adjusted_kwargs)\n    if verbose:\n        print('Dataset shape =', np.int32(dataset.shape).tolist())\n        print('Dynamic range =', dataset.dynamic_range)\n        print('Label size    =', dataset.label_size)\n    return dataset\n\n#----------------------------------------------------------------------------\n"
  },
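A brief usage sketch for the `TFRecordDataset` pipeline above, mirroring how `dataset_tool.py`'s `display` command drives it; the `tfrecord_dir` value is a placeholder.

```python
# Usage sketch (TF1-era API, as in this repo); tfrecord_dir is a placeholder.
import tfutil
import dataset

tfutil.init_tf({'gpu_options.allow_growth': True})
dset = dataset.load_dataset(tfrecord_dir='datasets/3dmd_all_newuv_crop_tf',
                            max_label_size='full', repeat=False,
                            shuffle_mb=0, verbose=True)
# For float32 UV records, additionally pass dtype='float32', dynamic_range=[-1, 1].
tfutil.init_uninited_vars()  # initialize the labels variable and friends

images, labels = dset.get_minibatch_np(4, lod=0)  # NCHW batch at full resolution
print(images.shape, labels.shape, dset.dynamic_range)
```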
  {
    "path": "dataset_tool.py",
    "content": "# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.\n#\n# This work is licensed under the Creative Commons Attribution-NonCommercial\n# 4.0 International License. To view a copy of this license, visit\n# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to\n# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.\n\nimport os\nimport sys\nimport glob\nimport argparse\nimport threading\nimport six.moves.queue as Queue\nimport traceback\nimport numpy as np\nimport tensorflow as tf\nimport PIL.Image\n\nimport tfutil\nimport dataset\nimport menpo.io as mio\nimport scipy\n\n#----------------------------------------------------------------------------\n\ndef error(msg):\n    print('Error: ' + msg)\n    exit(1)\n\n#----------------------------------------------------------------------------\n\nclass TFRecordExporter:\n    def __init__(self, tfrecord_dir, expected_images, print_progress=True, progress_interval=10):\n        self.tfrecord_dir       = tfrecord_dir\n        self.tfr_prefix         = os.path.join(self.tfrecord_dir, os.path.basename(self.tfrecord_dir))\n        self.expected_images    = expected_images\n        self.cur_images         = 0\n        self.shape              = None\n        self.resolution_log2    = None\n        self.tfr_writers        = []\n        self.print_progress     = print_progress\n        self.progress_interval  = progress_interval\n        if self.print_progress:\n            print('Creating dataset \"%s\"' % tfrecord_dir)\n        if not os.path.isdir(self.tfrecord_dir):\n            os.makedirs(self.tfrecord_dir)\n        assert(os.path.isdir(self.tfrecord_dir))\n        \n    def close(self):\n        if self.print_progress:\n            print('%-40s\\r' % 'Flushing data...', end='', flush=True)\n        for tfr_writer in self.tfr_writers:\n            tfr_writer.close()\n        self.tfr_writers = []\n        if self.print_progress:\n            print('%-40s\\r' % '', end='', flush=True)\n            print('Added %d images.' 
% self.cur_images)\n\n    def choose_shuffled_order(self): # Note: Images and labels must be added in shuffled order.\n        order = np.arange(self.expected_images)\n        np.random.RandomState(123).shuffle(order)\n        return order\n\n    def add_image(self, img):\n        if self.print_progress and self.cur_images % self.progress_interval == 0:\n            print('%d / %d\\r' % (self.cur_images, self.expected_images), end='', flush=True)\n        if self.shape is None:\n            self.shape = img.shape\n            self.resolution_log2 = int(np.log2(self.shape[1]))\n            assert self.shape[0] in [1, 3]\n            assert self.shape[1] == self.shape[2]\n            assert self.shape[1] == 2**self.resolution_log2\n            tfr_opt = tf.python_io.TFRecordOptions(tf.python_io.TFRecordCompressionType.NONE)\n            for lod in range(self.resolution_log2 - 1):\n                tfr_file = self.tfr_prefix + '-r%02d.tfrecords' % (self.resolution_log2 - lod)\n                self.tfr_writers.append(tf.python_io.TFRecordWriter(tfr_file, tfr_opt))\n        assert img.shape == self.shape\n        for lod, tfr_writer in enumerate(self.tfr_writers):\n            if lod:\n                img = img.astype(np.float32)\n                img = (img[:, 0::2, 0::2] + img[:, 0::2, 1::2] + img[:, 1::2, 0::2] + img[:, 1::2, 1::2]) * 0.25\n            quant = np.rint(img).clip(0, 255).astype(np.uint8)\n            ex = tf.train.Example(features=tf.train.Features(feature={\n                'shape': tf.train.Feature(int64_list=tf.train.Int64List(value=quant.shape)),\n                'data': tf.train.Feature(bytes_list=tf.train.BytesList(value=[quant.tostring()]))}))\n            tfr_writer.write(ex.SerializeToString())\n        self.cur_images += 1\n\n    def add_shape(self, img):\n        if self.print_progress and self.cur_images % self.progress_interval == 0:\n            print('%d / %d\\r' % (self.cur_images, self.expected_images), end='', flush=True)\n        if self.shape is None:\n            self.shape = img.shape\n            self.resolution_log2 = int(np.log2(self.shape[1]))\n            assert self.shape[0] in [1, 3]\n            assert self.shape[1] == self.shape[2]\n            assert self.shape[1] == 2**self.resolution_log2\n            tfr_opt = tf.python_io.TFRecordOptions(tf.python_io.TFRecordCompressionType.NONE)\n            for lod in range(self.resolution_log2 - 1):\n                tfr_file = self.tfr_prefix + '-r%02d.tfrecords' % (self.resolution_log2 - lod)\n                self.tfr_writers.append(tf.python_io.TFRecordWriter(tfr_file, tfr_opt))\n        assert img.shape == self.shape\n        for lod, tfr_writer in enumerate(self.tfr_writers):\n            if lod:\n                img = img.astype(np.float32)\n                img = (img[:, 0::2, 0::2] + img[:, 0::2, 1::2] + img[:, 1::2, 0::2] + img[:, 1::2, 1::2]) * 0.25\n            ex = tf.train.Example(features=tf.train.Features(feature={\n                'shape': tf.train.Feature(int64_list=tf.train.Int64List(value=img.shape)),\n                'data': tf.train.Feature(bytes_list=tf.train.BytesList(value=[img.tostring()]))}))\n            tfr_writer.write(ex.SerializeToString())\n        self.cur_images += 1\n\n    def add_both(self, img):\n        if self.print_progress and self.cur_images % self.progress_interval == 0:\n            print('%d / %d\\r' % (self.cur_images, self.expected_images), end='', flush=True)\n        if self.shape is None:\n            self.shape = img.shape\n            self.resolution_log2 = 
int(np.log2(self.shape[1]))\n            assert self.shape[0] in [1, 6, 9]\n            assert self.shape[1] == self.shape[2]\n            assert self.shape[1] == 2**self.resolution_log2\n            tfr_opt = tf.python_io.TFRecordOptions(tf.python_io.TFRecordCompressionType.NONE)\n            for lod in range(self.resolution_log2 - 1):\n                tfr_file = self.tfr_prefix + '-r%02d.tfrecords' % (self.resolution_log2 - lod)\n                self.tfr_writers.append(tf.python_io.TFRecordWriter(tfr_file, tfr_opt))\n        assert img.shape == self.shape\n        for lod, tfr_writer in enumerate(self.tfr_writers):\n            if lod:\n                img = img.astype(np.float32)\n                img = (img[:, 0::2, 0::2] + img[:, 0::2, 1::2] + img[:, 1::2, 0::2] + img[:, 1::2, 1::2]) * 0.25\n            ex = tf.train.Example(features=tf.train.Features(feature={\n                'shape': tf.train.Feature(int64_list=tf.train.Int64List(value=img.shape)),\n                'data': tf.train.Feature(bytes_list=tf.train.BytesList(value=[img.tostring()]))}))\n            tfr_writer.write(ex.SerializeToString())\n        self.cur_images += 1\n\n    def add_labels(self, labels):\n        if self.print_progress:\n            print('%-40s\\r' % 'Saving labels...', end='', flush=True)\n        assert labels.shape[0] == self.cur_images\n        with open(self.tfr_prefix + '-rxx.labels', 'wb') as f:\n            np.save(f, labels.astype(np.float32))\n            \n    def __enter__(self):\n        return self\n    \n    def __exit__(self, *args):\n        self.close()\n\n#----------------------------------------------------------------------------\n\nclass ExceptionInfo(object):\n    def __init__(self):\n        self.value = sys.exc_info()[1]\n        self.traceback = traceback.format_exc()\n\n#----------------------------------------------------------------------------\n\nclass WorkerThread(threading.Thread):\n    def __init__(self, task_queue):\n        threading.Thread.__init__(self)\n        self.task_queue = task_queue\n\n    def run(self):\n        while True:\n            func, args, result_queue = self.task_queue.get()\n            if func is None:\n                break\n            try:\n                result = func(*args)\n            except:\n                result = ExceptionInfo()\n            result_queue.put((result, args))\n\n#----------------------------------------------------------------------------\n\nclass ThreadPool(object):\n    def __init__(self, num_threads):\n        assert num_threads >= 1\n        self.task_queue = Queue.Queue()\n        self.result_queues = dict()\n        self.num_threads = num_threads\n        for idx in range(self.num_threads):\n            thread = WorkerThread(self.task_queue)\n            thread.daemon = True\n            thread.start()\n\n    def add_task(self, func, args=()):\n        assert hasattr(func, '__call__') # must be a function\n        if func not in self.result_queues:\n            self.result_queues[func] = Queue.Queue()\n        self.task_queue.put((func, args, self.result_queues[func]))\n\n    def get_result(self, func): # returns (result, args)\n        result, args = self.result_queues[func].get()\n        if isinstance(result, ExceptionInfo):\n            print('\\n\\nWorker thread caught an exception:\\n' + result.traceback)\n            raise result.value\n        return result, args\n\n    def finish(self):\n        for idx in range(self.num_threads):\n            self.task_queue.put((None, (), None))\n\n    def 
__enter__(self): # for 'with' statement\n        return self\n\n    def __exit__(self, *excinfo):\n        self.finish()\n\n    def process_items_concurrently(self, item_iterator, process_func=lambda x: x, pre_func=lambda x: x, post_func=lambda x: x, max_items_in_flight=None):\n        if max_items_in_flight is None: max_items_in_flight = self.num_threads * 4\n        assert max_items_in_flight >= 1\n        results = []\n        retire_idx = [0]\n\n        def task_func(prepared, idx):\n            return process_func(prepared)\n           \n        def retire_result():\n            processed, (prepared, idx) = self.get_result(task_func)\n            results[idx] = processed\n            while retire_idx[0] < len(results) and results[retire_idx[0]] is not None:\n                yield post_func(results[retire_idx[0]])\n                results[retire_idx[0]] = None\n                retire_idx[0] += 1\n    \n        for idx, item in enumerate(item_iterator):\n            prepared = pre_func(item)\n            results.append(None)\n            self.add_task(func=task_func, args=(prepared, idx))\n            while retire_idx[0] < idx - max_items_in_flight + 2:\n                for res in retire_result(): yield res\n        while retire_idx[0] < len(results):\n            for res in retire_result(): yield res\n\n#----------------------------------------------------------------------------\n\ndef display(tfrecord_dir):\n    print('Loading dataset \"%s\"' % tfrecord_dir)\n    tfutil.init_tf({'gpu_options.allow_growth': True})\n    dset = dataset.TFRecordDataset(tfrecord_dir, max_label_size='full', repeat=False, shuffle_mb=0)\n    tfutil.init_uninited_vars()\n    \n    idx = 0\n    while True:\n        try:\n            images, labels = dset.get_minibatch_np(1)\n        except tf.errors.OutOfRangeError:\n            break\n        if idx == 0:\n            print('Displaying images')\n            import cv2 # pip install opencv-python\n            cv2.namedWindow('dataset_tool')\n            print('Press SPACE or ENTER to advance, ESC to exit')\n        print('\\nidx = %-8d\\nlabel = %s' % (idx, labels[0].tolist()))\n        cv2.imshow('dataset_tool', images[0].transpose(1, 2, 0)[:, :, ::-1]) # CHW => HWC, RGB => BGR\n        idx += 1\n        if cv2.waitKey() == 27:\n            break\n    print('\\nDisplayed %d images.' % idx)\n\n#----------------------------------------------------------------------------\n\ndef extract(tfrecord_dir, output_dir):\n    print('Loading dataset \"%s\"' % tfrecord_dir)\n    tfutil.init_tf({'gpu_options.allow_growth': True})\n    dset = dataset.TFRecordDataset(tfrecord_dir, max_label_size=0, repeat=False, shuffle_mb=0)\n    tfutil.init_uninited_vars()\n    \n    print('Extracting images to \"%s\"' % output_dir)\n    if not os.path.isdir(output_dir):\n        os.makedirs(output_dir)\n    idx = 0\n    while True:\n        if idx % 10 == 0:\n            print('%d\\r' % idx, end='', flush=True)\n        try:\n            images, labels = dset.get_minibatch_np(1)\n        except tf.errors.OutOfRangeError:\n            break\n        if images.shape[1] == 1:\n            img = PIL.Image.fromarray(images[0][0], 'L')\n        else:\n            img = PIL.Image.fromarray(images[0].transpose(1, 2, 0), 'RGB')\n        img.save(os.path.join(output_dir, 'img%08d.png' % idx))\n        idx += 1\n    print('Extracted %d images.' 
% idx)\n\n#----------------------------------------------------------------------------\n\ndef compare(tfrecord_dir_a, tfrecord_dir_b, ignore_labels):\n    max_label_size = 0 if ignore_labels else 'full'\n    print('Loading dataset \"%s\"' % tfrecord_dir_a)\n    tfutil.init_tf({'gpu_options.allow_growth': True})\n    dset_a = dataset.TFRecordDataset(tfrecord_dir_a, max_label_size=max_label_size, repeat=False, shuffle_mb=0)\n    print('Loading dataset \"%s\"' % tfrecord_dir_b)\n    dset_b = dataset.TFRecordDataset(tfrecord_dir_b, max_label_size=max_label_size, repeat=False, shuffle_mb=0)\n    tfutil.init_uninited_vars()\n    \n    print('Comparing datasets')\n    idx = 0\n    identical_images = 0\n    identical_labels = 0\n    while True:\n        if idx % 100 == 0:\n            print('%d\\r' % idx, end='', flush=True)\n        try:\n            images_a, labels_a = dset_a.get_minibatch_np(1)\n        except tf.errors.OutOfRangeError:\n            images_a, labels_a = None, None\n        try:\n            images_b, labels_b = dset_b.get_minibatch_np(1)\n        except tf.errors.OutOfRangeError:\n            images_b, labels_b = None, None\n        if images_a is None or images_b is None:\n            if images_a is not None or images_b is not None:\n                print('Datasets contain different number of images')\n            break\n        if images_a.shape == images_b.shape and np.all(images_a == images_b):\n            identical_images += 1\n        else:\n            print('Image %d is different' % idx)\n        if labels_a.shape == labels_b.shape and np.all(labels_a == labels_b):\n            identical_labels += 1\n        else:\n            print('Label %d is different' % idx)\n        idx += 1\n    print('Identical images: %d / %d' % (identical_images, idx))\n    if not ignore_labels:\n        print('Identical labels: %d / %d' % (identical_labels, idx))\n\n#----------------------------------------------------------------------------\n\ndef create_mnist(tfrecord_dir, mnist_dir):\n    print('Loading MNIST from \"%s\"' % mnist_dir)\n    import gzip\n    with gzip.open(os.path.join(mnist_dir, 'train-images-idx3-ubyte.gz'), 'rb') as file:\n        images = np.frombuffer(file.read(), np.uint8, offset=16)\n    with gzip.open(os.path.join(mnist_dir, 'train-labels-idx1-ubyte.gz'), 'rb') as file:\n        labels = np.frombuffer(file.read(), np.uint8, offset=8)\n    images = images.reshape(-1, 1, 28, 28)\n    images = np.pad(images, [(0,0), (0,0), (2,2), (2,2)], 'constant', constant_values=0)\n    assert images.shape == (60000, 1, 32, 32) and images.dtype == np.uint8\n    assert labels.shape == (60000,) and labels.dtype == np.uint8\n    assert np.min(images) == 0 and np.max(images) == 255\n    assert np.min(labels) == 0 and np.max(labels) == 9\n    onehot = np.zeros((labels.size, np.max(labels) + 1), dtype=np.float32)\n    onehot[np.arange(labels.size), labels] = 1.0\n    \n    with TFRecordExporter(tfrecord_dir, images.shape[0]) as tfr:\n        order = tfr.choose_shuffled_order()\n        for idx in range(order.size):\n            tfr.add_image(images[order[idx]])\n        tfr.add_labels(onehot[order])\n\n#----------------------------------------------------------------------------\n\ndef create_mnistrgb(tfrecord_dir, mnist_dir, num_images=1000000, random_seed=123):\n    print('Loading MNIST from \"%s\"' % mnist_dir)\n    import gzip\n    with gzip.open(os.path.join(mnist_dir, 'train-images-idx3-ubyte.gz'), 'rb') as file:\n        images = np.frombuffer(file.read(), np.uint8, offset=16)\n 
    images = images.reshape(-1, 28, 28)\n    images = np.pad(images, [(0,0), (2,2), (2,2)], 'constant', constant_values=0)\n    assert images.shape == (60000, 32, 32) and images.dtype == np.uint8\n    assert np.min(images) == 0 and np.max(images) == 255\n\n    with TFRecordExporter(tfrecord_dir, num_images) as tfr:\n        rnd = np.random.RandomState(random_seed)\n        for idx in range(num_images):\n            tfr.add_image(images[rnd.randint(images.shape[0], size=3)]) # stack three random digits as the R, G, B channels\n\n#----------------------------------------------------------------------------\n\ndef create_cifar10(tfrecord_dir, cifar10_dir):\n    print('Loading CIFAR-10 from \"%s\"' % cifar10_dir)\n    import pickle\n    images = []\n    labels = []\n    for batch in range(1, 6):\n        with open(os.path.join(cifar10_dir, 'data_batch_%d' % batch), 'rb') as file:\n            data = pickle.load(file, encoding='latin1')\n        images.append(data['data'].reshape(-1, 3, 32, 32))\n        labels.append(data['labels'])\n    images = np.concatenate(images)\n    labels = np.concatenate(labels).astype(np.int32) # the pickled labels are Python ints; cast explicitly so the dtype assert below holds on all platforms\n    assert images.shape == (50000, 3, 32, 32) and images.dtype == np.uint8\n    assert labels.shape == (50000,) and labels.dtype == np.int32\n    assert np.min(images) == 0 and np.max(images) == 255\n    assert np.min(labels) == 0 and np.max(labels) == 9\n    onehot = np.zeros((labels.size, np.max(labels) + 1), dtype=np.float32)\n    onehot[np.arange(labels.size), labels] = 1.0\n\n    with TFRecordExporter(tfrecord_dir, images.shape[0]) as tfr:\n        order = tfr.choose_shuffled_order()\n        for idx in range(order.size):\n            tfr.add_image(images[order[idx]])\n        tfr.add_labels(onehot[order])\n\n#----------------------------------------------------------------------------\n\ndef create_cifar100(tfrecord_dir, cifar100_dir):\n    print('Loading CIFAR-100 from \"%s\"' % cifar100_dir)\n    import pickle\n    with open(os.path.join(cifar100_dir, 'train'), 'rb') as file:\n        data = pickle.load(file, encoding='latin1')\n    images = data['data'].reshape(-1, 3, 32, 32)\n    labels = np.array(data['fine_labels'], dtype=np.int32)\n    assert images.shape == (50000, 3, 32, 32) and images.dtype == np.uint8\n    assert labels.shape == (50000,) and labels.dtype == np.int32\n    assert np.min(images) == 0 and np.max(images) == 255\n    assert np.min(labels) == 0 and np.max(labels) == 99\n    onehot = np.zeros((labels.size, np.max(labels) + 1), dtype=np.float32)\n    onehot[np.arange(labels.size), labels] = 1.0\n\n    with TFRecordExporter(tfrecord_dir, images.shape[0]) as tfr:\n        order = tfr.choose_shuffled_order()\n        for idx in range(order.size):\n            tfr.add_image(images[order[idx]])\n        tfr.add_labels(onehot[order])\n\n#----------------------------------------------------------------------------\n\ndef create_svhn(tfrecord_dir, svhn_dir):\n    print('Loading SVHN from \"%s\"' % svhn_dir)\n    import pickle\n    images = []\n    labels = []\n    for batch in range(1, 4):\n        with open(os.path.join(svhn_dir, 'train_%d.pkl' % batch), 'rb') as file:\n            data = pickle.load(file, encoding='latin1')\n        images.append(data[0])\n        labels.append(data[1])\n    images = np.concatenate(images)\n    labels = np.concatenate(labels)\n    assert images.shape == (73257, 3, 32, 32) and images.dtype == np.uint8\n    assert labels.shape == (73257,) and labels.dtype == np.uint8\n    assert np.min(images) == 0 and np.max(images) == 255\n    assert np.min(labels) == 0 and np.max(labels) == 9\n    onehot =
 np.zeros((labels.size, np.max(labels) + 1), dtype=np.float32)\n    onehot[np.arange(labels.size), labels] = 1.0\n\n    with TFRecordExporter(tfrecord_dir, images.shape[0]) as tfr:\n        order = tfr.choose_shuffled_order()\n        for idx in range(order.size):\n            tfr.add_image(images[order[idx]])\n        tfr.add_labels(onehot[order])\n\n#----------------------------------------------------------------------------\n\ndef create_lsun(tfrecord_dir, lmdb_dir, resolution=256, max_images=None):\n    print('Loading LSUN dataset from \"%s\"' % lmdb_dir)\n    import lmdb # pip install lmdb\n    import cv2 # pip install opencv-python\n    import io\n    with lmdb.open(lmdb_dir, readonly=True).begin(write=False) as txn:\n        total_images = txn.stat()['entries']\n        if max_images is None:\n            max_images = total_images\n        with TFRecordExporter(tfrecord_dir, max_images) as tfr:\n            for idx, (key, value) in enumerate(txn.cursor()):\n                try:\n                    try:\n                        img = cv2.imdecode(np.frombuffer(value, dtype=np.uint8), 1) # np.fromstring is deprecated; frombuffer reads the LMDB value without copying\n                        if img is None:\n                            raise IOError('cv2.imdecode failed')\n                        img = img[:, :, ::-1] # BGR => RGB\n                    except IOError:\n                        img = np.asarray(PIL.Image.open(io.BytesIO(value)))\n                    crop = np.min(img.shape[:2])\n                    img = img[(img.shape[0] - crop) // 2 : (img.shape[0] + crop) // 2, (img.shape[1] - crop) // 2 : (img.shape[1] + crop) // 2]\n                    img = PIL.Image.fromarray(img, 'RGB')\n                    img = img.resize((resolution, resolution), PIL.Image.ANTIALIAS)\n                    img = np.asarray(img)\n                    img = img.transpose(2, 0, 1) # HWC => CHW\n                    tfr.add_image(img)\n                except Exception: # skip corrupt records, but do not swallow KeyboardInterrupt/SystemExit\n                    print(sys.exc_info()[1])\n                if tfr.cur_images == max_images:\n                    break\n\n#----------------------------------------------------------------------------\n\ndef create_celeba(tfrecord_dir, celeba_dir, cx=89, cy=121):\n    print('Loading CelebA from \"%s\"' % celeba_dir)\n    glob_pattern = os.path.join(celeba_dir, 'img_align_celeba_png', '*.png')\n    image_filenames = sorted(glob.glob(glob_pattern))\n    expected_images = 202599\n    if len(image_filenames) != expected_images:\n        error('Expected to find %d images' % expected_images)\n\n    with TFRecordExporter(tfrecord_dir, len(image_filenames)) as tfr:\n        order = tfr.choose_shuffled_order()\n        for idx in range(order.size):\n            img = np.asarray(PIL.Image.open(image_filenames[order[idx]]))\n            assert img.shape == (218, 178, 3)\n            img = img[cy - 64 : cy + 64, cx - 64 : cx + 64]\n            img = img.transpose(2, 0, 1) # HWC => CHW\n            tfr.add_image(img)\n\n#----------------------------------------------------------------------------\n\ndef create_celebahq(tfrecord_dir, celeba_dir, delta_dir, num_threads=4, num_tasks=100):\n    print('Loading CelebA from \"%s\"' % celeba_dir)\n    expected_images = 202599\n    if len(glob.glob(os.path.join(celeba_dir, 'img_celeba', '*.jpg'))) != expected_images:\n        error('Expected to find %d images' % expected_images)\n    with open(os.path.join(celeba_dir, 'Anno', 'list_landmarks_celeba.txt'), 'rt') as file:\n        landmarks = [[float(value) for value in line.split()[1:]] for line in file.readlines()[2:]]\n
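        # CelebA's list_landmarks_celeba.txt stores five (x, y) landmarks per face: left eye, right eye, nose, left and right mouth corners.\n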
        landmarks = np.float32(landmarks).reshape(-1, 5, 2)\n\n    print('Loading CelebA-HQ deltas from \"%s\"' % delta_dir)\n    import scipy.ndimage\n    import hashlib\n    import bz2\n    import zipfile\n    import base64\n    import cryptography.hazmat.primitives.hashes\n    import cryptography.hazmat.backends\n    import cryptography.hazmat.primitives.kdf.pbkdf2\n    import cryptography.fernet\n    expected_zips = 30\n    if len(glob.glob(os.path.join(delta_dir, 'delta*.zip'))) != expected_zips:\n        error('Expected to find %d zips' % expected_zips)\n    with open(os.path.join(delta_dir, 'image_list.txt'), 'rt') as file:\n        lines = [line.split() for line in file]\n        fields = dict()\n        for idx, field in enumerate(lines[0]):\n            field_type = int if field.endswith('idx') else str # avoid shadowing the builtin 'type'\n            fields[field] = [field_type(line[idx]) for line in lines[1:]]\n    indices = np.array(fields['idx'])\n\n    # Must use pillow version 3.1.1 for everything to work correctly.\n    if getattr(PIL, 'PILLOW_VERSION', '') != '3.1.1':\n        error('create_celebahq requires pillow version 3.1.1') # conda install pillow=3.1.1\n\n    # Must use libjpeg version 8d for everything to work correctly.\n    img = np.array(PIL.Image.open(os.path.join(celeba_dir, 'img_celeba', '000001.jpg')))\n    md5 = hashlib.md5()\n    md5.update(img.tobytes())\n    if md5.hexdigest() != '9cad8178d6cb0196b36f7b34bc5eb6d3':\n        error('create_celebahq requires libjpeg version 8d') # conda install jpeg=8d\n\n    def rot90(v):\n        return np.array([-v[1], v[0]])\n\n    def process_func(idx):\n        # Load original image.\n        orig_idx = fields['orig_idx'][idx]\n        orig_file = fields['orig_file'][idx]\n        orig_path = os.path.join(celeba_dir, 'img_celeba', orig_file)\n        img = PIL.Image.open(orig_path)\n\n        # Choose oriented crop rectangle.\n        lm = landmarks[orig_idx]\n        eye_avg = (lm[0] + lm[1]) * 0.5 + 0.5\n        mouth_avg = (lm[3] + lm[4]) * 0.5 + 0.5\n        eye_to_eye = lm[1] - lm[0]\n        eye_to_mouth = mouth_avg - eye_avg\n        x = eye_to_eye - rot90(eye_to_mouth)\n        x /= np.hypot(*x)\n        x *= max(np.hypot(*eye_to_eye) * 2.0, np.hypot(*eye_to_mouth) * 1.8)\n        y = rot90(x)\n        c = eye_avg + eye_to_mouth * 0.1\n        quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y])\n        zoom = 1024 / (np.hypot(*x) * 2)\n\n        # Shrink.\n        shrink = int(np.floor(0.5 / zoom))\n        if shrink > 1:\n            size = (int(np.round(float(img.size[0]) / shrink)), int(np.round(float(img.size[1]) / shrink)))\n            img = img.resize(size, PIL.Image.ANTIALIAS)\n            quad /= shrink\n            zoom *= shrink\n\n        # Crop.\n        border = max(int(np.round(1024 * 0.1 / zoom)), 3)\n        crop = (int(np.floor(min(quad[:,0]))), int(np.floor(min(quad[:,1]))), int(np.ceil(max(quad[:,0]))), int(np.ceil(max(quad[:,1]))))\n        crop = (max(crop[0] - border, 0), max(crop[1] - border, 0), min(crop[2] + border, img.size[0]), min(crop[3] + border, img.size[1]))\n        if crop[2] - crop[0] < img.size[0] or crop[3] - crop[1] < img.size[1]:\n            img = img.crop(crop)\n            quad -= crop[0:2]\n\n        # Simulate super-resolution.\n        superres = int(np.exp2(np.ceil(np.log2(zoom))))\n        if superres > 1:\n            img = img.resize((img.size[0] * superres, img.size[1] * superres), PIL.Image.ANTIALIAS)\n            quad *= superres\n            zoom /= superres\n\n        # Pad.\n        pad
 = (int(np.floor(min(quad[:,0]))), int(np.floor(min(quad[:,1]))), int(np.ceil(max(quad[:,0]))), int(np.ceil(max(quad[:,1]))))\n        pad = (max(-pad[0] + border, 0), max(-pad[1] + border, 0), max(pad[2] - img.size[0] + border, 0), max(pad[3] - img.size[1] + border, 0))\n        if max(pad) > border - 4:\n            pad = np.maximum(pad, int(np.round(1024 * 0.3 / zoom)))\n            img = np.pad(np.float32(img), ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), 'reflect')\n            h, w, _ = img.shape\n            y, x, _ = np.mgrid[:h, :w, :1]\n            mask = 1.0 - np.minimum(np.minimum(np.float32(x) / pad[0], np.float32(y) / pad[1]), np.minimum(np.float32(w-1-x) / pad[2], np.float32(h-1-y) / pad[3]))\n            blur = 1024 * 0.02 / zoom\n            img += (scipy.ndimage.gaussian_filter(img, [blur, blur, 0]) - img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0)\n            img += (np.median(img, axis=(0,1)) - img) * np.clip(mask, 0.0, 1.0)\n            img = PIL.Image.fromarray(np.uint8(np.clip(np.round(img), 0, 255)), 'RGB')\n            quad += pad[0:2]\n\n        # Transform.\n        img = img.transform((4096, 4096), PIL.Image.QUAD, (quad + 0.5).flatten(), PIL.Image.BILINEAR)\n        img = img.resize((1024, 1024), PIL.Image.ANTIALIAS)\n        img = np.asarray(img).transpose(2, 0, 1)\n\n        # Verify MD5.\n        md5 = hashlib.md5()\n        md5.update(img.tobytes())\n        assert md5.hexdigest() == fields['proc_md5'][idx]\n\n        # Load delta image and original JPG.\n        with zipfile.ZipFile(os.path.join(delta_dir, 'deltas%05d.zip' % (idx - idx % 1000)), 'r') as zip:\n            delta_bytes = zip.read('delta%05d.dat' % idx)\n        with open(orig_path, 'rb') as file:\n            orig_bytes = file.read()\n\n        # Decrypt delta image, using original JPG data as decryption key.\n        algorithm = cryptography.hazmat.primitives.hashes.SHA256()\n        backend = cryptography.hazmat.backends.default_backend()\n        salt = bytes(orig_file, 'ascii')\n        kdf = cryptography.hazmat.primitives.kdf.pbkdf2.PBKDF2HMAC(algorithm=algorithm, length=32, salt=salt, iterations=100000, backend=backend)\n        key = base64.urlsafe_b64encode(kdf.derive(orig_bytes))\n        delta = np.frombuffer(bz2.decompress(cryptography.fernet.Fernet(key).decrypt(delta_bytes)), dtype=np.uint8).reshape(3, 1024, 1024)\n\n        # Apply delta image.\n        img = img + delta\n\n        # Verify MD5.\n        md5 = hashlib.md5()\n        md5.update(img.tobytes())\n        assert md5.hexdigest() == fields['final_md5'][idx]\n        return img\n\n    with TFRecordExporter(tfrecord_dir, indices.size) as tfr:\n        order = tfr.choose_shuffled_order()\n        with ThreadPool(num_threads) as pool:\n            for img in pool.process_items_concurrently(indices[order].tolist(), process_func=process_func, max_items_in_flight=num_tasks):\n                tfr.add_image(img)\n\n#----------------------------------------------------------------------------\n\ndef create_from_images(tfrecord_dir, image_dir, shuffle):\n    print('Loading images from \"%s\"' % image_dir)\n    image_filenames = sorted(glob.glob(os.path.join(image_dir, '*')))\n    if len(image_filenames) == 0:\n        error('No input images found')\n\n    # good_ids = mio.import_pickle('/vol/construct3dmm/visualizations/nicp/mein3d/good_ids.pkl') # unused; machine-specific path kept for reference only\n\n    img = np.asarray(PIL.Image.open(image_filenames[0]))\n    resolution = img.shape[0]\n    channels = img.shape[2] if img.ndim == 3 else
 1\n    # if img.shape[1] != resolution:\n    #     error('Input images must have the same width and height')\n    # if resolution != 2 ** int(np.floor(np.log2(resolution))):\n    #     error('Input image resolution must be a power-of-two')\n    if channels not in [1, 3]:\n        error('Input images must be stored as RGB or grayscale')\n\n    with TFRecordExporter(tfrecord_dir, len(image_filenames)) as tfr:\n        order = tfr.choose_shuffled_order() if shuffle else np.arange(len(image_filenames))\n        for idx in range(order.size):\n            img = PIL.Image.open(image_filenames[order[idx]])\n            # img.crop((41, 0, img.size[0]-42, resolution))\n            # new_im = PIL.Image.new(\"RGB\", (512, 512),(0, 0, 0))\n            # new_im.paste(img, ((512 - img.size[0]) // 2,(512 - img.size[1]) // 2))\n            img = np.asarray(img)\n            if channels == 1:\n                img = img[np.newaxis, :, :] # HW => CHW\n            else:\n                img = img.transpose(2, 0, 1) # HWC => CHW\n            tfr.add_image(img)\n\n#----------------------------------------------------------------------------\n\ndef create_from_pkl(tfrecord_dir, image_dir, shuffle):\n    print('Loading images from \"%s\"' % image_dir)\n    image_filenames = sorted(glob.glob(os.path.join(image_dir, '*')))\n    if len(image_filenames) == 0:\n        error('No input images found')\n\n    # good_ids =  mio.import_pickle('/vol/construct3dmm/visualizations/nicp/mein3d/good_ids.pkl')\n\n    img = mio.import_pickle(image_filenames[0])\n    resolution = img.shape[2]\n    channels = img.shape[0] if img.ndim == 3 else 1\n    if img.shape[1] != resolution:\n        error('Input images must have the same width and height')\n    if resolution != 2 ** int(np.floor(np.log2(resolution))):\n        error('Input image resolution must be a power-of-two')\n    if channels not in [1, 3]:\n        error('Input images must be stored as RGB or grayscale')\n\n    with TFRecordExporter(tfrecord_dir, len(image_filenames)) as tfr:\n        order = tfr.choose_shuffled_order() if shuffle else np.arange(len(image_filenames))\n        for idx in range(order.size):\n            img = mio.import_pickle(image_filenames[order[idx]]).astype(np.float32)\n\n            # img[0, :, :] = scipy.ndimage.gaussian_filter(img[0, :, :], 2)\n            # img[1, :, :] = scipy.ndimage.gaussian_filter(img[1, :, :], 2)\n            # img[2, :, :] = scipy.ndimage.gaussian_filter(img[2, :, :], 2)\n            # img_resized = np.stack((cv2.resize(img[0],dsize=(256,256)),cv2.resize(img[1],dsize=(256,256)),cv2.resize(img[2],dsize=(256,256))))\n            tfr.add_shape(img)\n\n#----------------------------------------------------------------------------\n\ndef create_from_pkl_img(tfrecord_dir, image_dir, pickle_dir, shuffle):\n    print('Loading images from \"%s\"' % image_dir)\n\n    image_filenames = sorted(glob.glob(os.path.join(image_dir, '*')))\n    pickle_filenames = sorted(glob.glob(os.path.join(pickle_dir, '*')))\n    if len(image_filenames) == 0:\n        error('No input images found')\n\n    # good_ids =  mio.import_pickle('/vol/construct3dmm/visualizations/nicp/mein3d/good_ids.pkl')\n\n    img = mio.import_pickle(pickle_filenames[0])\n    resolution = img.shape[2]\n    channels = img.shape[0] if img.ndim == 3 else 1\n    if img.shape[1] != resolution:\n        error('Input images must have the same width and height')\n    if resolution != 2 ** int(np.floor(np.log2(resolution))):\n
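        # Progressive growing doubles the output resolution at each stage, so inputs must be square with a power-of-two side length.\n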
        error('Input image resolution must be a power-of-two')\n    if channels not in [1, 3]:\n        error('Input images must be stored as RGB or grayscale')\n\n    with TFRecordExporter(tfrecord_dir, len(image_filenames)) as tfr:\n        order = tfr.choose_shuffled_order() if shuffle else np.arange(len(image_filenames))\n        for idx in range(order.size):\n            img = mio.import_image(image_filenames[order[idx]]).pixels.astype(np.float32) * 2 - 1 # rescale [0, 1] pixels to [-1, 1]\n            pkl = mio.import_pickle(pickle_filenames[order[idx]]).astype(np.float32)\n            # pkl[0, :, :] = scipy.ndimage.gaussian_filter(pkl[0, :, :], 2)\n            # pkl[1, :, :] = scipy.ndimage.gaussian_filter(pkl[1, :, :], 2)\n            # pkl[2, :, :] = scipy.ndimage.gaussian_filter(pkl[2, :, :], 2)\n            # img_resized = np.stack((cv2.resize(img[0],dsize=(256,256)),cv2.resize(img[1],dsize=(256,256)),cv2.resize(img[2],dsize=(256,256))))\n            tfr.add_both(np.concatenate([img, pkl]))\n\n#----------------------------------------------------------------------------\n\ndef create_from_pkl_img_norm(tfrecord_dir, mein3d_image_dir, mein3d_pickle_dir, mein3d_normal_dir, shuffle):\n    print('Loading images from \"%s\"' % mein3d_image_dir)\n\n    _3dmd_image_dir = '/raid/baris/data/3dmd_crop/texture/'\n    _3dmd_pickle_dir = '/raid/baris/data/3dmd_crop/shape/'\n    _3dmd_normal_dir = '/raid/baris/data/3dmd_crop/normals/'\n\n    labels_mein3d = mio.import_pickle('../Prepare_dataset/results_mein3d.pkl')\n    paths_mein3d = mio.import_pickle('../Prepare_dataset/paths_mein3d.pkl')\n    paths_mein3d_tex = [os.path.join(mein3d_image_dir, im + '.png') for im in paths_mein3d]\n\n    actual_paths_mein3d_tex = glob.glob(mein3d_image_dir + '/*.png')\n    actual_paths_mein3d_shp = glob.glob(mein3d_pickle_dir + '/*.pkl')\n    actual_paths_mein3d_nor = glob.glob(mein3d_normal_dir + '/*.pkl')\n    idx = []\n    for path in actual_paths_mein3d_tex:\n        idx.append(paths_mein3d_tex.index(path))\n    actual_labels_mein3d = labels_mein3d[idx]\n\n    assert len(actual_paths_mein3d_tex) == len(actual_paths_mein3d_shp) == len(actual_paths_mein3d_nor) == len(actual_labels_mein3d)\n\n    labels_3dmd = mio.import_pickle('../Prepare_dataset/results_3dmd.pkl')\n    paths_3dmd = mio.import_pickle('../Prepare_dataset/paths_3dmd.pkl')\n    paths_3dmd_shp = [os.path.join(_3dmd_pickle_dir, str.split(im, '.')[0], im + '.pkl') for im in paths_3dmd]\n\n    actual_paths_3dmd_tex = glob.glob(_3dmd_image_dir + '/*/*.*.png')\n    actual_paths_3dmd_shp = glob.glob(_3dmd_pickle_dir + '/*/*.*.pkl')\n    actual_paths_3dmd_nor = glob.glob(_3dmd_normal_dir + '/*/*.*.pkl')\n    idx = []\n    for path in actual_paths_3dmd_shp:\n        if path in paths_3dmd_shp and path.replace('shape', 'texture').replace('.pkl', '.png') in actual_paths_3dmd_tex:\n            idx.append(paths_3dmd_shp.index(path))\n    actual_labels_3dmd = labels_3dmd[idx]\n    intersection_paths_tex = [os.path.join(_3dmd_image_dir, str.split(im, '.')[0], im + '.png') for im in [paths_3dmd[i] for i in idx]]\n    intersection_paths_shp = [os.path.join(_3dmd_pickle_dir, str.split(im, '.')[0], im + '.pkl') for im in [paths_3dmd[i] for i in idx]]\n    intersection_paths_nor = [os.path.join(_3dmd_normal_dir, str.split(im, '.')[0], im + '.pkl') for im in [paths_3dmd[i] for i in idx]]\n\n    assert len(intersection_paths_tex) == len(intersection_paths_shp) == len(intersection_paths_nor) == len(actual_labels_3dmd)\n\n    image_filenames = actual_paths_mein3d_tex + intersection_paths_tex\n
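    # The mein3d and 3dmd file lists are concatenated in the same order for textures, shapes, and normals, so index i refers to the same subject in all three lists (and in labels below).\n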
    pickle_filenames = actual_paths_mein3d_shp + intersection_paths_shp\n    normal_filenames = actual_paths_mein3d_nor + intersection_paths_nor\n    # image_filenames = paths_mein3d[0:100]\n    if len(image_filenames) == 0:\n        error('No input images found')\n    labels = np.append(actual_labels_mein3d, actual_labels_3dmd, axis=0)\n    # labels = labels[0:100]\n\n    # good_ids =  mio.import_pickle('/vol/construct3dmm/visualizations/nicp/mein3d/good_ids.pkl')\n\n    img = mio.import_pickle(pickle_filenames[0])\n    resolution = img.shape[2]\n    channels = img.shape[0] if img.ndim == 3 else 1\n    if img.shape[1] != resolution:\n        error('Input images must have the same width and height')\n    if resolution != 2 ** int(np.floor(np.log2(resolution))):\n        error('Input image resolution must be a power-of-two')\n    if channels not in [1, 3]:\n        error('Input images must be stored as RGB or grayscale')\n\n    with TFRecordExporter(tfrecord_dir, len(image_filenames)) as tfr:\n        order = tfr.choose_shuffled_order() if shuffle else np.arange(len(image_filenames))\n        for idx in range(order.size):\n            img = mio.import_image(image_filenames[order[idx]]).pixels.astype(np.float32) * 2 - 1 # rescale [0, 1] pixels to [-1, 1]\n            pkl = mio.import_pickle(pickle_filenames[order[idx]]).astype(np.float32)\n            normal = mio.import_pickle(normal_filenames[order[idx]]).astype(np.float32)\n            # pkl[0, :, :] = scipy.ndimage.gaussian_filter(pkl[0, :, :], 2)\n            # pkl[1, :, :] = scipy.ndimage.gaussian_filter(pkl[1, :, :], 2)\n            # pkl[2, :, :] = scipy.ndimage.gaussian_filter(pkl[2, :, :], 2)\n            # img_resized = np.stack((cv2.resize(img[0],dsize=(256,256)),cv2.resize(img[1],dsize=(256,256)),cv2.resize(img[2],dsize=(256,256))))\n            tfr.add_both(np.concatenate([img, pkl, normal]))\n        tfr.add_labels(labels[order, :])\n\n#----------------------------------------------------------------------------\n\ndef create_from_hdf5(tfrecord_dir, hdf5_filename, shuffle):\n    print('Loading HDF5 archive from \"%s\"' % hdf5_filename)\n    import h5py # conda install h5py\n    with h5py.File(hdf5_filename, 'r') as hdf5_file:\n        hdf5_data = max([value for key, value in hdf5_file.items() if key.startswith('data')], key=lambda lod: lod.shape[3])\n        with TFRecordExporter(tfrecord_dir, hdf5_data.shape[0]) as tfr:\n            order = tfr.choose_shuffled_order() if shuffle else np.arange(hdf5_data.shape[0])\n            for idx in range(order.size):\n                tfr.add_image(hdf5_data[order[idx]])\n            npy_filename = os.path.splitext(hdf5_filename)[0] + '-labels.npy'\n            if os.path.isfile(npy_filename):\n                tfr.add_labels(np.load(npy_filename)[order])\n\n#----------------------------------------------------------------------------\n\ndef execute_cmdline(argv):\n    prog = argv[0]\n    parser = argparse.ArgumentParser(\n        prog        = prog,\n        description = 'Tool for creating, extracting, and visualizing Progressive GAN datasets.',\n        epilog      = 'Type \"%s <command> -h\" for more information.' 
% prog)\n        \n    subparsers = parser.add_subparsers(dest='command')\n    subparsers.required = True\n    def add_command(cmd, desc, example=None):\n        epilog = 'Example: %s %s' % (prog, example) if example is not None else None\n        return subparsers.add_parser(cmd, description=desc, help=desc, epilog=epilog)\n\n    p = add_command(    'display',          'Display images in dataset.',\n                                            'display datasets/mnist')\n    p.add_argument(     'tfrecord_dir',     help='Directory containing dataset')\n  \n    p = add_command(    'extract',          'Extract images from dataset.',\n                                            'extract datasets/mnist mnist-images')\n    p.add_argument(     'tfrecord_dir',     help='Directory containing dataset')\n    p.add_argument(     'output_dir',       help='Directory to extract the images into')\n\n    p = add_command(    'compare',          'Compare two datasets.',\n                                            'compare datasets/mydataset datasets/mnist')\n    p.add_argument(     'tfrecord_dir_a',   help='Directory containing first dataset')\n    p.add_argument(     'tfrecord_dir_b',   help='Directory containing second dataset')\n    p.add_argument(     '--ignore_labels',  help='Ignore labels (default: 0)', type=int, default=0)\n\n    p = add_command(    'create_mnist',     'Create dataset for MNIST.',\n                                            'create_mnist datasets/mnist ~/downloads/mnist')\n    p.add_argument(     'tfrecord_dir',     help='New dataset directory to be created')\n    p.add_argument(     'mnist_dir',        help='Directory containing MNIST')\n\n    p = add_command(    'create_mnistrgb',  'Create dataset for MNIST-RGB.',\n                                            'create_mnistrgb datasets/mnistrgb ~/downloads/mnist')\n    p.add_argument(     'tfrecord_dir',     help='New dataset directory to be created')\n    p.add_argument(     'mnist_dir',        help='Directory containing MNIST')\n    p.add_argument(     '--num_images',     help='Number of composite images to create (default: 1000000)', type=int, default=1000000)\n    p.add_argument(     '--random_seed',    help='Random seed (default: 123)', type=int, default=123)\n\n    p = add_command(    'create_cifar10',   'Create dataset for CIFAR-10.',\n                                            'create_cifar10 datasets/cifar10 ~/downloads/cifar10')\n    p.add_argument(     'tfrecord_dir',     help='New dataset directory to be created')\n    p.add_argument(     'cifar10_dir',      help='Directory containing CIFAR-10')\n\n    p = add_command(    'create_cifar100',  'Create dataset for CIFAR-100.',\n                                            'create_cifar100 datasets/cifar100 ~/downloads/cifar100')\n    p.add_argument(     'tfrecord_dir',     help='New dataset directory to be created')\n    p.add_argument(     'cifar100_dir',     help='Directory containing CIFAR-100')\n\n    p = add_command(    'create_svhn',      'Create dataset for SVHN.',\n                                            'create_svhn datasets/svhn ~/downloads/svhn')\n    p.add_argument(     'tfrecord_dir',     help='New dataset directory to be created')\n    p.add_argument(     'svhn_dir',         help='Directory containing SVHN')\n\n    p = add_command(    'create_lsun',      'Create dataset for single LSUN category.',\n                                            'create_lsun datasets/lsun-car-100k ~/downloads/lsun/car_lmdb --resolution 256 --max_images 100000')\n    
p.add_argument(     'tfrecord_dir',     help='New dataset directory to be created')\n    p.add_argument(     'lmdb_dir',         help='Directory containing LMDB database')\n    p.add_argument(     '--resolution',     help='Output resolution (default: 256)', type=int, default=256)\n    p.add_argument(     '--max_images',     help='Maximum number of images (default: none)', type=int, default=None)\n\n    p = add_command(    'create_celeba',    'Create dataset for CelebA.',\n                                            'create_celeba datasets/celeba ~/downloads/celeba')\n    p.add_argument(     'tfrecord_dir',     help='New dataset directory to be created')\n    p.add_argument(     'celeba_dir',       help='Directory containing CelebA')\n    p.add_argument(     '--cx',             help='Center X coordinate (default: 89)', type=int, default=89)\n    p.add_argument(     '--cy',             help='Center Y coordinate (default: 121)', type=int, default=121)\n\n    p = add_command(    'create_celebahq',  'Create dataset for CelebA-HQ.',\n                                            'create_celebahq datasets/celebahq ~/downloads/celeba ~/downloads/celeba-hq-deltas')\n    p.add_argument(     'tfrecord_dir',     help='New dataset directory to be created')\n    p.add_argument(     'celeba_dir',       help='Directory containing CelebA')\n    p.add_argument(     'delta_dir',        help='Directory containing CelebA-HQ deltas')\n    p.add_argument(     '--num_threads',    help='Number of concurrent threads (default: 4)', type=int, default=4)\n    p.add_argument(     '--num_tasks',      help='Number of concurrent processing tasks (default: 100)', type=int, default=100)\n\n    p = add_command(    'create_from_images', 'Create dataset from a directory full of images.',\n                                            'create_from_images datasets/mydataset myimagedir')\n    p.add_argument(     'tfrecord_dir',     help='New dataset directory to be created')\n    p.add_argument(     'image_dir',        help='Directory containing the images')\n    p.add_argument(     '--shuffle',        help='Randomize image order (default: 1)', type=int, default=1)\n\n    p = add_command(    'create_from_pkl',  'Create dataset from a directory full of shape pickles.',\n                                            'create_from_pkl datasets/mydataset myshapedir')\n    p.add_argument(     'tfrecord_dir',     help='New dataset directory to be created')\n    p.add_argument(     'image_dir',        help='Directory containing the shape pickles')\n    p.add_argument(     '--shuffle',        help='Randomize image order (default: 1)', type=int, default=1)\n\n    p = add_command(    'create_from_pkl_img', 'Create dataset from paired image and shape-pickle directories.',\n                                            'create_from_pkl_img datasets/mydataset myimagedir myshapedir')\n    p.add_argument(     'tfrecord_dir',     help='New dataset directory to be created')\n    p.add_argument(     'image_dir',        help='Directory containing the images')\n    p.add_argument(     'pickle_dir',       help='Directory containing the shape pickles')\n    p.add_argument(     '--shuffle',        help='Randomize image order (default: 1)', type=int, default=1)\n\n    p = add_command(    'create_from_pkl_img_norm', 'Create dataset from paired image, shape-pickle, and normal-pickle directories.',\n                                            'create_from_pkl_img_norm datasets/mydataset myimagedir myshapedir mynormaldir')\n    p.add_argument(     'tfrecord_dir',     help='New dataset directory to be created')\n    p.add_argument(     
'mein3d_image_dir',        help='Directory containing the images')\n    p.add_argument(     'mein3d_pickle_dir',        help='Directory containing the shape pickles')\n    p.add_argument(     'mein3d_normal_dir',        help='Directory containing the normal pickles')\n    p.add_argument(     '--shuffle',        help='Randomize image order (default: 1)', type=int, default=1)\n\n    p = add_command(    'create_from_hdf5', 'Create dataset from legacy HDF5 archive.',\n                                            'create_from_hdf5 datasets/celebahq ~/downloads/celeba-hq-1024x1024.h5')\n    p.add_argument(     'tfrecord_dir',     help='New dataset directory to be created')\n    p.add_argument(     'hdf5_filename',    help='HDF5 archive containing the images')\n    p.add_argument(     '--shuffle',        help='Randomize image order (default: 1)', type=int, default=1)\n\n    args = parser.parse_args(argv[1:] if len(argv) > 1 else ['-h'])\n    func = globals()[args.command]\n    del args.command\n    func(**vars(args))\n\n#----------------------------------------------------------------------------\n\nif __name__ == \"__main__\":\n    execute_cmdline(sys.argv)\n\n#----------------------------------------------------------------------------\n"
  },
  {
    "path": "legacy.py",
    "content": "# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.\n#\n# This work is licensed under the Creative Commons Attribution-NonCommercial\n# 4.0 International License. To view a copy of this license, visit\n# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to\n# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.\n\nimport pickle\nimport inspect\nimport numpy as np\n\nimport tfutil\nimport networks\n\n#----------------------------------------------------------------------------\n# Custom unpickler that is able to load network pickles produced by\n# the old Theano implementation.\n\nclass LegacyUnpickler(pickle.Unpickler):\n    def __init__(self, *args, **kwargs):\n        super().__init__(*args, **kwargs)\n\n    def find_class(self, module, name):\n        module = module.replace('uv_gan.','')\n        if module == 'network' and name == 'Network':\n            return tfutil.Network\n        return super().find_class(module, name)\n\n#----------------------------------------------------------------------------\n# Import handler for tfutil.Network that silently converts networks produced\n# by the old Theano implementation to a suitable format.\n\ntheano_gan_remap = {\n    'G_paper':          'G_paper',\n    'G_progressive_8':  'G_paper',\n    'D_paper':          'D_paper',\n    'D_progressive_8':  'D_paper'}\n\ndef patch_theano_gan(state):\n    if 'version' in state or state['build_func_spec']['func'] not in theano_gan_remap:\n        return state\n\n    spec = dict(state['build_func_spec'])\n    func = spec.pop('func')\n    resolution = spec.get('resolution', 32)\n    resolution_log2 = int(np.log2(resolution))\n    use_wscale = spec.get('use_wscale', True)\n\n    assert spec.pop('label_size',       0)          == 0\n    assert spec.pop('use_batchnorm',    False)      == False\n    assert spec.pop('tanh_at_end',      None)       is None\n    assert spec.pop('mbstat_func',      'Tstdeps')  == 'Tstdeps'\n    assert spec.pop('mbstat_avg',       'all')      == 'all'\n    assert spec.pop('mbdisc_kernels',   None)       is None\n    spec.pop(       'use_gdrop',        True)       # doesn't make a difference\n    assert spec.pop('use_layernorm',    False)      == False\n    spec[           'fused_scale']                  = False\n    spec[           'mbstd_group_size']             = 16\n\n    vars = []\n    param_iter = iter(state['param_values'])\n    relu = np.sqrt(2); linear = 1.0\n    def flatten2(w): return w.reshape(w.shape[0], -1)\n    def he_std(gain, w): return gain / np.sqrt(np.prod(w.shape[:-1]))\n    def wscale(gain, w): return w * next(param_iter) / he_std(gain, w) if use_wscale else w\n    def layer(name, gain, w): return [(name + '/weight', wscale(gain, w)), (name + '/bias', next(param_iter))]\n    \n    if func.startswith('G'):\n        vars += layer('4x4/Dense', relu/4, flatten2(next(param_iter).transpose(1,0,2,3)))\n        vars += layer('4x4/Conv', relu, next(param_iter).transpose(2,3,1,0)[::-1,::-1])\n        for res in range(3, resolution_log2 + 1):\n            vars += layer('%dx%d/Conv0' % (2**res, 2**res), relu, next(param_iter).transpose(2,3,1,0)[::-1,::-1])\n            vars += layer('%dx%d/Conv1' % (2**res, 2**res), relu, next(param_iter).transpose(2,3,1,0)[::-1,::-1])\n        for lod in range(0, resolution_log2 - 1):\n            vars += layer('ToRGB_lod%d' % lod, linear, next(param_iter)[np.newaxis, np.newaxis])\n\n    if func.startswith('D'):\n        vars += layer('FromRGB_lod0', relu, next(param_iter)[np.newaxis, 
np.newaxis])\n        for res in range(resolution_log2, 2, -1):\n            vars += layer('%dx%d/Conv0' % (2**res, 2**res), relu, next(param_iter).transpose(2,3,1,0)[::-1,::-1])\n            vars += layer('%dx%d/Conv1' % (2**res, 2**res), relu, next(param_iter).transpose(2,3,1,0)[::-1,::-1])\n            vars += layer('FromRGB_lod%d' % (resolution_log2 - (res - 1)), relu, next(param_iter)[np.newaxis, np.newaxis])\n        vars += layer('4x4/Conv', relu, next(param_iter).transpose(2,3,1,0)[::-1,::-1])\n        vars += layer('4x4/Dense0', relu, flatten2(next(param_iter)[:,:,::-1,::-1]).transpose())\n        vars += layer('4x4/Dense1', linear, next(param_iter))\n\n    vars += [('lod', state['toplevel_params']['cur_lod'])]\n\n    return {\n        'version':          2,\n        'name':             func,\n        'build_module_src': inspect.getsource(networks),\n        'build_func_name':  theano_gan_remap[func],\n        'static_kwargs':    spec,\n        'variables':        vars}\n\ntfutil.network_import_handlers.append(patch_theano_gan)\n\n#----------------------------------------------------------------------------\n# Import handler for tfutil.Network that ignores unsupported/deprecated\n# networks produced by older versions of the code.\n\ndef ignore_unknown_theano_network(state):\n    if 'version' in state:\n        return state\n\n    print('Ignoring unknown Theano network:', state['build_func_spec']['func'])\n    return {\n        'version':          2,\n        'name':             'Dummy',\n        'build_module_src': 'def dummy(input, **kwargs): input.set_shape([None, 1]); return input',\n        'build_func_name':  'dummy',\n        'static_kwargs':    {},\n        'variables':        []}\n\ntfutil.network_import_handlers.append(ignore_unknown_theano_network)\n\n#----------------------------------------------------------------------------\n"
  },
  {
    "path": "loss.py",
    "content": "# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.\n#\n# This work is licensed under the Creative Commons Attribution-NonCommercial\n# 4.0 International License. To view a copy of this license, visit\n# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to\n# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.\n\nimport tensorflow as tf\n\nimport tfutil\n\n#----------------------------------------------------------------------------\n# Convenience func that casts all of its arguments to tf.float32.\n\ndef fp32(*values):\n    if len(values) == 1 and isinstance(values[0], tuple):\n        values = values[0]\n    values = tuple(tf.cast(v, tf.float32) for v in values)\n    return values if len(values) >= 2 else values[0]\n\n#----------------------------------------------------------------------------\n# Generator loss function used in the paper (WGAN + AC-GAN).\n\ndef G_wgan_acgan(G, D, opt, training_set, minibatch_size,\n    cond_weight = 1.0): # Weight of the conditioning term.\n\n    latents = tf.random_normal([minibatch_size] + G.input_shapes[0][1:])\n    labels = training_set.get_random_labels_tf(minibatch_size)\n    labels = tf.nn.softmax(labels)\n    fake_images_out = G.get_output_for(latents, labels, is_training=True)\n    fake_scores_out, fake_labels_out = fp32(D.get_output_for(fake_images_out, is_training=True))\n    loss = -fake_scores_out\n\n    if D.output_shapes[1][1] > 0:\n        with tf.name_scope('LabelPenalty'):\n            label_penalty_fakes = tf.nn.softmax_cross_entropy_with_logits_v2(labels=labels, logits=fake_labels_out)\n        loss += label_penalty_fakes * cond_weight\n    return loss\n\n#----------------------------------------------------------------------------\n# Discriminator loss function used in the paper (WGAN-GP + AC-GAN).\n\ndef D_wgangp_acgan(G, D, opt, training_set, minibatch_size, reals, labels,\n    wgan_lambda     = 10.0,     # Weight for the gradient penalty term.\n    wgan_epsilon    = 0.001,    # Weight for the epsilon term, \\epsilon_{drift}.\n    wgan_target     = 1.0,      # Target value for gradient magnitudes.\n    cond_weight     = 1.0):     # Weight of the conditioning terms.\n\n    labels = tf.nn.softmax(labels)\n    latents = tf.random_normal([minibatch_size] + G.input_shapes[0][1:])\n    fake_images_out = G.get_output_for(latents, labels, is_training=True)\n    real_scores_out, real_labels_out = fp32(D.get_output_for(reals, is_training=True))\n    fake_scores_out, fake_labels_out = fp32(D.get_output_for(fake_images_out, is_training=True))\n    real_scores_out = tfutil.autosummary('Loss/real_scores', real_scores_out)\n    fake_scores_out = tfutil.autosummary('Loss/fake_scores', fake_scores_out)\n    loss = fake_scores_out - real_scores_out\n\n    with tf.name_scope('GradientPenalty'):\n        mixing_factors = tf.random_uniform([minibatch_size, 1, 1, 1], 0.0, 1.0, dtype=fake_images_out.dtype)\n        mixed_images_out = tfutil.lerp(tf.cast(reals, fake_images_out.dtype), fake_images_out, mixing_factors)\n        mixed_scores_out, mixed_labels_out = fp32(D.get_output_for(mixed_images_out, is_training=True))\n        mixed_scores_out = tfutil.autosummary('Loss/mixed_scores', mixed_scores_out)\n        mixed_loss = opt.apply_loss_scaling(tf.reduce_sum(mixed_scores_out))\n        mixed_grads = opt.undo_loss_scaling(fp32(tf.gradients(mixed_loss, [mixed_images_out])[0]))\n        mixed_norms = tf.sqrt(tf.reduce_sum(tf.square(mixed_grads), axis=[1,2,3]))\n        mixed_norms = 
tfutil.autosummary('Loss/mixed_norms', mixed_norms)\n        gradient_penalty = tf.square(mixed_norms - wgan_target)\n    loss += gradient_penalty * (wgan_lambda / (wgan_target**2))\n\n    with tf.name_scope('EpsilonPenalty'):\n        epsilon_penalty = tfutil.autosummary('Loss/epsilon_penalty', tf.square(real_scores_out))\n    loss += epsilon_penalty * wgan_epsilon\n\n    if D.output_shapes[1][1] > 0:\n        with tf.name_scope('LabelPenalty'):\n            label_penalty_reals = tf.nn.softmax_cross_entropy_with_logits_v2(labels=labels, logits=real_labels_out)\n            label_penalty_fakes = tf.nn.softmax_cross_entropy_with_logits_v2(labels=labels, logits=fake_labels_out)\n            label_penalty_reals = tfutil.autosummary('Loss/label_penalty_reals', label_penalty_reals)\n            label_penalty_fakes = tfutil.autosummary('Loss/label_penalty_fakes', label_penalty_fakes)\n        loss += (label_penalty_reals + label_penalty_fakes) * cond_weight\n    return loss\n\n#----------------------------------------------------------------------------\n"
  },
  {
    "path": "metrics/__init__.py",
    "content": "# empty\n"
  },
  {
    "path": "metrics/frechet_inception_distance.py",
    "content": "#!/usr/bin/env python3\n#\n# Copyright 2017 Martin Heusel\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Adapted from the original implementation by Martin Heusel.\n# Source https://github.com/bioinf-jku/TTUR/blob/master/fid.py\n\n''' Calculates the Frechet Inception Distance (FID) to evalulate GANs.\n\nThe FID metric calculates the distance between two distributions of images.\nTypically, we have summary statistics (mean & covariance matrix) of one\nof these distributions, while the 2nd distribution is given by a GAN.\n\nWhen run as a stand-alone program, it compares the distribution of\nimages that are stored as PNG/JPEG at a specified location with a\ndistribution given by summary statistics (in pickle format).\n\nThe FID is calculated by assuming that X_1 and X_2 are the activations of\nthe pool_3 layer of the inception net for generated samples and real world\nsamples respectivly.\n\nSee --help to see further details.\n'''\n\nfrom __future__ import absolute_import, division, print_function\nimport numpy as np\nimport scipy as sp\nimport os\nimport gzip, pickle\nimport tensorflow as tf\nfrom scipy.misc import imread\nimport pathlib\nimport urllib\n\n\nclass InvalidFIDException(Exception):\n    pass\n\n\ndef create_inception_graph(pth):\n    \"\"\"Creates a graph from saved GraphDef file.\"\"\"\n    # Creates graph from saved graph_def.pb.\n    with tf.gfile.FastGFile( pth, 'rb') as f:\n        graph_def = tf.GraphDef()\n        graph_def.ParseFromString( f.read())\n        _ = tf.import_graph_def( graph_def, name='FID_Inception_Net')\n#-------------------------------------------------------------------------------\n\n\n# code for handling inception net derived from\n#   https://github.com/openai/improved-gan/blob/master/inception_score/model.py\ndef _get_inception_layer(sess):\n    \"\"\"Prepares inception net for batched usage and returns pool_3 layer. \"\"\"\n    layername = 'FID_Inception_Net/pool_3:0'\n    pool3 = sess.graph.get_tensor_by_name(layername)\n    ops = pool3.graph.get_operations()\n    for op_idx, op in enumerate(ops):\n        for o in op.outputs:\n            shape = o.get_shape()\n            if shape._dims is not None:\n              shape = [s.value for s in shape]\n              new_shape = []\n              for j, s in enumerate(shape):\n                if s == 1 and j == 0:\n                  new_shape.append(None)\n                else:\n                  new_shape.append(s)\n              try:\n                o._shape = tf.TensorShape(new_shape)\n              except ValueError:\n                o._shape_val = tf.TensorShape(new_shape) # EDIT: added for compatibility with tensorflow 1.6.0\n    return pool3\n#-------------------------------------------------------------------------------\n\n\ndef get_activations(images, sess, batch_size=50, verbose=False):\n    \"\"\"Calculates the activations of the pool_3 layer for all images.\n\n    Params:\n    -- images      : Numpy array of dimension (n_images, hi, wi, 3). 
The values\n                     must lie between 0 and 255.\n    -- sess        : current session\n    -- batch_size  : the images numpy array is split into batches with batch size\n                     batch_size. A reasonable batch size depends on the available hardware.\n    -- verbose     : If set to True, progress of the calculated batches is reported.\n    Returns:\n    -- A numpy array of dimension (num images, 2048) that contains the\n       activations of the given tensor when feeding inception with the query tensor.\n    \"\"\"\n    inception_layer = _get_inception_layer(sess)\n    d0 = images.shape[0]\n    if batch_size > d0:\n        print(\"warning: batch size is bigger than the data size. setting batch size to data size\")\n        batch_size = d0\n    n_batches = d0//batch_size\n    n_used_imgs = n_batches*batch_size\n    pred_arr = np.empty((n_used_imgs,2048))\n    for i in range(n_batches):\n        if verbose:\n            print(\"\\rPropagating batch %d/%d\" % (i+1, n_batches), end=\"\", flush=True)\n        start = i*batch_size\n        end = start + batch_size\n        batch = images[start:end]\n        pred = sess.run(inception_layer, {'FID_Inception_Net/ExpandDims:0': batch})\n        pred_arr[start:end] = pred.reshape(batch_size,-1)\n    if verbose:\n        print(\" done\")\n    return pred_arr\n#-------------------------------------------------------------------------------\n\n\ndef calculate_frechet_distance(mu1, sigma1, mu2, sigma2):\n    \"\"\"Numpy implementation of the Frechet Distance.\n    The Frechet distance between two multivariate Gaussians X_1 ~ N(mu_1, C_1)\n    and X_2 ~ N(mu_2, C_2) is\n            d^2 = ||mu_1 - mu_2||^2 + Tr(C_1 + C_2 - 2*sqrt(C_1*C_2)).\n\n    Params:\n    -- mu1   : The sample mean over activations of the pool_3 layer for\n               generated samples, as returned by 'calculate_activation_statistics'.\n    -- sigma1: The covariance matrix over activations of the pool_3 layer for\n               generated samples.\n    -- mu2   : The sample mean over activations of the pool_3 layer, precalculated\n               on a representative data set.\n    -- sigma2: The covariance matrix over activations of the pool_3 layer,\n               precalculated on a representative data set.\n\n    Returns:\n    -- dist  : The Frechet Distance.\n\n    Raises:\n    -- InvalidFIDException if NaN occurs (currently disabled; see EDIT notes below).\n    \"\"\"\n    m = np.square(mu1 - mu2).sum()\n    #s = sp.linalg.sqrtm(np.dot(sigma1, sigma2)) # EDIT: commented out\n    s, _ = sp.linalg.sqrtm(np.dot(sigma1, sigma2), disp=False) # EDIT: added\n    dist = m + np.trace(sigma1+sigma2 - 2*s)\n    #if np.isnan(dist): # EDIT: commented out\n    #    raise InvalidFIDException(\"nan occured in distance calculation.\") # EDIT: commented out\n    #return dist # EDIT: commented out\n    return np.real(dist) # EDIT: added\n#-------------------------------------------------------------------------------\n\n\ndef calculate_activation_statistics(images, sess, batch_size=50, verbose=False):\n    \"\"\"Calculation of the statistics used by the FID.\n    Params:\n    -- images      : Numpy array of dimension (n_images, hi, wi, 3). The values\n                     must lie between 0 and 255.\n    -- sess        : current session\n    -- batch_size  : the images numpy array is split into batches with batch size\n                     batch_size. 
A reasonable batch size depends on the available hardware.\n    -- verbose     : If set to True, progress of the calculated batches is reported.\n    Returns:\n    -- mu    : The mean over samples of the activations of the pool_3 layer of\n               the inception model.\n    -- sigma : The covariance matrix of the activations of the pool_3 layer of\n               the inception model.\n    \"\"\"\n    act = get_activations(images, sess, batch_size, verbose)\n    mu = np.mean(act, axis=0)\n    sigma = np.cov(act, rowvar=False)\n    return mu, sigma\n#-------------------------------------------------------------------------------\n\n\n#-------------------------------------------------------------------------------\n# The following functions aren't needed for calculating the FID;\n# they're just here to make this module work as a stand-alone script\n# for calculating FID scores.\n#-------------------------------------------------------------------------------\ndef check_or_download_inception(inception_path):\n    ''' Checks if the path to the inception file is valid, or downloads\n        the file if it is not present. '''\n    INCEPTION_URL = 'http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz'\n    if inception_path is None:\n        inception_path = '/tmp'\n    inception_path = pathlib.Path(inception_path)\n    model_file = inception_path / 'classify_image_graph_def.pb'\n    if not model_file.exists():\n        print(\"Downloading Inception model\")\n        from urllib import request\n        import tarfile\n        fn, _ = request.urlretrieve(INCEPTION_URL)\n        with tarfile.open(fn, mode='r') as f:\n            f.extract('classify_image_graph_def.pb', str(model_file.parent))\n    return str(model_file)\n\n\ndef _handle_path(path, sess):\n    if path.endswith('.npz'):\n        f = np.load(path)\n        m, s = f['mu'][:], f['sigma'][:]\n        f.close()\n    else:\n        path = pathlib.Path(path)\n        files = list(path.glob('*.jpg')) + list(path.glob('*.png'))\n        x = np.array([imread(str(fn)).astype(np.float32) for fn in files])\n        m, s = calculate_activation_statistics(x, sess)\n    return m, s\n\n\ndef calculate_fid_given_paths(paths, inception_path):\n    ''' Calculates the FID of two paths.
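 Each path may point either to a directory of .jpg/.png images or to an .npz file holding precomputed 'mu' and 'sigma' statistics.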
'''\n    inception_path = check_or_download_inception(inception_path)\n\n    for p in paths:\n        if not os.path.exists(p):\n            raise RuntimeError(\"Invalid path: %s\" % p)\n\n    create_inception_graph(str(inception_path))\n    with tf.Session() as sess:\n        sess.run(tf.global_variables_initializer())\n        m1, s1 = _handle_path(paths[0], sess)\n        m2, s2 = _handle_path(paths[1], sess)\n        fid_value = calculate_frechet_distance(m1, s1, m2, s2)\n        return fid_value\n\n\nif __name__ == \"__main__\":\n    from argparse import ArgumentParser, ArgumentDefaultsHelpFormatter\n    parser = ArgumentParser(formatter_class=ArgumentDefaultsHelpFormatter)\n    parser.add_argument(\"path\", type=str, nargs=2,\n        help='Path to the generated images or to .npz statistic files')\n    parser.add_argument(\"-i\", \"--inception\", type=str, default=None,\n        help='Path to Inception model (will be downloaded if not provided)')\n    parser.add_argument(\"--gpu\", default=\"\", type=str,\n        help='GPU to use (leave blank for CPU only)')\n    args = parser.parse_args()\n    os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu\n    fid_value = calculate_fid_given_paths(args.path, args.inception)\n    print(\"FID: \", fid_value)\n\n#----------------------------------------------------------------------------\n# EDIT: added\n\nclass API:\n    def __init__(self, num_images, image_shape, image_dtype, minibatch_size):\n        import config\n        self.network_dir = os.path.join(config.result_dir, '_inception_fid')\n        self.network_file = check_or_download_inception(self.network_dir)\n        self.sess = tf.get_default_session()\n        create_inception_graph(self.network_file)\n\n    def get_metric_names(self):\n        return ['FID']\n\n    def get_metric_formatting(self):\n        return ['%-10.4f']\n\n    def begin(self, mode):\n        assert mode in ['warmup', 'reals', 'fakes']\n        self.activations = []\n\n    def feed(self, mode, minibatch):\n        act = get_activations(minibatch.transpose(0,2,3,1), self.sess, batch_size=minibatch.shape[0])\n        self.activations.append(act)\n\n    def end(self, mode):\n        act = np.concatenate(self.activations)\n        mu = np.mean(act, axis=0)\n        sigma = np.cov(act, rowvar=False)\n        if mode in ['warmup', 'reals']:\n            self.mu_real = mu\n            self.sigma_real = sigma\n        fid = calculate_frechet_distance(mu, sigma, self.mu_real, self.sigma_real)\n        return [fid]\n\n#----------------------------------------------------------------------------\n"
  },
  {
    "path": "metrics/inception_score.py",
    "content": "# Copyright 2016 Wojciech Zaremba\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Adapted from the original implementation by Wojciech Zaremba.\n# Source: https://github.com/openai/improved-gan/blob/master/inception_score/model.py\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os.path\nimport sys\nimport tarfile\n\nimport numpy as np\nfrom six.moves import urllib\nimport tensorflow as tf\nimport glob\nimport scipy.misc\nimport math\nimport sys\n\nMODEL_DIR = '/tmp/imagenet'\n\nDATA_URL = 'http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz'\nsoftmax = None\n\n# Call this function with list of images. Each of elements should be a \n# numpy array with values ranging from 0 to 255.\ndef get_inception_score(images, splits=10):\n  assert(type(images) == list)\n  assert(type(images[0]) == np.ndarray)\n  assert(len(images[0].shape) == 3)\n  #assert(np.max(images[0]) > 10) # EDIT: commented out\n  #assert(np.min(images[0]) >= 0.0)\n  inps = []\n  for img in images:\n    img = img.astype(np.float32)\n    inps.append(np.expand_dims(img, 0))\n  bs = 100\n  with tf.Session() as sess:\n    preds = []\n    n_batches = int(math.ceil(float(len(inps)) / float(bs)))\n    for i in range(n_batches):\n        #sys.stdout.write(\".\") # EDIT: commented out\n        #sys.stdout.flush()\n        inp = inps[(i * bs):min((i + 1) * bs, len(inps))]\n        inp = np.concatenate(inp, 0)\n        pred = sess.run(softmax, {'ExpandDims:0': inp})\n        preds.append(pred)\n    preds = np.concatenate(preds, 0)\n    scores = []\n    for i in range(splits):\n      part = preds[(i * preds.shape[0] // splits):((i + 1) * preds.shape[0] // splits), :]\n      kl = part * (np.log(part) - np.log(np.expand_dims(np.mean(part, 0), 0)))\n      kl = np.mean(np.sum(kl, 1))\n      scores.append(np.exp(kl))\n    return np.mean(scores), np.std(scores)\n\n# This function is called automatically.\ndef _init_inception():\n  global softmax\n  if not os.path.exists(MODEL_DIR):\n    os.makedirs(MODEL_DIR)\n  filename = DATA_URL.split('/')[-1]\n  filepath = os.path.join(MODEL_DIR, filename)\n  if not os.path.exists(filepath):\n    def _progress(count, block_size, total_size):\n      sys.stdout.write('\\r>> Downloading %s %.1f%%' % (\n          filename, float(count * block_size) / float(total_size) * 100.0))\n      sys.stdout.flush()\n    filepath, _ = urllib.request.urlretrieve(DATA_URL, filepath, _progress)\n    print()\n    statinfo = os.stat(filepath)\n    print('Succesfully downloaded', filename, statinfo.st_size, 'bytes.')\n    tarfile.open(filepath, 'r:gz').extractall(MODEL_DIR) # EDIT: increased indent\n  with tf.gfile.FastGFile(os.path.join(\n      MODEL_DIR, 'classify_image_graph_def.pb'), 'rb') as f:\n    graph_def = tf.GraphDef()\n    graph_def.ParseFromString(f.read())\n    _ = tf.import_graph_def(graph_def, name='')\n  # Works with an arbitrary minibatch size.\n  with tf.Session() as sess:\n    pool3 = 
sess.graph.get_tensor_by_name('pool_3:0')\n    ops = pool3.graph.get_operations()\n    for op_idx, op in enumerate(ops):\n        for o in op.outputs:\n            shape = o.get_shape()\n            shape = [s.value for s in shape]\n            new_shape = []\n            for j, s in enumerate(shape):\n                if s == 1 and j == 0:\n                    new_shape.append(None)\n                else:\n                    new_shape.append(s)\n            try:\n                o._shape = tf.TensorShape(new_shape)\n            except ValueError:\n                o._shape_val = tf.TensorShape(new_shape) # EDIT: added for compatibility with tensorflow 1.6.0\n    w = sess.graph.get_operation_by_name(\"softmax/logits/MatMul\").inputs[1]\n    logits = tf.matmul(tf.squeeze(pool3), w)\n    softmax = tf.nn.softmax(logits)\n\n#if softmax is None: # EDIT: commented out\n#  _init_inception() # EDIT: commented out\n\n#----------------------------------------------------------------------------\n# EDIT: added\n\nclass API:\n    def __init__(self, num_images, image_shape, image_dtype, minibatch_size):\n        import config\n        globals()['MODEL_DIR'] = os.path.join(config.result_dir, '_inception')\n        self.sess = tf.get_default_session()\n        _init_inception()\n\n    def get_metric_names(self):\n        return ['IS_mean', 'IS_std']\n\n    def get_metric_formatting(self):\n        return ['%-10.4f', '%-10.4f']\n\n    def begin(self, mode):\n        assert mode in ['warmup', 'reals', 'fakes']\n        self.images = []\n\n    def feed(self, mode, minibatch):\n        self.images.append(minibatch.transpose(0, 2, 3, 1))\n\n    def end(self, mode):\n        images = list(np.concatenate(self.images))\n        with self.sess.as_default():\n            mean, std = get_inception_score(images)\n        return [mean, std]\n\n#----------------------------------------------------------------------------\n"
  },
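  {
    "path": "metrics/inception_score_example.py",
    "content": "# EDIT: added. Illustrative usage sketch for metrics/inception_score.py; not part\n# of the original codebase. Assumes it is run from inside the metrics/ directory\n# (so that inception_score imports directly), a TensorFlow 1.x install, and\n# network access to download the Inception graph on first use.\n\nimport numpy as np\n\nimport inception_score\n\ndef main():\n    # get_inception_score() expects a list of HWC arrays with values in [0, 255].\n    images = [np.random.randint(0, 256, size=(64, 64, 3)).astype(np.uint8) for _ in range(200)]\n    inception_score._init_inception() # auto-init at import time is commented out in inception_score.py\n    mean, std = inception_score.get_inception_score(images, splits=10)\n    print('IS mean: %.4f, std: %.4f' % (mean, std))\n\nif __name__ == '__main__':\n    main()\n"
  },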
  {
    "path": "metrics/ms_ssim.py",
    "content": "#!/usr/bin/python\n#\n# Copyright 2016 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\n# Adapted from the original implementation by The TensorFlow Authors.\n# Source: https://github.com/tensorflow/models/blob/master/research/compression/image_encoder/msssim.py\n\nimport numpy as np\nfrom scipy import signal\nfrom scipy.ndimage.filters import convolve\n\ndef _FSpecialGauss(size, sigma):\n    \"\"\"Function to mimic the 'fspecial' gaussian MATLAB function.\"\"\"\n    radius = size // 2\n    offset = 0.0\n    start, stop = -radius, radius + 1\n    if size % 2 == 0:\n        offset = 0.5\n        stop -= 1\n    x, y = np.mgrid[offset + start:stop, offset + start:stop]\n    assert len(x) == size\n    g = np.exp(-((x**2 + y**2)/(2.0 * sigma**2)))\n    return g / g.sum()\n\ndef _SSIMForMultiScale(img1, img2, max_val=255, filter_size=11, filter_sigma=1.5, k1=0.01, k2=0.03):\n    \"\"\"Return the Structural Similarity Map between `img1` and `img2`.\n\n    This function attempts to match the functionality of ssim_index_new.m by\n    Zhou Wang: http://www.cns.nyu.edu/~lcv/ssim/msssim.zip\n\n    Arguments:\n        img1: Numpy array holding the first RGB image batch.\n        img2: Numpy array holding the second RGB image batch.\n        max_val: the dynamic range of the images (i.e., the difference between the\n            maximum the and minimum allowed values).\n        filter_size: Size of blur kernel to use (will be reduced for small images).\n        filter_sigma: Standard deviation for Gaussian blur kernel (will be reduced\n            for small images).\n        k1: Constant used to maintain stability in the SSIM calculation (0.01 in\n            the original paper).\n        k2: Constant used to maintain stability in the SSIM calculation (0.03 in\n            the original paper).\n\n    Returns:\n        Pair containing the mean SSIM and contrast sensitivity between `img1` and\n        `img2`.\n\n    Raises:\n        RuntimeError: If input images don't have the same shape or don't have four\n            dimensions: [batch_size, height, width, depth].\n    \"\"\"\n    if img1.shape != img2.shape:\n        raise RuntimeError('Input images must have the same shape (%s vs. %s).' 
% (img1.shape, img2.shape))\n    if img1.ndim != 4:\n        raise RuntimeError('Input images must have four dimensions, not %d' % img1.ndim)\n\n    img1 = img1.astype(np.float32)\n    img2 = img2.astype(np.float32)\n    _, height, width, _ = img1.shape\n\n    # Filter size can't be larger than height or width of images.\n    size = min(filter_size, height, width)\n\n    # Scale down sigma if a smaller filter size is used.\n    sigma = size * filter_sigma / filter_size if filter_size else 0\n\n    if filter_size:\n        window = np.reshape(_FSpecialGauss(size, sigma), (1, size, size, 1))\n        mu1 = signal.fftconvolve(img1, window, mode='valid')\n        mu2 = signal.fftconvolve(img2, window, mode='valid')\n        sigma11 = signal.fftconvolve(img1 * img1, window, mode='valid')\n        sigma22 = signal.fftconvolve(img2 * img2, window, mode='valid')\n        sigma12 = signal.fftconvolve(img1 * img2, window, mode='valid')\n    else:\n        # Empty blur kernel so no need to convolve.\n        mu1, mu2 = img1, img2\n        sigma11 = img1 * img1\n        sigma22 = img2 * img2\n        sigma12 = img1 * img2\n\n    mu11 = mu1 * mu1\n    mu22 = mu2 * mu2\n    mu12 = mu1 * mu2\n    sigma11 -= mu11\n    sigma22 -= mu22\n    sigma12 -= mu12\n\n    # Calculate intermediate values used by both ssim and cs_map.\n    c1 = (k1 * max_val) ** 2\n    c2 = (k2 * max_val) ** 2\n    v1 = 2.0 * sigma12 + c2\n    v2 = sigma11 + sigma22 + c2\n    ssim = np.mean((((2.0 * mu12 + c1) * v1) / ((mu11 + mu22 + c1) * v2)), axis=(1, 2, 3)) # Return for each image individually.\n    cs = np.mean(v1 / v2, axis=(1, 2, 3))\n    return ssim, cs\n\ndef _HoxDownsample(img):\n    return (img[:, 0::2, 0::2, :] + img[:, 1::2, 0::2, :] + img[:, 0::2, 1::2, :] + img[:, 1::2, 1::2, :]) * 0.25\n\ndef msssim(img1, img2, max_val=255, filter_size=11, filter_sigma=1.5, k1=0.01, k2=0.03, weights=None):\n    \"\"\"Return the MS-SSIM score between `img1` and `img2`.\n\n    This function implements Multi-Scale Structural Similarity (MS-SSIM) Image\n    Quality Assessment according to Zhou Wang's paper, \"Multi-scale structural\n    similarity for image quality assessment\" (2003).\n    Link: https://ece.uwaterloo.ca/~z70wang/publications/msssim.pdf\n\n    Author's MATLAB implementation:\n    http://www.cns.nyu.edu/~lcv/ssim/msssim.zip\n\n    Arguments:\n        img1: Numpy array holding the first RGB image batch.\n        img2: Numpy array holding the second RGB image batch.\n        max_val: the dynamic range of the images (i.e., the difference between the\n            maximum and minimum allowed values).\n        filter_size: Size of blur kernel to use (will be reduced for small images).\n        filter_sigma: Standard deviation for Gaussian blur kernel (will be reduced\n            for small images).\n        k1: Constant used to maintain stability in the SSIM calculation (0.01 in\n            the original paper).\n        k2: Constant used to maintain stability in the SSIM calculation (0.03 in\n            the original paper).\n        weights: List of weights for each level; if None, use five levels and the\n            weights from the original paper.\n\n    Returns:\n        MS-SSIM score between `img1` and `img2`.\n\n    Raises:\n        RuntimeError: If input images don't have the same shape or don't have four\n            dimensions: [batch_size, height, width, depth].\n    \"\"\"\n    if img1.shape != img2.shape:\n        raise RuntimeError('Input images must have the same shape (%s vs. %s).' 
% (img1.shape, img2.shape))\n    if img1.ndim != 4:\n        raise RuntimeError('Input images must have four dimensions, not %d' % img1.ndim)\n\n    # Note: default weights don't sum to 1.0 but do match the paper / matlab code.\n    weights = np.array(weights if weights else [0.0448, 0.2856, 0.3001, 0.2363, 0.1333])\n    levels = weights.size\n    downsample_filter = np.ones((1, 2, 2, 1)) / 4.0\n    im1, im2 = [x.astype(np.float32) for x in [img1, img2]]\n    mssim = []\n    mcs = []\n    for _ in range(levels):\n        ssim, cs = _SSIMForMultiScale(\n                im1, im2, max_val=max_val, filter_size=filter_size,\n                filter_sigma=filter_sigma, k1=k1, k2=k2)\n        mssim.append(ssim)\n        mcs.append(cs)\n        im1, im2 = [_HoxDownsample(x) for x in [im1, im2]]\n\n    # Clip to zero. Otherwise we get NaNs.\n    mssim = np.clip(np.asarray(mssim), 0.0, np.inf)\n    mcs = np.clip(np.asarray(mcs), 0.0, np.inf)\n\n    # Average over images only at the end.\n    return np.mean(np.prod(mcs[:-1, :] ** weights[:-1, np.newaxis], axis=0) * (mssim[-1, :] ** weights[-1]))\n\n#----------------------------------------------------------------------------\n# EDIT: added\n\nclass API:\n    def __init__(self, num_images, image_shape, image_dtype, minibatch_size):\n        assert num_images % 2 == 0 and minibatch_size % 2 == 0\n        self.num_pairs = num_images // 2\n\n    def get_metric_names(self):\n        return ['MS-SSIM']\n\n    def get_metric_formatting(self):\n        return ['%-10.4f']\n\n    def begin(self, mode):\n        assert mode in ['warmup', 'reals', 'fakes']\n        self.sum = 0.0\n\n    def feed(self, mode, minibatch):\n        images = minibatch.transpose(0, 2, 3, 1)\n        score = msssim(images[0::2], images[1::2])\n        self.sum += score * (images.shape[0] // 2)\n\n    def end(self, mode):\n        avg = self.sum / self.num_pairs\n        return [avg]\n\n#----------------------------------------------------------------------------\n"
  },
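  {
    "path": "metrics/ms_ssim_example.py",
    "content": "# EDIT: added. Illustrative usage sketch for metrics/ms_ssim.py; not part of the\n# original codebase. msssim() expects NHWC batches; with the default five levels\n# the inputs are downsampled four times, so they should be at least ~176 px per\n# side for the 11-tap filter to fit at the coarsest scale. Assumes this file\n# lives next to ms_ssim.py.\n\nimport numpy as np\n\nfrom ms_ssim import msssim\n\nnp.random.seed(0)\nbatch_a = np.random.randint(0, 256, size=(4, 256, 256, 3)).astype(np.float32)\nbatch_b = np.clip(batch_a + np.random.normal(scale=8.0, size=batch_a.shape), 0, 255)\n\nprint('MS-SSIM (identical): %.4f' % msssim(batch_a, batch_a, max_val=255)) # == 1.0\nprint('MS-SSIM (noisy):     %.4f' % msssim(batch_a, batch_b, max_val=255))\n"
  },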
  {
    "path": "metrics/sliced_wasserstein.py",
    "content": "# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.\n#\n# This work is licensed under the Creative Commons Attribution-NonCommercial\n# 4.0 International License. To view a copy of this license, visit\n# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to\n# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.\n\nimport numpy as np\nimport scipy.ndimage\n\n#----------------------------------------------------------------------------\n\ndef get_descriptors_for_minibatch(minibatch, nhood_size, nhoods_per_image):\n    S = minibatch.shape # (minibatch, channel, height, width)\n    assert len(S) == 4 and S[1] == 3\n    N = nhoods_per_image * S[0]\n    H = nhood_size // 2\n    nhood, chan, x, y = np.ogrid[0:N, 0:3, -H:H+1, -H:H+1]\n    img = nhood // nhoods_per_image\n    x = x + np.random.randint(H, S[3] - H, size=(N, 1, 1, 1))\n    y = y + np.random.randint(H, S[2] - H, size=(N, 1, 1, 1))\n    idx = ((img * S[1] + chan) * S[2] + y) * S[3] + x\n    return minibatch.flat[idx]\n\n#----------------------------------------------------------------------------\n\ndef finalize_descriptors(desc):\n    if isinstance(desc, list):\n        desc = np.concatenate(desc, axis=0)\n    assert desc.ndim == 4 # (neighborhood, channel, height, width)\n    desc -= np.mean(desc, axis=(0, 2, 3), keepdims=True)\n    desc /= np.std(desc, axis=(0, 2, 3), keepdims=True)\n    desc = desc.reshape(desc.shape[0], -1)\n    return desc\n\n#----------------------------------------------------------------------------\n\ndef sliced_wasserstein(A, B, dir_repeats, dirs_per_repeat):\n    assert A.ndim == 2 and A.shape == B.shape                           # (neighborhood, descriptor_component)\n    results = []\n    for repeat in range(dir_repeats):\n        dirs = np.random.randn(A.shape[1], dirs_per_repeat)             # (descriptor_component, direction)\n        dirs /= np.sqrt(np.sum(np.square(dirs), axis=0, keepdims=True)) # normalize descriptor components for each direction\n        dirs = dirs.astype(np.float32)\n        projA = np.matmul(A, dirs)                                      # (neighborhood, direction)\n        projB = np.matmul(B, dirs)\n        projA = np.sort(projA, axis=0)                                  # sort neighborhood projections for each direction\n        projB = np.sort(projB, axis=0)\n        dists = np.abs(projA - projB)                                   # pointwise wasserstein distances\n        results.append(np.mean(dists))                                  # average over neighborhoods and directions\n    return np.mean(results)                                             # average over repeats\n\n#----------------------------------------------------------------------------\n\ndef downscale_minibatch(minibatch, lod):\n    if lod == 0:\n        return minibatch\n    t = minibatch.astype(np.float32)\n    for i in range(lod):\n        t = (t[:, :, 0::2, 0::2] + t[:, :, 0::2, 1::2] + t[:, :, 1::2, 0::2] + t[:, :, 1::2, 1::2]) * 0.25\n    return np.round(t).clip(0, 255).astype(np.uint8)\n\n#----------------------------------------------------------------------------\n\ngaussian_filter = np.float32([\n    [1, 4,  6,  4,  1],\n    [4, 16, 24, 16, 4],\n    [6, 24, 36, 24, 6],\n    [4, 16, 24, 16, 4],\n    [1, 4,  6,  4,  1]]) / 256.0\n\ndef pyr_down(minibatch): # matches cv2.pyrDown()\n    assert minibatch.ndim == 4\n    return scipy.ndimage.convolve(minibatch, gaussian_filter[np.newaxis, np.newaxis, :, :], mode='mirror')[:, :, ::2, ::2]\n\ndef 
pyr_up(minibatch): # matches cv2.pyrUp()\n    assert minibatch.ndim == 4\n    S = minibatch.shape\n    res = np.zeros((S[0], S[1], S[2] * 2, S[3] * 2), minibatch.dtype)\n    res[:, :, ::2, ::2] = minibatch\n    return scipy.ndimage.convolve(res, gaussian_filter[np.newaxis, np.newaxis, :, :] * 4.0, mode='mirror')\n\ndef generate_laplacian_pyramid(minibatch, num_levels):\n    pyramid = [np.float32(minibatch)]\n    for i in range(1, num_levels):\n        pyramid.append(pyr_down(pyramid[-1]))\n        pyramid[-2] -= pyr_up(pyramid[-1])\n    return pyramid\n\ndef reconstruct_laplacian_pyramid(pyramid):\n    minibatch = pyramid[-1]\n    for level in pyramid[-2::-1]:\n        minibatch = pyr_up(minibatch) + level\n    return minibatch\n\n#----------------------------------------------------------------------------\n\nclass API:\n    def __init__(self, num_images, image_shape, image_dtype, minibatch_size):\n        self.nhood_size         = 7\n        self.nhoods_per_image   = 128\n        self.dir_repeats        = 4\n        self.dirs_per_repeat    = 128\n        self.resolutions = []\n        res = image_shape[1]\n        while res >= 16:\n            self.resolutions.append(res)\n            res //= 2\n\n    def get_metric_names(self):\n        return ['SWDx1e3_%d' % res for res in self.resolutions] + ['SWDx1e3_avg']\n\n    def get_metric_formatting(self):\n        return ['%-13.4f'] * len(self.get_metric_names())\n\n    def begin(self, mode):\n        assert mode in ['warmup', 'reals', 'fakes']\n        self.descriptors = [[] for res in self.resolutions]\n\n    def feed(self, mode, minibatch):\n        for lod, level in enumerate(generate_laplacian_pyramid(minibatch, len(self.resolutions))):\n            desc = get_descriptors_for_minibatch(level, self.nhood_size, self.nhoods_per_image)\n            self.descriptors[lod].append(desc)\n\n    def end(self, mode):\n        desc = [finalize_descriptors(d) for d in self.descriptors]\n        del self.descriptors\n        if mode in ['warmup', 'reals']:\n            self.desc_real = desc\n        dist = [sliced_wasserstein(dreal, dfake, self.dir_repeats, self.dirs_per_repeat) for dreal, dfake in zip(self.desc_real, desc)]\n        del desc\n        dist = [d * 1e3 for d in dist] # multiply by 10^3\n        return dist + [np.mean(dist)]\n\n#----------------------------------------------------------------------------\n"
  },
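  {
    "path": "metrics/sliced_wasserstein_example.py",
    "content": "# EDIT: added. Illustrative sketch of the raw sliced-Wasserstein pipeline from\n# metrics/sliced_wasserstein.py; not part of the original codebase. Samples 7x7\n# neighborhood descriptors from two NCHW minibatches, normalizes them, and\n# compares sorted projections onto random directions; identical inputs would\n# score near zero. Assumes this file lives next to sliced_wasserstein.py.\n\nimport numpy as np\n\nfrom sliced_wasserstein import get_descriptors_for_minibatch, finalize_descriptors, sliced_wasserstein\n\nnp.random.seed(0)\nreals = np.random.randint(0, 256, size=(8, 3, 32, 32)).astype(np.float32)\nfakes = np.random.randint(0, 256, size=(8, 3, 32, 32)).astype(np.float32)\n\ndesc_real = finalize_descriptors(get_descriptors_for_minibatch(reals, nhood_size=7, nhoods_per_image=64))\ndesc_fake = finalize_descriptors(get_descriptors_for_minibatch(fakes, nhood_size=7, nhoods_per_image=64))\nswd = sliced_wasserstein(desc_real, desc_fake, dir_repeats=4, dirs_per_repeat=128)\nprint('SWD x 1e3: %.4f' % (swd * 1e3))\n"
  },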
  {
    "path": "misc.py",
    "content": "# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.\n#\n# This work is licensed under the Creative Commons Attribution-NonCommercial\n# 4.0 International License. To view a copy of this license, visit\n# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to\n# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.\n\nimport os\nimport sys\nimport glob\nimport datetime\nimport pickle\nimport re\nimport numpy as np\nfrom collections import OrderedDict \nimport scipy.ndimage\nimport PIL.Image\n\nimport config\nimport dataset\nimport legacy\n\n#----------------------------------------------------------------------------\n# Convenience wrappers for pickle that are able to load data produced by\n# older versions of the code.\n\ndef load_pkl(filename):\n    with open(filename, 'rb') as file:\n        return legacy.LegacyUnpickler(file, encoding='latin1').load()\n\ndef save_pkl(obj, filename):\n    with open(filename, 'wb') as file:\n        pickle.dump(obj, file, protocol=pickle.HIGHEST_PROTOCOL)\n\n#----------------------------------------------------------------------------\n# Image utils.\n\ndef adjust_dynamic_range(data, drange_in, drange_out):\n    if drange_in != drange_out:\n        scale = (np.float32(drange_out[1]) - np.float32(drange_out[0])) / (np.float32(drange_in[1]) - np.float32(drange_in[0]))\n        bias = (np.float32(drange_out[0]) - np.float32(drange_in[0]) * scale)\n        data = data * scale + bias\n    return data\n\ndef create_image_grid(images, grid_size=None):\n    assert images.ndim == 3 or images.ndim == 4\n    num, img_w, img_h = images.shape[0], images.shape[-1], images.shape[-2]\n\n    if grid_size is not None:\n        grid_w, grid_h = tuple(grid_size)\n    else:\n        grid_w = max(int(np.ceil(np.sqrt(num))), 1)\n        grid_h = max((num - 1) // grid_w + 1, 1)\n\n    grid = np.zeros(list(images.shape[1:-2]) + [grid_h * img_h, grid_w * img_w], dtype=images.dtype)\n    for idx in range(num):\n        x = (idx % grid_w) * img_w\n        y = (idx // grid_w) * img_h\n        grid[..., y : y + img_h, x : x + img_w] = images[idx]\n    return grid\n\ndef convert_to_pil_image(image, drange=[0,1]):\n    assert image.ndim == 2 or image.ndim == 3\n    if image.ndim == 3:\n        if image.shape[0] == 1:\n            image = image[0] # grayscale CHW => HW\n        else:\n            image = image.transpose(1, 2, 0) # CHW -> HWC\n\n    image = adjust_dynamic_range(image, drange, [0,255])\n    image = np.rint(image).clip(0, 255).astype(np.uint8)\n    format = 'RGB' if image.ndim == 3 else 'L'\n    return PIL.Image.fromarray(image, format)\n\ndef save_image(image, filename, drange=[0,1], quality=95):\n    img = convert_to_pil_image(image, drange)\n    if '.jpg' in filename:\n        img.save(filename,\"JPEG\", quality=quality, optimize=True)\n    else:\n        img.save(filename)\n\ndef save_image_grid(images, filename, drange=[0,1], grid_size=None):\n    if images.shape[-3] <=3:\n        convert_to_pil_image(create_image_grid(images, grid_size), drange).save(filename)\n    elif images.shape[-3]==6:\n        convert_to_pil_image(create_image_grid(images[:,0:3,:,:], grid_size), drange).save(filename)\n        convert_to_pil_image(create_image_grid(images[:, 3:6, :, :], grid_size), drange).save(os.path.splitext(filename)[0]+'_shp'+os.path.splitext(filename)[1])\n    elif images.shape[-3] == 9:\n        convert_to_pil_image(create_image_grid(images[:, 0:3, :, :], grid_size), drange).save(filename)\n        
convert_to_pil_image(create_image_grid(images[:, 3:6, :, :], grid_size), drange).save(\n            os.path.splitext(filename)[0] + '_shp' + os.path.splitext(filename)[1])\n        convert_to_pil_image(create_image_grid(images[:, 6:9, :, :], grid_size), drange).save(\n            os.path.splitext(filename)[0] + '_nor' + os.path.splitext(filename)[1])\n\n#----------------------------------------------------------------------------\n# Logging of stdout and stderr to a file.\n\nclass OutputLogger(object):\n    def __init__(self):\n        self.file = None\n        self.buffer = ''\n\n    def set_log_file(self, filename, mode='wt'):\n        assert self.file is None\n        self.file = open(filename, mode)\n        if self.buffer is not None:\n            self.file.write(self.buffer)\n            self.buffer = None\n\n    def write(self, data):\n        if self.file is not None:\n            self.file.write(data)\n        if self.buffer is not None:\n            self.buffer += data\n\n    def flush(self):\n        if self.file is not None:\n            self.file.flush()\n\nclass TeeOutputStream(object):\n    def __init__(self, child_streams, autoflush=False):\n        self.child_streams = child_streams\n        self.autoflush = autoflush\n \n    def write(self, data):\n        for stream in self.child_streams:\n            stream.write(data)\n        if self.autoflush:\n            self.flush()\n\n    def flush(self):\n        for stream in self.child_streams:\n            stream.flush()\n\noutput_logger = None\n\ndef init_output_logging():\n    global output_logger\n    if output_logger is None:\n        output_logger = OutputLogger()\n        sys.stdout = TeeOutputStream([sys.stdout, output_logger], autoflush=True)\n        sys.stderr = TeeOutputStream([sys.stderr, output_logger], autoflush=True)\n\ndef set_output_log_file(filename, mode='wt'):\n    if output_logger is not None:\n        output_logger.set_log_file(filename, mode)\n\n#----------------------------------------------------------------------------\n# Reporting results.\n\ndef create_result_subdir(result_dir, desc):\n\n    # Select run ID and create subdir.\n    while True:\n        run_id = 0\n        for fname in glob.glob(os.path.join(result_dir, '*')):\n            try:\n                fbase = os.path.basename(fname)\n                ford = int(fbase[:fbase.find('-')])\n                run_id = max(run_id, ford + 1)\n            except ValueError:\n                pass\n\n        result_subdir = os.path.join(result_dir, '%03d-%s' % (run_id, desc))\n        try:\n            os.makedirs(result_subdir)\n            break\n        except OSError:\n            if os.path.isdir(result_subdir):\n                continue\n            raise\n\n    print(\"Saving results to\", result_subdir)\n    set_output_log_file(os.path.join(result_subdir, 'log.txt'))\n\n    # Export config.\n    try:\n        with open(os.path.join(result_subdir, 'config.txt'), 'wt') as fout:\n            for k, v in sorted(config.__dict__.items()):\n                if not k.startswith('_'):\n                    fout.write(\"%s = %s\\n\" % (k, str(v)))\n    except:\n        pass\n\n    return result_subdir\n\ndef format_time(seconds):\n    s = int(np.rint(seconds))\n    if s < 60:         return '%ds'                % (s)\n    elif s < 60*60:    return '%dm %02ds'          % (s // 60, s % 60)\n    elif s < 24*60*60: return '%dh %02dm %02ds'    % (s // (60*60), (s // 60) % 60, s % 60)\n    else:              return '%dd %02dh %02dm'    % (s // (24*60*60), (s // 
(60*60)) % 24, (s // 60) % 60)\n\n#----------------------------------------------------------------------------\n# Locating results.\n\ndef locate_result_subdir(run_id_or_result_subdir):\n    if isinstance(run_id_or_result_subdir, str) and os.path.isdir(run_id_or_result_subdir):\n        return run_id_or_result_subdir\n\n    searchdirs = []\n    searchdirs += ['']\n    searchdirs += ['results']\n    searchdirs += ['networks']\n\n    for searchdir in searchdirs:\n        dir = config.result_dir if searchdir == '' else os.path.join(config.result_dir, searchdir)\n        dir = os.path.join(dir, str(run_id_or_result_subdir))\n        if os.path.isdir(dir):\n            return dir\n        prefix = '%03d' % run_id_or_result_subdir if isinstance(run_id_or_result_subdir, int) else str(run_id_or_result_subdir)\n        dirs = sorted(glob.glob(os.path.join(config.result_dir, searchdir, prefix + '-*')))\n        dirs = [dir for dir in dirs if os.path.isdir(dir)]\n        if len(dirs) == 1:\n            return dirs[0]\n    raise IOError('Cannot locate result subdir for run', run_id_or_result_subdir)\n\ndef list_network_pkls(run_id_or_result_subdir, include_final=True):\n    result_subdir = locate_result_subdir(run_id_or_result_subdir)\n    pkls = sorted(glob.glob(os.path.join(result_subdir, 'network-*.pkl')))\n    if len(pkls) >= 1 and os.path.basename(pkls[0]) == 'network-final.pkl':\n        if include_final:\n            pkls.append(pkls[0])\n        del pkls[0]\n    return pkls\n\ndef locate_network_pkl(run_id_or_result_subdir_or_network_pkl, snapshot=None):\n    if isinstance(run_id_or_result_subdir_or_network_pkl, str) and os.path.isfile(run_id_or_result_subdir_or_network_pkl):\n        return run_id_or_result_subdir_or_network_pkl\n\n    pkls = list_network_pkls(run_id_or_result_subdir_or_network_pkl)\n    if len(pkls) >= 1 and snapshot is None:\n        return pkls[-1]\n    for pkl in pkls:\n        try:\n            name = os.path.splitext(os.path.basename(pkl))[0]\n            number = int(name.split('-')[-1])\n            if number == snapshot:\n                return pkl\n        except ValueError: pass\n        except IndexError: pass\n    raise IOError('Cannot locate network pkl for snapshot', snapshot)\n\ndef get_id_string_for_network_pkl(network_pkl):\n    p = network_pkl.replace('.pkl', '').replace('\\\\', '/').split('/')\n    return '-'.join(p[max(len(p) - 2, 0):])\n\n#----------------------------------------------------------------------------\n# Loading and using trained networks.\n\ndef load_network_pkl(run_id_or_result_subdir_or_network_pkl, snapshot=None):\n    return load_pkl(locate_network_pkl(run_id_or_result_subdir_or_network_pkl, snapshot))\n\ndef random_latents(num_latents, G, random_state=None):\n    if random_state is not None:\n        return random_state.randn(num_latents, *G.input_shape[1:]).astype(np.float32)\n    else:\n        return np.random.randn(num_latents, *G.input_shape[1:]).astype(np.float32)\n\ndef load_dataset_for_previous_run(run_id, **kwargs): # => dataset_obj, mirror_augment\n    result_subdir = locate_result_subdir(run_id)\n\n    # Parse config.txt.\n    parsed_cfg = dict()\n    with open(os.path.join(result_subdir, 'config.txt'), 'rt') as f:\n        for line in f:\n            if line.startswith('dataset =') or line.startswith('train ='):\n                exec(line, parsed_cfg, parsed_cfg)\n    dataset_cfg = parsed_cfg.get('dataset', dict())\n    train_cfg = parsed_cfg.get('train', dict())\n    mirror_augment = train_cfg.get('mirror_augment', 
False)\n\n    # Handle legacy options.\n    if 'h5_path' in dataset_cfg:\n        dataset_cfg['tfrecord_dir'] = dataset_cfg.pop('h5_path').replace('.h5', '')\n    if 'mirror_augment' in dataset_cfg:\n        mirror_augment = dataset_cfg.pop('mirror_augment')\n    if 'max_labels' in dataset_cfg:\n        v = dataset_cfg.pop('max_labels')\n        if v is None: v = 0\n        if v == 'all': v = 'full'\n        dataset_cfg['max_label_size'] = v\n    if 'max_images' in dataset_cfg:\n        dataset_cfg.pop('max_images')\n\n    # Handle legacy dataset names.\n    v = dataset_cfg['tfrecord_dir']\n    v = v.replace('-32x32', '').replace('-32', '')\n    v = v.replace('-128x128', '').replace('-128', '')\n    v = v.replace('-256x256', '').replace('-256', '')\n    v = v.replace('-1024x1024', '').replace('-1024', '')\n    v = v.replace('celeba-hq', 'celebahq')\n    v = v.replace('cifar-10', 'cifar10')\n    v = v.replace('cifar-100', 'cifar100')\n    v = v.replace('mnist-rgb', 'mnistrgb')\n    v = re.sub('lsun-100k-([^-]*)', 'lsun-\\\\1-100k', v)\n    v = re.sub('lsun-full-([^-]*)', 'lsun-\\\\1-full', v)\n    dataset_cfg['tfrecord_dir'] = v\n\n    # Load dataset.\n    dataset_cfg.update(kwargs)\n    dataset_obj = dataset.load_dataset(data_dir=config.data_dir, **dataset_cfg)\n    return dataset_obj, mirror_augment\n\ndef apply_mirror_augment(minibatch):\n    mask = np.random.rand(minibatch.shape[0]) < 0.5\n    minibatch = np.array(minibatch)\n    minibatch[mask] = minibatch[mask, :, :, ::-1]\n    return minibatch\n\n#----------------------------------------------------------------------------\n# Text labels.\n\n_text_label_cache = OrderedDict()\n\ndef draw_text_label(img, text, x, y, alignx=0.5, aligny=0.5, color=255, opacity=1.0, glow_opacity=1.0, **kwargs):\n    color = np.array(color).flatten().astype(np.float32)\n    assert img.ndim == 3 and (img.shape[2] == color.size or color.size == 1)\n    alpha, glow = setup_text_label(text, **kwargs)\n    xx, yy = int(np.rint(x - alpha.shape[1] * alignx)), int(np.rint(y - alpha.shape[0] * aligny))\n    xb, yb = max(-xx, 0), max(-yy, 0)\n    xe, ye = min(alpha.shape[1], img.shape[1] - xx), min(alpha.shape[0], img.shape[0] - yy)\n    img = np.array(img)\n    slice = img[yy+yb : yy+ye, xx+xb : xx+xe, :]\n    slice[:] = slice * (1.0 - (1.0 - (1.0 - alpha[yb:ye, xb:xe]) * (1.0 - glow[yb:ye, xb:xe] * glow_opacity)) * opacity)[:, :, np.newaxis]\n    slice[:] = slice + alpha[yb:ye, xb:xe, np.newaxis] * (color * opacity)[np.newaxis, np.newaxis, :]\n    return img\n\ndef setup_text_label(text, font='Calibri', fontsize=32, padding=6, glow_size=2.0, glow_coef=3.0, glow_exp=2.0, cache_size=100): # => (alpha, glow)\n    # Lookup from cache.\n    key = (text, font, fontsize, padding, glow_size, glow_coef, glow_exp)\n    if key in _text_label_cache:\n        value = _text_label_cache[key]\n        del _text_label_cache[key] # LRU policy\n        _text_label_cache[key] = value\n        return value\n\n    # Limit cache size.\n    while len(_text_label_cache) >= cache_size:\n        _text_label_cache.popitem(last=False)\n\n    # Render text.\n    import moviepy.editor # pip install moviepy\n    alpha = moviepy.editor.TextClip(text, font=font, fontsize=fontsize).mask.make_frame(0)\n    alpha = np.pad(alpha, padding, mode='constant', constant_values=0.0)\n    glow = scipy.ndimage.gaussian_filter(alpha, glow_size)\n    glow = 1.0 - np.maximum(1.0 - glow * glow_coef, 0.0) ** glow_exp\n\n    # Add to cache.\n    value = (alpha, glow)\n    _text_label_cache[key] = value\n    return 
value\n\n#----------------------------------------------------------------------------\n"
  },
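  {
    "path": "misc_example.py",
    "content": "# EDIT: added. Illustrative sketch of the image-grid helpers in misc.py; not part\n# of the original codebase. Importing misc pulls in config/dataset/legacy, so this\n# is meant to run from the repository root.\n\nimport numpy as np\n\nimport misc\n\n# Eight fake CHW images in [-1, 1], tiled into a 4x2 grid and saved as PNG.\nimages = np.random.uniform(-1.0, 1.0, size=(8, 3, 64, 64)).astype(np.float32)\ngrid = misc.create_image_grid(images, grid_size=(4, 2)) # -> shape (3, 128, 256)\nmisc.save_image_grid(images, 'grid.png', drange=[-1, 1], grid_size=(4, 2))\nprint('grid shape:', grid.shape)\n"
  },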
  {
    "path": "myutil.py",
    "content": "import numpy as np\nimport os\nimport PIL.Image\nfrom menpo.image import Image\n\ndef crop_im(img):\n    img = img.crop((41, 0, img.size[0] - 42, 377))\n    new_img = PIL.Image.new(\"RGB\", (512, 512), (0, 0, 0))\n    new_img.paste(img, ((512 - img.size[0]) // 2, (512 - img.size[1]) // 2))\n    return new_img\n\ndef crop_im_512(img_377):\n    img = img_377\n    if isinstance(img_377, Image):\n        img = img_377.pixels_with_channels_at_back()\n    if img.shape[0]==3:\n        np.transpose(img, [1, 2, 0])\n\n    img = img[:, 41:img.shape[1] - 42, :]\n    img = np.pad(img, ((67, 68),(0, 0) , (0, 0)), 'constant')\n    img = np.clip(img,0,1)\n    if isinstance(img_377, Image):\n        img = Image(np.transpose(img,[2,0,1]))\n    return img\n\ndef crop_im_377(img_512):\n    img = img_512\n    if isinstance(img_512, Image):\n        img = img_512.pixels_with_channels_at_back()\n    if img.shape[0]==3:\n        img = np.transpose(img, [1, 2, 0])\n\n    img = img[67:img.shape[1] - 68, :, :]\n    img = np.pad(img, ((0, 0),(41, 42) , (0, 0)), 'constant')\n    img = np.clip(img,0,1)\n    img[:, 0:42, :] = np.transpose(np.tile(img[:, 42, :], [42, 1, 1]), [1, 0, 2])\n    img[:, 552:, :] = np.transpose(np.tile(img[:, 552, :], [43, 1, 1]), [1, 0, 2])\n    if isinstance(img_512, Image):\n        img = Image(np.transpose(img,[2,0,1]))\n    return img\n\ndef concat_image(im1, im2):\n    if type(im1) is not PIL.Image.Image:\n        im1 = PIL.Image.fromarray(im1)\n    if type(im2) is not PIL.Image.Image:\n        im2 = PIL.Image.fromarray(im2)\n\n    new_im = PIL.Image.new('RGB', (512, 377 * 2))\n    new_im.paste(im1.crop((0, 67, im1.size[0], im1.size[1] - 68)), (0, 0))\n    new_im.paste(im2.crop((0, 67, im2.size[0], im2.size[1] - 68)), (0, 377))\n    return new_im\n\ndef rgb2tf(img):\n    return np.transpose(np.asarray(img)/127.5-1,(2,0,1))\n\ndef tf2rgb(img):\n    return (np.clip(np.transpose(img[0][0],(1,2,0))*127.5+127.5,0,255)).astype(np.uint8)\n\ndef files_gen(path):\n    for file in os.listdir(path):\n        if os.path.isfile(os.path.join(path, file)):\n            yield file\n\ndef files(path):\n    return list(files_gen(path))"
  },
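  {
    "path": "myutil_example.py",
    "content": "# EDIT: added. Illustrative sketch of myutil.py's range/layout converters; not\n# part of the original codebase (and, like myutil, it needs the menpo package\n# installed to import). rgb2tf maps an HWC image in [0, 255] to a CHW float\n# array in [-1, 1]; tf2rgb inverts this but indexes img[0][0], i.e. it expects\n# a list whose first element is an NCHW batch.\n\nimport numpy as np\n\nfrom myutil import rgb2tf, tf2rgb\n\nhwc = np.random.randint(0, 256, size=(64, 64, 3)).astype(np.uint8)\nchw = rgb2tf(hwc) # (3, 64, 64), float in [-1, 1]\nrestored = tf2rgb([chw[np.newaxis]]) # wrap as [NCHW] to match the indexing\nassert restored.shape == (64, 64, 3)\nassert np.max(np.abs(restored.astype(np.int32) - hwc.astype(np.int32))) <= 1 # round-trip up to truncation\n"
  },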
  {
    "path": "networks.py",
    "content": "# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.\n#\n# This work is licensed under the Creative Commons Attribution-NonCommercial\n# 4.0 International License. To view a copy of this license, visit\n# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to\n# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.\n\nimport numpy as np\nimport tensorflow as tf\n\n# NOTE: Do not import any application-specific modules here!\n\n#----------------------------------------------------------------------------\n\ndef lerp(a, b, t): return a + (b - a) * t\ndef lerp_clip(a, b, t): return a + (b - a) * tf.clip_by_value(t, 0.0, 1.0)\ndef cset(cur_lambda, new_cond, new_lambda): return lambda: tf.cond(new_cond, new_lambda, cur_lambda)\n\n#----------------------------------------------------------------------------\n# Get/create weight tensor for a convolutional or fully-connected layer.\n\ndef get_weight(shape, gain=np.sqrt(2), use_wscale=False, fan_in=None):\n    if fan_in is None: fan_in = np.prod(shape[:-1])\n    std = gain / np.sqrt(fan_in) # He init\n    if use_wscale:\n        wscale = tf.constant(np.float32(std), name='wscale')\n        return tf.get_variable('weight', shape=shape, initializer=tf.initializers.random_normal()) * wscale\n    else:\n        return tf.get_variable('weight', shape=shape, initializer=tf.initializers.random_normal(0, std))\n\n#----------------------------------------------------------------------------\n# Fully-connected layer.\n\ndef dense(x, fmaps, gain=np.sqrt(2), use_wscale=False):\n    if len(x.shape) > 2:\n        x = tf.reshape(x, [-1, np.prod([d.value for d in x.shape[1:]])])\n    w = get_weight([x.shape[1].value, fmaps], gain=gain, use_wscale=use_wscale)\n    w = tf.cast(w, x.dtype)\n    return tf.matmul(x, w)\n\n#----------------------------------------------------------------------------\n# Convolutional layer.\n\ndef conv2d(x, fmaps, kernel, gain=np.sqrt(2), use_wscale=False):\n    assert kernel >= 1 and kernel % 2 == 1\n    w = get_weight([kernel, kernel, x.shape[1].value, fmaps], gain=gain, use_wscale=use_wscale)\n    w = tf.cast(w, x.dtype)\n    return tf.nn.conv2d(x, w, strides=[1,1,1,1], padding='SAME', data_format='NCHW')\n\n#----------------------------------------------------------------------------\n# Apply bias to the given activation tensor.\n\ndef apply_bias(x):\n    b = tf.get_variable('bias', shape=[x.shape[1]], initializer=tf.initializers.zeros())\n    b = tf.cast(b, x.dtype)\n    if len(x.shape) == 2:\n        return x + b\n    else:\n        return x + tf.reshape(b, [1, -1, 1, 1])\n\n#----------------------------------------------------------------------------\n# Leaky ReLU activation. 
Same as tf.nn.leaky_relu, but supports FP16.\n\ndef leaky_relu(x, alpha=0.2):\n    with tf.name_scope('LeakyRelu'):\n        alpha = tf.constant(alpha, dtype=x.dtype, name='alpha')\n        return tf.maximum(x * alpha, x)\n\n#----------------------------------------------------------------------------\n# Nearest-neighbor upscaling layer.\n\ndef upscale2d(x, factor=2):\n    assert isinstance(factor, int) and factor >= 1\n    if factor == 1: return x\n    with tf.variable_scope('Upscale2D'):\n        s = x.shape\n        x = tf.reshape(x, [-1, s[1], s[2], 1, s[3], 1])\n        x = tf.tile(x, [1, 1, 1, factor, 1, factor])\n        x = tf.reshape(x, [-1, s[1], s[2] * factor, s[3] * factor])\n        return x\n\n#----------------------------------------------------------------------------\n# Fused upscale2d + conv2d.\n# Faster and uses less memory than performing the operations separately.\n\ndef upscale2d_conv2d(x, fmaps, kernel, gain=np.sqrt(2), use_wscale=False):\n    assert kernel >= 1 and kernel % 2 == 1\n    w = get_weight([kernel, kernel, fmaps, x.shape[1].value], gain=gain, use_wscale=use_wscale, fan_in=(kernel**2)*x.shape[1].value)\n    w = tf.pad(w, [[1,1], [1,1], [0,0], [0,0]], mode='CONSTANT')\n    w = tf.add_n([w[1:, 1:], w[:-1, 1:], w[1:, :-1], w[:-1, :-1]])\n    w = tf.cast(w, x.dtype)\n    os = [tf.shape(x)[0], fmaps, x.shape[2] * 2, x.shape[3] * 2]\n    return tf.nn.conv2d_transpose(x, w, os, strides=[1,1,2,2], padding='SAME', data_format='NCHW')\n\n#----------------------------------------------------------------------------\n# Box filter downscaling layer.\n\ndef downscale2d(x, factor=2):\n    assert isinstance(factor, int) and factor >= 1\n    if factor == 1: return x\n    with tf.variable_scope('Downscale2D'):\n        ksize = [1, 1, factor, factor]\n        return tf.nn.avg_pool(x, ksize=ksize, strides=ksize, padding='VALID', data_format='NCHW') # NOTE: requires tf_config['graph_options.place_pruned_graph'] = True\n\n#----------------------------------------------------------------------------\n# Fused conv2d + downscale2d.\n# Faster and uses less memory than performing the operations separately.\n\ndef conv2d_downscale2d(x, fmaps, kernel, gain=np.sqrt(2), use_wscale=False):\n    assert kernel >= 1 and kernel % 2 == 1\n    w = get_weight([kernel, kernel, x.shape[1].value, fmaps], gain=gain, use_wscale=use_wscale)\n    w = tf.pad(w, [[1,1], [1,1], [0,0], [0,0]], mode='CONSTANT')\n    w = tf.add_n([w[1:, 1:], w[:-1, 1:], w[1:, :-1], w[:-1, :-1]]) * 0.25\n    w = tf.cast(w, x.dtype)\n    return tf.nn.conv2d(x, w, strides=[1,1,2,2], padding='SAME', data_format='NCHW')\n\n#----------------------------------------------------------------------------\n# Pixelwise feature vector normalization.\n\ndef pixel_norm(x, epsilon=1e-8):\n    with tf.variable_scope('PixelNorm'):\n        return x * tf.rsqrt(tf.reduce_mean(tf.square(x), axis=1, keepdims=True) + epsilon)\n\n#----------------------------------------------------------------------------\n# Minibatch standard deviation.\n\ndef minibatch_stddev_layer(x, group_size=4):\n    with tf.variable_scope('MinibatchStddev'):\n        group_size = tf.minimum(group_size, tf.shape(x)[0])     # Minibatch must be divisible by (or smaller than) group_size.\n        s = x.shape                                             # [NCHW]  Input shape.\n        y = tf.reshape(x, [group_size, -1, s[1], s[2], s[3]])   # [GMCHW] Split minibatch into M groups of size G.\n        y = tf.cast(y, tf.float32)                              # [GMCHW] Cast to FP32.\n 
       y -= tf.reduce_mean(y, axis=0, keepdims=True)           # [GMCHW] Subtract mean over group.\n        y = tf.reduce_mean(tf.square(y), axis=0)                # [MCHW]  Calc variance over group.\n        y = tf.sqrt(y + 1e-8)                                   # [MCHW]  Calc stddev over group.\n        y = tf.reduce_mean(y, axis=[1,2,3], keepdims=True)      # [M111]  Take average over fmaps and pixels.\n        y = tf.cast(y, x.dtype)                                 # [M111]  Cast back to original data type.\n        y = tf.tile(y, [group_size, 1, s[2], s[3]])             # [N1HW]  Replicate over group and pixels.\n        return tf.concat([x, y], axis=1)                        # [NCHW]  Append as new fmap.\n\n#----------------------------------------------------------------------------\n# Generator network used in the paper.\n\ndef G_paper(\n    latents_in,                         # First input: Latent vectors [minibatch, latent_size].\n    labels_in,                          # Second input: Labels [minibatch, label_size].\n    num_channels        = 1,            # Number of output color channels. Overridden based on dataset.\n    resolution          = 32,           # Output resolution. Overridden based on dataset.\n    label_size          = 0,            # Dimensionality of the labels, 0 if no labels. Overridden based on dataset.\n    fmap_base           = 8192,         # Overall multiplier for the number of feature maps.\n    fmap_decay          = 1.0,          # log2 feature map reduction when doubling the resolution.\n    fmap_max            = 512,          # Maximum number of feature maps in any layer.\n    latent_size         = None,         # Dimensionality of the latent vectors. None = min(fmap_base, fmap_max).\n    normalize_latents   = True,         # Normalize latent vectors before feeding them to the network?\n    use_wscale          = True,         # Enable equalized learning rate?\n    use_pixelnorm       = True,         # Enable pixelwise feature vector normalization?\n    pixelnorm_epsilon   = 1e-8,         # Constant epsilon for pixelwise feature vector normalization.\n    use_leakyrelu       = True,         # True = leaky ReLU, False = ReLU.\n    dtype               = 'float32',    # Data type to use for activations and outputs.\n    fused_scale         = True,         # True = use fused upscale2d + conv2d, False = separate upscale2d layers.\n    structure           = None,         # 'linear' = human-readable, 'recursive' = efficient, None = select automatically.\n    is_template_graph   = False,        # True = template graph constructed by the Network class, False = actual evaluation.\n    lod_sep             = 9,\n    **kwargs):                          # Ignore unrecognized keyword args.\n    \n    resolution_log2 = int(np.log2(resolution))\n    assert resolution == 2**resolution_log2 and resolution >= 4\n    def nf(stage): return 3*int((min(int(fmap_base / (2.0 ** (stage * fmap_decay))), fmap_max))) #*(int(stage>=lod_sep)+1)\n    def PN(x): return pixel_norm(x, epsilon=pixelnorm_epsilon) if use_pixelnorm else x\n    if latent_size is None: latent_size = nf(0)\n    if structure is None: structure = 'linear' if is_template_graph else 'recursive'\n    act = leaky_relu if use_leakyrelu else tf.nn.relu\n    \n    latents_in.set_shape([None, latent_size])\n    labels_in.set_shape([None, label_size])\n    combo_in = tf.cast(tf.concat([latents_in, labels_in], axis=1), dtype)\n    lod_in = tf.cast(tf.get_variable('lod', initializer=np.float32(0.0), trainable=False), 
dtype)\n\n    # Building blocks.\n    def block(x, res): # res = 2..resolution_log2\n        with tf.variable_scope('%dx%d' % (2**res, 2**res)):\n            if res == 2: # 4x4\n                if normalize_latents: x = pixel_norm(x, epsilon=pixelnorm_epsilon)\n                with tf.variable_scope('Dense'):\n                    x = dense(x, fmaps=nf(res-1)*16, gain=np.sqrt(2)/4, use_wscale=use_wscale) # override gain to match the original Theano implementation\n                    x = tf.reshape(x, [-1, nf(res-1), 4, 4])\n                    x = PN(act(apply_bias(x)))\n                with tf.variable_scope('Conv'):\n                    x = PN(act(apply_bias(conv2d(x, fmaps=nf(res-1), kernel=3, use_wscale=use_wscale))))\n            elif res <= lod_sep: # 8x8 and up to separation\n                if fused_scale:\n                    with tf.variable_scope('Conv0_up'):\n                        x = PN(act(apply_bias(upscale2d_conv2d(x, fmaps=nf(res-1), kernel=3, use_wscale=use_wscale))))\n                else:\n                    x = upscale2d(x)\n                    with tf.variable_scope('Conv0'):\n                        x = PN(act(apply_bias(conv2d(x, fmaps=nf(res-1), kernel=3, use_wscale=use_wscale))))\n                with tf.variable_scope('Conv1'):\n                    x = PN(act(apply_bias(conv2d(x, fmaps=nf(res-1), kernel=3, use_wscale=use_wscale))))\n            else:\n                inputs = tf.split(x, 3, 1)\n                with tf.variable_scope('tex'):\n                    if fused_scale:\n                        with tf.variable_scope('Conv0_up'):\n                            x = PN(act(apply_bias(upscale2d_conv2d(inputs[0], fmaps=int(nf(res-1)/3), kernel=3, use_wscale=use_wscale))))\n                    else:\n                        x = upscale2d(inputs[0])\n                        with tf.variable_scope('Conv0'):\n                            x = PN(act(apply_bias(conv2d(x, fmaps=int(nf(res-1)/3), kernel=3, use_wscale=use_wscale))))\n                    with tf.variable_scope('Conv1'):\n                        tex = PN(act(apply_bias(conv2d(x, fmaps=int(nf(res-1)/3), kernel=3, use_wscale=use_wscale))))\n                with tf.variable_scope('shp'):\n                    if fused_scale:\n                        with tf.variable_scope('Conv0_up'):\n                            x = PN(act(apply_bias(upscale2d_conv2d(inputs[1], fmaps=int(nf(res-1)/3), kernel=3, use_wscale=use_wscale))))\n                    else:\n                        x = upscale2d(inputs[1])\n                        with tf.variable_scope('Conv0'):\n                            x = PN(act(apply_bias(conv2d(x, fmaps=int(nf(res-1)/3), kernel=3, use_wscale=use_wscale))))\n                    with tf.variable_scope('Conv1'):\n                        shp = PN(act(apply_bias(conv2d(x, fmaps=int(nf(res-1)/3), kernel=3, use_wscale=use_wscale))))\n                with tf.variable_scope('nor'):\n                    if fused_scale:\n                        with tf.variable_scope('Conv0_up'):\n                            x = PN(act(apply_bias(upscale2d_conv2d(inputs[2], fmaps=int(nf(res-1)/3), kernel=3, use_wscale=use_wscale))))\n                    else:\n                        x = upscale2d(inputs[2])\n                        with tf.variable_scope('Conv0'):\n                            x = PN(act(apply_bias(conv2d(x, fmaps=int(nf(res-1)/3), kernel=3, use_wscale=use_wscale))))\n                    with tf.variable_scope('Conv1'):\n                        nor = PN(act(apply_bias(conv2d(x, fmaps=int(nf(res-1)/3), kernel=3, 
use_wscale=use_wscale))))\n                x = tf.concat([tex,shp,nor],1)\n            return x\n    def torgb(x, res): # res = 2..resolution_log2\n        lod = resolution_log2 - res\n        with tf.variable_scope('ToRGB_lod%d' % lod):\n            if res <= lod_sep:\n                return apply_bias(conv2d(x, fmaps=num_channels, kernel=1, gain=1, use_wscale=use_wscale))\n            else:\n                inputs = tf.split(x, 3, 1)\n                with tf.variable_scope('tex'):\n                    tex = apply_bias(conv2d(inputs[0], fmaps=int(num_channels/3), kernel=1, gain=1, use_wscale=use_wscale))\n                with tf.variable_scope('shp'):\n                    shp = apply_bias(conv2d(inputs[1], fmaps=int(num_channels/3), kernel=1, gain=1, use_wscale=use_wscale))\n                with tf.variable_scope('nor'):\n                    nor = apply_bias(conv2d(inputs[2], fmaps=int(num_channels/3), kernel=1, gain=1, use_wscale=use_wscale))\n                return tf.concat([tex,shp,nor],1)\n\n    # Linear structure: simple but inefficient.\n    if structure == 'linear':\n        x = block(combo_in, 2)\n        images_out = torgb(x, 2)\n        for res in range(3, resolution_log2 + 1):\n            lod = resolution_log2 - res\n            x = block(x, res)\n            img = torgb(x, res)\n            images_out = upscale2d(images_out)\n            with tf.variable_scope('Grow_lod%d' % lod):\n                images_out = lerp_clip(img, images_out, lod_in - lod)\n\n    # Recursive structure: complex but efficient.\n    if structure == 'recursive':\n        def grow(x, res, lod):\n            y = block(x, res)\n            img = lambda: upscale2d(torgb(y, res), 2**lod)\n            if res > 2: img = cset(img, (lod_in > lod), lambda: upscale2d(lerp(torgb(y, res), upscale2d(torgb(x, res - 1)), lod_in - lod), 2**lod))\n            if lod > 0: img = cset(img, (lod_in < lod), lambda: grow(y, res + 1, lod - 1))\n            return img()\n        images_out = grow(combo_in, 2, resolution_log2 - 2)\n        \n    assert images_out.dtype == tf.as_dtype(dtype)\n    images_out = tf.identity(images_out, name='images_out')\n    return images_out\n\n#----------------------------------------------------------------------------\n# Discriminator network used in the paper.\n\ndef D_paper(\n    images_in,                          # Input: Images [minibatch, channel, height, width].\n    num_channels        = 1,            # Number of input color channels. Overridden based on dataset.\n    resolution          = 32,           # Input resolution. Overridden based on dataset.\n    label_size          = 0,            # Dimensionality of the labels, 0 if no labels. 
Overridden based on dataset.\n    fmap_base           = 8192,         # Overall multiplier for the number of feature maps.\n    fmap_decay          = 1.0,          # log2 feature map reduction when doubling the resolution.\n    fmap_max            = 512,          # Maximum number of feature maps in any layer.\n    use_wscale          = True,         # Enable equalized learning rate?\n    mbstd_group_size    = 4,            # Group size for the minibatch standard deviation layer, 0 = disable.\n    dtype               = 'float32',    # Data type to use for activations and outputs.\n    fused_scale         = True,         # True = use fused conv2d + downscale2d, False = separate downscale2d layers.\n    structure           = None,         # 'linear' = human-readable, 'recursive' = efficient, None = select automatically\n    is_template_graph   = False,        # True = template graph constructed by the Network class, False = actual evaluation.\n    lod_sep             = 9,\n    **kwargs):                          # Ignore unrecognized keyword args.\n    \n    resolution_log2 = int(np.log2(resolution))\n    assert resolution == 2**resolution_log2 and resolution >= 4\n    def nf(stage):\n        return 3*int((min(int(fmap_base / (2.0 ** (stage * fmap_decay))), fmap_max))) #*(int(stage>=lod_sep)+1)\n    if structure is None: structure = 'linear' if is_template_graph else 'recursive'\n    act = leaky_relu\n\n    images_in.set_shape([None, num_channels, resolution, resolution])\n    images_in = tf.cast(images_in, dtype)\n    lod_in = tf.cast(tf.get_variable('lod', initializer=np.float32(0.0), trainable=False), dtype)\n\n    # Building blocks.\n    def fromrgb(x, res): # res = 2..resolution_log2\n        with tf.variable_scope('FromRGB_lod%d' % (resolution_log2 - res)):\n            if res >= lod_sep-1:\n                inputs = tf.split(x, 3, 1)\n                with tf.variable_scope('tex'):\n                    tex = act(apply_bias(conv2d(inputs[0], fmaps=int(nf(res-1)/3), kernel=1, use_wscale=use_wscale)))\n                with tf.variable_scope('shp'):\n                    shp = act(apply_bias(conv2d(inputs[1], fmaps=int(nf(res-1)/3), kernel=1, use_wscale=use_wscale)))\n                with tf.variable_scope('nor'):\n                    nor = act(apply_bias(conv2d(inputs[2], fmaps=int(nf(res-1)/3), kernel=1, use_wscale=use_wscale)))\n                return tf.concat([tex, shp,nor], 1)\n            else:\n                return act(apply_bias(conv2d(x, fmaps=nf(res-1), kernel=1, use_wscale=use_wscale)))\n    def block(x, res): # res = 2..resolution_log2\n        with tf.variable_scope('%dx%d' % (2**res, 2**res)):\n            if res >= lod_sep:\n                inputs = tf.split(x, 3, 1)\n                with tf.variable_scope('tex'):\n                    with tf.variable_scope('Conv0'):\n                        x = act(apply_bias(conv2d(inputs[0], fmaps=int(nf(res-1)/3), kernel=3, use_wscale=use_wscale)))\n                    if fused_scale:\n                        with tf.variable_scope('Conv1_down'):\n                            x = act(apply_bias(conv2d_downscale2d(x, fmaps=int(nf(res-2)/3), kernel=3, use_wscale=use_wscale)))\n                    else:\n                        with tf.variable_scope('Conv1'):\n                            x = act(apply_bias(conv2d(x, fmaps=int(nf(res-2)/3), kernel=3, use_wscale=use_wscale)))\n                        x = downscale2d(x)\n                    tex = x\n                with tf.variable_scope('shp'):\n                    with 
tf.variable_scope('Conv0'):\n                        x = act(apply_bias(conv2d(inputs[1], fmaps=int(nf(res-1)/3), kernel=3, use_wscale=use_wscale)))\n                    if fused_scale:\n                        with tf.variable_scope('Conv1_down'):\n                            x = act(apply_bias(conv2d_downscale2d(x, fmaps=int(nf(res-2)/3), kernel=3, use_wscale=use_wscale)))\n                    else:\n                        with tf.variable_scope('Conv1'):\n                            x = act(apply_bias(conv2d(x, fmaps=int(nf(res-2)/3), kernel=3, use_wscale=use_wscale)))\n                        x = downscale2d(x)\n                    shp = x\n                with tf.variable_scope('nor'):\n                    with tf.variable_scope('Conv0'):\n                        x = act(apply_bias(conv2d(inputs[2], fmaps=int(nf(res-1)/3), kernel=3, use_wscale=use_wscale)))\n                    if fused_scale:\n                        with tf.variable_scope('Conv1_down'):\n                            x = act(apply_bias(conv2d_downscale2d(x, fmaps=int(nf(res-2)/3), kernel=3, use_wscale=use_wscale)))\n                    else:\n                        with tf.variable_scope('Conv1'):\n                            x = act(apply_bias(conv2d(x, fmaps=int(nf(res-2)/3), kernel=3, use_wscale=use_wscale)))\n                        x = downscale2d(x)\n                    nor = x\n                x = tf.concat([tex,shp,nor],1)\n            elif res >= 3: # 8x8 and up\n                with tf.variable_scope('Conv0'):\n                    x = act(apply_bias(conv2d(x, fmaps=nf(res-1), kernel=3, use_wscale=use_wscale)))\n                if fused_scale:\n                    with tf.variable_scope('Conv1_down'):\n                        x = act(apply_bias(conv2d_downscale2d(x, fmaps=nf(res-2), kernel=3, use_wscale=use_wscale)))\n                else:\n                    with tf.variable_scope('Conv1'):\n                        x = act(apply_bias(conv2d(x, fmaps=nf(res-2), kernel=3, use_wscale=use_wscale)))\n                    x = downscale2d(x)\n            else: # 4x4\n                if mbstd_group_size > 1:\n                    x = minibatch_stddev_layer(x, mbstd_group_size)\n                with tf.variable_scope('Conv'):\n                    x = act(apply_bias(conv2d(x, fmaps=nf(res-1), kernel=3, use_wscale=use_wscale)))\n                with tf.variable_scope('Dense0'):\n                    x = act(apply_bias(dense(x, fmaps=nf(res-2), use_wscale=use_wscale)))\n                with tf.variable_scope('Dense1'):\n                    x = apply_bias(dense(x, fmaps=1+label_size, gain=1, use_wscale=use_wscale))\n            return x\n    \n    # Linear structure: simple but inefficient.\n    if structure == 'linear':\n        img = images_in\n        x = fromrgb(img, resolution_log2)\n        for res in range(resolution_log2, 2, -1):\n            lod = resolution_log2 - res\n            x = block(x, res)\n            img = downscale2d(img)\n            y = fromrgb(img, res - 1)\n            with tf.variable_scope('Grow_lod%d' % lod):\n                x = lerp_clip(x, y, lod_in - lod)\n        combo_out = block(x, 2)\n\n    # Recursive structure: complex but efficient.\n    if structure == 'recursive':\n        def grow(res, lod):\n            x = lambda: fromrgb(downscale2d(images_in, 2**lod), res)\n            if lod > 0: x = cset(x, (lod_in < lod), lambda: grow(res + 1, lod - 1))\n            x = block(x(), res); y = lambda: x\n            if res > 2: y = cset(y, (lod_in > lod), lambda: lerp(x, 
fromrgb(downscale2d(images_in, 2**(lod+1)), res - 1), lod_in - lod))\n            return y()\n        combo_out = grow(2, resolution_log2 - 2)\n\n    assert combo_out.dtype == tf.as_dtype(dtype)\n    scores_out = tf.identity(combo_out[:, :1], name='scores_out')\n    labels_out = tf.identity(combo_out[:, 1:], name='labels_out')\n    return scores_out, labels_out\n\n#----------------------------------------------------------------------------\n"
  },
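  {
    "path": "networks_example.py",
    "content": "# EDIT: added. Illustrative sketch, not part of the original codebase: reproduces\n# the feature-map schedule nf(stage) used by G_paper/D_paper in networks.py. The\n# count is tripled relative to vanilla progressive growing because the tex/shp/nor\n# branches above lod_sep each receive nf(stage)/3 feature maps.\n\ndef nf(stage, fmap_base=8192, fmap_decay=1.0, fmap_max=512):\n    return 3 * int(min(int(fmap_base / (2.0 ** (stage * fmap_decay))), fmap_max))\n\nif __name__ == '__main__':\n    for res_log2 in range(2, 10): # 4x4 ... 512x512\n        res = 2 ** res_log2\n        print('%4dx%-4d: %4d fmaps (%3d per branch)' % (res, res, nf(res_log2 - 1), nf(res_log2 - 1) // 3))\n"
  },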
  {
    "path": "requirements-pip.txt",
    "content": "numpy>=1.13.3\nscipy>=1.0.0\ntensorflow-gpu>=1.6.0\nmoviepy>=0.2.3.2\nPillow>=3.1.1\nlmdb>=0.93\nopencv-python>=3.4.0.12\ncryptography>=2.1.4\nh5py>=2.7.1\nsix>=1.11.0\n"
  },
  {
    "path": "test.py",
    "content": "# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.\n#\n# This work is licensed under the Creative Commons Attribution-NonCommercial\n# 4.0 International License. To view a copy of this license, visit\n# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to\n# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.\n\nimport os\nimport time\nimport numpy as np\nimport tensorflow as tf\n\nimport config_test\nimport tfutil\nimport dataset\nimport misc\n\n#----------------------------------------------------------------------------\n# Choose the size and contents of the image snapshot grids that are exported\n# periodically during training.\n\ndef setup_snapshot_image_grid(G, training_set,\n    size    = '1080p',      # '1080p' = to be viewed on 1080p display, '4k' = to be viewed on 4k display.\n    layout  = 'random'):    # 'random' = grid contents are selected randomly, 'row_per_class' = each row corresponds to one class label.\n\n    # Select size.\n    gw = 1; gh = 1\n    if size == '1080p':\n        gw = np.clip(1920 // G.output_shape[3], 3, 32)\n        gh = np.clip(1080 // G.output_shape[2], 2, 32)\n    if size == '4k':\n        gw = np.clip(3840 // G.output_shape[3], 7, 32)\n        gh = np.clip(2160 // G.output_shape[2], 4, 32)\n\n    # Fill in reals and labels.\n    reals = np.zeros([gw * gh] + training_set.shape, dtype=training_set.dtype)\n    labels = np.zeros([gw * gh, training_set.label_size], dtype=training_set.label_dtype)\n    for idx in range(gw * gh):\n        x = idx % gw; y = idx // gw\n        while True:\n            real, label = training_set.get_minibatch_np(1)\n            if layout == 'row_per_class' and training_set.label_size > 0:\n                if label[0, y % training_set.label_size] == 0.0:\n                    continue\n            reals[idx] = real[0]\n            labels[idx] = label[0]\n            break\n\n    # Generate latents.\n    latents = misc.random_latents(gw * gh, G)\n    return (gw, gh), reals, labels, latents\n\n#----------------------------------------------------------------------------\n# Just-in-time processing of training images before feeding them to the networks.\n\ndef process_reals(x, lod, mirror_augment, drange_data, drange_net):\n    with tf.name_scope('ProcessReals'):\n        if drange_data != drange_net:\n            with tf.name_scope('DynamicRange'):\n                x = tf.cast(x, tf.float32)\n                x = misc.adjust_dynamic_range(x, drange_data, drange_net)\n        if mirror_augment:\n            with tf.name_scope('MirrorAugment'):\n                s = tf.shape(x)\n                mask = tf.random_uniform([s[0], 1, 1, 1], 0.0, 1.0)\n                mask = tf.tile(mask, [1, s[1], s[2], s[3]])\n                x = tf.where(mask < 0.5, x, tf.reverse(x, axis=[3]))\n        with tf.name_scope('FadeLOD'): # Smooth crossfade between consecutive levels-of-detail.\n            s = tf.shape(x)\n            y = tf.reshape(x, [-1, s[1], s[2]//2, 2, s[3]//2, 2])\n            y = tf.reduce_mean(y, axis=[3, 5], keepdims=True)\n            y = tf.tile(y, [1, 1, 1, 2, 1, 2])\n            y = tf.reshape(y, [-1, s[1], s[2], s[3]])\n            x = tfutil.lerp(x, y, lod - tf.floor(lod))\n        with tf.name_scope('UpscaleLOD'): # Upscale to match the expected input/output size of the networks.\n            s = tf.shape(x)\n            factor = tf.cast(2 ** tf.floor(lod), tf.int32)\n            x = tf.reshape(x, [-1, s[1], s[2], 1, s[3], 1])\n            x = tf.tile(x, [1, 1, 1, factor, 1, 
factor])\n            x = tf.reshape(x, [-1, s[1], s[2] * factor, s[3] * factor])\n        return x\n\n#----------------------------------------------------------------------------\n# Class for evaluating and storing the values of time-varying training parameters.\n\nclass TrainingSchedule:\n    def __init__(\n        self,\n        cur_nimg,\n        training_set,\n        lod_initial_resolution  = 4,        # Image resolution used at the beginning.\n        lod_training_kimg       = 1000,      # Thousands of real images to show before doubling the resolution.\n        lod_transition_kimg     = 1000,      # Thousands of real images to show when fading in new layers.\n        minibatch_base          = 16,       # Maximum minibatch size, divided evenly among GPUs.\n        minibatch_dict          = {},       # Resolution-specific overrides.\n        max_minibatch_per_gpu   = {},       # Resolution-specific maximum minibatch size per GPU.\n        G_lrate_base            = 0.001,    # Learning rate for the generator.\n        G_lrate_dict            = {},       # Resolution-specific overrides.\n        D_lrate_base            = 0.001,    # Learning rate for the discriminator.\n        D_lrate_dict            = {},       # Resolution-specific overrides.\n        tick_kimg_base          = 160,      # Default interval of progress snapshots.\n        tick_kimg_dict          = {4: 160, 8:140, 16:120, 32:100, 64:80, 128:60, 256:40, 512:20, 1024:10}): # Resolution-specific overrides.\n\n        # Training phase.\n        self.kimg = cur_nimg / 1000.0\n        phase_dur = lod_training_kimg + lod_transition_kimg\n        phase_idx = int(np.floor(self.kimg / phase_dur)) if phase_dur > 0 else 0\n        phase_kimg = self.kimg - phase_idx * phase_dur\n\n        # Level-of-detail and resolution.\n        self.lod = training_set.resolution_log2\n        self.lod -= np.floor(np.log2(lod_initial_resolution))\n        self.lod -= phase_idx\n        if lod_transition_kimg > 0:\n            self.lod -= max(phase_kimg - lod_training_kimg, 0.0) / lod_transition_kimg\n        self.lod = max(self.lod, 0.0)\n        self.resolution = 2 ** (training_set.resolution_log2 - int(np.floor(self.lod)))\n\n        # Minibatch size.\n        self.minibatch = minibatch_dict.get(self.resolution, minibatch_base)\n        self.minibatch -= self.minibatch % config_test.num_gpus\n        if self.resolution in max_minibatch_per_gpu:\n            self.minibatch = min(self.minibatch, max_minibatch_per_gpu[self.resolution] * config_test.num_gpus)\n\n        # Other parameters.\n        self.G_lrate = G_lrate_dict.get(self.resolution, G_lrate_base)\n        self.D_lrate = D_lrate_dict.get(self.resolution, D_lrate_base)\n        self.tick_kimg = tick_kimg_dict.get(self.resolution, tick_kimg_base)\n\n#----------------------------------------------------------------------------\n# Main training script.\n# To run, comment/uncomment appropriate lines in config_test.py and launch test.py.\n\ndef train_progressive_gan(\n    G_smoothing             = 0.999,        # Exponential running average of generator weights.\n    D_repeats               = 1,            # How many times the discriminator is trained per G iteration.\n    minibatch_repeats       = 4,            # Number of minibatches to run before adjusting training parameters.\n    reset_opt_for_new_lod   = True,         # Reset optimizer internal state (e.g. 
Adam moments) when new layers are introduced?\n    total_kimg              = 15000,        # Total length of the training, measured in thousands of real images.\n    mirror_augment          = False,        # Enable mirror augment?\n    drange_net              = [-1,1],       # Dynamic range used when feeding image data to the networks.\n    image_snapshot_ticks    = 1,            # How often to export image snapshots?\n    network_snapshot_ticks  = 10,           # How often to export network snapshots?\n    save_tf_graph           = False,        # Include full TensorFlow computation graph in the tfevents file?\n    save_weight_histograms  = False,        # Include weight histograms in the tfevents file?\n    resume_run_id           = None,         # Run ID or network pkl to resume training from, None = start from scratch.\n    resume_snapshot         = None,         # Snapshot index to resume training from, None = autodetect.\n    resume_kimg             = 0.0,          # Assumed training progress at the beginning. Affects reporting and training schedule.\n    resume_time             = 0.0):         # Assumed wallclock time at the beginning. Affects reporting.\n\n    maintenance_start_time = time.time()\n    training_set = dataset.load_dataset(data_dir=config_test.data_dir, verbose=True, **config_test.dataset)\n\n    # Construct networks.\n    with tf.device('/gpu:0'):\n        if resume_run_id is not None:\n            network_pkl = misc.locate_network_pkl(resume_run_id, resume_snapshot)\n            print('Loading networks from \"%s\"...' % network_pkl)\n            G, D, Gs = misc.load_pkl(network_pkl)\n        else:\n            print('Constructing networks...')\n            G = tfutil.Network('G', num_channels=training_set.shape[0], resolution=training_set.shape[1], label_size=training_set.label_size, **config_test.G)\n            D = tfutil.Network('D', num_channels=training_set.shape[0], resolution=training_set.shape[1], label_size=training_set.label_size, **config_test.D)\n            Gs = G.clone('Gs')\n        Gs_update_op = Gs.setup_as_moving_average_of(G, beta=G_smoothing)\n    G.print_layers(); D.print_layers()\n\n    print('Building TensorFlow graph...')\n    with tf.name_scope('Inputs'):\n        lod_in          = tf.placeholder(tf.float32, name='lod_in', shape=[])\n        lrate_in        = tf.placeholder(tf.float32, name='lrate_in', shape=[])\n        minibatch_in    = tf.placeholder(tf.int32, name='minibatch_in', shape=[])\n        minibatch_split = minibatch_in // config_test.num_gpus\n        reals, labels   = training_set.get_minibatch_tf()\n        reals_split     = tf.split(reals, config_test.num_gpus)\n        labels_split    = tf.split(labels, config_test.num_gpus)\n    G_opt = tfutil.Optimizer(name='TrainG', learning_rate=lrate_in, **config_test.G_opt)\n    D_opt = tfutil.Optimizer(name='TrainD', learning_rate=lrate_in, **config_test.D_opt)\n    for gpu in range(config_test.num_gpus):\n        with tf.name_scope('GPU%d' % gpu), tf.device('/gpu:%d' % gpu):\n            G_gpu = G if gpu == 0 else G.clone(G.name + '_shadow')\n            D_gpu = D if gpu == 0 else D.clone(D.name + '_shadow')\n            lod_assign_ops = [tf.assign(G_gpu.find_var('lod'), lod_in), tf.assign(D_gpu.find_var('lod'), lod_in)]\n            reals_gpu = process_reals(reals_split[gpu], lod_in, mirror_augment, training_set.dynamic_range, drange_net)\n            labels_gpu = labels_split[gpu]\n            with tf.name_scope('G_loss'), tf.control_dependencies(lod_assign_ops):\n                
G_loss = tfutil.call_func_by_name(G=G_gpu, D=D_gpu, opt=G_opt, training_set=training_set, minibatch_size=minibatch_split, **config_test.G_loss)\n            with tf.name_scope('D_loss'), tf.control_dependencies(lod_assign_ops):\n                D_loss = tfutil.call_func_by_name(G=G_gpu, D=D_gpu, opt=D_opt, training_set=training_set, minibatch_size=minibatch_split, reals=reals_gpu, labels=labels_gpu, **config_test.D_loss)\n            G_opt.register_gradients(tf.reduce_mean(G_loss), G_gpu.trainables)\n            D_opt.register_gradients(tf.reduce_mean(D_loss), D_gpu.trainables)\n    G_train_op = G_opt.apply_updates()\n    D_train_op = D_opt.apply_updates()\n\n    print('Setting up snapshot image grid...')\n    grid_size, grid_reals, grid_labels, grid_latents = setup_snapshot_image_grid(G, training_set, **config_test.grid)\n    sched = TrainingSchedule(total_kimg * 1000, training_set, **config_test.sched)\n    grid_fakes = Gs.run(grid_latents, grid_labels, minibatch_size=sched.minibatch // config_test.num_gpus)\n\n    print('Setting up result dir...')\n    result_subdir = misc.create_result_subdir(config_test.result_dir, config_test.desc)\n    misc.save_image_grid(grid_reals, os.path.join(result_subdir, 'reals.png'), drange=training_set.dynamic_range, grid_size=grid_size)\n    misc.save_image_grid(grid_fakes, os.path.join(result_subdir, 'fakes%06d.png' % 0), drange=drange_net, grid_size=grid_size)\n    summary_log = tf.summary.FileWriter(result_subdir)\n    if save_tf_graph:\n        summary_log.add_graph(tf.get_default_graph())\n    if save_weight_histograms:\n        G.setup_weight_histograms(); D.setup_weight_histograms()\n\n    print('Training...')\n    cur_nimg = int(resume_kimg * 1000)\n    cur_tick = 0\n    tick_start_nimg = cur_nimg\n    tick_start_time = time.time()\n    train_start_time = tick_start_time - resume_time\n    prev_lod = -1.0\n    while cur_nimg < total_kimg * 1000:\n\n        # Choose training parameters and configure training ops.\n        sched = TrainingSchedule(cur_nimg, training_set, **config_test.sched)\n        training_set.configure(sched.minibatch, sched.lod)\n        if reset_opt_for_new_lod:\n            if np.floor(sched.lod) != np.floor(prev_lod) or np.ceil(sched.lod) != np.ceil(prev_lod):\n                G_opt.reset_optimizer_state(); D_opt.reset_optimizer_state()\n        prev_lod = sched.lod\n\n        # Run training ops.\n        for repeat in range(minibatch_repeats):\n            for _ in range(D_repeats):\n                tfutil.run([D_train_op, Gs_update_op], {lod_in: sched.lod, lrate_in: sched.D_lrate, minibatch_in: sched.minibatch})\n                cur_nimg += sched.minibatch\n            tfutil.run([G_train_op], {lod_in: sched.lod, lrate_in: sched.G_lrate, minibatch_in: sched.minibatch})\n\n        # Perform maintenance tasks once per tick.\n        done = (cur_nimg >= total_kimg * 1000)\n        if cur_nimg >= tick_start_nimg + sched.tick_kimg * 1000 or done:\n            cur_tick += 1\n            cur_time = time.time()\n            tick_kimg = (cur_nimg - tick_start_nimg) / 1000.0\n            tick_start_nimg = cur_nimg\n            tick_time = cur_time - tick_start_time\n            total_time = cur_time - train_start_time\n            maintenance_time = tick_start_time - maintenance_start_time\n            maintenance_start_time = cur_time\n\n            # Report progress.\n            print('tick %-5d kimg %-8.1f lod %-5.2f minibatch %-4d time %-12s sec/tick %-7.1f sec/kimg %-7.2f maintenance %.1f' % (\n                
tfutil.autosummary('Progress/tick', cur_tick),\n                tfutil.autosummary('Progress/kimg', cur_nimg / 1000.0),\n                tfutil.autosummary('Progress/lod', sched.lod),\n                tfutil.autosummary('Progress/minibatch', sched.minibatch),\n                misc.format_time(tfutil.autosummary('Timing/total_sec', total_time)),\n                tfutil.autosummary('Timing/sec_per_tick', tick_time),\n                tfutil.autosummary('Timing/sec_per_kimg', tick_time / tick_kimg),\n                tfutil.autosummary('Timing/maintenance_sec', maintenance_time)))\n            tfutil.autosummary('Timing/total_hours', total_time / (60.0 * 60.0))\n            tfutil.autosummary('Timing/total_days', total_time / (24.0 * 60.0 * 60.0))\n            tfutil.save_summaries(summary_log, cur_nimg)\n\n            # Save snapshots.\n            if cur_tick % image_snapshot_ticks == 0 or done:\n                grid_fakes = Gs.run(grid_latents, grid_labels, minibatch_size=sched.minibatch // config_test.num_gpus)\n                misc.save_image_grid(grid_fakes, os.path.join(result_subdir, 'fakes%06d.png' % (cur_nimg // 1000)), drange=drange_net, grid_size=grid_size)\n            if cur_tick % network_snapshot_ticks == 0 or done:\n                misc.save_pkl((G, D, Gs), os.path.join(result_subdir, 'network-snapshot-%06d.pkl' % (cur_nimg // 1000)))\n\n            # Record start time of the next tick.\n            tick_start_time = time.time()\n\n    # Write final results.\n    misc.save_pkl((G, D, Gs), os.path.join(result_subdir, 'network-final.pkl'))\n    summary_log.close()\n    open(os.path.join(result_subdir, '_training-done.txt'), 'wt').close()\n\n#----------------------------------------------------------------------------\n# Main entry point.\n# Calls the function indicated in config_test.py.\n\nif __name__ == \"__main__\":\n    misc.init_output_logging()\n    np.random.seed(config_test.random_seed)\n    print('Initializing TensorFlow...')\n    os.environ.update(config_test.env)\n    tfutil.init_tf(config_test.tf_config)\n    print('Running %s()...' % config_test.train['func'])\n    tfutil.call_func_by_name(**config_test.train)\n    print('Exiting...')\n\n#----------------------------------------------------------------------------\n"
  },
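  {
    "path": "schedule_example.py",
    "content": "# Hypothetical companion sketch: not part of the original NVIDIA release.\n# Re-derives the level-of-detail schedule computed by TrainingSchedule in\n# train.py / test.py so the resolution progression can be sanity-checked\n# without building a dataset or a TensorFlow graph. The resolution_log2\n# value and the kimg checkpoints below are illustrative assumptions.\n\nimport numpy as np\n\ndef lod_at(cur_nimg, resolution_log2=10, lod_initial_resolution=4, lod_training_kimg=1000, lod_transition_kimg=1000):\n    # Mirrors the arithmetic in TrainingSchedule.__init__().\n    kimg = cur_nimg / 1000.0\n    phase_dur = lod_training_kimg + lod_transition_kimg\n    phase_idx = int(np.floor(kimg / phase_dur)) if phase_dur > 0 else 0\n    phase_kimg = kimg - phase_idx * phase_dur\n    lod = resolution_log2 - np.floor(np.log2(lod_initial_resolution)) - phase_idx\n    if lod_transition_kimg > 0:\n        lod -= max(phase_kimg - lod_training_kimg, 0.0) / lod_transition_kimg\n    lod = max(lod, 0.0)\n    resolution = 2 ** (resolution_log2 - int(np.floor(lod)))\n    return lod, resolution\n\nif __name__ == '__main__':\n    # The resolution should double once per (training + transition) phase.\n    for kimg in [0, 1000, 1500, 2000, 4000, 8000, 15000]:\n        lod, res = lod_at(kimg * 1000)\n        print('kimg %-6d lod %-5.2f resolution %d' % (kimg, lod, res))\n"
  },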
  {
    "path": "tfutil.py",
    "content": "# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.\n#\n# This work is licensed under the Creative Commons Attribution-NonCommercial\n# 4.0 International License. To view a copy of this license, visit\n# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to\n# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.\n\nimport inspect\nimport importlib\nimport imp\nimport numpy as np\nfrom collections import OrderedDict\nimport tensorflow as tf\n\n#----------------------------------------------------------------------------\n# Convenience.\n\ndef run(*args, **kwargs): # Run the specified ops in the default session.\n    return tf.get_default_session().run(*args, **kwargs)\n\ndef is_tf_expression(x):\n    return isinstance(x, tf.Tensor) or isinstance(x, tf.Variable) or isinstance(x, tf.Operation)\n\ndef shape_to_list(shape):\n    return [dim.value for dim in shape]\n\ndef flatten(x):\n    with tf.name_scope('Flatten'):\n        return tf.reshape(x, [-1])\n\ndef log2(x):\n    with tf.name_scope('Log2'):\n        return tf.log(x) * np.float32(1.0 / np.log(2.0))\n\ndef exp2(x):\n    with tf.name_scope('Exp2'):\n        return tf.exp(x * np.float32(np.log(2.0)))\n\ndef lerp(a, b, t):\n    with tf.name_scope('Lerp'):\n        return a + (b - a) * t\n\ndef lerp_clip(a, b, t):\n    with tf.name_scope('LerpClip'):\n        return a + (b - a) * tf.clip_by_value(t, 0.0, 1.0)\n\ndef absolute_name_scope(scope): # Forcefully enter the specified name scope, ignoring any surrounding scopes.\n    return tf.name_scope(scope + '/')\n\n#----------------------------------------------------------------------------\n# Initialize TensorFlow graph and session using good default settings.\n\ndef init_tf(config_dict=dict()):\n    if tf.get_default_session() is None:\n        tf.set_random_seed(np.random.randint(1 << 31))\n        create_session(config_dict, force_as_default=True)\n\n#----------------------------------------------------------------------------\n# Create tf.Session based on config dict of the form\n# {'gpu_options.allow_growth': True}\n\ndef create_session(config_dict=dict(), force_as_default=False):\n    config = tf.ConfigProto()\n    for key, value in config_dict.items():\n        fields = key.split('.')\n        obj = config\n        for field in fields[:-1]:\n            obj = getattr(obj, field)\n        setattr(obj, fields[-1], value)\n    session = tf.Session(config=config)\n    if force_as_default:\n        session._default_session = session.as_default()\n        session._default_session.enforce_nesting = False\n        session._default_session.__enter__()\n    return session\n\n#----------------------------------------------------------------------------\n# Initialize all tf.Variables that have not already been initialized.\n# Equivalent to the following, but more efficient and does not bloat the tf graph:\n#   tf.variables_initializer(tf.report_unitialized_variables()).run()\n\ndef init_uninited_vars(vars=None):\n    if vars is None: vars = tf.global_variables()\n    test_vars = []; test_ops = []\n    with tf.control_dependencies(None): # ignore surrounding control_dependencies\n        for var in vars:\n            assert is_tf_expression(var)\n            try:\n                tf.get_default_graph().get_tensor_by_name(var.name.replace(':0', '/IsVariableInitialized:0'))\n            except KeyError:\n                # Op does not exist => variable may be uninitialized.\n                test_vars.append(var)\n                with 
absolute_name_scope(var.name.split(':')[0]):\n                    test_ops.append(tf.is_variable_initialized(var))\n    init_vars = [var for var, inited in zip(test_vars, run(test_ops)) if not inited]\n    run([var.initializer for var in init_vars])\n\n#----------------------------------------------------------------------------\n# Set the values of given tf.Variables.\n# Equivalent to the following, but more efficient and does not bloat the tf graph:\n#   tfutil.run([tf.assign(var, value) for var, value in var_to_value_dict.items()])\n\ndef set_vars(var_to_value_dict):\n    ops = []\n    feed_dict = {}\n    for var, value in var_to_value_dict.items():\n        assert is_tf_expression(var)\n        try:\n            setter = tf.get_default_graph().get_tensor_by_name(var.name.replace(':0', '/setter:0')) # look for existing op\n        except KeyError:\n            with absolute_name_scope(var.name.split(':')[0]):\n                with tf.control_dependencies(None): # ignore surrounding control_dependencies\n                    setter = tf.assign(var, tf.placeholder(var.dtype, var.shape, 'new_value'), name='setter') # create new setter\n        ops.append(setter)\n        feed_dict[setter.op.inputs[1]] = value\n    run(ops, feed_dict)\n\n#----------------------------------------------------------------------------\n# Autosummary creates an identity op that internally keeps track of the input\n# values and automatically shows up in TensorBoard. The reported value\n# represents an average over input components. The average is accumulated\n# constantly over time and flushed when save_summaries() is called.\n#\n# Notes:\n# - The output tensor must be used as an input for something else in the\n#   graph. Otherwise, the autosummary op will not get executed, and the average\n#   value will not get accumulated.\n# - It is perfectly fine to include autosummaries with the same name in\n#   several places throughout the graph, even if they are executed concurrently.\n# - It is ok to also pass in a python scalar or numpy array. 
In this case, it\n#   is added to the average immediately.\n\n_autosummary_vars = OrderedDict() # name => [var, ...]\n_autosummary_immediate = OrderedDict() # name => update_op, update_value\n_autosummary_finalized = False\n\ndef autosummary(name, value):\n    id = name.replace('/', '_')\n    if is_tf_expression(value):\n        with tf.name_scope('summary_' + id), tf.device(value.device):\n            update_op = _create_autosummary_var(name, value)\n            with tf.control_dependencies([update_op]):\n                return tf.identity(value)\n    else: # python scalar or numpy array\n        if name not in _autosummary_immediate:\n            with absolute_name_scope('Autosummary/' + id), tf.device(None), tf.control_dependencies(None):\n                update_value = tf.placeholder(tf.float32)\n                update_op = _create_autosummary_var(name, update_value)\n                _autosummary_immediate[name] = update_op, update_value\n        update_op, update_value = _autosummary_immediate[name]\n        run(update_op, {update_value: np.float32(value)})\n        return value\n\n# Create the necessary ops to include autosummaries in TensorBoard report.\n# Note: This should be done only once per graph.\ndef finalize_autosummaries():\n    global _autosummary_finalized\n    if _autosummary_finalized:\n        return\n    _autosummary_finalized = True\n    init_uninited_vars([var for vars in _autosummary_vars.values() for var in vars])\n    with tf.device(None), tf.control_dependencies(None):\n        for name, vars in _autosummary_vars.items():\n            id = name.replace('/', '_')\n            with absolute_name_scope('Autosummary/' + id):\n                sum = tf.add_n(vars)\n                avg = sum[0] / sum[1]\n                with tf.control_dependencies([avg]): # read before resetting\n                    reset_ops = [tf.assign(var, tf.zeros(2)) for var in vars]\n                    with tf.name_scope(None), tf.control_dependencies(reset_ops): # reset before reporting\n                        tf.summary.scalar(name, avg)\n\n# Internal helper for creating autosummary accumulators.\ndef _create_autosummary_var(name, value_expr):\n    assert not _autosummary_finalized\n    v = tf.cast(value_expr, tf.float32)\n    if v.shape.ndims == 0:\n        v = [v, np.float32(1.0)]\n    elif v.shape.ndims == 1:\n        v = [tf.reduce_sum(v), tf.cast(tf.shape(v)[0], tf.float32)]\n    else:\n        v = [tf.reduce_sum(v), tf.reduce_prod(tf.cast(tf.shape(v), tf.float32))]\n    v = tf.cond(tf.is_finite(v[0]), lambda: tf.stack(v), lambda: tf.zeros(2))\n    with tf.control_dependencies(None):\n        var = tf.Variable(tf.zeros(2)) # [numerator, denominator]\n    update_op = tf.cond(tf.is_variable_initialized(var), lambda: tf.assign_add(var, v), lambda: tf.assign(var, v))\n    if name in _autosummary_vars:\n        _autosummary_vars[name].append(var)\n    else:\n        _autosummary_vars[name] = [var]\n    return update_op\n\n#----------------------------------------------------------------------------\n# Call filewriter.add_summary() with all summaries in the default graph,\n# automatically finalizing and merging them on the first call.\n\n_summary_merge_op = None\n\ndef save_summaries(filewriter, global_step=None):\n    global _summary_merge_op\n    if _summary_merge_op is None:\n        finalize_autosummaries()\n        with tf.device(None), tf.control_dependencies(None):\n            _summary_merge_op = tf.summary.merge_all()\n    filewriter.add_summary(_summary_merge_op.eval(), 
global_step)\n\n#----------------------------------------------------------------------------\n# Utilities for importing modules and objects by name.\n\ndef import_module(module_or_obj_name):\n    parts = module_or_obj_name.split('.')\n    parts[0] = {'np': 'numpy', 'tf': 'tensorflow'}.get(parts[0], parts[0])\n    for i in range(len(parts), 0, -1):\n        try:\n            module = importlib.import_module('.'.join(parts[:i]))\n            relative_obj_name = '.'.join(parts[i:])\n            return module, relative_obj_name\n        except ImportError:\n            pass\n    raise ImportError(module_or_obj_name)\n\ndef find_obj_in_module(module, relative_obj_name):\n    obj = module\n    for part in relative_obj_name.split('.'):\n        obj = getattr(obj, part)\n    return obj\n\ndef import_obj(obj_name):\n    module, relative_obj_name = import_module(obj_name)\n    return find_obj_in_module(module, relative_obj_name)\n\ndef call_func_by_name(*args, func=None, **kwargs):\n    assert func is not None\n    return import_obj(func)(*args, **kwargs)\n\n#----------------------------------------------------------------------------\n# Wrapper for tf.train.Optimizer that automatically takes care of:\n# - Gradient averaging for multi-GPU training.\n# - Dynamic loss scaling and typecasts for FP16 training.\n# - Ignoring corrupted gradients that contain NaNs/Infs.\n# - Reporting statistics.\n# - Well-chosen default settings.\n\nclass Optimizer:\n    def __init__(\n        self,\n        name                = 'Train',\n        tf_optimizer        = 'tf.train.AdamOptimizer',\n        learning_rate       = 0.001,\n        use_loss_scaling    = False,\n        loss_scaling_init   = 64.0,\n        loss_scaling_inc    = 0.0005,\n        loss_scaling_dec    = 1.0,\n        **kwargs):\n\n        # Init fields.\n        self.name               = name\n        self.learning_rate      = tf.convert_to_tensor(learning_rate)\n        self.id                 = self.name.replace('/', '.')\n        self.scope              = tf.get_default_graph().unique_name(self.id)\n        self.optimizer_class    = import_obj(tf_optimizer)\n        self.optimizer_kwargs   = dict(kwargs)\n        self.use_loss_scaling   = use_loss_scaling\n        self.loss_scaling_init  = loss_scaling_init\n        self.loss_scaling_inc   = loss_scaling_inc\n        self.loss_scaling_dec   = loss_scaling_dec\n        self._grad_shapes       = None          # [shape, ...]\n        self._dev_opt           = OrderedDict() # device => optimizer\n        self._dev_grads         = OrderedDict() # device => [[(grad, var), ...], ...]\n        self._dev_ls_var        = OrderedDict() # device => variable (log2 of loss scaling factor)\n        self._updates_applied   = False\n\n    # Register the gradients of the given loss function with respect to the given variables.\n    # Intended to be called once per GPU.\n    def register_gradients(self, loss, vars):\n        assert not self._updates_applied\n\n        # Validate arguments.\n        if isinstance(vars, dict):\n            vars = list(vars.values()) # allow passing in Network.trainables as vars\n        assert isinstance(vars, list) and len(vars) >= 1\n        assert all(is_tf_expression(expr) for expr in vars + [loss])\n        if self._grad_shapes is None:\n            self._grad_shapes = [shape_to_list(var.shape) for var in vars]\n        assert len(vars) == len(self._grad_shapes)\n        assert all(shape_to_list(var.shape) == var_shape for var, var_shape in zip(vars, self._grad_shapes))\n        
dev = loss.device\n        assert all(var.device == dev for var in vars)\n\n        # Register device and compute gradients.\n        with tf.name_scope(self.id + '_grad'), tf.device(dev):\n            if dev not in self._dev_opt:\n                opt_name = self.scope.replace('/', '_') + '_opt%d' % len(self._dev_opt)\n                self._dev_opt[dev] = self.optimizer_class(name=opt_name, learning_rate=self.learning_rate, **self.optimizer_kwargs)\n                self._dev_grads[dev] = []\n            loss = self.apply_loss_scaling(tf.cast(loss, tf.float32))\n            grads = self._dev_opt[dev].compute_gradients(loss, vars, gate_gradients=tf.train.Optimizer.GATE_NONE) # disable gating to reduce memory usage\n            grads = [(g, v) if g is not None else (tf.zeros_like(v), v) for g, v in grads] # replace disconnected gradients with zeros\n            self._dev_grads[dev].append(grads)\n\n    # Construct training op to update the registered variables based on their gradients.\n    def apply_updates(self):\n        assert not self._updates_applied\n        self._updates_applied = True\n        devices = list(self._dev_grads.keys())\n        total_grads = sum(len(grads) for grads in self._dev_grads.values())\n        assert len(devices) >= 1 and total_grads >= 1\n        ops = []\n        with absolute_name_scope(self.scope):\n\n            # Cast gradients to FP32 and calculate partial sum within each device.\n            dev_grads = OrderedDict() # device => [(grad, var), ...]\n            for dev_idx, dev in enumerate(devices):\n                with tf.name_scope('ProcessGrads%d' % dev_idx), tf.device(dev):\n                    sums = []\n                    for gv in zip(*self._dev_grads[dev]):\n                        assert all(v is gv[0][1] for g, v in gv)\n                        g = [tf.cast(g, tf.float32) for g, v in gv]\n                        g = g[0] if len(g) == 1 else tf.add_n(g)\n                        sums.append((g, gv[0][1]))\n                    dev_grads[dev] = sums\n\n            # Sum gradients across devices.\n            if len(devices) > 1:\n                with tf.name_scope('SumAcrossGPUs'), tf.device(None):\n                    for var_idx, grad_shape in enumerate(self._grad_shapes):\n                        g = [dev_grads[dev][var_idx][0] for dev in devices]\n                        if np.prod(grad_shape): # nccl does not support zero-sized tensors\n                            g = tf.contrib.nccl.all_sum(g)\n                        for dev, gg in zip(devices, g):\n                            dev_grads[dev][var_idx] = (gg, dev_grads[dev][var_idx][1])\n\n            # Apply updates separately on each device.\n            for dev_idx, (dev, grads) in enumerate(dev_grads.items()):\n                with tf.name_scope('ApplyGrads%d' % dev_idx), tf.device(dev):\n\n                    # Scale gradients as needed.\n                    if self.use_loss_scaling or total_grads > 1:\n                        with tf.name_scope('Scale'):\n                            coef = tf.constant(np.float32(1.0 / total_grads), name='coef')\n                            coef = self.undo_loss_scaling(coef)\n                            grads = [(g * coef, v) for g, v in grads]\n\n                    # Check for overflows.\n                    with tf.name_scope('CheckOverflow'):\n                        grad_ok = tf.reduce_all(tf.stack([tf.reduce_all(tf.is_finite(g)) for g, v in grads]))\n\n                    # Update weights and adjust loss scaling.\n                    with 
tf.name_scope('UpdateWeights'):\n                        opt = self._dev_opt[dev]\n                        ls_var = self.get_loss_scaling_var(dev)\n                        if not self.use_loss_scaling:\n                            ops.append(tf.cond(grad_ok, lambda: opt.apply_gradients(grads), tf.no_op))\n                        else:\n                            ops.append(tf.cond(grad_ok,\n                                lambda: tf.group(tf.assign_add(ls_var, self.loss_scaling_inc), opt.apply_gradients(grads)),\n                                lambda: tf.group(tf.assign_sub(ls_var, self.loss_scaling_dec))))\n\n                    # Report statistics on the last device.\n                    if dev == devices[-1]:\n                        with tf.name_scope('Statistics'):\n                            ops.append(autosummary(self.id + '/learning_rate', self.learning_rate))\n                            ops.append(autosummary(self.id + '/overflow_frequency', tf.where(grad_ok, 0, 1)))\n                            if self.use_loss_scaling:\n                                ops.append(autosummary(self.id + '/loss_scaling_log2', ls_var))\n\n            # Initialize variables and group everything into a single op.\n            self.reset_optimizer_state()\n            init_uninited_vars(list(self._dev_ls_var.values()))\n            return tf.group(*ops, name='TrainingOp')\n\n    # Reset internal state of the underlying optimizer.\n    def reset_optimizer_state(self):\n        run([var.initializer for opt in self._dev_opt.values() for var in opt.variables()])\n\n    # Get or create variable representing log2 of the current dynamic loss scaling factor.\n    def get_loss_scaling_var(self, device):\n        if not self.use_loss_scaling:\n            return None\n        if device not in self._dev_ls_var:\n            with absolute_name_scope(self.scope + '/LossScalingVars'), tf.control_dependencies(None):\n                self._dev_ls_var[device] = tf.Variable(np.float32(self.loss_scaling_init), name='loss_scaling_var')\n        return self._dev_ls_var[device]\n\n    # Apply dynamic loss scaling for the given expression.\n    def apply_loss_scaling(self, value):\n        assert is_tf_expression(value)\n        if not self.use_loss_scaling:\n            return value\n        return value * exp2(self.get_loss_scaling_var(value.device))\n\n    # Undo the effect of dynamic loss scaling for the given expression.\n    def undo_loss_scaling(self, value):\n        assert is_tf_expression(value)\n        if not self.use_loss_scaling:\n            return value\n        return value * exp2(-self.get_loss_scaling_var(value.device))\n\n#----------------------------------------------------------------------------\n# Generic network abstraction.\n#\n# Acts as a convenience wrapper for a parameterized network construction\n# function, providing several utility methods and convenient access to\n# the inputs/outputs/weights.\n#\n# Network objects can be safely pickled and unpickled for long-term\n# archival purposes. The pickling works reliably as long as the underlying\n# network construction function is defined in a standalone Python module\n# that has no side effects or application-specific imports.\n\nnetwork_import_handlers = []    # Custom import handlers for dealing with legacy data in pickle import.\n_network_import_modules = []    # Temporary modules created during pickle import.\n\nclass Network:\n    def __init__(self,\n        name=None,          # Network name. 
Used to select TensorFlow name and variable scopes.\n        func=None,          # Fully qualified name of the underlying network construction function.\n        **static_kwargs):   # Keyword arguments to be passed in to the network construction function.\n\n        self._init_fields()\n        self.name = name\n        self.static_kwargs = dict(static_kwargs)\n\n        # Init build func.\n        module, self._build_func_name = import_module(func)\n        self._build_module_src = inspect.getsource(module)\n        self._build_func = find_obj_in_module(module, self._build_func_name)\n\n        # Init graph.\n        self._init_graph()\n        self.reset_vars()\n\n    def _init_fields(self):\n        self.name               = None          # User-specified name, defaults to build func name if None.\n        self.scope              = None          # Unique TF graph scope, derived from the user-specified name.\n        self.static_kwargs      = dict()        # Arguments passed to the user-supplied build func.\n        self.num_inputs         = 0             # Number of input tensors.\n        self.num_outputs        = 0             # Number of output tensors.\n        self.input_shapes       = [[]]          # Input tensor shapes (NC or NCHW), including minibatch dimension.\n        self.output_shapes      = [[]]          # Output tensor shapes (NC or NCHW), including minibatch dimension.\n        self.input_shape        = []            # Short-hand for input_shapes[0].\n        self.output_shape       = []            # Short-hand for output_shapes[0].\n        self.input_templates    = []            # Input placeholders in the template graph.\n        self.output_templates   = []            # Output tensors in the template graph.\n        self.input_names        = []            # Name string for each input.\n        self.output_names       = []            # Name string for each output.\n        self.vars               = OrderedDict() # All variables (localname => var).\n        self.trainables         = OrderedDict() # Trainable variables (localname => var).\n        self._build_func        = None          # User-supplied build function that constructs the network.\n        self._build_func_name   = None          # Name of the build function.\n        self._build_module_src  = None          # Full source code of the module containing the build function.\n        self._run_cache         = dict()        # Cached graph data for Network.run().\n        \n    def _init_graph(self):\n        # Collect inputs.\n        self.input_names = []\n        for param in inspect.signature(self._build_func).parameters.values():\n            if param.kind == param.POSITIONAL_OR_KEYWORD and param.default is param.empty:\n                self.input_names.append(param.name)\n        self.num_inputs = len(self.input_names)\n        assert self.num_inputs >= 1\n\n        # Choose name and scope.\n        if self.name is None:\n            self.name = self._build_func_name\n        self.scope = tf.get_default_graph().unique_name(self.name.replace('/', '_'), mark_as_used=False)\n        \n        # Build template graph.\n        with tf.variable_scope(self.scope, reuse=tf.AUTO_REUSE):\n            assert tf.get_variable_scope().name == self.scope\n            with absolute_name_scope(self.scope): # ignore surrounding name_scope\n                with tf.control_dependencies(None): # ignore surrounding control_dependencies\n                    self.input_templates = [tf.placeholder(tf.float32, name=name) for name in 
self.input_names]\n                    out_expr = self._build_func(*self.input_templates, is_template_graph=True, **self.static_kwargs)\n            \n        # Collect outputs.\n        assert is_tf_expression(out_expr) or isinstance(out_expr, tuple)\n        self.output_templates = [out_expr] if is_tf_expression(out_expr) else list(out_expr)\n        self.output_names = [t.name.split('/')[-1].split(':')[0] for t in self.output_templates]\n        self.num_outputs = len(self.output_templates)\n        assert self.num_outputs >= 1\n        \n        # Populate remaining fields.\n        self.input_shapes   = [shape_to_list(t.shape) for t in self.input_templates]\n        self.output_shapes  = [shape_to_list(t.shape) for t in self.output_templates]\n        self.input_shape    = self.input_shapes[0]\n        self.output_shape   = self.output_shapes[0]\n        self.vars           = OrderedDict([(self.get_var_localname(var), var) for var in tf.global_variables(self.scope + '/')])\n        self.trainables     = OrderedDict([(self.get_var_localname(var), var) for var in tf.trainable_variables(self.scope + '/')])\n\n    # Run initializers for all variables defined by this network.\n    def reset_vars(self):\n        run([var.initializer for var in self.vars.values()])\n\n    # Run initializers for all trainable variables defined by this network.\n    def reset_trainables(self):\n        run([var.initializer for var in self.trainables.values()])\n\n    # Get TensorFlow expression(s) for the output(s) of this network, given the inputs.\n    def get_output_for(self, *in_expr, return_as_list=False, **dynamic_kwargs):\n        assert len(in_expr) == self.num_inputs\n        all_kwargs = dict(self.static_kwargs)\n        all_kwargs.update(dynamic_kwargs)\n        with tf.variable_scope(self.scope, reuse=True):\n            assert tf.get_variable_scope().name == self.scope\n            named_inputs = [tf.identity(expr, name=name) for expr, name in zip(in_expr, self.input_names)]\n            out_expr = self._build_func(*named_inputs, **all_kwargs)\n        assert is_tf_expression(out_expr) or isinstance(out_expr, tuple)\n        if return_as_list:\n            out_expr = [out_expr] if is_tf_expression(out_expr) else list(out_expr)\n        return out_expr\n\n    # Get the local name of a given variable, excluding any surrounding name scopes.\n    def get_var_localname(self, var_or_globalname):\n        assert is_tf_expression(var_or_globalname) or isinstance(var_or_globalname, str)\n        globalname = var_or_globalname if isinstance(var_or_globalname, str) else var_or_globalname.name\n        assert globalname.startswith(self.scope + '/')\n        localname = globalname[len(self.scope) + 1:]\n        localname = localname.split(':')[0]\n        return localname\n\n    # Find variable by local or global name.\n    def find_var(self, var_or_localname):\n        assert is_tf_expression(var_or_localname) or isinstance(var_or_localname, str)\n        return self.vars[var_or_localname] if isinstance(var_or_localname, str) else var_or_localname\n\n    # Get the value of a given variable as NumPy array.\n    # Note: This method is very inefficient -- prefer to use tfutil.run(list_of_vars) whenever possible.\n    def get_var(self, var_or_localname):\n        return self.find_var(var_or_localname).eval()\n        \n    # Set the value of a given variable based on the given NumPy array.\n    # Note: This method is very inefficient -- prefer to use tfutil.set_vars() whenever possible.\n    def set_var(self, 
var_or_localname, new_value):\n        return set_vars({self.find_var(var_or_localname): new_value})\n\n    # Pickle export.\n    def __getstate__(self):\n        return {\n            'version':          2,\n            'name':             self.name,\n            'static_kwargs':    self.static_kwargs,\n            'build_module_src': self._build_module_src,\n            'build_func_name':  self._build_func_name,\n            'variables':        list(zip(self.vars.keys(), run(list(self.vars.values()))))}\n\n    # Pickle import.\n    def __setstate__(self, state):\n        self._init_fields()\n\n        # Execute custom import handlers.\n        for handler in network_import_handlers:\n            state = handler(state)\n\n        # Set basic fields.\n        assert state['version'] == 2\n        self.name = state['name']\n        self.static_kwargs = state['static_kwargs']\n        self._build_module_src = state['build_module_src']\n        self._build_func_name = state['build_func_name']\n        \n        # Parse imported module.\n        module = imp.new_module('_tfutil_network_import_module_%d' % len(_network_import_modules))\n        exec(self._build_module_src, module.__dict__)\n        self._build_func = find_obj_in_module(module, self._build_func_name)\n        _network_import_modules.append(module) # avoid gc\n        \n        # Init graph.\n        self._init_graph()\n        self.reset_vars()\n        set_vars({self.find_var(name): value for name, value in state['variables']})\n\n    # Create a clone of this network with its own copy of the variables.\n    def clone(self, name=None):\n        net = object.__new__(Network)\n        net._init_fields()\n        net.name = name if name is not None else self.name\n        net.static_kwargs = dict(self.static_kwargs)\n        net._build_module_src = self._build_module_src\n        net._build_func_name = self._build_func_name\n        net._build_func = self._build_func\n        net._init_graph()\n        net.copy_vars_from(self)\n        return net\n\n    # Copy the values of all variables from the given network.\n    def copy_vars_from(self, src_net):\n        assert isinstance(src_net, Network)\n        name_to_value = run({name: src_net.find_var(name) for name in self.vars.keys()})\n        set_vars({self.find_var(name): value for name, value in name_to_value.items()})\n\n    # Copy the values of all trainable variables from the given network.\n    def copy_trainables_from(self, src_net):\n        assert isinstance(src_net, Network)\n        name_to_value = run({name: src_net.find_var(name) for name in self.trainables.keys()})\n        set_vars({self.find_var(name): value for name, value in name_to_value.items()})\n\n    # Create new network with the given parameters, and copy all variables from this network.\n    def convert(self, name=None, func=None, **static_kwargs):\n        net = Network(name, func, **static_kwargs)\n        net.copy_vars_from(self)\n        return net\n\n    # Construct a TensorFlow op that updates the variables of this network\n    # to be slightly closer to those of the given network.\n    def setup_as_moving_average_of(self, src_net, beta=0.99, beta_nontrainable=0.0):\n        assert isinstance(src_net, Network)\n        with absolute_name_scope(self.scope):\n            with tf.name_scope('MovingAvg'):\n                ops = []\n                for name, var in self.vars.items():\n                    if name in src_net.vars:\n                        cur_beta = beta if name in self.trainables else 
beta_nontrainable\n                        new_value = lerp(src_net.vars[name], var, cur_beta)\n                        ops.append(var.assign(new_value))\n                return tf.group(*ops)\n\n    # Build TensorFlow expressions that evaluate this network for the given NumPy array(s).\n    # Note: unlike run(), this returns the output expression(s) without evaluating them,\n    # so the minibatching / progress arguments have no effect here.\n    def fit(self, *in_arrays,\n        return_as_list  = False,    # True = return a list of NumPy arrays, False = return a single NumPy array, or a tuple if there are multiple outputs.\n        print_progress  = False,    # Print progress to the console? Useful for very large input arrays.\n        minibatch_size  = None,     # Maximum minibatch size to use, None = disable batching.\n        num_gpus        = 1,        # Number of GPUs to use.\n        out_mul         = 1.0,      # Multiplicative constant to apply to the output(s).\n        out_add         = 0.0,      # Additive constant to apply to the output(s).\n        out_shrink      = 1,        # Shrink the spatial dimensions of the output(s) by the given factor.\n        out_dtype       = None,     # Convert the output to the specified data type.\n        **dynamic_kwargs):          # Additional keyword arguments to pass into the network construction function.\n\n        assert len(in_arrays) == self.num_inputs\n        num_items = in_arrays[0].shape[0]\n        if minibatch_size is None:\n            minibatch_size = num_items\n        key = 'fit_' + str([list(sorted(dynamic_kwargs.items())), num_gpus, out_mul, out_add, out_shrink, out_dtype]) # prefixed to avoid clashing with run() entries in _run_cache\n\n        # Build graph.\n        if key not in self._run_cache:\n            with absolute_name_scope(self.scope + '/Fit'), tf.control_dependencies(None):\n                in_split = list(zip(*[tf.split(x, num_gpus) for x in in_arrays]))\n                out_split = []\n                for gpu in range(num_gpus):\n                    with tf.device('/gpu:%d' % gpu):\n                        out_expr = self.get_output_for(*in_split[gpu], return_as_list=True, **dynamic_kwargs)\n                        if out_mul != 1.0:\n                            out_expr = [x * out_mul for x in out_expr]\n                        if out_add != 0.0:\n                            out_expr = [x + out_add for x in out_expr]\n                        if out_shrink > 1:\n                            ksize = [1, 1, out_shrink, out_shrink]\n                            out_expr = [tf.nn.avg_pool(x, ksize=ksize, strides=ksize, padding='VALID', data_format='NCHW') for x in out_expr]\n                        if out_dtype is not None:\n                            if tf.as_dtype(out_dtype).is_integer:\n                                out_expr = [tf.round(x) for x in out_expr]\n                            out_expr = [tf.saturate_cast(x, out_dtype) for x in out_expr]\n                        out_split.append(out_expr)\n                self._run_cache[key] = [tf.concat(outputs, axis=0) for outputs in zip(*out_split)]\n\n        # Return the cached output expressions without evaluating them.\n        return self._run_cache[key]\n\n    # Run this network for the given NumPy array(s), and return the output(s) as NumPy array(s).\n    def run(self, *in_arrays,\n        return_as_list  = False,    # True = return a list of NumPy arrays, False = return a single NumPy array, or a tuple if there are multiple outputs.\n        print_progress  = False,    # Print progress to the console? 
Useful for very large input arrays.\n        minibatch_size  = None,     # Maximum minibatch size to use, None = disable batching.\n        num_gpus        = 1,        # Number of GPUs to use.\n        out_mul         = 1.0,      # Multiplicative constant to apply to the output(s).\n        out_add         = 0.0,      # Additive constant to apply to the output(s).\n        out_shrink      = 1,        # Shrink the spatial dimensions of the output(s) by the given factor.\n        out_dtype       = None,     # Convert the output to the specified data type.\n        **dynamic_kwargs):          # Additional keyword arguments to pass into the network construction function.\n\n        assert len(in_arrays) == self.num_inputs\n        num_items = in_arrays[0].shape[0]\n        if minibatch_size is None:\n            minibatch_size = num_items\n        key = str([list(sorted(dynamic_kwargs.items())), num_gpus, out_mul, out_add, out_shrink, out_dtype])\n\n        # Build graph.\n        if key not in self._run_cache:\n            with absolute_name_scope(self.scope + '/Run'), tf.control_dependencies(None):\n                in_split = list(zip(*[tf.split(x, num_gpus) for x in self.input_templates]))\n                out_split = []\n                for gpu in range(num_gpus):\n                    with tf.device('/gpu:%d' % gpu):\n                        out_expr = self.get_output_for(*in_split[gpu], return_as_list=True, **dynamic_kwargs)\n                        if out_mul != 1.0:\n                            out_expr = [x * out_mul for x in out_expr]\n                        if out_add != 0.0:\n                            out_expr = [x + out_add for x in out_expr]\n                        if out_shrink > 1:\n                            ksize = [1, 1, out_shrink, out_shrink]\n                            out_expr = [tf.nn.avg_pool(x, ksize=ksize, strides=ksize, padding='VALID', data_format='NCHW') for x in out_expr]\n                        if out_dtype is not None:\n                            if tf.as_dtype(out_dtype).is_integer:\n                                out_expr = [tf.round(x) for x in out_expr]\n                            out_expr = [tf.saturate_cast(x, out_dtype) for x in out_expr]\n                        out_split.append(out_expr)\n                self._run_cache[key] = [tf.concat(outputs, axis=0) for outputs in zip(*out_split)]\n\n        # Run minibatches.\n        out_expr = self._run_cache[key]\n        out_arrays = [np.empty([num_items] + shape_to_list(expr.shape)[1:], expr.dtype.name) for expr in out_expr]\n        for mb_begin in range(0, num_items, minibatch_size):\n            if print_progress:\n                print('\\r%d / %d' % (mb_begin, num_items), end='')\n            mb_end = min(mb_begin + minibatch_size, num_items)\n            mb_in = [src[mb_begin : mb_end] for src in in_arrays]\n            mb_out = tf.get_default_session().run(out_expr, dict(zip(self.input_templates, mb_in)))\n            for dst, src in zip(out_arrays, mb_out):\n                dst[mb_begin : mb_end] = src\n\n        # Done.\n        if print_progress:\n            print('\\r%d / %d' % (num_items, num_items))\n        if not return_as_list:\n            out_arrays = out_arrays[0] if len(out_arrays) == 1 else tuple(out_arrays)\n        return out_arrays\n\n    # Returns a list of (name, output_expr, trainable_vars) tuples corresponding to\n    # individual layers of the network. 
Mainly intended to be used for reporting.\n    def list_layers(self):\n        patterns_to_ignore = ['/Setter', '/new_value', '/Shape', '/strided_slice', '/Cast', '/concat']\n        all_ops = tf.get_default_graph().get_operations()\n        all_ops = [op for op in all_ops if not any(p in op.name for p in patterns_to_ignore)]\n        layers = []\n\n        def recurse(scope, parent_ops, level):\n            prefix = scope + '/'\n            ops = [op for op in parent_ops if op.name == scope or op.name.startswith(prefix)]\n\n            # Does not contain leaf nodes => expand immediate children.\n            if level == 0 or all('/' in op.name[len(prefix):] for op in ops):\n                visited = set()\n                for op in ops:\n                    suffix = op.name[len(prefix):]\n                    if '/' in suffix:\n                        suffix = suffix[:suffix.index('/')]\n                    if suffix not in visited:\n                        recurse(prefix + suffix, ops, level + 1)\n                        visited.add(suffix)\n\n            # Otherwise => interpret as a layer.\n            else:\n                layer_name = scope[len(self.scope)+1:]\n                layer_output = ops[-1].outputs[0]\n                layer_trainables = [op.outputs[0] for op in ops if op.type.startswith('Variable') and self.get_var_localname(op.name) in self.trainables]\n                layers.append((layer_name, layer_output, layer_trainables))\n\n        recurse(self.scope, all_ops, 0)\n        return layers\n\n    # Print a summary table of the network structure.\n    def print_layers(self, title=None, hide_layers_with_no_params=False):\n        if title is None: title = self.name\n        print()\n        print('%-28s%-12s%-24s%-24s' % (title, 'Params', 'OutputShape', 'WeightShape'))\n        print('%-28s%-12s%-24s%-24s' % (('---',) * 4))\n\n        total_params = 0\n        for layer_name, layer_output, layer_trainables in self.list_layers():\n            weights = [var for var in layer_trainables if var.name.endswith('/weight:0')]\n            num_params = sum(np.prod(shape_to_list(var.shape)) for var in layer_trainables)\n            total_params += num_params\n            if hide_layers_with_no_params and num_params == 0:\n                continue\n\n            print('%-28s%-12s%-24s%-24s' % (\n                layer_name,\n                num_params if num_params else '-',\n                layer_output.shape,\n                weights[0].shape if len(weights) == 1 else '-'))\n\n        print('%-28s%-12s%-24s%-24s' % (('---',) * 4))\n        print('%-28s%-12s%-24s%-24s' % ('Total', total_params, '', ''))\n        print()\n\n    # Construct summary ops to include histograms of all trainable parameters in TensorBoard.\n    def setup_weight_histograms(self, title=None):\n        if title is None: title = self.name\n        with tf.name_scope(None), tf.device(None), tf.control_dependencies(None):\n            for localname, var in self.trainables.items():\n                if '/' in localname:\n                    p = localname.split('/')\n                    name = title + '_' + p[-1] + '/' + '_'.join(p[:-1])\n                else:\n                    name = title + '_toplevel/' + localname\n                tf.summary.histogram(name, var)\n\n#----------------------------------------------------------------------------\n"
  },
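  {
    "path": "tfutil_usage_example.py",
    "content": "# Hypothetical usage sketch: not part of the original NVIDIA release.\n# Illustrates how the tfutil.Network and tfutil.Optimizer wrappers defined in\n# tfutil.py compose. The build function name 'networks.G_paper' and its keyword\n# arguments are assumptions borrowed from the training scripts; substitute\n# whatever build function exists in your checkout. Running the sketch requires\n# a GPU, since Network.run() places the graph on '/gpu:0' by default.\n\nimport numpy as np\nimport tensorflow as tf\n\nimport tfutil\n\ntfutil.init_tf({'gpu_options.allow_growth': True})\n\n# Construct a network from the fully qualified name of its build function.\nG = tfutil.Network('G', func='networks.G_paper', num_channels=3, resolution=32, label_size=0)\nG.print_layers()\n\n# Network.run() feeds NumPy arrays through the template placeholders in\n# minibatches and returns NumPy arrays.\nlatents = np.random.randn(8, *G.input_shape[1:]).astype(np.float32)\nlabels = np.zeros([8, 0], np.float32)\nimages = G.run(latents, labels, minibatch_size=4)\nprint(images.shape)\n\n# Optimizer wraps tf.train.AdamOptimizer by default: register the gradients of\n# a loss once per GPU, then build a single training op. The loss below is an\n# illustrative stand-in, not one of the GAN losses.\nopt = tfutil.Optimizer(name='TrainG', learning_rate=0.001)\nloss = tf.reduce_mean(tf.square(G.get_output_for(tf.constant(latents), tf.constant(labels))))\nopt.register_gradients(loss, G.trainables)\ntrain_op = opt.apply_updates()\ntfutil.run(train_op)\n"
  },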
  {
    "path": "train.py",
    "content": "# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.\r\n#\r\n# This work is licensed under the Creative Commons Attribution-NonCommercial\r\n# 4.0 International License. To view a copy of this license, visit\r\n# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to\r\n# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.\r\n\r\nimport os\r\nimport time\r\nimport numpy as np\r\nimport tensorflow as tf\r\n\r\nimport config\r\nimport tfutil\r\nimport dataset\r\nimport misc\r\n\r\n#----------------------------------------------------------------------------\r\n# Choose the size and contents of the image snapshot grids that are exported\r\n# periodically during training.\r\n\r\ndef setup_snapshot_image_grid(G, training_set,\r\n    size    = '1080p',      # '1080p' = to be viewed on 1080p display, '4k' = to be viewed on 4k display.\r\n    layout  = 'random'):    # 'random' = grid contents are selected randomly, 'row_per_class' = each row corresponds to one class label.\r\n\r\n    # Select size.\r\n    gw = 1; gh = 1\r\n    if size == '1080p':\r\n        gw = np.clip(1920 // G.output_shape[3], 3, 32)\r\n        gh = np.clip(1080 // G.output_shape[2], 2, 32)\r\n    if size == '4k':\r\n        gw = np.clip(3840 // G.output_shape[3], 7, 32)\r\n        gh = np.clip(2160 // G.output_shape[2], 4, 32)\r\n\r\n    # Fill in reals and labels.\r\n    reals = np.zeros([gw * gh] + training_set.shape, dtype=training_set.dtype)\r\n    labels = np.zeros([gw * gh, training_set.label_size], dtype=training_set.label_dtype)\r\n    for idx in range(gw * gh):\r\n        x = idx % gw; y = idx // gw\r\n        while True:\r\n            real, label = training_set.get_minibatch_np(1)\r\n            if layout == 'row_per_class' and training_set.label_size > 0:\r\n                if label[0, y % training_set.label_size] == 0.0:\r\n                    continue\r\n            reals[idx] = real[0]\r\n            labels[idx] = label[0]\r\n            break\r\n\r\n    # Generate latents.\r\n    latents = misc.random_latents(gw * gh, G)\r\n    return (gw, gh), reals, labels, latents\r\n\r\n#----------------------------------------------------------------------------\r\n# Just-in-time processing of training images before feeding them to the networks.\r\n\r\ndef process_reals(x, lod, mirror_augment, drange_data, drange_net):\r\n    with tf.name_scope('ProcessReals'):\r\n        if drange_data != drange_net:\r\n            with tf.name_scope('DynamicRange'):\r\n                x = tf.cast(x, tf.float32)\r\n                x = misc.adjust_dynamic_range(x, drange_data, drange_net)\r\n        if mirror_augment:\r\n            with tf.name_scope('MirrorAugment'):\r\n                s = tf.shape(x)\r\n                mask = tf.random_uniform([s[0], 1, 1, 1], 0.0, 1.0)\r\n                mask = tf.tile(mask, [1, s[1], s[2], s[3]])\r\n                x = tf.where(mask < 0.5, x, tf.reverse(x, axis=[3]))\r\n        with tf.name_scope('FadeLOD'): # Smooth crossfade between consecutive levels-of-detail.\r\n            s = tf.shape(x)\r\n            y = tf.reshape(x, [-1, s[1], s[2]//2, 2, s[3]//2, 2])\r\n            y = tf.reduce_mean(y, axis=[3, 5], keepdims=True)\r\n            y = tf.tile(y, [1, 1, 1, 2, 1, 2])\r\n            y = tf.reshape(y, [-1, s[1], s[2], s[3]])\r\n            x = tfutil.lerp(x, y, lod - tf.floor(lod))\r\n        with tf.name_scope('UpscaleLOD'): # Upscale to match the expected input/output size of the networks.\r\n            s = tf.shape(x)\r\n            factor = 
tf.cast(2 ** tf.floor(lod), tf.int32)\r\n            x = tf.reshape(x, [-1, s[1], s[2], 1, s[3], 1])\r\n            x = tf.tile(x, [1, 1, 1, factor, 1, factor])\r\n            x = tf.reshape(x, [-1, s[1], s[2] * factor, s[3] * factor])\r\n        return x\r\n\r\n#----------------------------------------------------------------------------\r\n# Class for evaluating and storing the values of time-varying training parameters.\r\n\r\nclass TrainingSchedule:\r\n    def __init__(\r\n        self,\r\n        cur_nimg,\r\n        training_set,\r\n        lod_initial_resolution  = 4,        # Image resolution used at the beginning.\r\n        lod_training_kimg       = 1000,      # Thousands of real images to show before doubling the resolution.\r\n        lod_transition_kimg     = 1000,      # Thousands of real images to show when fading in new layers.\r\n        minibatch_base          = 16,       # Maximum minibatch size, divided evenly among GPUs.\r\n        minibatch_dict          = {},       # Resolution-specific overrides.\r\n        max_minibatch_per_gpu   = {},       # Resolution-specific maximum minibatch size per GPU.\r\n        G_lrate_base            = 0.001,    # Learning rate for the generator.\r\n        G_lrate_dict            = {},       # Resolution-specific overrides.\r\n        D_lrate_base            = 0.001,    # Learning rate for the discriminator.\r\n        D_lrate_dict            = {},       # Resolution-specific overrides.\r\n        tick_kimg_base          = 160,      # Default interval of progress snapshots.\r\n        tick_kimg_dict          = {4: 160, 8:140, 16:120, 32:100, 64:80, 128:60, 256:40, 512:20, 1024:10}): # Resolution-specific overrides.\r\n\r\n        # Training phase.\r\n        self.kimg = cur_nimg / 1000.0\r\n        phase_dur = lod_training_kimg + lod_transition_kimg\r\n        phase_idx = int(np.floor(self.kimg / phase_dur)) if phase_dur > 0 else 0\r\n        phase_kimg = self.kimg - phase_idx * phase_dur\r\n\r\n        # Level-of-detail and resolution.\r\n        self.lod = training_set.resolution_log2\r\n        self.lod -= np.floor(np.log2(lod_initial_resolution))\r\n        self.lod -= phase_idx\r\n        if lod_transition_kimg > 0:\r\n            self.lod -= max(phase_kimg - lod_training_kimg, 0.0) / lod_transition_kimg\r\n        self.lod = max(self.lod, 0.0)\r\n        self.resolution = 2 ** (training_set.resolution_log2 - int(np.floor(self.lod)))\r\n\r\n        # Minibatch size.\r\n        self.minibatch = minibatch_dict.get(self.resolution, minibatch_base)\r\n        self.minibatch -= self.minibatch % config.num_gpus\r\n        if self.resolution in max_minibatch_per_gpu:\r\n            self.minibatch = min(self.minibatch, max_minibatch_per_gpu[self.resolution] * config.num_gpus)\r\n\r\n        # Other parameters.\r\n        self.G_lrate = G_lrate_dict.get(self.resolution, G_lrate_base)\r\n        self.D_lrate = D_lrate_dict.get(self.resolution, D_lrate_base)\r\n        self.tick_kimg = tick_kimg_dict.get(self.resolution, tick_kimg_base)\r\n\r\n#----------------------------------------------------------------------------\r\n# Main training script.\r\n# To run, comment/uncomment appropriate lines in config.py and launch train.py.\r\n\r\ndef train_progressive_gan(\r\n    G_smoothing             = 0.999,        # Exponential running average of generator weights.\r\n    D_repeats               = 1,            # How many times the discriminator is trained per G iteration.\r\n    minibatch_repeats       = 4,            # Number of minibatches to 
run before adjusting training parameters.\r\n    reset_opt_for_new_lod   = True,         # Reset optimizer internal state (e.g. Adam moments) when new layers are introduced?\r\n    total_kimg              = 15000,        # Total length of the training, measured in thousands of real images.\r\n    mirror_augment          = False,        # Enable mirror augment?\r\n    drange_net              = [-1,1],       # Dynamic range used when feeding image data to the networks.\r\n    image_snapshot_ticks    = 1,            # How often to export image snapshots?\r\n    network_snapshot_ticks  = 10,           # How often to export network snapshots?\r\n    save_tf_graph           = False,        # Include full TensorFlow computation graph in the tfevents file?\r\n    save_weight_histograms  = False,        # Include weight histograms in the tfevents file?\r\n    resume_run_id           = None,         # Run ID or network pkl to resume training from, None = start from scratch.\r\n    resume_snapshot         = None,         # Snapshot index to resume training from, None = autodetect.\r\n    resume_kimg             = 0.0,          # Assumed training progress at the beginning. Affects reporting and training schedule.\r\n    resume_time             = 0.0):         # Assumed wallclock time at the beginning. Affects reporting.\r\n\r\n    maintenance_start_time = time.time()\r\n    training_set = dataset.load_dataset(data_dir=config.data_dir, verbose=True, **config.dataset)\r\n\r\n    # Construct networks.\r\n    with tf.device('/gpu:0'):\r\n        if resume_run_id is not None:\r\n            network_pkl = misc.locate_network_pkl(resume_run_id, resume_snapshot)\r\n            print('Loading networks from \"%s\"...' % network_pkl)\r\n            G, D, Gs = misc.load_pkl(network_pkl)\r\n        else:\r\n            print('Constructing networks...')\r\n            G = tfutil.Network('G', num_channels=training_set.shape[0], resolution=training_set.shape[1], label_size=training_set.label_size, **config.G)\r\n            D = tfutil.Network('D', num_channels=training_set.shape[0], resolution=training_set.shape[1], label_size=training_set.label_size, **config.D)\r\n            Gs = G.clone('Gs')\r\n        Gs_update_op = Gs.setup_as_moving_average_of(G, beta=G_smoothing)\r\n    G.print_layers(); D.print_layers()\r\n\r\n    print('Building TensorFlow graph...')\r\n    with tf.name_scope('Inputs'):\r\n        lod_in          = tf.placeholder(tf.float32, name='lod_in', shape=[])\r\n        lrate_in        = tf.placeholder(tf.float32, name='lrate_in', shape=[])\r\n        minibatch_in    = tf.placeholder(tf.int32, name='minibatch_in', shape=[])\r\n        minibatch_split = minibatch_in // config.num_gpus\r\n        reals, labels   = training_set.get_minibatch_tf()\r\n        reals_split     = tf.split(reals, config.num_gpus)\r\n        labels_split    = tf.split(labels, config.num_gpus)\r\n    G_opt = tfutil.Optimizer(name='TrainG', learning_rate=lrate_in, **config.G_opt)\r\n    D_opt = tfutil.Optimizer(name='TrainD', learning_rate=lrate_in, **config.D_opt)\r\n    for gpu in range(config.num_gpus):\r\n        with tf.name_scope('GPU%d' % gpu), tf.device('/gpu:%d' % gpu):\r\n            G_gpu = G if gpu == 0 else G.clone(G.name + '_shadow')\r\n            D_gpu = D if gpu == 0 else D.clone(D.name + '_shadow')\r\n            lod_assign_ops = [tf.assign(G_gpu.find_var('lod'), lod_in), tf.assign(D_gpu.find_var('lod'), lod_in)]\r\n            reals_gpu = process_reals(reals_split[gpu], lod_in, mirror_augment, 
training_set.dynamic_range, drange_net)\r\n            labels_gpu = labels_split[gpu]\r\n            with tf.name_scope('G_loss'), tf.control_dependencies(lod_assign_ops):\r\n                G_loss = tfutil.call_func_by_name(G=G_gpu, D=D_gpu, opt=G_opt, training_set=training_set, minibatch_size=minibatch_split, **config.G_loss)\r\n            with tf.name_scope('D_loss'), tf.control_dependencies(lod_assign_ops):\r\n                D_loss = tfutil.call_func_by_name(G=G_gpu, D=D_gpu, opt=D_opt, training_set=training_set, minibatch_size=minibatch_split, reals=reals_gpu, labels=labels_gpu, **config.D_loss)\r\n            G_opt.register_gradients(tf.reduce_mean(G_loss), G_gpu.trainables)\r\n            D_opt.register_gradients(tf.reduce_mean(D_loss), D_gpu.trainables)\r\n    G_train_op = G_opt.apply_updates()\r\n    D_train_op = D_opt.apply_updates()\r\n\r\n    print('Setting up snapshot image grid...')\r\n    grid_size, grid_reals, grid_labels, grid_latents = setup_snapshot_image_grid(G, training_set, **config.grid)\r\n    sched = TrainingSchedule(total_kimg * 1000, training_set, **config.sched)\r\n    grid_fakes = Gs.run(grid_latents, grid_labels, minibatch_size=sched.minibatch//config.num_gpus)\r\n\r\n    print('Setting up result dir...')\r\n    result_subdir = misc.create_result_subdir(config.result_dir, config.desc)\r\n    misc.save_image_grid(grid_reals, os.path.join(result_subdir, 'reals.png'), drange=training_set.dynamic_range, grid_size=grid_size)\r\n    misc.save_image_grid(grid_fakes, os.path.join(result_subdir, 'fakes%06d.png' % 0), drange=drange_net, grid_size=grid_size)\r\n    summary_log = tf.summary.FileWriter(result_subdir)\r\n    if save_tf_graph:\r\n        summary_log.add_graph(tf.get_default_graph())\r\n    if save_weight_histograms:\r\n        G.setup_weight_histograms(); D.setup_weight_histograms()\r\n\r\n    print('Training...')\r\n    cur_nimg = int(resume_kimg * 1000)\r\n    cur_tick = 0\r\n    tick_start_nimg = cur_nimg\r\n    tick_start_time = time.time()\r\n    train_start_time = tick_start_time - resume_time\r\n    prev_lod = -1.0\r\n    while cur_nimg < total_kimg * 1000:\r\n\r\n        # Choose training parameters and configure training ops.\r\n        sched = TrainingSchedule(cur_nimg, training_set, **config.sched)\r\n        training_set.configure(sched.minibatch, sched.lod)\r\n        if reset_opt_for_new_lod:\r\n            if np.floor(sched.lod) != np.floor(prev_lod) or np.ceil(sched.lod) != np.ceil(prev_lod):\r\n                G_opt.reset_optimizer_state(); D_opt.reset_optimizer_state()\r\n        prev_lod = sched.lod\r\n\r\n        # Run training ops.\r\n        for repeat in range(minibatch_repeats):\r\n            for _ in range(D_repeats):\r\n                tfutil.run([D_train_op, Gs_update_op], {lod_in: sched.lod, lrate_in: sched.D_lrate, minibatch_in: sched.minibatch})\r\n                cur_nimg += sched.minibatch\r\n            tfutil.run([G_train_op], {lod_in: sched.lod, lrate_in: sched.G_lrate, minibatch_in: sched.minibatch})\r\n\r\n        # Perform maintenance tasks once per tick.\r\n        done = (cur_nimg >= total_kimg * 1000)\r\n        if cur_nimg >= tick_start_nimg + sched.tick_kimg * 1000 or done:\r\n            cur_tick += 1\r\n            cur_time = time.time()\r\n            tick_kimg = (cur_nimg - tick_start_nimg) / 1000.0\r\n            tick_start_nimg = cur_nimg\r\n            tick_time = cur_time - tick_start_time\r\n            total_time = cur_time - train_start_time\r\n            maintenance_time = tick_start_time - 
maintenance_start_time\r\n            maintenance_start_time = cur_time\r\n\r\n            # Report progress.\r\n            print('tick %-5d kimg %-8.1f lod %-5.2f minibatch %-4d time %-12s sec/tick %-7.1f sec/kimg %-7.2f maintenance %.1f' % (\r\n                tfutil.autosummary('Progress/tick', cur_tick),\r\n                tfutil.autosummary('Progress/kimg', cur_nimg / 1000.0),\r\n                tfutil.autosummary('Progress/lod', sched.lod),\r\n                tfutil.autosummary('Progress/minibatch', sched.minibatch),\r\n                misc.format_time(tfutil.autosummary('Timing/total_sec', total_time)),\r\n                tfutil.autosummary('Timing/sec_per_tick', tick_time),\r\n                tfutil.autosummary('Timing/sec_per_kimg', tick_time / tick_kimg),\r\n                tfutil.autosummary('Timing/maintenance_sec', maintenance_time)))\r\n            tfutil.autosummary('Timing/total_hours', total_time / (60.0 * 60.0))\r\n            tfutil.autosummary('Timing/total_days', total_time / (24.0 * 60.0 * 60.0))\r\n            tfutil.save_summaries(summary_log, cur_nimg)\r\n\r\n            # Save snapshots.\r\n            if cur_tick % image_snapshot_ticks == 0 or done:\r\n                grid_fakes = Gs.run(grid_latents, grid_labels, minibatch_size=sched.minibatch//config.num_gpus)\r\n                misc.save_image_grid(grid_fakes, os.path.join(result_subdir, 'fakes%06d.png' % (cur_nimg // 1000)), drange=drange_net, grid_size=grid_size)\r\n            if cur_tick % network_snapshot_ticks == 0 or done:\r\n                misc.save_pkl((G, D, Gs), os.path.join(result_subdir, 'network-snapshot-%06d.pkl' % (cur_nimg // 1000)))\r\n\r\n            # Record start time of the next tick.\r\n            tick_start_time = time.time()\r\n\r\n    # Write final results.\r\n    misc.save_pkl((G, D, Gs), os.path.join(result_subdir, 'network-final.pkl'))\r\n    summary_log.close()\r\n    open(os.path.join(result_subdir, '_training-done.txt'), 'wt').close()\r\n\r\n#----------------------------------------------------------------------------\r\n# Main entry point.\r\n# Calls the function indicated in config.py.\r\n\r\nif __name__ == \"__main__\":\r\n    misc.init_output_logging()\r\n    np.random.seed(config.random_seed)\r\n    print('Initializing TensorFlow...')\r\n    os.environ.update(config.env)\r\n    tfutil.init_tf(config.tf_config)\r\n    print('Running %s()...' % config.train['func'])\r\n    tfutil.call_func_by_name(**config.train)\r\n    print('Exiting...')\r\n\r\n#----------------------------------------------------------------------------\r\n"
  },
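  {
    "path": "examples/progressive_growing_sketch.py",
    "content": "# Illustrative sketch -- NOT part of the original codebase. The file name and\n# all constants below are assumptions chosen for demonstration only.\n#\n# Re-derives two pieces of train.py with plain NumPy so they can be inspected\n# without TensorFlow:\n#   1. schedule(): how TrainingSchedule maps training progress (cur_nimg) to\n#      level-of-detail (lod), output resolution and minibatch size.\n#   2. fade_lod(): the FadeLOD crossfade from process_reals(), which blends a\n#      2x box-downscaled copy of each image back in by the fractional lod.\n\nimport numpy as np\n\ndef schedule(cur_nimg, resolution_log2=10, lod_initial_resolution=4,\n             lod_training_kimg=1000, lod_transition_kimg=1000,\n             minibatch_base=16, num_gpus=1):\n    kimg = cur_nimg / 1000.0\n    # Each phase = stable training at one resolution + fade-in of the next.\n    phase_dur = lod_training_kimg + lod_transition_kimg\n    phase_idx = int(np.floor(kimg / phase_dur)) if phase_dur > 0 else 0\n    phase_kimg = kimg - phase_idx * phase_dur\n    # Start at the coarsest resolution, drop one lod per completed phase, plus\n    # a fractional term while the next set of layers is being faded in.\n    lod = resolution_log2 - np.floor(np.log2(lod_initial_resolution)) - phase_idx\n    if lod_transition_kimg > 0:\n        lod -= max(phase_kimg - lod_training_kimg, 0.0) / lod_transition_kimg\n    lod = max(lod, 0.0)\n    resolution = 2 ** (resolution_log2 - int(np.floor(lod)))\n    # Resolution-specific minibatch overrides from train.py are omitted here.\n    minibatch = minibatch_base - minibatch_base % num_gpus\n    return lod, resolution, minibatch\n\ndef fade_lod(x, lod):\n    # NumPy analogue of the FadeLOD block in process_reals(): box-downscale the\n    # NCHW batch by 2x, upscale back with nearest neighbour, and lerp by the\n    # fractional part of lod to crossfade between adjacent resolutions.\n    n, c, h, w = x.shape\n    y = x.reshape(n, c, h // 2, 2, w // 2, 2).mean(axis=(3, 5), keepdims=True)\n    y = np.tile(y, (1, 1, 1, 2, 1, 2)).reshape(n, c, h, w)\n    return x + (y - x) * (lod - np.floor(lod))\n\nif __name__ == '__main__':\n    for kimg in [0, 1000, 1500, 2000, 4000, 15000]:\n        lod, res, mb = schedule(kimg * 1000)\n        print('kimg %-6d lod %-5.2f resolution %-5d minibatch %d' % (kimg, lod, res, mb))\n    x = np.random.randn(1, 3, 8, 8).astype(np.float32)\n    print('fade_lod output shape:', fade_lod(x, lod=2.5).shape)\n"
  },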
  {
    "path": "util_scripts.py",
    "content": "# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.\n#\n# This work is licensed under the Creative Commons Attribution-NonCommercial\n# 4.0 International License. To view a copy of this license, visit\n# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to\n# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.\n\nimport os\nimport time\nimport re\nimport bisect\nimport numpy as np\nimport tensorflow as tf\nimport scipy.ndimage\nimport scipy.misc\nfrom scipy.spatial.distance import cdist\nfrom sklearn.utils.extmath import softmax\nimport scipy.ndimage as ndimage\n\nimport config_test\nimport misc\nimport tfutil\nimport myutil\nimport menpo.io as mio\n\nimport menpo3d.io as m3io\nfrom menpo.shape import TexturedTriMesh, TriMesh, ColouredTriMesh\nfrom UV_manipulation_2 import from_UV_2_3D\nfrom menpo.image import Image\n\n#----------------------------------------------------------------------------\n# Generate random images or image grids using a previously trained network.\n# To run, uncomment the appropriate line in config_test.py and launch train.py.\n\ndef get_generator(run_id, snapshot=None, image_shrink=1, minibatch_size=8):\n    network_pkl = misc.locate_network_pkl(run_id, snapshot)\n\n    print('Loading network from \"%s\"...' % network_pkl)\n    G, D, Gs = misc.load_network_pkl(run_id, snapshot)\n    latent = tf.get_variable('latent',shape=(1,512),trainable=True)\n    label = tf.get_variable('label',shape=(1,0),trainable=True,initializer=tf.zeros_initializer)\n    images = Gs.fit(latent, label, minibatch_size=minibatch_size, num_gpus=config_test.num_gpus, out_mul=0.5, out_add=0.5, out_shrink=image_shrink, out_dtype=np.float32)\n    sess = tf.get_default_session()\n\n    sess.run(tf.variables_initializer([latent, label]))\n\n    return images, latent, sess\n\ndef fit_real_images(run_id, snapshot=None, num_pngs=1, image_shrink=1, png_prefix=None, random_seed=1000, minibatch_size=8):\n    network_pkl = misc.locate_network_pkl(run_id, snapshot)\n    if png_prefix is None:\n        png_prefix = misc.get_id_string_for_network_pkl(network_pkl) + '-'\n    random_state = np.random.RandomState(random_seed)\n\n    print('Loading network from \"%s\"...' 
% network_pkl)\n    G, D, Gs = misc.load_network_pkl(run_id, snapshot)\n    latent = tf.get_variable('latent',shape=(1,512),trainable=True)\n    label = tf.get_variable('label',shape=(1,0),trainable=True)\n    images = Gs.fit(latent, label, minibatch_size=minibatch_size, num_gpus=config_test.num_gpus)\n    sess = tf.get_default_session()\n\n    # Optimize the latent so that G(latent) reproduces each target image (L2 reconstruction loss).\n    target = tf.placeholder(tf.float32,name='target')\n    lr = tf.placeholder(tf.float32,name='lr')\n    #loss = tf.reduce_sum(tf.abs(images[0][0] - target))\n    loss = tf.nn.l2_loss(images[0][0] - target)\n    with tf.variable_scope('adam'):\n        opt = tf.train.AdamOptimizer(lr).minimize(loss,var_list=latent)\n\n    sess.run(tf.variables_initializer([latent, label]))\n    sess.run(tf.variables_initializer(tf.global_variables('adam')))\n\n\n    # real_path = '/vol/phoebe/3DMD_SCIENCE_MUSEUM/Colour_UV_maps'\n    # real_path = '/home/baris/data/mein3d_600x600'\n    real_path = '/media/gen/pca_alone'\n    save_path = '/media/gen/gan-pca'\n    #target_im = PIL.Image.open('/media/logs-nvidia/002-fake-images-0/000-pgan-mein3d_tf-preset-v2-2gpus-fp32-VERBOSE-HIST-network-final-000001.png')\n\n    # Decreasing learning-rate ladder: 500 Adam steps at each rate per image.\n    for ind, real in enumerate(myutil.files(real_path)):\n        target_im = myutil.crop_im(PIL.Image.open(os.path.join(real_path,real)))\n        for j in [0.1,0.01,0.001]:\n            for i in range(500):\n                l2,_ = sess.run([loss,opt],{target: myutil.rgb2tf(target_im),lr:j})\n                if i % 100 == 0:\n                    print(l2)\n\n        myutil.concat_image(np.asarray(target_im),myutil.tf2rgb(sess.run(images))).save(os.path.join(save_path,real))\n\n    sess.close()\n\ndef generate_fake_images_glob(run_id, snapshot=None, grid_size=[1,1], num_pngs=1, image_shrink=1, png_prefix=None, random_seed=1000, minibatch_size=8):\n    network_pkl = misc.locate_network_pkl(run_id, snapshot)\n    if png_prefix is None:\n        png_prefix = misc.get_id_string_for_network_pkl(network_pkl) + '-'\n    random_state = np.random.RandomState(random_seed)\n\n    print('Loading network from \"%s\"...' % network_pkl)\n    G, D, Gs = misc.load_network_pkl(run_id, snapshot)\n    result_subdir = misc.create_result_subdir(config_test.result_dir, config_test.desc)\n    for png_idx in range(num_pngs):\n        print('Generating png %d / %d...' % (png_idx, num_pngs))\n        latents = misc.random_latents(np.prod(grid_size), Gs, random_state=random_state)\n        labels = np.zeros([latents.shape[0], 0], np.float32)\n        images = Gs.run(latents, labels, minibatch_size=minibatch_size, num_gpus=config_test.num_gpus, out_mul=127.5, out_add=127.5, out_shrink=image_shrink, out_dtype=np.uint8)\n        misc.save_image_grid(images, os.path.join(result_subdir, '%s%06d.png' % (png_prefix, png_idx)), [0,255], grid_size)\n    open(os.path.join(result_subdir, '_done.txt'), 'wt').close()\n\ndef generate_fake_images(run_id, snapshot=None, grid_size=[1,1], batch_size=8, num_pngs=1, image_shrink=1, png_prefix=None, random_seed=1000, minibatch_size=8):\n    network_pkl = misc.locate_network_pkl(run_id, snapshot)\n    if png_prefix is None:\n        png_prefix = misc.get_id_string_for_network_pkl(network_pkl) + '-'\n    random_state = np.random.RandomState(random_seed)\n\n    print('Loading network from \"%s\"...' 
% network_pkl)\n    G, D, Gs = misc.load_network_pkl(run_id, snapshot)\n\n    result_subdir = misc.create_result_subdir(config_test.result_dir, config_test.desc)\n    for png_idx in range(int(num_pngs/batch_size)):\n        start = time.time()\n        print('Generating png %d-%d / %d... in ' % (png_idx*batch_size,(png_idx+1)*batch_size, num_pngs),end='')\n        latents = misc.random_latents(np.prod(grid_size)*batch_size, Gs, random_state=random_state)\n        labels = np.zeros([latents.shape[0], 7], np.float32)\n        images = Gs.run(latents, labels, minibatch_size=minibatch_size, num_gpus=config_test.num_gpus, out_shrink=image_shrink)\n        # Channel layout used by this fork: 0:3 = texture, 3:6 = shape UV map, 6:9 = normals.\n        for i in range(batch_size):\n            if images.shape[1]==3:\n                mio.export_pickle(images[i],os.path.join(result_subdir, '%s%06d.pkl' % (png_prefix, png_idx*batch_size+i)))\n                # misc.save_image(images[i], os.path.join(result_subdir, '%s%06d.png' % (png_prefix, png_idx*batch_size+i)), [0,255], grid_size)\n            elif images.shape[1]==6:\n                mio.export_pickle(images[i][3:6],\n                                  os.path.join(result_subdir, '%s%06d.pkl' % (png_prefix, png_idx * batch_size + i)),overwrite=True)\n                misc.save_image(images[i][0:3], os.path.join(result_subdir, '%s%06d.png' % (png_prefix, png_idx*batch_size+i)), [-1,1], grid_size)\n            elif images.shape[1]==9:\n                mio.export_pickle(images[i][3:6],\n                                  os.path.join(result_subdir, '%s%06d_shp.pkl' % (png_prefix, png_idx * batch_size + i)),overwrite=True)\n                mio.export_pickle(images[i][6:9],\n                                  os.path.join(result_subdir, '%s%06d_nor.pkl' % (png_prefix, png_idx * batch_size + i)),overwrite=True)\n                misc.save_image(images[i][0:3], os.path.join(result_subdir, '%s%06d.png' % (png_prefix, png_idx*batch_size+i)), [-1,1], grid_size)\n        print('%0.2f seconds' % (time.time() - start))\n\n    open(os.path.join(result_subdir, '_done.txt'), 'wt').close()\n\n#----------------------------------------------------------------------------\n# Generate MP4 video of random interpolations using a previously trained network.\n# To run, uncomment the appropriate line in config_test.py and launch train.py.\n\ndef generate_interpolation_video(run_id, snapshot=None, grid_size=[1,1], image_shrink=1, image_zoom=1, duration_sec=60.0, smoothing_sec=1.0, mp4=None, mp4_fps=30, mp4_codec='libx265', mp4_bitrate='16M', random_seed=1000, minibatch_size=8):\n    network_pkl = misc.locate_network_pkl(run_id, snapshot)\n    if mp4 is None:\n        mp4 = misc.get_id_string_for_network_pkl(network_pkl) + '-lerp.mp4'\n    num_frames = int(np.rint(duration_sec * mp4_fps))\n    random_state = np.random.RandomState(random_seed)\n\n    print('Loading network from \"%s\"...' 
% network_pkl)\n    G, D, Gs = misc.load_network_pkl(run_id, snapshot)\n\n    print('Generating latent vectors...')\n    shape = [num_frames, np.prod(grid_size)] + Gs.input_shape[1:] # [frame, image, channel, component]\n    all_latents = random_state.randn(*shape).astype(np.float32)\n    all_latents = scipy.ndimage.gaussian_filter(all_latents, [smoothing_sec * mp4_fps] + [0] * len(Gs.input_shape), mode='wrap')\n    all_latents /= np.sqrt(np.mean(np.square(all_latents)))\n\n    # Frame generation func for moviepy.\n    def make_frame(t):\n        frame_idx = int(np.clip(np.round(t * mp4_fps), 0, num_frames - 1))\n        latents = all_latents[frame_idx]\n        labels = np.zeros([latents.shape[0], 0], np.float32)\n        images = Gs.run(latents, labels, minibatch_size=minibatch_size, num_gpus=config_test.num_gpus, out_mul=127.5, out_add=127.5, out_shrink=image_shrink, out_dtype=np.uint8)\n        grid = misc.create_image_grid(images, grid_size).transpose(1, 2, 0) # HWC\n        if image_zoom > 1:\n            grid = scipy.ndimage.zoom(grid, [image_zoom, image_zoom, 1], order=0)\n        if grid.shape[2] == 1:\n            grid = grid.repeat(3, 2) # grayscale => RGB\n        return grid\n\n    # Generate video.\n    import moviepy.editor # pip install moviepy\n    result_subdir = misc.create_result_subdir(config_test.result_dir, config_test.desc)\n    moviepy.editor.VideoClip(make_frame, duration=duration_sec).write_videofile(os.path.join(result_subdir, mp4), fps=mp4_fps, codec='libx264', bitrate=mp4_bitrate) # note: codec is hard-coded here; the mp4_codec argument is unused\n    open(os.path.join(result_subdir, '_done.txt'), 'wt').close()\n\n#----------------------------------------------------------------------------\n# Generate interpolated textured meshes and normal maps using a previously trained network.\n# To run, uncomment the appropriate line in config_test.py and launch train.py.\n\ndef generate_interpolation_images(run_id, snapshot=None, grid_size=[1,1], image_shrink=1, image_zoom=1, duration_sec=60.0, smoothing_sec=1.0, mp4=None, mp4_fps=30, mp4_codec='libx265', mp4_bitrate='16M', random_seed=1000, minibatch_size=8):\n\n    network_pkl = misc.locate_network_pkl(run_id, snapshot)\n    if mp4 is None:\n        mp4 = misc.get_id_string_for_network_pkl(network_pkl) + '-lerp.mp4'\n    num_frames = int(np.rint(duration_sec * mp4_fps))\n    random_state = np.random.RandomState(random_seed)\n\n    print('Loading network from \"%s\"...' 
% network_pkl)\n    G, D, Gs = misc.load_network_pkl(run_id, snapshot)\n\n    print('Generating latent vectors...')\n    shape = [num_frames, np.prod(grid_size)] + [Gs.input_shape[1:][0]+Gs.input_shapes[1][1:][0]] # [frame, image, latent + label components]\n    all_latents = random_state.randn(*shape).astype(np.float32)\n    all_latents = scipy.ndimage.gaussian_filter(all_latents, [smoothing_sec * mp4_fps] + [0] * len(Gs.input_shape), mode='wrap')\n    all_latents /= np.sqrt(np.mean(np.square(all_latents)))\n\n    # Per-dimension label scaling weights used below: 10 10 10 10 5 3 10.\n    # model = mio.import_pickle('../models/lsfm_shape_model_fw.pkl')\n    # facesoft_model = mio.import_pickle('../models/facesoft_id_and_exp_3d_face_model.pkl')['shape_model']\n    # lsfm_model = m3io.import_lsfm_model('/home/baris/Projects/faceganhd/models/all_all_all.mat')\n    # model_mean = lsfm_model.mean().copy()\n    # mask = mio.import_pickle('../UV_spaces_V2/mask_full_2_crop.pkl')\n    lsfm_tcoords = mio.import_pickle('512_UV_dict.pkl')['tcoords']\n    lsfm_params = []\n    result_subdir = misc.create_result_subdir(config_test.result_dir, config_test.desc)\n    for png_idx in range(int(num_frames/minibatch_size)):\n        start = time.time()\n        print('Generating png %d-%d / %d... in ' % (png_idx*minibatch_size,(png_idx+1)*minibatch_size, num_frames),end='')\n        latents = all_latents[png_idx*minibatch_size:(png_idx+1)*minibatch_size,0,:Gs.input_shape[1:][0]]\n        labels = all_latents[png_idx*minibatch_size:(png_idx+1)*minibatch_size,0,Gs.input_shape[1:][0]:]\n        labels_softmax = softmax(labels) * np.array([10,10,10,10,5,3,10])\n        images = Gs.run(latents, labels_softmax, minibatch_size=minibatch_size, num_gpus=config_test.num_gpus, out_shrink=image_shrink)\n        for i in range(minibatch_size):\n            texture = Image(np.clip(images[i,0:3]/2+0.5,0,1))\n            img_shape = ndimage.gaussian_filter(images[i,3:6], sigma=(0, 3, 3), order=0)\n            mesh_raw = from_UV_2_3D(Image(img_shape),topology='full',uv_layout='oval')\n            # model_mean.points[mask,:] = mesh_raw.points\n            normals = images[i,6:9]\n            normals_norm = (normals - normals.min()) / (normals.max() - normals.min())\n            mesh = mesh_raw  # facesoft_model.reconstruct(model_mean).from_mask(mask)\n            # lsfm_params.append(lsfm_model.project(mesh_raw))\n            t_mesh = TexturedTriMesh(mesh.points, lsfm_tcoords.points, texture, mesh.trilist)\n            m3io.export_textured_mesh(t_mesh, os.path.join(result_subdir, '%06d.obj' % (png_idx * minibatch_size + i)), texture_extension='.png')\n            fix_obj(os.path.join(result_subdir, '%06d.obj' % (png_idx * minibatch_size + i)))\n            mio.export_image(Image(normals_norm), os.path.join(result_subdir, '%06d_nor.png' % (png_idx * minibatch_size + i)))\n        print('%0.2f seconds' % (time.time() - start))\n    mio.export_pickle(lsfm_params, os.path.join(result_subdir, 'lsfm_params.pkl'))\n    open(os.path.join(result_subdir, '_done.txt'), 'wt').close()\n\ndef generate_interpolation_video_bydim(run_id, snapshot=None, grid_size=[1,1], image_shrink=1, image_zoom=1, duration_sec=60.0, smoothing_sec=1.0, mp4=None, mp4_fps=30, mp4_codec='libx265', mp4_bitrate='16M', random_seed=1000, minibatch_size=8, dim=0):\n    network_pkl = misc.locate_network_pkl(run_id, snapshot)\n    if mp4 is None:\n        mp4 = misc.get_id_string_for_network_pkl(network_pkl) + '-lerp.mp4'\n    num_frames = int(np.rint(duration_sec * mp4_fps))\n    random_state = 
np.random.RandomState(random_seed)\n\n    print('Loading network from \"%s\"...' % network_pkl)\n    G, D, Gs = misc.load_network_pkl(run_id, snapshot)\n\n    print('Generating latent vectors...')\n    shape = [num_frames, np.prod(grid_size)] + Gs.input_shape[1:] # [frame, image, channel, component]\n    all_latents = np.tile(random_state.randn(*shape[1:3]).astype(np.float32),[shape[0],1,1])\n    #all_latents = random_state.randn(*shape).astype(np.float32)\n    #all_latents = scipy.ndimage.gaussian_filter(all_latents, [smoothing_sec * mp4_fps] + [0] * len(Gs.input_shape), mode='wrap')\n    all_latents[:,0,dim] = np.linspace(-4.0, 4.0, shape[0]) # sweep a single latent dimension linearly across the clip\n    all_latents /= np.sqrt(np.mean(np.square(all_latents)))\n\n    # Frame generation func for moviepy.\n    def make_frame(t):\n        frame_idx = int(np.clip(np.round(t * mp4_fps), 0, num_frames - 1))\n        latents = all_latents[frame_idx]\n        labels = np.zeros([latents.shape[0], 0], np.float32)\n        images = Gs.run(latents, labels, minibatch_size=minibatch_size, num_gpus=config_test.num_gpus, out_mul=127.5, out_add=127.5, out_shrink=image_shrink, out_dtype=np.uint8)\n        grid = misc.create_image_grid(images, grid_size).transpose(1, 2, 0) # HWC\n        if image_zoom > 1:\n            grid = scipy.ndimage.zoom(grid, [image_zoom, image_zoom, 1], order=0)\n        if grid.shape[2] == 1:\n            grid = grid.repeat(3, 2) # grayscale => RGB\n        return grid\n\n    # Generate video.\n    import moviepy.editor # pip install moviepy\n    result_subdir = misc.create_result_subdir(config_test.result_dir, config_test.desc)\n    moviepy.editor.VideoClip(make_frame, duration=duration_sec).write_videofile(os.path.join(result_subdir, mp4), fps=mp4_fps, codec='libx264', bitrate=mp4_bitrate)\n    open(os.path.join(result_subdir, '_done.txt'), 'wt').close()\n\n#----------------------------------------------------------------------------\n# Generate MP4 video of training progress for a previous training run.\n# To run, uncomment the appropriate line in config_test.py and launch train.py.\n\ndef generate_training_video(run_id, duration_sec=20.0, time_warp=1.5, mp4=None, mp4_fps=30, mp4_codec='libx265', mp4_bitrate='16M'):\n    src_result_subdir = misc.locate_result_subdir(run_id)\n    if mp4 is None:\n        mp4 = os.path.basename(src_result_subdir) + '-train.mp4'\n\n    # Parse log.\n    times = []\n    snaps = [] # [(png, kimg, lod), ...]\n    with open(os.path.join(src_result_subdir, 'log.txt'), 'rt') as log:\n        for line in log:\n            k = re.search(r'kimg ([\\d\\.]+) ', line)\n            l = re.search(r'lod ([\\d\\.]+) ', line)\n            t = re.search(r'time (\\d+d)? *(\\d+h)? *(\\d+m)? *(\\d+s)? 
', line)\n            if k and l and t:\n                k = float(k.group(1))\n                l = float(l.group(1))\n                t = [int(t.group(i)[:-1]) if t.group(i) else 0 for i in range(1, 5)]\n                t = t[0] * 24*60*60 + t[1] * 60*60 + t[2] * 60 + t[3]\n                png = os.path.join(src_result_subdir, 'fakes%06d.png' % int(np.floor(k)))\n                if os.path.isfile(png):\n                    times.append(t)\n                    snaps.append((png, k, l))\n    assert len(times)\n\n    # Frame generation func for moviepy.\n    png_cache = [None, None] # [png, img]\n    def make_frame(t):\n        wallclock = ((t / duration_sec) ** time_warp) * times[-1]\n        png, kimg, lod = snaps[max(bisect.bisect(times, wallclock) - 1, 0)]\n        if png_cache[0] == png:\n            img = png_cache[1]\n        else:\n            img = scipy.misc.imread(png)\n            while img.shape[1] > 1920 or img.shape[0] > 1080:\n                img = img.astype(np.float32).reshape(img.shape[0]//2, 2, img.shape[1]//2, 2, -1).mean(axis=(1,3))\n            png_cache[:] = [png, img]\n        img = misc.draw_text_label(img, 'lod %.2f' % lod, 16, img.shape[0]-4, alignx=0.0, aligny=1.0)\n        img = misc.draw_text_label(img, misc.format_time(int(np.rint(wallclock))), img.shape[1]//2, img.shape[0]-4, alignx=0.5, aligny=1.0)\n        img = misc.draw_text_label(img, '%.0f kimg' % kimg, img.shape[1]-16, img.shape[0]-4, alignx=1.0, aligny=1.0)\n        return img\n\n    # Generate video.\n    import moviepy.editor # pip install moviepy\n    result_subdir = misc.create_result_subdir(config_test.result_dir, config_test.desc)\n    moviepy.editor.VideoClip(make_frame, duration=duration_sec).write_videofile(os.path.join(result_subdir, mp4), fps=mp4_fps, codec='libx264', bitrate=mp4_bitrate)\n    open(os.path.join(result_subdir, '_done.txt'), 'wt').close()\n\n#----------------------------------------------------------------------------\n# Evaluate one or more metrics for a previous training run.\n# To run, uncomment one of the appropriate lines in config_test.py and launch train.py.\n\ndef evaluate_metrics(run_id, log, metrics, num_images, real_passes, minibatch_size=None):\n    metric_class_names = {\n        'swd':      'metrics.sliced_wasserstein.API',\n        'fid':      'metrics.frechet_inception_distance.API',\n        'is':       'metrics.inception_score.API',\n        'msssim':   'metrics.ms_ssim.API',\n    }\n\n    # Locate training run and initialize logging.\n    result_subdir = misc.locate_result_subdir(run_id)\n    snapshot_pkls = misc.list_network_pkls(result_subdir, include_final=False)\n    assert len(snapshot_pkls) >= 1\n    log_file = os.path.join(result_subdir, log)\n    print('Logging output to', log_file)\n    misc.set_output_log_file(log_file)\n\n    # Initialize dataset and select minibatch size.\n    dataset_obj, mirror_augment = misc.load_dataset_for_previous_run(result_subdir, verbose=True, shuffle_mb=0)\n    if minibatch_size is None:\n        minibatch_size = np.clip(8192 // dataset_obj.shape[1], 4, 256)\n\n    # Initialize metrics.\n    metric_objs = []\n    for name in metrics:\n        class_name = metric_class_names.get(name, name)\n        print('Initializing %s...' 
% class_name)\n        class_def = tfutil.import_obj(class_name)\n        image_shape = [3] + dataset_obj.shape[1:]\n        obj = class_def(num_images=num_images, image_shape=image_shape, image_dtype=np.uint8, minibatch_size=minibatch_size)\n        tfutil.init_uninited_vars()\n        mode = 'warmup'\n        obj.begin(mode)\n        for idx in range(10):\n            obj.feed(mode, np.random.randint(0, 256, size=[minibatch_size]+image_shape, dtype=np.uint8))\n        obj.end(mode)\n        metric_objs.append(obj)\n\n    # Print table header.\n    print()\n    print('%-10s%-12s' % ('Snapshot', 'Time_eval'), end='')\n    for obj in metric_objs:\n        for name, fmt in zip(obj.get_metric_names(), obj.get_metric_formatting()):\n            print('%-*s' % (len(fmt % 0), name), end='')\n    print()\n    print('%-10s%-12s' % ('---', '---'), end='')\n    for obj in metric_objs:\n        for fmt in obj.get_metric_formatting():\n            print('%-*s' % (len(fmt % 0), '---'), end='')\n    print()\n\n    # Feed in reals.\n    for title, mode in [('Reals', 'reals'), ('Reals2', 'fakes')][:real_passes]:\n        print('%-10s' % title, end='')\n        time_begin = time.time()\n        labels = np.zeros([num_images, dataset_obj.label_size], dtype=np.float32)\n        [obj.begin(mode) for obj in metric_objs]\n        for begin in range(0, num_images, minibatch_size):\n            end = min(begin + minibatch_size, num_images)\n            images, labels[begin:end] = dataset_obj.get_minibatch_np(end - begin)\n            if mirror_augment:\n                images = misc.apply_mirror_augment(images)\n            if images.shape[1] == 1:\n                images = np.tile(images, [1, 3, 1, 1]) # grayscale => RGB\n            [obj.feed(mode, images) for obj in metric_objs]\n        results = [obj.end(mode) for obj in metric_objs]\n        print('%-12s' % misc.format_time(time.time() - time_begin), end='')\n        for obj, vals in zip(metric_objs, results):\n            for val, fmt in zip(vals, obj.get_metric_formatting()):\n                print(fmt % val, end='')\n        print()\n\n    # Evaluate each network snapshot.\n    for snapshot_idx, snapshot_pkl in enumerate(reversed(snapshot_pkls)):\n        prefix = 'network-snapshot-'; postfix = '.pkl'\n        snapshot_name = os.path.basename(snapshot_pkl)\n        assert snapshot_name.startswith(prefix) and snapshot_name.endswith(postfix)\n        snapshot_kimg = int(snapshot_name[len(prefix) : -len(postfix)])\n\n        print('%-10d' % snapshot_kimg, end='')\n        mode ='fakes'\n        [obj.begin(mode) for obj in metric_objs]\n        time_begin = time.time()\n        with tf.Graph().as_default(), tfutil.create_session(config_test.tf_config).as_default():\n            G, D, Gs = misc.load_pkl(snapshot_pkl)\n            for begin in range(0, num_images, minibatch_size):\n                end = min(begin + minibatch_size, num_images)\n                latents = misc.random_latents(end - begin, Gs)\n                images = Gs.run(latents, labels[begin:end], num_gpus=config_test.num_gpus, out_mul=127.5, out_add=127.5, out_dtype=np.uint8)\n                if images.shape[1] == 1:\n                    images = np.tile(images, [1, 3, 1, 1]) # grayscale => RGB\n                [obj.feed(mode, images) for obj in metric_objs]\n        results = [obj.end(mode) for obj in metric_objs]\n        print('%-12s' % misc.format_time(time.time() - time_begin), end='')\n        for obj, vals in zip(metric_objs, results):\n            for val, fmt in zip(vals, 
obj.get_metric_formatting()):\n                print(fmt % val, end='')\n        print()\n    print()\n\n\ndef fix_obj(fp):\n    # Write a minimal .mtl material file next to the exported OBJ and prepend a\n    # matching 'mtllib' line so viewers pick up the PNG texture exported alongside.\n    template = \"\"\"# Produced by Dimensional Imaging OBJ exporter\n# http://www.di3d.com\n#\n#\nnewmtl merged_material\nKa  0.5 0.5 0.5\nKd  0.5 0.5 0.5\nKs  0.47 0.47 0.47\nd 1\nNs 0\nillum 2\nmap_Kd {}.png\n#\n#\n# EOF\"\"\".format(os.path.splitext(os.path.basename(fp))[0])\n    with open(os.path.join(os.path.dirname(fp), os.path.splitext(os.path.basename(fp))[0] + '.mtl'), 'w') as f:\n        f.write(template)\n\n    with open(fp, 'r+') as f:\n        content = f.read()\n        f.seek(0, 0)\n        f.write('mtllib ' + os.path.splitext(os.path.basename(fp))[0] + '.mtl' + '\\n' + content)\n\n\n#----------------------------------------------------------------------------\n"
  }
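  ,
  {
    "path": "examples/latent_smoothing_sketch.py",
    "content": "# Illustrative sketch -- NOT part of the original codebase. The file name,\n# seed and sizes below are assumptions chosen for demonstration only.\n#\n# Shows the latent-smoothing trick used by generate_interpolation_video() in\n# util_scripts.py: draw an independent latent per frame, low-pass filter along\n# the time axis with a wrap-around Gaussian so the clip loops seamlessly, then\n# renormalize so the latents keep the unit RMS the generator was trained on.\n\nimport numpy as np\nimport scipy.ndimage\n\nnum_frames    = 300    # 10 seconds at 30 fps.\nmp4_fps       = 30\nsmoothing_sec = 1.0    # Correlation time of the random walk, in seconds.\nlatent_size   = 512    # Matches the (1, 512) latents used throughout util_scripts.py.\n\nrandom_state = np.random.RandomState(1000)\nall_latents = random_state.randn(num_frames, latent_size).astype(np.float32)\n\n# Gaussian filter over the frame axis only (sigma measured in frames);\n# mode='wrap' makes the last frames blend back toward the first ones.\nall_latents = scipy.ndimage.gaussian_filter(all_latents, [smoothing_sec * mp4_fps, 0], mode='wrap')\n\n# Filtering shrinks the magnitudes; rescale to unit RMS, the scale of the\n# N(0, I) latents the networks were trained with.\nall_latents /= np.sqrt(np.mean(np.square(all_latents)))\n\nprint('frames: %d, rms: %.3f' % (all_latents.shape[0], np.sqrt(np.mean(np.square(all_latents)))))\n# Each slice all_latents[i : i + 1] could then be fed to Gs.run() as in\n# generate_interpolation_video().\n"
  }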
]